BMC Software | Blogs

Do Your Developers Have the Droids They Are Looking For?
https://s7280.pcdn.co/tools-help-developers-deliver-quality-code/ | Wed, 30 Apr 2025 22:36:26 +0000

In the Star Wars movies, starfighter pilots are commonly assisted by astromechs, a type of repair droid that serves as an automated mechanic. Astromechs have many appendages — tools that can do almost anything. Pilots rely on their astromech copilots to control flight and power distribution systems while also calculating hyperspace jumps and performing simple repairs. The best example of this is the loyal R2-D2, always there for Luke Skywalker.

When problems arise, and they always do, the astromech droid is there to fix the problem. Couldn’t developers benefit from similar automated assistants as they work on code? Luckily, such tools do exist.

Here are some examples of BMC AMI technology that assists developers in delivering quality code – fast.

  • When confronted with code they do not understand, a developer working in BMC AMI DevX Code Insights merely highlights a section of the code, right-clicks, and selects “Explain.” BMC AMI Assistant returns a short, artificial intelligence (AI) generated summary of the business logic and a detailed description of the code’s logic flow. Developers now have an easily available way to work with confidence on the code using the BMC AMI DevX Workbench Editor in Eclipse or VS Code. They can also see charting to understand the structure of the program and the flow of the logic, and trace data from its arrival to its departure—right from the editor they use every day.
  • Another assistant is the Runtime Visualizer in BMC AMI DevX Code Insights, which enables developers to visualize their applications in real time. With it, developers can quickly see exactly how the application works.
  • When entering code in the BMC AMI DevX Workbench Editor, a type-ahead feature anticipates and automatically completes reserved words, allowing the developer to select one, saving time and avoiding typos.
  • When it comes time to debug a batch program using BMC AMI DevX Code Debug, developers can right-click the JCL member, select ‘Debug as’ and they’re good to go, with the configuration already filled out. Also, the configuration settings are all visible in one dialog, and if important information is missing, the dialog will point it out.
  • Creating test data can sometimes be difficult, but in BMC AMI DevX Workbench, developers can use the ‘Copy To’ function of the Host Explorer. Being able to copy multiple files and rename them, and even copy them to another LPAR with no shared DASD is a big help.
  • When there is a compile error, developers can use ‘Show Compile Diagnostics’ in BMC AMI Workbench Host Explorer or BMC AMI DevX Code Pipeline. It takes them straight to the line(s) in their program that caused their compile to fail. This capability saves having to page through the compiler output and then go open the program and locate the line(s) that caused the issue.

Whether you’re a starfighter in a galaxy far, far away or a developer working on mainframe applications, it’s best not to go it alone. Thanks to these tools, developers have their own faithful assistants to help them reach their goals.

To learn more about these features and how to use them, turn to the BMC Education Courses for AMI DevX.

New observability, discovery and agentic AI insights prevent major incidents and optimize applications
https://www.bmc.com/blogs/agents-observability-and-discovery-prevent-incidents-and-optimize-apps/ | Wed, 30 Apr 2025 18:15:01 +0000

Optimizing application performance takes a significant amount of work behind the scenes—from quickly resolving and proactively preventing issues, to understanding what could be causing bottlenecks and latency, to making sure the large language models (LLMs) in generative AI (GenAI) applications are performant and optimized, to managing containerized software.

The latest BMC Helix 25.2 ITOM release helps IT teams optimize application performance, resolve issues faster, and more easily manage containerized software with the following new AI agents and capabilities:

  • BMC HelixGPT Post Mortem Analyzer—Get a detailed review after an incident occurs that documents root cause, impact, and actions taken to resolve it so that you can prevent it from reoccurring.
  • BMC HelixGPT Insight Finder—Use a natural language chat interface to instantly create dashboards that monitor issues impacting service health. Check to see whether an issue is being worked on, see what the timeline for fixing it is, and find out its root cause.
  • Application observability with Open Telemetry logs—Improve the diagnosis of application performance issues by correlating traces with span logs to enhance root cause isolation.
  • LLM observability—Improve LLM model quality and efficacy with a dashboard that provides insights on model accuracy, and optimize LLM application performance and model training costs with a dashboard that provides metrics including resource utilization and request processing performance.
  • Deep-container discovery—More easily manage licenses and security patches for containerized software with new discovery capabilities that show you which software is running in your containers.

BMC HelixGPT Post Mortem Analyzer provides in-depth insights after incidents

After an incident occurs, site reliability engineers (SREs) often need to spend additional days or weeks dissecting issues to gain insights into why they occurred, how they impacted operations, and most importantly, how to prevent them in the future. BMC HelixGPT Post Mortem Analyzer saves SREs time by providing insights into the root cause, its impact on operations, and proactive measures to prevent them from happening in the future.


Figure 1. BMC HelixGPT Post Mortem Analyzer.

BMC HelixGPT Insight Finder automatically creates custom dashboards by chat

Tracking the status of issues can be difficult and complex—such as knowing what the timeline is for fixing them, understanding their underlying root causes, and comprehending how they’re impacting service health. It usually takes in-depth research and an examination of multiple data sources to find the answers to these questions, and it requires development teams to build customized dashboards with this information. BMC HelixGPT Insight Finder automatically creates visualizations and reports for easy sharing, without the need to know any query language.

With BMC HelixGPT Insight Finder, IT teams can use a natural language chat interface to ask questions such as how many incidents there currently are, how many closed events have occurred in a given period, grouped by severity, and get historical data such as how often an issue has occurred in the past.

Figure 2. BMC HelixGPT Insight Finder.

Application observability with Open Telemetry logs helps improve application performance

It’s important to analyze application performance to make sure that there are no bottlenecks, and to give end users of applications a responsive and glitch-free experience.

BMC Helix has expanded its Open Telemetry data ingestion capabilities from traces to include log span trace data. By correlating span log data with traces, BMC Helix captures latency, response time, duration, and error rate to enhance root cause isolation of application performance issues.
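To ground the idea of span-log correlation, here is a minimal Python sketch using the open-source OpenTelemetry SDK (not BMC Helix-specific). It stamps a log record with the IDs of the active trace and span, which is what lets an observability backend join log lines to the trace that produced them. The service and logger names are illustrative.

# pip install opentelemetry-sdk
import logging

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export spans to stdout for the demo; a real deployment would use an OTLP exporter
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s %(message)s trace_id=%(trace_id)s span_id=%(span_id)s",
)
log = logging.getLogger("checkout-service")

with tracer.start_as_current_span("process-order") as span:
    ctx = span.get_span_context()
    # Attach the active trace/span IDs to the log record; a backend can use these
    # fields to correlate this log line with the span that emitted it
    log.info(
        "order processed",
        extra={
            "trace_id": format(ctx.trace_id, "032x"),
            "span_id": format(ctx.span_id, "016x"),
        },
    )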

Figure 3. BMC Helix application observability with Open Telemetry logs.

LLM observability helps improve LLM application efficiency, quality and accuracy

LLM applications require an exponentially higher amount of compute power and consume more energy than traditional applications, so it’s important to optimize performance and costs. Additionally, measuring the efficacy and quality of LLM models ensures model accuracy and reliability to produce valid responses for application users and reduce AI hallucinations.

BMC Helix’s new LLM observability functionality provides LLM application workflow tracing and dashboards to help data scientists and AI engineers monitor model quality and efficacy while enabling IT and developers to better understand LLM application performance and behavior.

BMC Helix displays LLM training metrics that help improve LLM model accuracy, detect drift, and measure and reduce hallucinations. Key signals that the BMC Helix LLM observability dashboards reveal include the success rate of generative AI (GenAI) requests, LLM request rate, query processing rate and efficacy, and LLM latency. The BMC Helix dashboard is configurable and allows new LLM training metrics to be added as needed.

Figure 4. LLM model quality and evaluation metrics dashboard.

BMC Helix LLM Observability also helps track resource utilization to optimize costs by measuring token usage and how much GPU processing power was utilized to train an LLM model with metrics like power usage, memory, and temperature.

Figure 5. LLM GPU utilization and cost dashboard.

Deep container discovery provides visibility to manage containerized software

While containerized deployments bring efficiency to application development, it’s difficult to keep track of software versions running in containers.

BMC Helix Discovery now provides visibility into software running in deployed containers by extending ssh-based worker node scans into containers and using existing discovery patterns to discover containerized software details.

This helps you manage containerized software licenses and stay up to date with security patches, while giving you visibility into the overall containerized software lifecycle.

Figure 6. BMC Helix container discovery diagram.

New automated capacity reporting simplifies report creation and distribution

Now with BMC Helix Dashboards, you can automate the creation, distribution, and archiving of capacity reports:

  • Build a customized capacity reporting template in BMC Helix Dashboards.
  • Manage basic and advanced reports in one place and create user access controls for retrieving archived reports.
  • Easily manage and update the email distribution list with the ability to add, delete, or edit individual emails and grant them access to archived reports.

To learn more about how these new BMC Helix capabilities can help you transform your IT operations, contact us for a consultation.

Change Control Board vs. Change Advisory Board: What’s the Difference?
https://www.bmc.com/blogs/change-control-board-vs-change-advisory-board/ | Tue, 29 Apr 2025 00:00:53 +0000

Technology is best described by the adage from Greek philosopher Heraclitus: The only constant thing is change. In this age of digital transformation, customers and competitors can find solutions and alternatives at the touch of a button. This speed means that service providers stay ahead only by embracing and executing change quickly, yet maintaining sufficient control to manage risk.

In change management and execution, there are two key factors in your company’s success: your technology and your decision-making processes. We’ll explore two structures, or bodies, that focus on change-related decision making—the Change Control Board (CCB) and the Change Advisory Board (CAB). The biggest difference between these two bodies is their scope:

  • The CCB primarily handles changes within projects
  • The CAB covers all changes related to the service lifecycle, including emergency changes

Let’s explore in more detail what they do and how they differ. We’ve also included additional resources at the end of this article.

Change management and decision making

When it comes to management and control of changes to services and service components, one of the biggest challenges is determining who has the authority to make change decisions.

IT service management has long suffered from bureaucratic approaches and general risk aversion—which results in layers of approvals, development delays and confusion, and, ultimately, failure to deliver value to customers in an agile manner. This situation is exacerbated in companies with legacy systems and structures that prohibit the flexibility for change that digital transformation requires.

The Change Control Board and the Change Advisory Board are similar organizational structures that play vital roles in decision making. Both are composed of teams whose role is to collectively help the organization make the right decisions, balancing the need for changes to the technology that supports business processes against their risk, but they’re not the same.

What is a Change Control Board?

A Change Control Board (CCB), also known as the configuration control board, is a group of individuals, mostly found in software-related projects. The group is responsible for recommending or making decisions on requested changes to baselined work. These changes may affect requirements, features, code, or infrastructure.

Poor change control can significantly impact the project in terms of scope, cost, time, risk, and benefits. Therefore, it is crucial that the CCB members are sufficiently equipped with information, experience, and support necessary to make the best decisions.

Creating a Change Control Board

The CCB is created during the project’s planning process. The most successful CCBs include representatives for both the project implementer and the customer, whether they are from a single organization or different ones. In a software project, these would include management or leads from the following units:

  • Program or project managers
  • Product managers
  • Business analysts
  • Developers
  • QA
  • Operations

The Change Control Procedures and the Change Control Plan, established during the planning phase, will define the role of the CCB, which can range from simple recommendations to more holistic decision making. The procedures and plan documentation will also include:

  • The structure and authority of the CCB, such as:
    • Which levels of change require CCB approval
    • Whether lower levels of change can be made by the project manager, for instance
    • Whether higher levels of change must be elevated to a higher governing body, such as the board of directors
  • The frequency of meetings
  • The rules for decision making
  • Communication methods

Best practices for CCB process

To ensure that the CCB can be effective in its reviewing and approving of proposed modifications in a strategic, organized, and coordinated way, consider adopting the following best practices:

  • Diverse representation: Have a diverse set of members, including IT team members, customers, suppliers, and potentially others, depending on your situation.
  • Clearly defined responsibilities: Specify clear membership roles and the authority to approve or reject changes, to ensure that the board has the agility to respond to change quickly, and that both business and technical expertise are put to best use.
  • Comprehensive charter: Spell out in a written document the purpose, scope of authority, membership criteria, member responsibilities, operating procedures, and process for making decisions that the CCB will use.
  • Documented decision-making process: Spell out the decision-making process up front to prevent conflict and speed the function of the CCB. Decide what constitutes a quorum, define the powers of the board’s manager, and establish what decisions need to be ratified by a higher authority.
  • Effective communication: Organize the communication channels the CCB will use and set the update frequency. One member should be responsible for maintaining a single repository of up-to-date information and coordinating to keep stakeholders in the loop.
  • Scope-creep management tactics: Balance change requests against project timeline and costs to keep the project on track and to stop scope creep.

How many Change Control Boards is enough?

Organizations may choose to have a single CCB handling change requests across multiple projects. For larger projects, you may want a dedicated CCB. Some projects might require multi-level CCBs. A low-level CCB could handle lower priority change requests, for instance non-customer-facing features or changes with low/no cost impact. A higher-level CCB could tackle major change requests that have significant impact on costs or customers.

What is a Change Advisory Board?

Mostly involved in decision making for deployments to IT production environments, the Change Advisory Board (CAB) is a body constituted to support the authorization of changes and to assist change management in the assessment, prioritization, and scheduling of changes.

The authority of the CAB can vary across organizations. Usually, if top leaders or C-suite executives sit in the CAB, then it has the highest authority. The organization’s change management policy will define the CAB’s constitution and its scope, which can include anything from proposals and deployments to changes to roles and documentation.

Creating a Change Advisory Board

In most organizations, the Change Manager chairs the Change Advisory Board. The CAB will have a pre-determined schedule. Depending on the typical activity in your IT department, your CAB may meet as often as twice weekly. No matter the frequency of meetings, the Change Manager should communicate the scheduled changes well in advance of meetings, so individuals on the CAB are prepared to make the best decisions.

Best practices for CAB process

For your CAB to function effectively in providing oversight and guidance, consider these best practices:

  • Assess existing CAB gaps. Take a strategic look at your current CAB structure and processes to get a good idea of what is working and what needs to be improved.
  • Gain support for CAB improvement or creation. Communicate with leaders in your organization and those who are involved in decision-making to capture their opinions on what could improve an existing CAB or what would contribute to successfully creating one.
  • Identify the CAB owner. As with any organization, someone needs to lead operations and own the outcomes of the CAB. Make sure responsibility is clearly assigned.
  • Create a standard CAB agenda. Plan a program of topics to discuss and decisions to make before each meeting. A regular agenda lets attendees know what to expect and structures meetings so you stay focused on important decisions.
  • Determine the meeting cadence. Schedule regular meetings as frequently as you need, depending on the rate and kinds of changes expected. Make sure that stakeholders carve out time to devote to the CAB.
  • Host CAB meetings, improve processes, and iterate. Capture feedback after every meeting so you can improve processes and agenda items, adapting to changing situations as they arise.

How a Change Advisory Board makes decisions

A Change Advisory Board typically makes decisions in three major areas, which we’ll review below:

  • Standard change requests
  • Emergency changes
  • Previously-executed change audits

Standard change requests. At every meeting, the Change Advisory Board reviews requested changes using a standard evaluation framework. That framework should consider all dimensions of the change, including service and technical components, business and customer alignment, and compliance and risk. The CAB must also look for conflicting requests—these cases in particular require CAB members to maintain holistic, business-outcomes views that don’t favor the particular team or individual seeking the change.

Emergency changes. Occasionally, the Change Advisory Board must handle an emergency change. This might happen when your company experiences:

  • A service outage (actual or potential)
  • A security breach
  • A compliance requirement

In these cases, an Emergency Change Advisory Board (eCAB) can be formed as a temporary subset of the routine CAB. The eCAB may include some or all individuals from the CAB, and this group will meet outside the normal schedule to review the necessary emergency change(s).

Previously-executed change audits. The CAB can also meet to review previously executed changes, particularly those that were unsuccessful or unauthorized, as well as to plan the forward schedule of future changes, particularly with regard to projected service outages and customer/business plans.

Comparing CCB vs CAB

The Change Control Board and Change Advisory Board share a similar focus of reviewing and making decisions for change requests, though their scopes vary widely. Regardless of differences, the structure for both change bodies must be clear, effective, and efficient. Without these components, companies will fall behind competitors who make changes quickly and safely. Companies must continuously ensure the CCB and CAB succeed, and they’ll also want to prevent change-related bottlenecks, particularly when it makes sense to cascade ownership from decision teams to the teams that own solutions, such as product teams or joint technical teams.

Learn how ServiceOps can help you predict change risks using service and operational data, support cross-functional collaboration to solve problems, and automatically recommend problem resolutions.

How to Write an SOP (Standard Operating Procedure)
https://www.bmc.com/blogs/sop-standard-operating-procedure/ | Wed, 23 Apr 2025 00:00:39 +0000

I often help customers automate their business processes with Server Automation. I start by gathering information about the process. In many cases, people do not know how the entire process works. They will know their step in the process very well, and everything goes smoothly—until it doesn’t.

For example, an employee leaves for vacation. Will their stand-in be able to complete the tasks? A new worker joins the company. How will they learn the process? A standard operating procedure, or SOP, makes it possible for work to continue smoothly in these scenarios. An SOP is also a go-to resource for when questions arise.

Businesses and teams of all types regularly find themselves needing to write an SOP. In this article, we’re exploring the basics of SOPs.

What is a standard operating procedure (SOP)?

A standard operating procedure, or SOP, comprises the specified tasks, processes, and order of completion to be followed to consistently and efficiently produce high-quality outputs.

When automating a business process, one of the first things to look for is an SOP. IBM defines an SOP simply as “a set of instructions that describes all the relevant steps and activities of a process or procedure.” It’s crucial that organizations know what is needed to complete certain tasks or processes, and an SOP offers that guidance.

An SOP lays out the tasks and roles needed to achieve a policy outcome. This removes the reliance on one person to know how to complete a task, or a set of related tasks. Anyone can consult a single SOP or a group of related SOPs to determine what steps are needed.

What’s the purpose of an SOP?

Having an SOP serves multiple purposes. The clarity and consistency of a thoughtful and well-documented SOP provides efficiency, speeds things up, prevents errors, reduces costs, and, in some situations, enhances safety. Workers avoid stress, confusion, and burnout.

Used correctly, an SOP describes in detail the implementation of a business policy. Typically, it:

  • Establishes a purpose and goal for the SOP
  • Specifies the scope of the work
  • Fulfills policy requirements such as regulatory policies, internal standards, and/or industry best practices
  • Maps the applicable policy, standards, and practices to an explicit, step-by-step set of actions
  • Defines the goals that the process will accomplish, and breaks these into individual steps to achieve that goal
  • Assigns the roles responsible for carrying out each step
  • Includes documentation, forms, tools, and equipment

It is just as important to understand what an SOP is not: SOPs are not work instructions (WI), which contain specific directions for using specific tools. Work instructions flow from the steps of an SOP — an SOP describes what needs to be done, and a WI describes how it is done.

Benefits of standard operating procedures

Benefits of implementing standard operating procedures.

Broadly speaking, an SOP takes organizational policies, goals, missions, and visions and turns them into actions. The SOP gives tangible, real-world guidance to help put appropriate policies into place. In addition, it helps to provide consistency in how tasks are executed throughout an organization.

More specifically, there are a number of organizational benefits that SOPs offer:

  • Help your organization meet compliance standards
  • Adhere to schedules
  • Simplify and maximize production/output
  • Set safety standards
  • Support training efforts
  • Ensure business activities have no adverse environmental impacts
  • Prevent organizational failures

The business value of an SOP library

As you create SOPs, you are also creating a library — a place to reference all of your operating procedures. A single SOP is useful to the people directly involved in that task, but an entire body of SOPs provides high-level context and shows how tasks are related. A library of SOPs provides many benefits, such as:

  • Helps workers define and learn their role and responsibilities
  • Promotes consistent outcomes
  • Delivers a key step in the implementation and verification of business policies

The SOP guides employees in learning what is expected of them and how to fulfill their responsibilities. It also helps other groups and roles understand their actions in the larger context of the company. By assigning a role to each step, the SOP ensures the correct people/teams are completing the right steps toward a goal.

Finally, an SOP library ensures that business policies are implemented and tasks are performed according to the work instructions. SOPs support a cycle from policy to SOP to WI, and, ultimately, output of the policy.

What should an SOP include?

Key parts and components of a typical SOP include:

  • Title. A concise and descriptive title reflects the SOP’s purpose and makes it easy to find. You may catalog it with an ID number and add a creation and revision date. Additional helpful information includes division, department, role, and possibly information about the author and person approving it.
  • Table of contents. Make it easy for the reader to find information, with a guide to each section.
  • Purpose. Explain the goal of the SOP and desired results.
  • Materials and equipment. Specify the hardware, software, and other materials that may be necessary to the task.
  • Glossary. Define terms, jargon, acronyms, and abbreviations to enhance clarity.
  • Responsibilities. Specify the personnel, skills, and roles needed for the procedure.
  • Tasks and steps. This section is where the ordered work steps are specified, along with notes about dependencies, precautions that may be needed, and other such details.
  • Documentation. Include details about record keeping, retention times, and necessary paperwork.
  • References. Complicated or regulated procedures may need standards and guidelines. Health and safety warnings may also be required.

The process for writing an SOP

The process for writing an SOP is unique for each organization, so it’s important to find a drafting process that fits your team’s specific needs. Rather than getting overwhelmed by the idea of this process, utilize these tips and steps to put together a plan. Once you’ve developed a thorough plan and system, trust that system to help you prepare an effective SOP that your team can utilize.

Of course, there are a few things that you should always consider when starting to write your SOP:

  1. First and foremost, make your SOP easy to read, understand, and use. If not, it won’t be utilized and, thus, won’t be effective.
  2. Make your SOP actionable. Your audience should know exactly what actions to take to meet the specific task or goal.
  3. Make your SOP specific and measurable. This will ensure that you can evaluate the effectiveness of a process while adjusting as necessary.

Guided by these three SOP principles, here are the steps you can follow when preparing your SOP:

Keep the end objective in mind

When you’re ready to draft your SOP, start with the end in mind. Consider what you want the SOP to achieve and all activities, from beginning to end, that are necessary to meet this objective.

To help, consider mapping out the activities with a flowchart or diagram to help understand every aspect of the process. Doing so should give you an outline, scope, and sequence of the relevant procedure.

Balance depth and usability

From your outline, you should have a clear understanding of each step of the process. It’s important, however, to consider which steps can be combined, which can be eliminated, and which ones need to be expanded.

One of the most challenging aspects of drafting an SOP is ensuring that it has enough detail to be usable by your audience, but not so much detail that your audience doesn’t want to use it. Experts recommend having 5-7 steps per SOP. Before writing your SOP, work to ensure that your outline has adequate detail but not so many steps or so much depth that it’s overwhelming and unusable.

Draft the SOP

Once you’ve revised your outline and any applicable flow charts or diagrams, it’s time to draft the SOP.

Consider the list of typical SOP sections and decide which ones are necessary for the procedure. The more complicated or mission-critical the SOP, the more detail to provide.

The next step is to decide on an SOP format. Typical SOP formats lay out the SOP step by step, create a hierarchy of steps, or present a flowchart of steps. Each option has its strengths and weaknesses and no single style is perfect for all procedures.

7 tips for writing an effective SOP

What makes a good SOP? To fully reach its potential, an SOP should have the following characteristics:

Tips for writing an effective SOP.

1. Focus on the process—not the tools

An SOP should focus on the process, so try to be tool- or software-independent. As we stated earlier, SOPs are unlike work instructions (WI): work instructions are tool-specific, whereas the SOP describes the process itself.

For example, a customer wants to automate the process of promoting software to a testing environment for proper testing. An SOP may have these steps:

  1. Developers notify release engineers when code is ready to be tested.
  2. Release engineers copy code to the test environment from the development environment.
  3. Release engineers notify QA team that the code is ready to be tested.

The WIs that would flow from this SOP will be instructions around using tools such as FTP to transfer the code, and so on. To automate this, I follow the same process and use Server Automation where possible. The process stays the same, but the WIs will be different, as they’ll reflect the new tools.

2. Be concise

An SOP should consist of short, readable segments that describe how to accomplish a specific task. If there are too many steps, consider splitting sub-tasks into separate SOPs that reference each other. This results in SOPs that are easier to read and understand, and you’ll already have a working SOP library.

For example, a customer may have an SOP that describes the entire software development lifecycle process, starting with initial development through testing and deployment to production. This is a large process with many steps. Realistically, a reader seeking guidance on promoting code to production would not be interested in initial code development. A targeted SOP on migrating code to production would let the reader stay focused on this specific task.

Some logical break points in this software development process example could be:

  1. Develop code in development environment.
  2. Promote code to testing environment for proper testing.
  3. Promote code to production.

Each of these break points could be covered by its own shorter, easier to digest SOP.

3. Write for your audience

For your SOP to serve its purpose, it must be relevant, helpful, and usable for its audience. So write your SOP for your audience. Consider factors like:

  • Prior knowledge
  • Whether you’re addressing new employees
  • Relevant factors or context
  • Industry-specific language

The readers of the SOP are looking for guidance and direction, so they may not be familiar with specialized terms associated with the steps of the process. With this in mind, limit the use of jargon or other specialized terminology that may confuse or mislead the reader.

If you need to use specific acronyms or jargon, consider adding a glossary section to clarify meaning. For example, the phrase “smoke test” refers to “…the preliminary check of the software after a build and before a release to find basic and critical issues in an application before critical testing is implemented.” However, it can also refer to testing how much load given hardware can support. Instead of using the phrase, the SOP should be more explicit to avoid any potential misunderstanding.

4. Clearly define steps and roles

The SOP should have explicit steps to provide clear direction to the reader. Each step should also note the role responsible for carrying out the step. Explicit, actionable steps make it clear what needs to be done. Mapping each step to a responsible party clarifies who should do the work.

In addition, it’s important to identify the role(s) responsible to avoid situations where people think somebody else is responsible for a task. The SOP should always describe what needs to be done and who should do it.

5. Seek input from relevant team members and stakeholders

Seeking additional input isn’t necessary for all SOPs, but it’s worth considering, especially because a successful SOP must be used by the teams it applies to. Talk to them before, during, and after the drafting process for input and feedback.

6. Test your SOP

Once drafted but before it’s put in place, test the SOP to ensure that it is accurate and usable. Further, have other team members test it too. This will help you identify and deal with any problem areas before it’s put into action.

7. Review regularly

Policies and processes change over time. IT businesses change rapidly. The SOP itself may be unclear or incorrect. How do you prevent your SOPs from becoming obsolete?

SOPs should be updated as changes occur, and reviewed on a regular basis for clarity and correctness. When I automate a customer’s existing process, we often find ways to improve it. Discussing and reviewing a process often reveals steps that are redundant or obsolete and can be simplified.

Schedule a review of the SOP at least every 6 to 12 months. This will enable you to identify and update any obsolete areas, using your specific and measurable goals, to ensure that the SOP continues to be relevant and helpful to the people who use it.

Industries that use SOPs

While any organization in any industry can benefit from developing SOPs, they are most common in the following fields:

  • Healthcare. When complexity, patient safety, and compliance with regulations matter, SOPs can help tame complications, reduce errors, and lead to better care and outcomes.
  • Pharmaceuticals and biotech. As in healthcare environments, providing high-quality outputs and complying with regulations is paramount. SOPs are helpful for research and development, testing, production, quality control, and more.
  • Manufacturing and construction. SOPs help standardize how things are made, how machines are maintained, and how safety is achieved, among other things.
  • Food service. Preparing and handling food must be done safely while also maintaining freshness, presentation, taste, and nutrition. SOPs are invaluable in this field.
  • Financial services. When money is involved, protecting assets, documenting transactions, clearing audits, complying with laws and regulations, and providing exceptional customer care are necessities.
  • Aerospace. Safe flight operations, equipment maintenance, documentation, and regulation all lend themselves to SOPs.

SOPs for IT processes

As you can see, SOPs play an important role in IT. By following the best practices outlined above, your SOPs will have the right level of detail around what needs to be done. They will ensure that operations remain consistent even when people are on vacation or are new to the team. Lastly, an SOP library will help safeguard your operations and business from costly downtime or other issues.

If you need assistance with developing SOPs for your operations, fill out our form and an expert will contact you to see how we can help.

Additional resources and SOP examples

For examples of real SOPs, see these articles:

 

Unlocking Efficiency with Control-M Automation API
https://www.bmc.com/blogs/unlocking-efficiency-with-controlm-automation-api/ | Tue, 22 Apr 2025 13:51:49 +0000

Introduction

In the rapidly evolving digital world, businesses are always looking for ways to optimize processes, minimize manual tasks, and boost overall efficiency. For those that depend on job scheduling and workload automation, Control-M from BMC Software has been a reliable tool for years. Now, with the arrival of the Control-M Automation API, organizations can elevate their automation strategies even further. In this blog post, we’ll delve into what the Control-M Automation API offers, the advantages it brings, and how it can help revolutionize IT operations.

What is the Control-M Automation API?

The Control-M Automation API from BMC Software gives developers programmatic access to Control-M for automating workload scheduling and management. Built on a RESTful architecture, the API enables an API-first, decentralized method for building, testing, and deploying jobs and workflows. It offers services for managing job definitions, deploying packages, provisioning agents, and setting up host groups, facilitating seamless integration with various tools and workflows.

With the API, you can:

  • Streamline job submission, tracking, and control through automation.
  • Connect Control-M seamlessly with DevOps tools such as Jenkins, GitLab, and Ansible.
  • Develop customized workflows and applications to meet specific business requirements.
  • Access real-time insights and analytics to support informed decision-making.

Key Benefits of Using Control-M Automation API

  • Infrastructure-as-Code (IaC) for Workload Automation
    Enables users to define jobs as code using JSON, allowing for version control and better collaboration (a JSON sketch follows this list). It also supports automation through GitOps workflows, making workload automation an integral part of CI/CD pipelines.
  • RESTful API for Programmatic Job Management
    Provides a RESTful API to create, update, delete, and monitor jobs from any programming language (Python, Java, PowerShell, etc.). It allows teams to automate workflows without relying on a graphical interface, enabling CI/CD integration and process automation.
  • Enhanced Automation Capabilities
    By leveraging the Control-M Automation API, organizations can automate routine tasks, decreasing reliance on manual processes and mitigating the potential for human error. This capability is particularly valuable for managing intricate, high-volume workflows.
  • Seamless Integration
    By serving as a bridge between Control-M and external tools, the Control-M Automation API enables effortless integration with CI/CD pipelines, cloud services, and third-party applications—streamlining workflows into a unified automation environment.
  • Improved Agility
    Through Control-M Automation API integration, organizations gain the flexibility to accelerate application deployments and dynamically scale operations, ensuring responsive adaptation to market changes.
  • Real-Time Monitoring and Reporting
    The Control-M Automation API provides real-time access to job statuses, logs, and performance metrics. This enables proactive monitoring and troubleshooting, ensuring smoother operations.
  • Customization and Extensibility
    The API provides the building blocks to develop purpose-built solutions matching your exact specifications, including custom visualization interfaces and integration with specialized third-party applications.
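To make the jobs-as-code idea concrete, here is a minimal sketch of a folder definition in the API’s JSON format; the folder, server, host, and user names are illustrative placeholders:

{
  "DemoFolder": {
    "Type": "SimpleFolder",
    "ControlmServer": "ctm-demo",
    "DemoJob": {
      "Type": "Job:Command",
      "RunAs": "batchuser",
      "Host": "app-host-01",
      "Command": "echo Hello from Control-M"
    }
  }
}

A file like this can live in version control next to application code and be validated and deployed through the API as part of a CI/CD pipeline.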

Use Cases for Control-M Automation API

  • DevOps Integration
    Integrate Control-M with your DevOps pipeline to automate the deployment of applications and infrastructure. For example, you can trigger jobs in Control-M from Jenkins or GitLab, ensuring a seamless flow from development to production.
  • Cloud Automation
    Leverage the Control-M API to handle workloads across hybrid and multi-cloud setups. Streamline resource provisioning through automation, track cloud-based tasks, and maintain adherence to organizational policies.
  • Data Pipeline Automation
    Automate data ingestion, transformation, and loading processes. The API can be used to trigger ETL jobs, monitor their progress, and ensure data is delivered on time.
  • Custom Reporting and Analytics
    Extract job data and generate custom reports for stakeholders. The API can be used to build dashboards that provide insights into job performance, SLA adherence, and resource utilization.
  • Event-Driven Automation
    Set up event-driven workflows where jobs are triggered based on specific conditions or events. For example, you can automate the restart of failed jobs or trigger notifications when a job exceeds its runtime.

Example 1: Scheduling a Job with Python

Here is an example of using the Control-M Automation API to define and deploy a job in Control-M. For this, we’ll use a Python script (you’ll need a Control-M environment with API access set up).

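Below is a minimal sketch of such a script. The endpoint host, credentials, and server/host/job names are placeholders to replace with your own, and the service paths and multipart field name follow the Automation API reference as best I can confirm; check them against your version’s documentation:

# pip install requests
import json

import requests

BASE = "https://controlm-host:8443/automation-api"  # placeholder endpoint

# 1. Log in and obtain a session token
login = requests.post(
    f"{BASE}/session/login",
    json={"username": "apiuser", "password": "apipass"},  # placeholder credentials
    verify=False,  # lab setups often use self-signed certificates; verify in production
)
login.raise_for_status()
headers = {"Authorization": f"Bearer {login.json()['token']}"}

# 2. Define a folder containing one command job, in the API's jobs-as-code JSON format
definitions = {
    "DemoFolder": {
        "Type": "SimpleFolder",
        "ControlmServer": "ctm-demo",
        "DemoJob": {
            "Type": "Job:Command",
            "RunAs": "batchuser",
            "Host": "app-host-01",
            "Command": "echo Hello from Control-M",
        },
    }
}

# 3. Deploy the definitions; the response lists the folders/jobs created or updated
deploy = requests.post(
    f"{BASE}/deploy",
    headers=headers,
    files={"definitionsFile": ("demo.json", json.dumps(definitions), "application/json")},
    verify=False,
)
deploy.raise_for_status()
print(deploy.json())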

The output of the script shows the successful deployment of the folder and its jobs in Control-M.


The deployed folder can then be verified in the Control-M GUI.


Example 2: Automating Job Submission with Python

Here’s a simple example of how you can use the Control-M Automation API to submit a job using Python:

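Here is a minimal sketch that reuses the BASE endpoint and headers from Example 1; the run/order and run/status service paths follow the Automation API reference, and the server and folder names are placeholders:

# Order (submit) the previously deployed folder; Control-M returns a run ID
order = requests.post(
    f"{BASE}/run/order",
    headers=headers,
    json={"ctm": "ctm-demo", "folder": "DemoFolder"},  # placeholder server/folder names
    verify=False,
)
order.raise_for_status()
run_id = order.json()["runId"]
print("RUN ID:", run_id)

# Check the status of the ordered run
status = requests.get(f"{BASE}/run/status/{run_id}", headers=headers, verify=False)
status.raise_for_status()
print(status.json())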

Execute the above Python code and it will return the run ID.

The folder is successfully ordered and executed, which can be verified in the Control-M GUI.

Getting Started with Control-M Automation API

  • Access the API Documentation
    BMC provides comprehensive documentation for the Control-M Automation API, including endpoints, parameters, and examples. Familiarize yourself with the documentation to understand the capabilities and limitations of the API.
  • Set Up Authentication
    The API uses token-based authentication. Generate an API token from the Control-M GUI and use it to authenticate your requests.
  • Explore Sample Scripts
    BMC offers sample scripts and code snippets in various programming languages (Python, PowerShell, etc.) to help you get started. Use these as a reference to build your own integrations.
  • Start with Simple Use Cases
    Begin by automating simple tasks, such as job submission or status monitoring. Once you’re comfortable, move on to more complex workflows.
  • Leverage Community and Support
    Join the BMC community forums to connect with other users, share ideas, and troubleshoot issues. BMC also offers professional support services to assist with implementation.

Conclusion

The Control-M Automation API is a game-changer for organizations looking to enhance their automation capabilities. By enabling seamless integration, real-time monitoring, and custom workflows, the API empowers businesses to achieve greater efficiency and agility. Whether you’re a developer, IT professional, or business leader, now is the time to explore the potential of the Control-M Automation API and unlock new levels of productivity.

How to Build a CI/CD Pipeline Using Jenkins
https://www.bmc.com/blogs/ci-cd-pipeline-setup/ | Tue, 22 Apr 2025 00:00:19 +0000

An effective continuous integration (CI) and continuous delivery (CD) pipeline is essential for modern DevOps teams to cope with the rapidly evolving technology landscape. Combined with agile concepts, a well-designed CI/CD pipeline can streamline the software development life cycle, resulting in higher-quality software with faster delivery.

In this article, I will discuss what to know before building your CI/CD pipeline and walk through setting one up with Jenkins.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

What to know before building your CI/CD pipeline

What is a CI/CD pipeline?

The primary goal of a CI/CD pipeline is to automate the software development lifecycle (SDLC).

The pipeline covers many aspects of the software development process, from writing the code and running tests to delivery and deployment. Simply stated, a CI/CD pipeline integrates automation and continuous monitoring across all of these stages and connects them into a single flow.

It reduces manual tasks for the development team, which in turn reduces human errors while delivering fast results. All of this contributes to the increased productivity of the delivery team.

(Learn more about stages in a CI/CD pipeline, deployment pipelines and the role of CI/CD.)

What is Jenkins?

Jenkins is an open source server for continuous integration and continuous deployment (CI/CD) which automates the build, test, and deploy phases of software development. With numerous plugins you can easily integrate, along with choices of tools, programming languages, and cloud environments, Jenkins is highly flexible and makes it easier to efficiently develop reliable apps.

Why use Jenkins for CI/CD

You might wonder why Jenkins is a good option for building your CI/CD pipeline. Here are some of the reasons it is popular:

  • Extensive plugin support: Whatever you want to do, there is likely already a plugin for it, which speeds and simplifies your work.
  • Active open source community: Being free of licensing costs is just the beginning. It is supported by an active community, contributing solutions, advice, ideas, and tutorials.
  • Platform independent: You are not tied to a specific operating system.
  • Scalable: You can add nodes as needed and even run builds on different machines with different operating systems.
  • Integratable: Whatever tools you are using, you can likely use them with Jenkins.
  • Time-tested: Jenkins was one of the first CI/CD tools, so it is tried and true.

Jenkins CI/CD pipeline example

What does a CI/CD pipeline built using Jenkins look like in action? Here is a simple web application development process.

CI/CD pipeline Jenkins example.

Traditional CI/CD pipeline

  1. The developer writes the code and commits the changes to a centralized code repository.
  2. When the repo detects a change, it triggers the Jenkins server.
  3. Jenkins gets the new code and carries out the automated build and testing. If any issues are detected while building or testing, Jenkins automatically informs the development team via a pre-configured method, like email or Slack.
  4. The final package is uploaded to AWS Elastic Beanstalk, an application orchestration service, for production deployment.
  5. Elastic Beanstalk manages the provisioning of infrastructure, load balancing, and scaling of the required resource type, such as EC2, RDS, or others.

The tools, processes, and complexity of a CI/CD pipeline vary from this example. Much depends on your development requirements and the business needs of your organization. Typical options include a straightforward, four-stage pipeline and a multi-stage concurrent pipeline — including multiple builds, different test stages (smoke test, regression test, user acceptance testing), and a multi-channel deployment (web, mobile).

8 steps to build a CI/CD pipeline using Jenkins

In this section, I’ll show how to configure a simple CI/CD pipeline using Jenkins.

Before you start, make sure Jenkins is properly configured with the required dependencies. You’ll also want a basic understanding of Jenkins concepts. In this example, Jenkins is configured in a Windows environment.

Step 1: Install Jenkins

Download Jenkins from the official website and install it, or run it as a container using the following Docker command:

 docker run -d -p 8080:8080 jenkins/jenkins:lts


Step 2: Configure Jenkins and add necessary plugins

Configuring Jenkins is a matter of choosing the plugins you need. Git and Pipeline are commonly used tools you might want to add from the start.

Configure Jenkins CICD pipeline.

Step 3: Open Jenkins

Login to Jenkins and click on “New Item.”

Open Jenkins and create an item.

Step 4: Name the pipeline

Select the “Pipeline” option from the menu, provide a name for the pipeline, and click “OK.”

Name your CI/CD pipeline.

Step 5: Configure the pipeline

We can configure the CI/CD pipeline in the pipeline configuration screen. There, we can set build triggers and other options for the pipeline. The most important section is the “Pipeline Definition” section, where you can define the stages of the pipeline. Pipeline supports both declarative and scripted syntaxes.

(Refer to the official Jenkins documentation for more detail.)

Let’s use the sample “Hello World” pipeline script:

pipeline {
    agent any

    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

Configure Jenkins pipeline.

Click on Apply and Save. You have configured a simple pipeline!

Step 6: Execute the pipeline

Click on “Build Now” to execute the pipeline.

Execute the pipeline.

This will result in the pipeline stages getting executed and the result getting displayed in the “Stage View” section. We’ve only configured a single pipeline stage, as indicated here:

Pipeline stages getting executed.

We can verify that the pipeline has been successfully executed by checking the console output for the build process.

Verify that pipeline was executed.

Step 7: Expand the pipeline definition

Let’s expand the pipeline by adding two more stages to the pipeline. For that, click on the “Configure” option and change the pipeline definition according to the following code block.

pipeline {
    agent any

    stages {
        stage('Stage #1') {
            steps {
                echo 'Hello World'
                sleep 10
                echo 'This is the First Stage'
            }
        }
        stage('Stage #2') {
            steps {
                echo 'This is the Second Stage'
            }
        }
        stage('Stage #3') {
            steps {
                echo 'This is the Third Stage'
            }
        }
    }
}


Save the changes and click on “Build Now” to execute the new pipeline. After successful execution, we can see each new stage in the Stage view.

Adding stages to CI CD pipeline.

The following console logs verify that the code was executed as expected:

Checking that stages were added to a pipeline.

Step 8: Visualize the pipeline

We can use the “Pipeline timeline” plugin for better visualization of pipeline stages. Simply install the plugin, and inside the build stage, you will find an option called “Build timeline.”

Build timeline of the CI/CD pipeline.

Click on that option, and you will be presented with a timeline of the pipeline events, as shown below.

View of the CI CD pipeline events.

 

That’s it! You’ve successfully configured a CI/CD pipeline in Jenkins.

The next step is to expand the pipeline by integrating the following (a sketch follows the list):

  • External code repositories
  • Test frameworks
  • Deployment strategies
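As one illustration of those integrations, here is a hedged sketch of a declarative pipeline that polls a repository, runs tests, and deploys. The repository URL, build command, and deploy script are hypothetical placeholders, and the mail step assumes the Mailer plugin is installed:

pipeline {
    agent any

    triggers {
        // Poll the repo every ~5 minutes; a webhook from the repo is preferable
        pollSCM('H/5 * * * *')
    }

    stages {
        stage('Checkout') {
            steps {
                // Hypothetical repository URL
                git url: 'https://github.com/example/app.git', branch: 'main'
            }
        }
        stage('Test') {
            steps {
                // Assumes a Gradle project; use bat instead of sh on Windows agents
                sh './gradlew test'
            }
        }
        stage('Deploy') {
            steps {
                // Hypothetical deployment script
                sh './deploy.sh staging'
            }
        }
    }

    post {
        failure {
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}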

Good luck!

Cloud-based Azure CI/CD pipeline example

With the increased adoption of cloud technologies, the growing trend is to move the DevOps tasks to the cloud. Cloud service providers like Azure and AWS provide a full suite of services to manage all the required DevOps tasks using their respective platforms.

The following is a simple cloud-based DevOps CI/CD pipeline entirely based on Azure (Azure DevOps Services) tools; a minimal pipeline-definition sketch follows the walkthrough.

Example of Azure CI/CD pipeline.

Cloud-based CI/CD pipeline using Azure

  1. A developer changes existing or creates new source code, then commits the changes to Azure Repos.
  2. These repo changes trigger the Azure Pipeline.
  3. In combination with Azure Test Plans, Azure Pipelines builds and tests the new code changes. (This is the Continuous Integration process.)
  4. Azure Pipelines then triggers the deployment of successfully tested and built artifacts to the required environments with the necessary dependencies and environmental variables. (This is the Continuous Deployment process.)
  5. Artifacts are stored in the Azure Artifacts service, which acts as a universal repository.
  6. Azure application monitoring services provide the developers with real-time insights into the deployed application, such as health reports and usage information.
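As a rough illustration of steps 2 through 4, here is a minimal azure-pipelines.yml sketch; the build and test scripts and the artifact path are placeholders, not a definitive setup:

# Minimal azure-pipelines.yml sketch: commits to main trigger build, test, and publish
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: ./build.sh          # placeholder for your real build command
    displayName: 'Build'
  - script: ./run-tests.sh      # placeholder for your real test command
    displayName: 'Run tests'
  # Publish the build output as a pipeline artifact for the deployment stage
  - publish: '$(System.DefaultWorkingDirectory)/dist'
    artifact: 'app-package'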

In addition to the CI/CD pipeline, Azure also enables managing the SDLC using Azure Boards as an agile planning tool. Here, you’ll have two options:

  • Manually configure a complete CI/CD pipeline
  • Choose a SaaS solution like Azure DevOps or DevOps Tooling by AWS

CI/CD pipelines minimize manual work

A properly configured pipeline will increase the productivity of the delivery team by reducing the manual workload and eliminating most manual errors while increasing the overall product quality. This will ultimately lead to a faster and more agile development life cycle that benefits end-users, developers, and the business as a whole.

Learn from the choices Humana made when selecting a modern mainframe development environment for editing and debugging code to improve their velocity, quality and efficiency.

Related reading

MongoDB Indexes: Top Index Types & How to Manage Them https://www.bmc.com/blogs/mongodb-indexes/

MongoDB indexes provide users with an efficient way of querying data. When querying without indexes, MongoDB has to scan every record in a collection to find the documents that match the query.

In MongoDB, querying without indexes is called a collection scan. A collection scan will:

  • Result in various performance bottlenecks
  • Significantly slow down your application

Fortunately, using MongoDB indexes fixes both these issues. By limiting the number of documents that must be examined, indexes improve the overall performance of the application.
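
You can verify the difference with the explain() method, which reports the query plan the server chose. Here is a minimal sketch, assuming the studentgrades collection created later in this tutorial (plan details vary by MongoDB version):

// With no index on name, the winning plan is a collection scan (COLLSCAN)
db.studentgrades.find({name: "Harry"}).explain("queryPlanner")

// After adding an index, the same query uses an index scan (IXSCAN)
db.studentgrades.createIndex({name: 1})
db.studentgrades.find({name: "Harry"}).explain("queryPlanner")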

In this tutorial, I’ll walk you through different types of indexes and show you how to create, find and manage indexes in MongoDB.

(This article is part of our MongoDB Guide. Use the right-hand menu to navigate.)

What are indexes in MongoDB?

MongoDB indexes are special data structures that make it faster to query a database. They speed up finding and retrieving data by storing a small part of the dataset in an efficient way — you don’t have to scan every document in a data collection.

MongoDB indexes store the values of the indexed fields outside the data collection and keep track of their location on disk. The indexed fields are ordered by value, which makes equality matches and range-based queries efficient. You can define indexes at the collection level, and indexes on any field or subfield in a collection are supported.

You can manage the indexes on your data collections using either the Atlas CLI or the Atlas UI; either way, well-designed indexes make query execution more efficient.
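
Because index entries are kept in sorted order, a range query can walk just the relevant slice of the index instead of examining every document. A small sketch, again assuming the sample collection used later in this tutorial:

// Only the index entries with scores from 80 up to (but not including) 95 are examined
db.studentgrades.createIndex({score: 1})
db.studentgrades.find({score: {$gte: 80, $lt: 95}})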

Why do we need indexes in MongoDB?

Indexes are invaluable in MongoDB. They are an efficient way to organize information in a collection and they speed up queries, returning relevant results more quickly. By using an index to group, sort, and retrieve data, you save considerable time. Your database engine no longer needs to sift through each record to find matches.

What are the disadvantages of indexing?

Indexing does have some drawbacks. Performance on writes is affected by each index you create, and each one takes up disk space. To avoid collection bloat and slow writes, create only indexes that are truly necessary.

How many indexes can you use?

MongoDB indexes are capped at 64 per data collection. In a compound index, you can only have 32 fields. The $text query requires a special text index — you can’t combine it with another query operator requiring a different type of special index.

Working with indexes

For this tutorial, we’ll use the following data set to demonstrate the indexing functionality of MongoDB:

use students
db.createCollection("studentgrades")
db.studentgrades.insertMany(
    [
        {name: "Barry", subject: "Maths", score: 92},
        {name: "Kent", subject: "Physics", score: 87},
        {name: "Harry", subject: "Maths", score: 99, notes: "Exceptional Performance"},
        {name: "Alex", subject: "Literature", score: 78},
        {name: "Tom", subject: "History", score: 65, notes: "Adequate"}
    ]
)
db.studentgrades.find({},{_id:0})

Result

Data set to demonstrate indexing functionality of MongoDB.

Are MongoDB indexes unique?

When creating documents in a collection, MongoDB creates a unique index using the _id field. MongoDB refers to this as the Default _id Index. This default index cannot be dropped from the collection.

When querying the test data set, you can see the _id field which will be utilized as the default index:

db.studentgrades.find().pretty()

Result:

The _id field is the Default _id Index.

How to create an index in MongoDB

To create an index in MongoDB, use the createIndex() method with the following syntax:

db.<collection>.createIndex(<Key and Index Type>, <Options>)

When creating an index, define the field to be indexed and the direction of the key (1 or -1) to indicate ascending or descending order.

For a single-field index, the direction you choose has little practical impact because MongoDB can traverse the index in either direction; ascending (1) is the conventional choice.

Another thing to keep in mind is the index names. By default, MongoDB will generate index names by concatenating the indexed keys with the direction of each key in the index using an underscore as the separator. For example: {name: 1} will be created as name_1.

The best practice is to use the name option to define a custom index name when creating an index. Indexes cannot be renamed after creation. The only way to rename an index is to first drop that index, which we show below, and recreate it using the desired name.
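
For instance, to give an auto-named index such as name_1 a friendlier name, you would drop it and recreate it with the name option (a quick sketch):

db.studentgrades.dropIndex("name_1")
db.studentgrades.createIndex({name: 1}, {name: "student name index"})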

createIndex() example

Let’s create an index using the name field in the studentgrades collection and name it student name index.

db.studentgrades.createIndex(
{name: 1},
{name: "student name index"}
)

Result:

Creating an index called student name index.

Finding indexes in MongoDB

You can find all the available indexes in a MongoDB collection by using the getIndexes() method. This will return all the indexes in a specific collection.

db.<collection>.getIndexes()

getIndexes() example

Let’s view all the indexes in the studentgrades collection using the following command:

db.studentgrades.getIndexes()

Result:

getIndexes() example in MongoDB.

The output contains the default _id index and the user-created index student name index.

How to list indexes in MongoDB

You can list indexes on a data collection using Shell or Compass. This command will give you an array of index documents:

db.collection.getIndexes()

An alternative is to use MongoDB Atlas UI. Open a cluster and go to the Collections tab. Select the database and collection, then click on Indexes to see them listed.

Lastly, you can use the following command in MongoDB Atlas CLI to see the indexes:

atlas clusters index list --clusterName <your-cluster> --db <database> --collection <collection>

How to delete indexes in MongoDB

To drop or delete an index from a MongoDB collection, use the dropIndex() method while specifying the index name to be dropped.

db.<collection>.dropIndex(<Index Name / Field Name>)

dropIndex() examples

Let’s remove the user-created index with the index name student name index, as shown below.

db.studentgrades.dropIndex("student name index")

Result:

Example of how to delete a MongoDB index with a name.

You can also use the index field value for removing an index without a defined name:

db.studentgrades.dropIndex({name:1})

Result:

Example of how to delete a MongoDB index without a name.

The dropIndexes command drops all the indexes in a collection except the default _id index.

db.studentgrades.dropIndexes()

Result:

Example of how to delete all MongoDB indexes.

What are the different types of indexes in MongoDB?

The different types of indexes in MongoDB.

MongoDB provides different types of indexes that can be utilized according to user needs. Here are the main index types in MongoDB:

  • Single field index
  • Compound index
  • Multikey index

In addition to the popular index types mentioned above, MongoDB also offers some special index types for targeted use cases:

  • Geospatial index
  • Text index
  • Hashed index

Single field index

These user-defined indexes use a single field in a document to create an index in an ascending or descending sort order (1 or -1). In a single field index, the sort order of the index key does not have an impact because MongoDB can traverse the index in either direction.

Example

db.studentgrades.createIndex({name: 1})

Result:

Creating a single field index in MongoDB.

The above index will sort the data in ascending order using the name field. You can use the sort() method to see how the data will be represented in the index.

db.studentgrades.find({},{_id:0}).sort({name:1})

Result:

Use sort() method to see data.

Compound index

You can use multiple fields in a MongoDB document to create a compound index. This type of index uses the first field for the initial sort and then sorts by each subsequent field.

Example

In the following compound index, MongoDB will:

  • First sort by the subject field
  • Then, within each subject value, sort by the score field in descending order
db.studentgrades.createIndex({subject: 1, score: -1})

MongoDB compound index example.

The index would create a data structure similar to the following:

db.studentgrades.find({},{_id:0}).sort({subject:1, score:-1})

Result:

Index data structure.

Multikey index

MongoDB supports indexing array fields. When you create an index for a field containing an array, MongoDB will create separate index entries for every element in the array. These multikey indexes enable users to query documents using the elements within the array.

MongoDB automatically creates a multikey index when it encounters an array field, without requiring the user to explicitly define the multikey type.

Example

Let’s create a new data set containing an array field to demonstrate the creation of a multikey index in MongoDB.

db.createCollection("studentperformance")
db.studentperformance.insertMany(
[
{name: "Barry", school: "ABC Academy", grades: [85, 75, 90, 99] },
{name: "Kent", school: "FX High School", grades: [74, 66, 45, 67]},
{name: "Alex", school: "XYZ High", grades: [80, 78, 71, 89]},
]
)
db.studentperformance.find({},{_id:0}).pretty()

Result:

Creating a multikey index dataset in MongoDB.

Now let’s create an index using the grades field.

db.studentperformance.createIndex({grades:1})

Result:

Creating a multikey index in MongoDB.

The above code will automatically create a multikey index in MongoDB. When you query for a document by matching the whole array field (grades), MongoDB first uses the index to find documents whose array contains the first element given in the find() method and then filters those candidates for the full matching array.

For instance, let’s consider the following find query:

db.studentperformance.find({grades: [80, 78, 71, 89]}, {_id: 0})

MongoDB will first use the multikey index to search for documents where the grades array contains the first element (80) in any position. Then, within those selected documents, it selects the documents whose arrays match the query in full.
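
The same multikey index also serves queries that match a single array element. For example, the following query returns every document whose grades array contains the value 80 in any position:

db.studentperformance.find({grades: 80}, {_id: 0})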

Geospatial Index

MongoDB provides two types of indexes to increase the efficiency of database queries when dealing with geospatial coordinate data:

  • 2d indexes, which use planar geometry and are intended for the legacy coordinate pairs used in MongoDB 2.2 and earlier.
  • 2dsphere indexes that use spherical geometry.

Syntax:

db.<collection>.createIndex( { <location Field> : "2dsphere" } )
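
As a brief sketch, assume a hypothetical places collection that stores GeoJSON points in a location field. You could index it and run a proximity query with the $near operator:

db.places.createIndex({location: "2dsphere"})
db.places.find({
    location: {
        $near: {
            $geometry: {type: "Point", coordinates: [-73.97, 40.77]},
            $maxDistance: 1000    // in meters
        }
    }
})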

Text index

The text index type enables you to search the string content in a collection.

Syntax:

db.<collection>.createIndex( { <Index Field>: "text" } )
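
For example, you could index the notes field of the studentgrades collection used earlier and search it with the $text operator (a collection can have only one text index):

db.studentgrades.createIndex({notes: "text"})
db.studentgrades.find({$text: {$search: "Exceptional"}}, {_id: 0})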

Hashed index

The MongoDB hashed index type supports hash-based sharding functionality by indexing the hash value of the specified field.

Syntax:

db.<collection>.createIndex( { <Index Field> : "hashed" } )
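
A minimal example on the sample collection; note that hashed indexes support equality matches but not range queries:

db.studentgrades.createIndex({name: "hashed"})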

MongoDB index properties

You can enhance the functionality of an index further by utilizing index properties. In this section, you will get to know these commonly used index properties:

  • Sparse index property
  • Partial index property
  • Unique index property

Sparse index property

The MongoDB sparse property lets an index skip documents that lack the indexed field, creating an index that contains entries only for the documents in which the indexed field is present.

Example

db.studentgrades.createIndex({notes:1},{sparse: true})

Result:

Example of sparse index property in MongoDB.

In the studentgrades collection, creating this sparse index on the notes field indexes only two documents, because only two documents contain the notes field.

Partial index property

The partial index functionality allows users to create indexes that match a certain filter condition. Partial indexes use the partialFilterExpression option to specify the filter condition.

Example

db.studentgrades.createIndex(
{name:1},
{partialFilterExpression: {score: { $gte: 90}}}
)

Result:

Example of partial index property in MongoDB.

The above code will create an index for the name field but will only include documents in which the value of the score field is greater than or equal to 90.

Unique index property

The unique property enables users to create a MongoDB index that only includes unique values. This will:

  • Reject any duplicate values in the indexed field
  • Limit the index to documents containing unique values

Example

db.studentgrades.createIndex({name:1},{unique: true})

Result:

Example of unique index property in MongoDB.

The index created above enforces uniqueness on the name field.
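
With the unique index in place, inserting a document that repeats an existing name value is rejected. A sketch (the exact error text varies by version):

db.studentgrades.insertOne({name: "Barry", subject: "Chemistry", score: 88})
// Fails with a duplicate key error similar to:
// E11000 duplicate key error collection: students.studentgrades index: name_1 dup key: { name: "Barry" }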

Indexes recap

That concludes this MongoDB indexes tutorial and guide. You learned how to create, find, and drop indexes, use different index types, and create complex indexes. These indexes can further enhance the functionality of MongoDB databases, improving the performance of applications that rely on fast database queries.

Related reading

ChatOps Explained: How ChatOps Supports Collaboration https://www.bmc.com/blogs/chatops/

Conway’s Law: Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.— Melvin E. Conway

The goal of your company is to continue to advance—yet many organizations cite situational awareness as a primary concern. If success means finding 100 ways a light bulb cannot be created, then you don’t want every team independently rediscovering way number 12.

Information silos exist between development teams and operations, operations support, and personnel. These silos occur when one team in an organization has lots of information on which to act, while others, who are making similar decisions, do not have access to that information.

Progress will come sooner when the teams communicate what they have done, and what their results were. This is where ChatOps comes in. Let’s explore:

What is ChatOps?

Sharing is at the heart of ChatOps. ChatOps is a method of accessing and distributing information in ways that are immediate and easy—no more chain-of-command information processing.

Information silos can occur because some teams outperform others. Some teams may happen to work with a client with a very unique set of problems, so they work through and create a unique solution. The silo happens because information and experiences are not shared among teams.

Despite the abundance of collaboration platforms and project management tools, manual efforts to improve cross-departmental workflows are not enough to decrease the size of the information silos. This results in manual information processes that can:

  • Slow down work
  • Provide the wrong answers or context
  • Negatively affect decision making
  • Reduce overall productivity
  • Waste money and time

If one research team is conducting trials and learns a test does not work, it benefits the entire company to share that information so other teams are not making the same mistakes or wasting money conducting the same trials.

How ChatOps works

In software development and IT operations, ChatOps helps devs, ops, and support staff collaborate effectively. ChatOps uses tools and automation that promote easy communication within context, which is essential for successful collaboration.

When information is stored in chats, users don’t need to communicate the status or context of the operational tasks with their team—the information is already visible in the chatroom. Every member can view the shared context of the situation, making decisions and taking actions accordingly.

The communication platform logs the progress as teammates work, and that progress is visible in real-time. It eliminates the silos and barriers between team members and does the same for cross-functional departments who work on the same projects.

ChatOps architecture

ChatOps is a transparent collaboration model that connects the software development tasks of:

  • Communication
  • Execution
  • Operations

Think of ChatOps as a system that integrates the personnel, the existing work and technologies, processes, and communications into a unified conversational environment. The ChatOps communication structure allows users to execute actions via internal robots on tools that are integrated within the communication platform.

(If we look at the ChatOps architecture, it’s loosely similar to the structure of cluster orchestration that Kubernetes is built on.)

Key features of ChatOps

Numerous features make ChatOps an invaluable tool for software developers. For example, users value its:

  • Real-time collaboration: Teams can easily communicate within their group and with other teams through chat tools integrated with operational workflows. People can converse, make decisions, and take action in real time.
  • Automation of tasks: Repetitive software development tasks, like deploying code, monitoring operations, and sending out patches and issue resolutions, are automated for speed and reduced errors.
  • Command execution from chat: People can use the chat interface to directly run commands, enjoying immediate access to scripts and updates on status, along with system actions within the chat context.
  • Integration with DevOps tools: ChatOps creates a centralized operations hub by integrating with Jenkins, GitHub, Nagios, and other IT management and DevOps tools.
  • Incident management and alerts: Real-time incident detection gives teams an immediate alert when issues arise so they can effectively collaborate around a fast response.
  • Transparency and visibility: Teams have a clear view of operations so they can document conversation logs, decisions, and actions taken. Tracking progress and following up on tasks is easier.
  • Collaboration beyond IT: ChatOps supports cross-functional operations to improve customer support, marketing, and other organizational functions.

Popular ChatOps tools

Common ChatOps tools include:

  • Wiki pages
  • Chat platforms, like Slack and Keybase
  • Chatbots
  • Automated notifications

Wiki pages are a great way for teams to contribute what they have learned to a single source. They have group knowledge, shared knowledge, and searchable knowledge. Each of these increases communication and transparency among teams. When a team member hypothesizes a new way for the light bulb not to work, they can research and find someone who has already discovered the method.

(ChatOps also supports and benefits from your knowledge management processes.)

Chatbots in ChatOps

Chatbots are configurable through custom scripts and plugins to automate tasks. These chatbots can:

  • Send messages when a process has completed
  • Request information from other people
  • Keep a channel of communication alive by highlighting when no one has chatted in the room for some time

The collaboration and communication that take place are responses to the operational tasks that are tracked and visible within the communication platform.

Hint: Create automated chat messages when new posts are made to certain high-value wiki articles.
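
To make the idea concrete, here is a minimal, framework-agnostic sketch in JavaScript. The command names, the room.post() interface, and the handlers are all hypothetical; a real bot would wire this dispatcher to a chat platform’s API and to your CI/CD tooling:

// Hypothetical registry mapping chat commands to automated actions
const commands = {
  // "!deploy my-app" kicks off a deployment and reports progress in-channel
  deploy: async (args, room) => {
    room.post(`Deploying ${args[0]}...`);
    // (assumption) call your CI/CD system's API here
    room.post(`Deployment of ${args[0]} finished.`);
  },
  // "!status my-app" surfaces build status where everyone can see it
  status: async (args, room) => {
    room.post(`Build status for ${args[0]}: fetched from the CI server`);
  },
};

// Dispatch messages that start with "!" so both the command and its
// output stay visible to the whole room, preserving shared context
async function onMessage(message, room) {
  if (!message.text.startsWith("!")) return;
  const [name, ...args] = message.text.slice(1).split(/\s+/);
  const handler = commands[name];
  if (handler) {
    await handler(args, room);
  } else {
    room.post(`Unknown command: ${name}`);
  }
}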

Benefits of ChatOps

There are many reasons to implement ChatOps. Here are the five best.

1. Automation

Set up conversations to automatically trigger actions. Instead of forcing every team member to maintain a repository of manual scripts, store and automate the code from a well-documented and centralized communication platform.

Configure the chatbot to:

  • Pick up commands in real-time
  • Execute actions
  • Update the console to keep every team member aligned

Replace the error-prone practice of manual code execution and progress tracking with effective automation capabilities of intelligent chatbots. This saves users the time and hassle of executing repetitive manual operations tasks—an important step towards truly collaborative, effective teamwork.

2. Contextual collaboration

Silos are created when team members fail to communicate information in the right context. Due to the proliferation of tooling and complex processes involved in software development and IT operations, extracting and presenting accurate context with every conversation is a near-impossible task.

Stop relying on multiple channels to receive contextual notifications. Stop manually connecting the dots to make sense of the available information. Let ChatOps introduce information into an always up-to-date environment. This allows users to communicate constructive, concrete, and information-driven feedback and actions between other users—without relying on assumptions.

3. Workplace transparency

Honest and transparent communication is critical in incident management situations. In tense situations, organizations struggle to:

  • Prevent the blame-game among employees
  • Encourage them to collaborate effectively

Even when team members are open and honest in their communications, correct and straightforward statements—“Well, it works on my machine”—do progress no favors.

ChatOps allows the conversations to align with the reality of the project situations. If a build or feature runs on one system but not on others, the default reaction of blaming other users is replaced by collaborative efforts to identify the issues that prevent consistent results across all users and machines. This practice is easier when the associated information is accessible via a centralized, common communication platform that all team members can see and trust.

4. Productivity

ChatOps supports collaboration between distributed teams by making contextual information available to all users in real-time. Without a common platform to visualize and discuss project progress, users with limited access to siloed tools and technology processes may not be able to communicate the necessary information.

Since ChatOps operates as an automated environment to execute commands and trigger actions, users no longer have to rely on time-consuming manual scripts to perform the same actions. The time savings translate to convenience and resources for more meaningful job tasks.

5. Employee engagement

When all users are on the same page, it is easier for everyone to contribute to project discussions:

  • Information-driven collaboration removes the bottlenecks and delays in accessing the right information to perform the appropriate actions.
  • Users gain confidence as the build-measure-learn-iterate process keeps everyone informed regarding potential issues during the development lifecycle or project progress.
  • Since users are relying on intelligent bots to share accurate information, the psychological barrier in requesting and reminding other team members for appropriate information/action is removed organically.

Ultimately, ChatOps enables team members to communicate on more important matters, such as devising strategies and making collective decisions—instead of requesting data and communicating incomplete information.

From a security and compliance perspective, ChatOps offers the added advantage of documenting IT Ops tasks and establishing a communication mechanism for proactive issue resolution.

ChatOps is a cultural shift

Adopting ChatOps requires external tools to automate operational tasks—but that isn’t nearly enough. Successful ChatOps requires cultural changes. Users across departments, functions, and teams must embrace the mechanism that involves real-time collaboration and progress on mutual project responsibilities.

Here’s what to expect in the ChatOps culture shift:

  • A trustworthy structure. Trust in the communication technology is required if it is to be accepted as a gateway to operational features that were previously only open to individual users in siloed console environments.
  • Additional investments in technologies. You may need extra tooling or solutions to connect and integrate the distributed siloed tools in secure collaboration environments.
  • Humor as diffuser. Humor may emerge as an integral part of the conversation, especially for operational tasks that were previously communicated via formal email and direct message statements. You’ll start using emoticons, short statements, and emoji reactions to lighten the formal tone, ask questions, and project your understanding and awareness of the message.
  • Goodbye to tense war rooms. Since any project’s progress is carefully tracked within the communication console, the chatroom won’t turn into a war room. All necessary information is documented, updated, and accessible to team members in real time. Debates become far less likely to heat up past the point of no return.

Chatbots are not ChatOps

Chatbots and centralized communication platforms have existed for years. These alone, of course, are not enough to ensure the benefits of real time context and collaboration.

Embracing ChatOps as a culture promises that information sharing will eliminate the barriers between siloed Devs, Ops and IT Support environments.

Additional resources

For related reading, explore these links:

What is Azure DevOps? A beginner’s guide https://www.bmc.com/blogs/azure-devops/

DevOps has paved the way for faster and more agile software development processes by unifying teams, processes, and technologies to create an ever-evolving software development lifecycle (SDLC). This has led to more robust and efficient SDLCs, now capable of handling any user request, market demand, or technological issue.

A range of tools is available in the market to facilitate DevOps, such as CI/CD tools, version control systems, artifact repositories, IaC tools, and monitoring tools. With the increased demand for cloud-based technologies, DevOps tools have also transitioned to cloud offerings. These cloud offerings can be used by teams spread across the world with nearly unlimited scalability and efficiency.

In this article, we will explore such a cloud-based DevOps service offered by Microsoft called Azure DevOps.

(Explore our DevOps Guide, a series of articles & tutorials.)

The five Azure DevOps services

What is Azure DevOps used for?

Azure DevOps is a service offered by Microsoft based on the Azure cloud computing platform that provides a complete set of tools to manage software development projects. It consists of:

  • Five key services
  • An extensive marketplace that contains extensions to further extend the Azure DevOps platform and integrate with third-party services

Azure DevOps core services

Core Azure DevOps services include:

  1. Azure Boards
  2. Azure Pipeline
  3. Azure Repos
  4. Azure Test Plans
  5. Azure Artifacts

Azure DevOps comes in two variants:

  • The cloud-based Azure DevOps service
  • The Azure DevOps Server

The Azure DevOps Server, previously known as the Team Foundation Server (TFS), is a DevOps server solution that is targeted for on-premise deployments. It consists of all the tools available in the cloud-based Azure DevOps service to power any DevOps pipeline.

This server also offers a free variant called Azure DevOps Server Express, aimed at individual developers and small teams of up to five members. It can be installed in any environment.

Azure guarantees 99.9% availability for all paid DevOps services, including paid user-based extensions. It also guarantees 99.9% availability for executing load testing and build and deploy operations in paid Azure Test Plans (Load Testing Service) and Azure Pipelines.

Azure DevOps pricing

The cost will be one of the primary concerns when considering any DevOps solution.

The cloud-based Azure DevOps services come as both free and paid options. Additionally, the service offerings are provided in two varieties as individual services and complete service bundles.

Azure DevOps Comparing Services & Pricing

In addition to the above, there are special pricing options for open-source projects and Visual Studio subscribers to get free access to the Azure DevOps services depending on the subscription level.

(Visit the Azure DevOps pricing page for details & up-to-date pricing.)

Azure DevOps registration

Registering for Azure DevOps is a simple and straightforward process that requires only a Microsoft account. Simply visit this page and click on “Start for free.”

When registering, you will need to provide some additional information such as organization name, project name, version control type (repo), etc.

  • Organization refers to the Azure DevOps account name. The organization can contain multiple projects.
  • Projects allow users to separate projects, control access, and split the code, tests, and pipelines to keep them within the assigned projects. A project can be either public or private, with Git or Team Foundation server as the version controlling system. Additionally, projects can be configured with a work item process like Agile or Scrum that will be used in Azure Boards to manage the project.

Once the registration is complete, you will gain a dedicated organization URL in the following notation:

https://<organization name>.visualstudio.com

Users can manage all their projects and use the DevOps services by visiting this URL.

Azure DevOps Services

Azure DevOps consists of five services—which we’ll explore in this section. All these services can be grouped under individual projects so that users can have proper isolation between different projects using different technologies and catering to different needs.

Project summary view:

Azure DevOps Project summary view

Azure Boards

The Boards service in Azure DevOps is the management hub of the project.

Boards can be used to plan, track, and collaborate between team members. With Azure Boards, teams can create work items, Kanban boards, backlogs, dashboards, and custom reports to track all aspects of the project.

You can also customize boards to suit the exact workflow requirements and gain meaningful insights through built-in reporting and monitoring tools. Additionally, Azure Boards comes with first-party integrations with services like Microsoft Teams and Slack, which enables efficient ChatOps.

Azure Repos

The Azure Repos are code repositories that enable users to manage their codebases. These are private and cloud-based repositories that support both Git and TFVC version control systems.

Azure DevOps Repos

Azure Repos can support projects of any scale, from individual hobby projects to enterprise developments. They also consist of the following features:

  • Support for any Git client (IDE, Text Editor, CLI)
  • Semantic code search
  • Collaboration tools to interact with other team members
  • Direct integration with CI/CD tools
  • Branch Policies to enforce code quality standards

Because Azure Repos is platform-agnostic, users can interact with their repositories from any operating system using any IDE or tool they are already familiar with.

Azure Pipelines

Pipelines are the CI/CD tool that facilitates automated building, testing, and deployment. Azure Pipelines supports any programming language or platform, enabling users to create pipelines for Windows, Linux, and macOS using cloud-hosted agents.

Azure DevOps Pipeline

These pipelines are easily extensible through the extensions available in the marketplace. In addition, they support advanced workflows that can be used to facilitate:

  • Multi-phase builds
  • Test integrations
  • Custom reporting functions

On top of that, Azure Pipelines provide native container support, enabling them to push containers to container registries from the pipeline directly. The pipelines offer flexibility to deploy to multiple environments from Kubernetes clusters to serverless functions and even deploy to other cloud providers such as AWS or GCP.

Azure Test Plans

Test Plans is the Azure DevOps service that allows users to integrate a cloud-based testing platform to manage all the testing requirements such as:

  • Planned manual testing
  • User acceptance testing (UAT)
  • Exploratory testing
  • Gathering feedback from stakeholders

Azure Test Plans allow users to create test plans and execute test cases within a pipeline. This can be combined with Azure Boards to create a test that can be executed from the Kanban boards and plan and author tests collaboratively.

Test Plans supports creating UAT plans for user acceptance testing and assigning users from the DevOps platform. It also supports the Test and Feedback browser extension, which easily enables exploratory testing for interested parties without third-party tools. Furthermore, Test Plans enables users to test on any platform while having end-to-end traceability and powerful data-gathering tools to diagnose and remedy identified issues.

Test Plans is the only service in Azure DevOps with no free tier; its rich toolset is accessible only to commercial users.

Azure Artifacts

This is the artifact library service by Azure DevOps that can be used to create, store, and share packages (development artifacts). Azure Artifacts enable users to integrate fully featured package management functionality to CI/CD pipelines.

Moreover, Azure Artifacts enable users to manage all package types like npm, Maven, etc., and keep them organized in a central library scoped only to the specific project.

Azure Cloud Services

Azure DevOps is one of the leading cloud-based DevOps services that offer a robust and feature-rich toolset to create and manage a complete DevOps process. It enables users to:

  1. Cater to any DevOps need regardless of the programming language, technology, or the targeted platform.
  2. Deploy anywhere from containers to third-party clouds.

Azure DevOps facilitates all these with unparalleled scalability and availability without the hassle of maintaining specific software to carry out separate DevOps tasks.

Azure DevOps vs. GitHub

Should you use Azure DevOps instead of GitHub? Each offers something distinct, and the choice depends on your situation and the capabilities and benefits each one brings. Both Azure DevOps and GitHub support Git and collaborative software development in both public and private modes.

Azure DevOps is an enterprise-level software development management tool with an integrated build server and comprehensive tools that support project creation, software development and testing, and ongoing management and maintenance. It also offers advanced security and compliance features, along with governance capabilities.

It contrasts with GitHub, a lightweight option that is friendly to small teams and open-source projects. GitHub’s open-source roots give it broad community support, built-in social features, a developer-friendly experience, and a large and active user base. The community around Azure DevOps is smaller, consisting mostly of Microsoft-focused enterprise users.

Azure DevOps vs. Jira

The Jira software development tool, available as SaaS or on-premises, is another option for those evaluating the Azure DevOps development platform. Jira shares some features with Azure DevOps, such as extensibility, Scrum and Kanban boards, customizable workflows, roadmaps for project management with dashboards and reporting, version control automation and orchestration, and repository management.

Jira is different from Azure DevOps in that its core strength is supporting Agile project management, cross-team collaboration, and tracking issues across multiple development platforms. It has different versions for software, business, and IT teams, with an available mobile app. It offers advanced search for finding code issues and best-practices playbooks.

Azure DevOps is a complete solution with built-in CI/CD, along with Git and TFVC support, code repositories, and testing tools. It includes Agile tools and deep DevOps integration, including integrated Azure Pipelines. With Jira, you need third-party CI/CD and testing tools, as well as those for version control code repositories.

In choosing between Jira and Azure DevOps, you will need to look at functionality, integration, performance, and available support.

Related reading

Change Management Job Description: Roles and Responsibilities https://www.bmc.com/blogs/change-management-roles/

Change enablement, also known as change management, is at the core of ITIL® service transition. The maturity of an organization depends on how well it facilitates change requests (CR) in response to end-user, technical, functional or wider business requirements.

Careful change management helps reduce the risk exposure and disruption proactively when new changes are instituted within your organization’s operations and technologies.

ITIL provides an effective framework guideline to conduct change enablement and management activities. In this article, we will discuss the key roles and responsibilities involved in change management according to ITIL guidelines. Even if you don’t adhere to the ITIL framework, these roles help clarify your change management processes.

We’ll look at:

Change manager job description

Change managers are employees leading the change management programs. These leaders have a background in conducting structured change efforts in organizations.

A certification verifying change management skill is typically desired for a change manager, who will be involved in the following key activities:

  • Leading the change management activities within a structured process framework.
  • Designing the strategic approach to managing change and support operations that fall within the domain of change management.
  • Evaluating the change impact and organizational readiness to limit potential risk.
  • Supporting training and communication as part of change management. Activities may include designing or delivering specialized training resources to the appropriate userbase.
  • Evaluating the risk of change and providing actionable guidelines on reducing the impact.
  • Evaluating resistance in adopting the change at the user, process, and technology level.
  • Managing the change portfolio, which allows the organization to prepare for and successfully adopt the change.
  • Authorizing minor change requests and coordinating with the Change Advisory Board for changes presenting higher risk.
  • Conducting post-implementation reviews to assess the decisions and performance related to the change request.

Change Advisory Board (CAB)

This is the team that controls the lifecycle of change across all processes as specified within ITIL Service Transition function. The Change Advisory Board involves high-level members from different domains, including information security, operations, development, networking, service desk, and business relations, among others.

Together, the CAB is responsible for the following activities:

  • Supporting the change manager in decisions for major changes.
  • Evaluating Requests for Change (RFCs), the available resources, impact of change, and organizational readiness.
  • Validating that appropriate tests and evaluation are performed before high-risk changes are approved.
  • Documenting relevant processes and activities.
  • Supporting the design of change implementation scheduling.
  • Reviewing a change implementation process.
  • Supporting the design and approving new change process models.
  • Using the diverse knowledge base, skills, and expertise of each CAB member to provide a unique perspective before a decision is finalized.

Challenges of traditional CABs

A CAB can face numerous criticisms, threats, roadblocks, and problems. Change is not easy and often is not welcome. Rather than be frustrated, you can disrupt, disarm, and plan an effective response.

Don’t create a CAB that has too many stakeholders, too many meetings, or that does not prioritize efficiency. You don’t want to create more bureaucracy and conflict points.

Ensure the CAB area of responsibility is not overly broad. If you have to manage too many areas, you will diffuse your effectiveness. Have a clear focus area and resist attempts to expand or blur it.

Be careful how your CAB looks at risk. Risks associated with change need to be balanced against the risks of not changing or delaying change. Consider risks to customers, your competitiveness, and future innovations. Rather than considering risk as a stop sign, look for ways to mitigate risk.

Emergency Change Advisory Board (ECAB)

The ECAB is a smaller body within the CAB that deals specifically with emergency changes. (Emergency changes are one of three change types according to ITIL.) When the emergency change request is raised, the change manager must conduct a thorough analysis and evaluation before finalizing a decision together with the CAB.

A dedicated ECAB body ensures that the necessary resources and expertise within the CAB is available to make the right decision at the right time. The ECAB is responsible for performing activities similar to the CAB but focused primarily on emergency changes. These include:

  • Assessing the relative importance of the emergency change request.
  • Supporting the change manager during impact and risk assessment.
  • Reviewing the change request, risk analysis, and impact evaluation before the decision is finalized.
  • Approving or rejecting an emergency change.
  • Evaluating the efficacy of the emergency change implementation process.

Change process owner

The change process owner can have overlapping responsibilities with the ITIL Process Owner, specifically within the function of change management. (For this reason, a separate change process owner may not be required for small and midsize business organizations.)

The change process owner is responsible for defining and supporting the overall process involved in change management. The activities include:

  • Devising the process, in support with the change manager and CAB.
  • Communicating the guidelines to appropriate stakeholders.
  • Facilitating cross-departmental collaboration necessary for change management.
  • Evaluating and improving the change management process.
  • Reporting on the performance of the process to CAB and change manager.
  • Initiating process improvements.

The change management team

Change management functions are distributed in teams across departments and ITIL functions. Individuals within these teams may be responsible for managing change within a specific organizational unit considering their expertise, skills, and background.

Specific change management teams may consist of three roles:

  • Change requestor. The individual responsible for initiating, preparing, and submitting a change request. This person may support collection of the necessary business information and engage with the concerned stakeholders before the change request is assigned to the change owner. Additionally, the change requestor works with the change management team to support impact assessment by collecting data and communicating with other stakeholders.
  • Change owner/assignee/implementor. This individual is deemed the owner of the CR throughout the request lifecycle. The change owner may also take the role of the change requestor and support the process of creating and submitting a change request. The change owner ensures that the necessary tests have been performed so that the change request is followed up with appropriate urgency. The change owner also documents the process across the request life cycle.
  • Change approver. The individual responsible for the initial approval of a change request before it is sent to the change manager and CAB for a final decision. The change approver would communicate with other stakeholders and support the documentation before the request is sent to the change manager. This role is also generic and may be occupied by different individuals at various hierarchical levels of the change management framework. At each level, the Change Approver ensures that the change request has reached the necessary standard of readiness to warrant a decision by the change manager and the CAB.

Difference between a change manager and a project manager

The roles of a change manager vs. a project manager are distinct, focusing on different facets of supporting improvements and organizational change. That said, they are often complementary.

A change manager focuses on people and teams and how they can move from the current state to one that is better for the future. They support people, understand potential impacts of change, and develop communications, training, and support to facilitate adapting to what is new. Ultimately, they promote the adoption of valuable changes with minimal resistance, good communication, and positive outcomes.

As the change manager focuses on the who and why, project managers are concerned about the what and how. They focus on the practical issues of project goals, scope, schedules, budgets, standards, and resources. They manage stakeholder expectations and coordinate people and teams to achieve success on a specific project.

Additional resources
