How to target DevOps bottlenecks in enterprise software delivery

The beauty of DevOps is that it eliminates the bottleneck between code commit and deploy, accelerating software delivery. Building and deploying products has never been faster.

Yet there is also a beastly truth about DevOps: ‘build and deploy’ is just one stage of the software delivery process. Unless you connect all the teams and tools in the planning, building, and delivery of software at scale, it’s likely you’ve only moved the bottleneck upstream.

And somewhat ironically, without an integrated, end-to-end value stream, you won’t be able to see where that bottleneck is. To be 100 percent certain, you need to be able to accurately see and trace the product’s journey from the customer’s lips to their computer screen.

You need an omniscient view of this value stream so you can truly measure, analyse and optimise the flow of work. Only then can you begin sharing the beauty of DevOps with the wider business.

Getting the right metrics

While best-of-breed tools make practitioners more productive, they also create data silos. Vital collaborative information (artifacts such as product features, epics, stories and defects) cannot flow between tools because these disparate systems do not naturally integrate.

Consequently, there’s no automated way to get the real-time, end-to-end data required to produce meaningful reports for targeting bottlenecks. You can track technical metrics such as the number of open defects and story points delivered, which are useful for go/no-go decisions and improving a given team’s performance.

But they don’t show the whole picture – it’s a bit like measuring a sports team based on the number of hours practised rather than how many wins they’ve had. Instead, it’s much more powerful to measure outcomes based on four categories:

  • Cost
  • Time
  • Quality
  • Productivity

This way you begin to collect higher-level business-value metrics in addition to technical metrics.

Since this article is about bottlenecks, we will focus on time – or, to be more specific, flow time across all teams and stages, from customer request to delivered business value. Think of it as a relay race with a complex series of handoffs.
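
To make ‘flow time’ concrete, here is a minimal sketch (with made-up dates and work estimates – none of these figures come from a real project) contrasting total elapsed flow time with time spent actively working. The gap between the two is exactly the waiting between handoffs that the relay-race picture describes:

```python
from datetime import date

# Illustrative milestones for one customer request (hypothetical dates).
requested = date(2019, 1, 7)
deployed = date(2019, 6, 28)

# Days of hands-on work reported along the relay (assumed figures).
active_days = 14 + 21 + 10   # e.g. analysis + development + test/deploy

flow_time = (deployed - requested).days
print(f"Flow time: {flow_time} days")                      # 172 days
print(f"Active work: {active_days} days")                  # 45 days
print(f"Flow efficiency: {active_days / flow_time:.0%}")   # ~26% – the rest is handoff wait
```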

Step one: understand and connect your value chain

To track flow time from beginning to end, you need to consider how value flows through the system and the dates associated with various milestones. For example (sketched as a single traced record after this list), say you use:

  • ServiceNow to manage and approve product/feature/fix requests
  • Jira to manage technical components and artifacts
  • Microsoft TFS to build, verify and deploy
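
One way to picture what ‘connected’ means here is to treat each unit of value as a single traced record that carries every tool’s identifier plus the milestone dates that matter for flow time. A minimal sketch – all field names and IDs are hypothetical, and the real mapping depends on how your instances are configured:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TracedFeature:
    """One unit of value, traced across the three tools."""
    servicenow_id: str            # request record in ServiceNow
    jira_key: Optional[str]       # feature/epic in Jira, once created
    tfs_build_id: Optional[int]   # build/release in Microsoft TFS
    requested: date               # milestone dates used for flow time
    approved: Optional[date] = None
    dev_started: Optional[date] = None
    deployed: Optional[date] = None

# Hypothetical example: a request approved but not yet in development.
feature = TracedFeature(
    servicenow_id="REQ0012345",
    jira_key="PROD-42",
    tfs_build_id=None,
    requested=date(2019, 1, 7),
    approved=date(2019, 2, 18),
)
```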

Traceability in these tools is generally manual, and therefore problematic. For example, once a product/feature request is approved in ServiceNow, somebody has to create a new feature in Jira to be tracked on that side. Not only does that admin work take time, but what if the request changes? Will somebody change that request in Jira? If not, won’t the business expect one thing from what’s defined in ServiceNow, while developers build something else based on old information in Jira?

Step two: automate traceability

To ensure information is always up-to-date in disparate tools, we need to automate the flow of information to provide end-to-end traceability through Value Stream Integration – a process that Tasktop understands and addresses.

Now, when a request is created and approved in ServiceNow, Tasktop Integration Hub automatically generates a new feature in Jira – and propagates any future changes. This way everyone knows exactly what to develop and test against. From there, developers can break the feature down into epics, which can be tied to the demand right through to release and deployment.
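
Tasktop Integration Hub is configured rather than coded, but to illustrate the kind of plumbing it automates, here is a hypothetical sketch of one direction of such a sync: a handler that reacts to a ServiceNow approval event and creates the matching feature in Jira via Jira’s REST API. The event payload shape, project key, issue type and credentials are all assumptions for illustration, not Tasktop’s implementation:

```python
import requests

JIRA_URL = "https://your-domain.atlassian.net"        # assumption: a Jira Cloud instance
AUTH = ("integration-user@example.com", "api-token")  # hypothetical credentials

def on_servicenow_approval(event: dict) -> str:
    """Hypothetical handler: a ServiceNow request was approved,
    so create the matching feature in Jira and return its key."""
    payload = {
        "fields": {
            "project": {"key": "PROD"},        # assumed Jira project
            "issuetype": {"name": "Feature"},  # assumed issue type
            "summary": event["short_description"],
            "description": f"Synced from ServiceNow {event['number']}",
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]   # e.g. "PROD-42", stored back on the ServiceNow record
```

A real integration must also handle updates flowing both ways, conflicts and deletions – which is precisely why a purpose-built hub beats a web of point-to-point scripts.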

Step three: produce relevant reports

Once you have a defined value stream and have built automated connections with Tasktop, you can extract the data you need to measure flow time.

It’s not just about how long it takes from customer request to production; you also need to know other key dates along the way (each lag is computed in the sketch after this list), such as how long:

  • Until the product request was approved?
  • Until the product was broken down into epics?
  • Until the epics were approved and scheduled?
  • Until development was complete and ready to deploy?
  • Until the product was deployed in production?
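
Once the data is connected, each of those lags is just the gap between two consecutive milestone dates. A minimal sketch, again with hypothetical dates:

```python
from datetime import date

# Hypothetical milestone dates pulled from ServiceNow, Jira and TFS.
milestones = {
    "requested":        date(2019, 1, 7),
    "request approved": date(2019, 2, 18),
    "epics created":    date(2019, 3, 4),
    "epics scheduled":  date(2019, 3, 18),
    "dev complete":     date(2019, 5, 27),
    "deployed":         date(2019, 6, 28),
}

# Each lag is simply the gap between consecutive milestones.
names = list(milestones)
for earlier, later in zip(names, names[1:]):
    days = (milestones[later] - milestones[earlier]).days
    print(f"{earlier} -> {later}: {days} days")
```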

By looking at the data at each stage to see where the time lags are, you can begin to dig into potential reasons for a long flow time. To do this, you must look at the artifacts in each tool and access their details to build appropriate reports that answer questions such as (see the sketch after this list):

  • Which IDs and names are associated with this particular epic?
  • What were the key dates and deadlines?
  • Were those deadlines met?
  • Was this particular deployment successful?
  • Was the deployment delivered on time?
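
With the artifact details flowing into one place, those questions become simple lookups. A sketch of a single report row, with hypothetical names and dates:

```python
from datetime import date

# Hypothetical epic details assembled from Jira and TFS artifacts.
epic = {
    "jira_key": "PROD-42",
    "name": "Self-service password reset",
    "deadline": date(2019, 6, 30),
    "deployed_on": date(2019, 6, 28),
    "deployment_succeeded": True,
}

# Answer the report questions above for this one epic.
on_time = epic["deployed_on"] is not None and epic["deployed_on"] <= epic["deadline"]
print(f"{epic['jira_key']} ({epic['name']}): "
      f"successful={epic['deployment_succeeded']}, on time={on_time}")
```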

Ultimately, it’s all about getting information to flow across the value stream and aggregating that data into a database – something Tasktop does automatically for you. From there, you can use your analytics tools to produce actionable reports. If a feature took six months from request to production, you can look into where the hold-up was.
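
As a rough illustration of that aggregate-and-analyse step (a toy schema with invented figures, not Tasktop’s actual one), you could land the per-stage lags in a small SQL table and let a query point at the slowest stage:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage_lag (feature TEXT, stage TEXT, days INTEGER)")

# Hypothetical per-stage lags for a handful of features.
conn.executemany(
    "INSERT INTO stage_lag VALUES (?, ?, ?)",
    [("PROD-42", "approval", 42), ("PROD-42", "breakdown", 14),
     ("PROD-42", "development", 70), ("PROD-42", "deployment", 10),
     ("PROD-57", "approval", 55), ("PROD-57", "breakdown", 9),
     ("PROD-57", "development", 60), ("PROD-57", "deployment", 7)],
)

# Where does the time go on average? The top row is the likely bottleneck.
for stage, avg_days in conn.execute(
    "SELECT stage, AVG(days) FROM stage_lag GROUP BY stage ORDER BY 2 DESC"
):
    print(f"{stage}: {avg_days:.0f} days on average")
```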

Once you know the ‘where’, you can begin to work on the ‘why’. Once you know the ‘why’ and have fixed the bottleneck, you can be confident that your DevOps team is building and deploying a product that is accurate, on time and delivering business value.

To learn more, download Tasktop’s white paper on the topic. And why not request a customised demo that visually demonstrates how the company’s Value Stream Integration technology can optimise your DevOps transformation and enterprise software delivery.
