Sustaining high mainframe code quality while improving velocity & efficiency

Traditional mainframe culture is rooted in slow, waterfall processes in which meeting quality standards has always been the single most important aspect of software development. But that culture was established when frequent code drops and rapid responsiveness to digital requirements were inessential to survival.

Mainframe code quality is still an essential underpinning of most large organisations: according to a new study of enterprise mainframe users, 72% of customer-facing applications are completely or very dependent on mainframe processing. These organisations rely on the mainframe to support their high-performance (fast, reliable) applications because of the sheer number of transactions the platform is capable of processing.

Mainframe code quality is not something an organisation should sacrifice as it works to increase velocity and efficiency. But how does an organisation maintain its required quality standards while meeting these new requirements? I identified this as a key challenge back in 2014, when I led Compuware’s own transition from waterfall to agile software delivery. Compuware is the world’s largest mainframe software vendor, but we also use mainframes ourselves to support our product development effort. We successfully accelerated our delivery cycle from a 12-18 month cadence to 90 days, which was essential to our ability to provide new software that addressed customers’ quickly shifting needs. However, as we made this change I realised that our sharp focus on maintaining, measuring and improving quality must not waver, because leading corporations around the world rely on mainframe code to run their most mission-critical applications.

At the highest level, bringing the mainframe into the DevOps fold – and achieving a higher degree of roll-out speed – requires integrations with other tools to create a modern cross-platform DevOps environment. While our development focus is solely on mainframe software, we recognised the importance of integrating with non-mainframe systems to help customers support hybrid applications that interact with both systems of engagement and systems of record. Compuware Topaz is our company’s suite of mainframe development and testing tools, and to address quality specifically, we rely on a combination of our own products as well as Topaz integrations to enable code editing and application understanding; code validation and debugging; test automation; performance management and fault detection.

These quality-focused initiatives have allowed us to achieve the sometimes conflicting ends of rolling out software more quickly while also increasing quality. Today, we are delivering twice the amount of code per developer, deploying new software at the speed our business requires and identifying 25% more trapped defects while decreasing the number of escaped defects. Based on these experiences and those of other mainframe-based organisations, here are three specific ways mainframe users can maintain and improve mainframe code quality while increasing their development velocity and efficiency.

Automated unit testing

Automated unit testing was one technique I relied on to help drive our company’s results, and it can play a significant role in maintaining code quality and delivering new updates with confidence. Unit testing is a software development process in which the smallest testable parts of an application are individually and independently scrutinised for proper operation. The greatest advantage of unit testing is that it enables problems to be identified earlier, thus preventing bugs from being baked into software and becoming more costly and time-consuming to resolve. The earlier a problem is identified, the fewer compound problems occur.
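To make the idea concrete, here is a minimal sketch of a unit test, written in Java with JUnit 5 for illustration (mainframe unit-testing tools apply the same principle to COBOL programs and subprograms). The AccountLedger class and its applyDebit method are hypothetical stand-ins for a small piece of transaction logic:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

// Hypothetical business-logic unit: applies a debit to an account balance.
// In a mainframe shop the equivalent might be a single COBOL paragraph or
// subprogram; the principle - test the smallest unit in isolation - is the same.
class AccountLedger {
    static BigDecimal applyDebit(BigDecimal balance, BigDecimal debit) {
        if (debit.signum() < 0) {
            throw new IllegalArgumentException("debit must be non-negative");
        }
        return balance.subtract(debit);
    }
}

class AccountLedgerTest {
    @Test
    void debitReducesBalance() {
        assertEquals(new BigDecimal("75.00"),
                AccountLedger.applyDebit(new BigDecimal("100.00"), new BigDecimal("25.00")));
    }

    @Test
    void negativeDebitIsRejected() {
        // Catching the bad input here, at the unit level, is far cheaper than
        // discovering it after it has been baked into an integrated release.
        assertThrows(IllegalArgumentException.class,
                () -> AccountLedger.applyDebit(new BigDecimal("100.00"), new BigDecimal("-5.00")));
    }
}
```

Each test exercises one behaviour of one unit in isolation, so when a regression appears it is localised to a single method rather than surfacing later as a compound failure.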

However, creating unit tests is often a manual process in mainframe development. Given how tedious this process is, most developers—who already have limited time to complete important development tasks on deadline—don’t unit test mainframe code as often as they would like, or they neglect the task altogether.

Mainframe code handles the actual transaction-processing component of modern applications, so it is vital to end-to-end application performance. Omitting it from unit testing can be a huge oversight. If mainframe code breaks under pressure, an entire application could stop working.

Automated unit testing is key to ensuring mainframe code can be integrated deeper into a DevOps toolchain—the strategic combination of tools that support a DevOps team’s development, delivery and management of modern applications—without sacrificing quality. Automated unit testing tools for mainframe code offer several sophisticated features, including:

  • Automated triggering of unit tests
  • Identification of mainframe code quality trends
  • Sharing of test assets
  • Creation of repeatable tests
  • Enforcement of testing policies
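To illustrate the “creation of repeatable tests” and “sharing of test assets” items above, here is a hedged sketch of a data-driven JUnit test. The calculator and the figures are invented for the example; in practice the case data could live in a shared CSV file under source control so the whole team can reuse and re-run it on every build:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// A data-driven test: the inline cases below could equally be moved to a
// shared CSV file (@CsvFileSource) checked into source control, making the
// test asset reusable by the team and repeatable on every automated build.
class InterestCalculatorTest {

    // Hypothetical unit under test: simple annual interest, rounded to cents.
    static long interestInCents(long principalCents, double annualRate) {
        return Math.round(principalCents * annualRate);
    }

    @ParameterizedTest
    @CsvSource({
            "100000, 0.05, 5000",   // £1,000.00 at 5% -> £50.00
            "100000, 0.00, 0",      // zero rate -> zero interest
            "1,      0.05, 0"       // rounding of a tiny principal
    })
    void interestIsComputedCorrectly(long principal, double rate, long expected) {
        assertEquals(expected, interestInCents(principal, rate));
    }
}
```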

COBOL code coverage

As mainframe code is brought into DevOps, code coverage initiatives should also become standard practice for mainframe teams balancing code quality demands with new velocity and efficiency requirements. Code coverage metrics provide insight into the degree to which source code is executed during a test—that is, which lines of code have or have not been executed and what percentage of an application has or has not been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code is promoted towards production.

Code coverage is such an integral part of continuous code quality that today’s DevOps practitioners almost take it for granted. However, code coverage on the mainframe has traditionally been a challenge: metrics would identify when mainframe code was tested, but would not drill down to reveal what percentage, or which portions, of that code the tests actually exercised. Newer tooling closes this gap by capturing execution statistics directly from test runs. This more specific, direct capture allows mainframe developers to more quickly and accurately spot areas of uncovered code that need attention, just as they do in Java. I have used code coverage to capture our own mainframe code execution statistics for quick assessment of test-related risk and for testing documentation.
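As a rough illustration of the metric itself, independent of any particular capture tool, the following sketch computes line coverage from sets of executed and executable line numbers and flags modules below a promotion threshold. The module names, line numbers and 80% gate are all illustrative assumptions:

```java
import java.util.Map;
import java.util.Set;

public class CoverageReport {

    // Line coverage: executed lines as a percentage of executable lines.
    static double lineCoverage(Set<Integer> executed, Set<Integer> executable) {
        if (executable.isEmpty()) return 100.0;
        long hit = executable.stream().filter(executed::contains).count();
        return 100.0 * hit / executable.size();
    }

    public static void main(String[] args) {
        // Illustrative data: per-module executable lines and the lines a test
        // run actually touched (as a capture tool would record them).
        Map<String, Set<Integer>> executable = Map.of(
                "PAYCALC", Set.of(10, 11, 12, 13, 14),
                "TXNPOST", Set.of(20, 21, 22, 23));
        Map<String, Set<Integer>> executed = Map.of(
                "PAYCALC", Set.of(10, 11, 12, 13, 14),
                "TXNPOST", Set.of(20, 21));

        double threshold = 80.0; // assumed promotion gate, purely illustrative
        executable.forEach((module, lines) -> {
            double pct = lineCoverage(executed.get(module), lines);
            System.out.printf("%s: %.1f%% covered%s%n", module, pct,
                    pct < threshold ? "  <- needs attention before promotion" : "");
        });
    }
}
```

The value of the drill-down is visible even in this toy example: the aggregate “both modules were tested” statement hides the fact that half of one module never ran.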

Code quality management across platforms is extremely valuable to mainframe users since their ability to bring new digital deliverables to market is often contingent upon simultaneously updating code across both back-end mainframe systems of record and front-end mobile and web systems of engagement. Incorporating mainframe code into cross-platform code quality initiatives brings a higher degree of confidence and rigour to end-to-end testing processes.

Quality KPIs

Decision makers at mainframe-based organisations consider mainframe code quality to be of the greatest priority today, according to the previously mentioned study; however, this is also “potentially a relic of a mindset focused on relatively bug-free, but long waterfall release cycles.”

Organisations must begin placing equal focus on improving quality, velocity and efficiency through a robust program of modern key performance indicators (KPIs). Such a program helps organisations measure the right quality metrics in balance with velocity and efficiency metrics.
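As a sketch of what measuring quality in balance with velocity might look like, the snippet below computes one widely used quality KPI, defect removal efficiency (defects trapped before release as a share of all defects found), alongside a simple velocity figure. The metric selection and the numbers are illustrative assumptions, not a prescribed KPI set:

```java
public class QualityKpis {

    // Defect removal efficiency: defects trapped before release as a share of
    // all defects found (trapped + escaped). A rising DRE alongside steady or
    // rising velocity is the balance described above.
    static double defectRemovalEfficiency(int trapped, int escaped) {
        int total = trapped + escaped;
        return total == 0 ? 100.0 : 100.0 * trapped / total;
    }

    public static void main(String[] args) {
        // Illustrative per-release figures.
        int trapped = 125, escaped = 6;      // defects caught pre- vs post-release
        int linesDelivered = 42_000, devs = 14;

        System.out.printf("Defect removal efficiency: %.1f%%%n",
                defectRemovalEfficiency(trapped, escaped));
        System.out.printf("Velocity: %d lines delivered per developer%n",
                linesDelivered / devs);
    }
}
```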

New approaches today leverage machine learning and analytics to uncover correlations between mainframe developer behaviours—including good and bad habits—and quality outcomes. This data can guide managers’ decisions that drive the continuous improvement of development quality, as well as greater confidence and effectiveness for developers.
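The analytics step can be illustrated with the simplest possible ingredient: a Pearson correlation between one behaviour signal and one quality outcome. Real solutions use far richer models; the behaviour metric (“debug sessions before check-in”) and the data here are purely hypothetical:

```java
public class BehaviourCorrelation {

    // Pearson correlation coefficient between two equal-length samples.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n, vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Hypothetical per-release data: a behaviour signal vs. a quality outcome.
        double[] debugSessionsBeforeCheckIn = {2, 5, 1, 4, 6, 3};
        double[] escapedDefects             = {9, 3, 11, 4, 2, 6};
        // Prints a strong negative correlation (about -0.98 for this sample),
        // i.e. more pre-check-in debugging coincides with fewer escaped defects.
        System.out.printf("correlation: %.2f%n",
                pearson(debugSessionsBeforeCheckIn, escapedDefects));
    }
}
```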

Initiating a KPI program of this calibre through a machine learning solution ensures the insights gleaned improve over time as the system captures more data from more sites and continually refines its analytic models to produce more accurate outputs. This will become increasingly critical to large enterprises as customer-facing mobile and web applications drive intensifying workloads to back-end systems of record, namely the mainframe.

DevOps teams are realising they are only as effective as their weakest link. In spite of an increasing number of mainframe-DevOps success stories, the platform is still frequently omitted from DevOps toolchains, and deeper integration continues to be needed—from source code management and continuous mainframe code delivery to the full span of testing and quality assurance initiatives.

The mainframe is not going away and a focus on quality will help mainframe DevOps teams consistently roll out high-performing, modern transactional applications while continuing to leverage the mainframe’s unique scalability, reliability and security advantages. Interestingly, Compuware’s own ability to deliver new, innovative mainframe software for our customers required us to improve both the speed and quality of our own work on the mainframe, essentially morphing these systems into a self-perpetuating platform for innovation.

Written by David Rizzo, Vice President, Product Development, Compuware
