Why the sheer scale of DevOps testing now needs machine learning 

The sheer scale of DevOps has taken test volume and complexity to a whole new level. For example, one of the world’s largest airlines runs 1.3 million test executions for every version cycle, in a complex, multi-faceted test environment spanning many different test types, frameworks, and engineering teams. Nor is this airline an isolated example: the volume and cadence of testing are escalating across the board.

So, it is no surprise that interpreting the implications of a million-plus test executions is a challenge, even for the most skilled team. Sure, most teams have embraced automated continuous testing (CT) to a greater or lesser extent, but in many instances that is not sufficient: there is still a huge gap in implementing test automation that runs reliably, stably, and efficiently throughout the DevOps pipeline.

To put some context around that, anecdotal evidence suggests that more than 40 percent of failed automation attempts are a direct result of scripting issues, which undermines the value of CT. In other words, scripting is probably the single biggest cause of automation failure, and without dependable automated testing, DevOps at scale is hard, arguably impossible, to achieve.

This is why more organisations are now turning to machine learning to make sense of all the noise that test results create and to understand the real business impact; it is arguably the only way to deal with the scale of test data that modern software development lifecycles inevitably generate.

Machine learning

Machine learning (ML) has become one of the most popular tech terms on the planet, one that even people with no involvement in the technology industry recognise. Like something out of science fiction, it has caught the imagination of people worldwide, with both positive and negative connotations.

That aside, when it comes to the latest techniques and tools for automated testing, machine learning is beginning to make waves and, together with other elements of smart analytics, is becoming an essential part of the root cause analysis (RCA) toolbox. On a very practical level, ML can eliminate much of the need to write and maintain test scripts. Instead, scripts continuously run and self-heal: when the application changes, the tooling detects the change and adjusts the script automatically, without getting in the way of other aspects of the development pipeline.
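
To make that concrete, here is a minimal sketch of the self-healing idea, assuming a Selenium-style web test in Python. The element name, fallback locators, and logging are hypothetical, and a production ML tool would score candidate locators with a trained model rather than walk a hand-written fallback list.

# A minimal, illustrative sketch of "self-healing" element lookup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Hypothetical locator catalogue: primary locator first, then fallbacks.
LOGIN_BUTTON = [
    (By.ID, "login-btn"),
    (By.CSS_SELECTOR, "button[data-test='login']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
]

def find_self_healing(driver, locators):
    """Try each locator in turn; report when the script 'heals' itself."""
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if i > 0:
                # The primary locator broke; record the healed one so the
                # catalogue can be updated for future runs.
                print(f"healed: now using {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL
find_self_healing(driver, LOGIN_BUTTON).click()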

The potential value of machine learning is particularly evident in mobile and web app testing, because these are very fragmented and complex platforms to handle and understand. What ML can do in this context is keep all those platforms visible, connected, and in a ready-state mode. In a test lab, ML helps to surface when a device is outdated, disconnected from Wi-Fi, or suffering from some other problem – and, moreover, helps explain why that has happened.
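
As an illustration of that ready-state monitoring, the sketch below flags lab devices that look outdated or disconnected. The telemetry fields, thresholds, and statuses are all assumptions; a real ML-driven lab would learn these patterns from history rather than apply fixed rules.

# Illustrative health check over telemetry a lab agent might report.
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    name: str
    os_version: float
    wifi_connected: bool
    last_seen_minutes: int

MIN_OS_VERSION = 14.0  # hypothetical policy: anything older is "outdated"

def diagnose(device: DeviceStatus) -> str:
    if device.last_seen_minutes > 10:
        return "disconnected: not seen by the lab agent recently"
    if not device.wifi_connected:
        return "not ready: Wi-Fi is down"
    if device.os_version < MIN_OS_VERSION:
        return "outdated: OS below the supported baseline"
    return "ready"

lab = [
    DeviceStatus("iPhone 13", 15.2, True, 1),
    DeviceStatus("Pixel 5", 12.0, True, 2),
    DeviceStatus("Galaxy S21", 14.1, False, 30),
]
for d in lab:
    print(d.name, "->", diagnose(d))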

Another way in which ML helps is by showing trends and patterns, not only visualising all that data but providing further insight into what has happened over the past weeks or months. For instance, it can identify the most problematic functional area in an application, such as the top five failing tests over the past two or three testing cycles, or which mobile and web platforms have been most error-prone. Was a failure caused by the lab, a pop-up, or a security alert?
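
A hedged sketch of that kind of aggregation, assuming test results are available as simple records (the field names and failure-reason labels here are invented for illustration):

# Aggregating raw test results to surface trends across cycles.
from collections import Counter

results = [
    {"test": "login_flow", "platform": "iOS 17", "status": "failed", "reason": "popup"},
    {"test": "login_flow", "platform": "Android 14", "status": "failed", "reason": "popup"},
    {"test": "deposit_check", "platform": "iOS 17", "status": "passed", "reason": None},
    {"test": "find_branch", "platform": "Android 14", "status": "failed", "reason": "lab"},
    # ... thousands more records per cycle
]

failures = [r for r in results if r["status"] == "failed"]

# Top failing tests over the analysed cycles.
print(Counter(r["test"] for r in failures).most_common(5))
# Most error-prone platforms.
print(Counter(r["platform"] for r in failures).most_common())
# Was the failure the lab, a pop-up, or a security alert?
print(Counter(r["reason"] for r in failures).most_common())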

This really matters. Teams invest time, resources, and money in automating test activities, but where all this really has an impact and adds value is at the reporting stage. Up until that point, tests are executed behind the scenes, so it is hard to assess whether they are identifying real issues or not. Test analysis at scale is when teams really understand what is happening within their software and whether those tests are of any value (and if not, why not). It is also the point at which most organisations realise the value of test automation, while gaining the data to identify how it can be improved.

Of course, this all ties in with – and supports – continuous integration (CI), which is both the engine and the glue from coding to test to release to production. As we all know, testing has historically got in the way of release deployment, so the greater the visibility into the CI pipeline, the lower that risk. In fact, this is how ML-based testing can actually expedite release cycles, rather than be a roadblock: by providing that insight, reducing risk, and supporting continuous improvement.

Banking example

Let’s take an example from the banking sector, because there are so many unique scenarios against which to test, including smart authentication on different mobile devices, depositing a check through the camera, or finding a branch through location services. When the user accesses the camera for the first time, the application will probably show a pop-up asking for permission to use the camera.

What ML-based testing can do is discover early that such pop-ups were not handled properly, which saves a lot of future debugging and hence possible delays in execution. ML-based testing can identify during runtime that something was neglected around those pop-ups, or that the security alerts connected to user permissions created an issue.
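
For instance, an Appium-based test can detect and dismiss the camera-permission pop-up before it derails the check-deposit flow. The sketch below is only illustrative: the permission-dialog resource ID varies across Android versions, so treat it as an assumption to adapt for your devices.

# Minimal sketch of handling a camera-permission pop-up with Appium.
from appium.webdriver.common.appiumby import AppiumBy
from selenium.common.exceptions import NoSuchElementException

# Assumed resource ID; older Android versions use a different package name.
ALLOW_BUTTON_ID = "com.android.permissioncontroller:id/permission_allow_button"

def dismiss_permission_popup(driver) -> bool:
    """Accept the OS permission dialog if present; report what happened."""
    try:
        driver.find_element(AppiumBy.ID, ALLOW_BUTTON_ID).click()
        return True  # pop-up appeared and was handled
    except NoSuchElementException:
        return False  # no pop-up: nothing to handle

# Inside the check-deposit test, after tapping the camera button:
# if dismiss_permission_popup(driver):
#     print("camera permission pop-up was handled at runtime")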

Cultural challenges

However, tools on their own are not enough: just as important is adopting the right cultural attitude within the organisation towards modern test processes. ML has had a bad rap in some quarters of the popular press over how it might take away people’s jobs in the future. Is this true in testing? What we are actually seeing right now from early adopters is that ML is helping test engineers evolve their careers. Freed from writing test scripts, they can focus on more complex tasks that require human brainpower. ML also helps ‘teach’ good testing practice and brings it within reach of more individuals. Test or QA managers can expand their teams more easily, with seasoned test professionals providing mentoring alongside focusing on more rewarding daily tasks, such as those tests that do not yet lend themselves to full automation.

It is early days for ML in testing, and there are various levels of adoption and maturity across the ML testing solutions on offer. Despite these variations, one thing is clear: ML-based testing is here to stay, and it will continue to develop over the next couple of years until a stable level of autonomous test automation is reached. In the meantime, ML-based testing can already have a positive impact on CT in large-scale DevOps environments, reducing the resources spent on mundane tasks, reducing risk, and accelerating release cycles.

Written by Eran Kinsbruner, Chief Evangelist at Perfecto (by Perforce)

 
