Putting the customer experience first in testing

Huw Price, VP Application Delivery, CA Technologies, discusses putting customer experience at the forefront in testing.

In 1970, Winston W. Royce described the sequential development model, but noted an inherent weakness in it: the lack of feedback at each stage. The model is now known as the waterfall model, its weakness is generally acknowledged, and there is an awareness that waiting too long for feedback can drastically reduce the likelihood that software will deliver the experience which the user actually wants.

This has been brought into sharp focus by the additional pressure on IT to deliver software which reflects user expectations. Those expectations are now set higher than ever, and yet are changing faster than ever. Most organisations have therefore at least partially adopted new methodologies.

‘Information hops’ and the waterfall model

The nature of the waterfall model means that you design a system, you develop it, and then you test it – in that order. Each of these linear stages involves taking information in one form, such as a requirements document, and manually converting it into another format, such as code, test cases, and test plans.

These conversions will be called ‘information hops’. Each can be immensely time-consuming, and each potentially introduces misunderstandings and errors.

Take test case design as an example. As requirements in a pure waterfall model are often stored as large, written documents, there is generally no automated way to derive test cases from them. Instead, tests are manually derived.

This can be hugely laborious, and a complex system usually has more paths through its logic than even the most talented tester can hold in their head. Manually derived test cases therefore typically leave the majority of a system exposed to defects; one team we worked with, for example, spent six hours creating 11 test cases that achieved just 16% coverage.

As a consequence, many defects go undetected, while the linearity of the waterfall model means that testing starts late and the time between working with the user to identify their needs and delivering them software to review can also be lengthy.

When a defect is finally discovered, it may have been perpetuated throughout the system and is therefore costly and time-consuming to remedy. Often, errors are simply left in once a system has fallen over the ‘waterfall’, and a change to the user’s desired experience may have to be ignored.

New models, same issue

‘Agile’ has arisen partly to shorten the feedback loop, placing the desired user experience at the forefront of testing and development. The focus has changed to working iteratively with the user to understand their needs upfront, while incremental development promises to make changes easier to implement.

The goal has become to deliver a ‘minimum viable product’ with each iteration, so that there can be constant reviews. The ‘shift left’ further aims to do the hard thinking about what users want earlier, while also involving testers far earlier to avoid late rework.

However, for many teams, the linear stages of designing, developing, and testing remain, albeit under the guise of a sprint. These ‘mini-waterfalls’ carry over many of the challenges of previous models, and changing user needs still do not often work their way through the software lifecycle.

Many manual information hops remain, and the requirements gathering process has not essentially changed, even if the supposedly complete specification of the waterfall model has been replaced with fragmentary user stories and change requests.

The stories are still blocks of text from which tests must be manually and unsystematically derived. Coverage remains low, and manual test case design and maintenance are still too slow to reflect changing user needs.

Moreover, user stories are still written in ambiguous natural language, far removed from the logical steps of the system that needs to be developed. This reduces the likelihood that test cases will reflect the actual desired user experience, so that defects still go undetected until it is too late to fix them.

Collapsing the linear stages with ‘active’ requirements

In order for testing to confirm that the desired user experience has been delivered, manual information hops must be reduced, and a greater degree of systematisation and automation introduced.

Assuming that knowledge of the desired user experience has been successfully elicited by business analysts, it must be documented in a format which can be used directly in testing and development, to avoid information hops.

Formally modelling requirements offers one way to document the desired functionality in a way that reflects the logic of a system, and which can be used directly by testers and developers. However, modelling has historically been viewed as too hard for anyone but the most adept techies, introducing a further information hop, as the business analyst’s requirements then need to be modelled by someone else.

Fortunately, flowcharting offers a format with sufficient formality, and one which BAs already use to create high-level functional models; testers and developers can then overlay the additional functional logic and data they need. If sub-flows are used to embed sub-processes within higher-level master flows, abstraction and traceability are also achieved, while non-technical stakeholders can work with only as much detail as they need to fulfil their roles.
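
As a simple illustration of what such a model might look like, here is a minimal sketch in Python. The shopping flow, node names, and expansion logic are hypothetical assumptions made for this example, not the format of any particular tool:

    # Hypothetical sketch: a master flow kept at the BA's level of detail, with
    # the "process_payment" step embedded as a sub-flow that holds the extra
    # logic testers and developers need.
    master_flow = {
        "start": ["add_to_basket"],
        "add_to_basket": ["process_payment"],
        "process_payment": ["confirmation"],
        "confirmation": ["end"],
    }

    sub_flows = {
        # The sub-flow's own start and end map onto the edges into and out of
        # the master node, which is what preserves traceability between levels.
        "process_payment": {
            "start": ["choose_method"],
            "choose_method": ["pay_by_card", "pay_by_paypal"],
            "pay_by_card": ["end"],
            "pay_by_paypal": ["end"],
        },
    }

    def expand(master, subs):
        """Splice each sub-flow into its master flow, prefixing node names."""
        flat = {}
        for node, successors in master.items():
            if node in subs:
                sub = subs[node]
                # Entering the master node now means entering its sub-flow.
                flat[node] = [f"{node}.{n}" for n in sub["start"]]
                for sub_node, sub_successors in sub.items():
                    if sub_node == "start":
                        continue
                    # 'end' in the sub-flow rejoins the master flow.
                    flat[f"{node}.{sub_node}"] = [
                        successors[0] if n == "end" else f"{node}.{n}"
                        for n in sub_successors
                    ]
            else:
                flat[node] = successors
        return flat

    print(expand(master_flow, sub_flows))

Non-technical stakeholders work only with the master flow; expanding the sub-flows yields the detailed model from which tests and code can be derived.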

The detailed functional models can then be used to automatically generate optimized test cases. As a flowchart is essentially a directed graph, automated homotopic analysis can be used to identify every path through it. These paths are equivalent to the test cases needed to cover all the functionality set out in the requirements. Optimization algorithms such as All Pairs can further be applied to reduce the paths to the smallest set needed to retain functional coverage.
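
To make that concrete, here is a minimal sketch in Python under this article’s own assumptions: a hypothetical login flowchart stored as an adjacency list, with a plain depth-first search standing in for the homotopic analysis and a simple greedy reduction standing in for optimizations such as All Pairs:

    # A hypothetical login flowchart as a directed graph (adjacency list).
    flow = {
        "start": ["enter_credentials"],
        "enter_credentials": ["credentials_valid", "credentials_invalid"],
        "credentials_valid": ["show_dashboard"],
        "credentials_invalid": ["show_error", "lock_account"],
        "show_dashboard": ["end"],
        "show_error": ["end"],
        "lock_account": ["end"],
    }

    def all_paths(graph, start, end, path=None):
        """Enumerate every start-to-end path by depth-first search (assumes no cycles)."""
        path = (path or []) + [start]
        if start == end:
            return [path]
        return [p for nxt in graph.get(start, []) for p in all_paths(graph, nxt, end, path)]

    def reduce_paths(paths):
        """Greedily keep the fewest paths that still exercise every edge."""
        edges = lambda p: set(zip(p, p[1:]))
        uncovered = set().union(*map(edges, paths))
        kept = []
        while uncovered:
            best = max(paths, key=lambda p: len(edges(p) & uncovered))
            kept.append(best)
            uncovered -= edges(best)
        return kept

    # Each surviving path is one optimized test case.
    for i, p in enumerate(reduce_paths(all_paths(flow, "start", "end")), 1):
        print(f"Test case {i}: {' -> '.join(p)}")

In this toy flow every branch is distinct, so all three paths survive the reduction; in a realistic model with shared segments, redundant paths are pruned while full edge coverage is retained.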

Testing becomes fully automated

Testing then becomes a largely automated comparison of the requirements against the system that has been developed, especially if test execution is also automated. However, the biggest payoff comes when the requirements change. Because tests are traceable back to the model, they can be updated automatically when the model changes. Testing can thereby react to changing user needs, validating within the same sprint that the desired change has actually been delivered.
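
Continuing under the same hypothetical assumptions, a self-contained sketch of that maintenance loop might regenerate the suite after a change to the model and diff it against the old one; the model, the change request, and the path generator below are all illustrative:

    from itertools import chain

    def paths(graph, node="start", trail=()):
        """Every path from the start node to a terminal node."""
        trail += (node,)
        successors = graph.get(node, [])
        if not successors:
            return [trail]
        return list(chain.from_iterable(paths(graph, n, trail) for n in successors))

    flow = {"start": ["login"], "login": ["success", "failure"]}
    before = set(paths(flow))

    flow["failure"] = ["retry"]  # hypothetical change request: allow a retry
    after = set(paths(flow))

    print("New tests needed:", after - before)
    print("Tests now obsolete:", before - after)

The diff is the maintenance work: rather than hand-editing test cases against a changed user story, the updated suite falls out of the updated model within the same sprint.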


Edited for web by Jordan Platt
