Exploring the last bastion of manual testing

Quality has always been the driving factor behind the delivery of any software product. Ensuring quality delivery with minimal or no defects usually leads to satisfied customers and drives business growth. Since quality is so intrinsic to a software product that it can make or break it in the market, it is essential for the product team to perform rigorous quality checks before releasing to its customer base.

Software testing methodologies propose various approaches to verifying a product against its specifications. These can be executed in one of two styles, scripted or exploratory, and can be implemented at different phases of the software development lifecycle. Testers should be included in the development process from the requirements gathering phase onwards, which leads to better analysis of requirements and helps in developing better test plans. A test plan improves test coverage by covering all scenarios, and also supports early identification of risks, listing of out-of-scope scenarios, definition of testing strategies and so on.

The traditional approach requires the tester to plan, write scenarios, review them and only then move on to test execution. By contrast, a freestyle testing approach can save this time and effort, which can then be devoted to other parts of testing such as test execution, defect tracking and retesting.

Exploratory testing mainly targets the functionality of the application with the intent of finding hidden bugs and ensuring the application works as per the required specifications. It is sometimes also referred to as ‘ad-hoc testing’, since it doesn’t require a standard structure for test execution, generates less documentation and doesn’t always facilitate an easy way to reproduce the defects.

A tester not only needs to know how the software works but also needs to apply their cognitive skills to generate ideas to break the application. Exploratory testing, as an approach, encourages the tester to apply their creativity, analytical skills and past experience in exploring all facets of the application with the intent of finding the defects.

Scripted vs. exploratory testing

With the scripted approach, we invest time early on: planning all scenarios, creating relevant test data, documenting test cases, and reviewing and finalising all steps that will help us during the actual test execution. This brings structure and order to the process of testing, which results in a detailed understanding of the workflows, the expected results and a clear path for reproducing defects. With tracking and retracing mechanisms available, the scripted approach also enables the creation of a requirements traceability matrix. Documentation is essential, too, for training new testers joining the team, and it supports audit functions.

Exploratory testing is a completely different beast: it requires the tester to be disciplined in their approach rather than bound by externally dictated standards and rules. This approach focuses on validating the behaviour of the application under test within a limited time, to ensure critical functions work as expected and no unusual behaviour or unseen defects are observed. The tester uses a test charter as a guide to decide on the key areas to test and to develop ideas for exploring edge cases. With time limited and the focus firmly on discovering defects, reproducing a defect once it is found can be challenging, since the tester does not spend time recording the steps followed for each test execution.
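The charter-and-session workflow described above can be sketched as a small data model. This is a minimal illustration, not a standard tool or schema: the field names, statuses and example charter are all assumptions chosen for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str           # what to explore and why (hypothetical example below)
    areas: list            # key areas the tester intends to target
    timebox_minutes: int   # session length; exploratory sessions are time-boxed

@dataclass
class SessionNote:
    timestamp: str
    observation: str
    is_defect: bool = False

@dataclass
class Session:
    """A single exploratory session guided by its charter."""
    charter: Charter
    notes: list = field(default_factory=list)

    def log(self, timestamp, observation, is_defect=False):
        # Lightweight note-taking: far less than scripted test steps,
        # but enough to support a checklist-style session report.
        self.notes.append(SessionNote(timestamp, observation, is_defect))

    def defects(self):
        return [n for n in self.notes if n.is_defect]

charter = Charter(
    mission="Probe checkout flow for edge cases in discount handling",
    areas=["coupon stacking", "currency rounding", "expired codes"],
    timebox_minutes=90,
)
session = Session(charter)
session.log("10:05", "Applied two coupons; total went negative", is_defect=True)
session.log("10:20", "Expired code rejected with clear message")
print(len(session.defects()))  # 1
```

Even this skeletal record addresses the reproducibility gap noted above: a defect entry pins down roughly when and where the problem appeared, without the overhead of fully scripted steps.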

Product and project managers require artifacts to manage expectations around the probability of defects being introduced during each development activity, which directly supports product release decisions. The scripted approach satisfies this easily: traditional testing tools can all generate such reports, and usually have the functionality built in, because the workflows are structured. Exploratory testing, by contrast, is limited here, since it reports only descriptive metrics such as the number of defects found, fixed and retested, with just a checklist of scenarios to support tracking and retracing. And since the approach relies on the tester to maintain order, the onus is on the tester to analyse the application under test, drawing on their domain knowledge and level of expertise, to generate ideas and prioritise them.

Is there a hybrid approach?

Project delivery has to adhere to four major constraints: scope, time, budget and quality. Testers must optimise their activities for every product release within these constraints: time limits the number of scenarios that can be verified, while the cost of fixing a bug rises sharply after release, so the aim is to identify and fix the maximum number of defects before shipping. Such constraints make it ideal to introduce exploratory testing to augment the scripted approach. Testers can create and prioritise all required scenarios up front, then perform exploratory testing as they learn more about the application. This gives them the flexibility to verify all parts of the application while diving deep into newly introduced changes.

A hybrid approach reveals the exact steps where the application breaks, identifies hidden defects and still produces proper documentation in the form of formal test execution reports. With the scripted approach, testers perform a gap analysis between the specifications and the actual code early in the test preparation phase; exploratory testing then enters the test execution phase, concentrating on complex areas for which fewer scenarios have been identified. A tester's adaptive ability helps them generate complex scenarios that had not been identified previously, while still testing rigorously the functionality for which test cases already exist. The focus thus shifts from the quantity of documentation produced to the quality of documentation necessary to track testing activities.
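The prioritisation step in this hybrid approach can be sketched as a simple risk-based ordering. The scoring rule and the example scenarios are assumptions for illustration, not a standard formula:

```python
# Hypothetical backlog of scenarios, each with a risk score the team assigned
# and a flag for whether the area was changed in this release.
scenarios = [
    {"name": "login", "risk": 2, "changed": False},
    {"name": "new payment gateway", "risk": 5, "changed": True},
    {"name": "profile edit", "risk": 1, "changed": False},
    {"name": "checkout refactor", "risk": 4, "changed": True},
]

def priority(s):
    # Assumed weighting: recently changed code gets a flat boost on top
    # of its risk score, since new changes are where defects cluster.
    return s["risk"] + (3 if s["changed"] else 0)

# Scripted cases run in priority order within the available time budget.
ordered = sorted(scenarios, key=priority, reverse=True)

# The top changed areas also get a time-boxed exploratory session each,
# to dig into parts where few scripted scenarios have been identified.
exploratory = [s["name"] for s in ordered if s["changed"]][:2]

print([s["name"] for s in ordered][0])  # new payment gateway
print(exploratory)
```

The point of the sketch is the split: the same prioritised list drives both the scripted execution order and the choice of where to spend the limited exploratory time.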

What the future holds

Most activities in software testing are repetitive and have already been targets for successful automation. With AI slowly but surely coming of age, industries in all sectors are trying to find applications that automate and optimise operations. We have seen the death of manual testing discussed quite a few times in the past decade. Manual testers, however, remain an essential part of the software team, since cognitive automation has, until very recently, been a challenge. We might have heard how an AI from DeepMind (AlphaGo) can now beat human experts in the game of Go, or of the feats of an AI from OpenAI that has defeated human players in a multiplayer online strategy game. In each of those examples, the AI explored innovative ideas that human players had not thought of previously.

Exploratory testing has been difficult to automate because applying it effectively as a testing strategy requires creativity, understanding, analysis and the application of that knowledge. In essence, the lack of cognition and intuition has been the barrier for AI in software testing. This, however, is changing with each advancement in machine learning, especially in the application of reinforcement learning. Testers might need to start rethinking how they add value to the software team; for example, human testers might manage multiple software projects while an AI performs the day-to-day functions of today's tester. The role of the software tester might thus evolve to be more managerial, alongside a need to be more technical: able to find root causes for defects and support software developers in fixing bugs. As with automation in any industry, a trainable employee is bound to survive. Hence it is imperative for all testers to improve domain knowledge in their field and learn new skills in software debugging and maintenance to add value to their organisation.

Written by Afsana Atar, Scrum Master at Susquehanna International Group
