The software testing techniques you choose can make or break whether quality assurance spend falls in line with project estimates or comes in over target. Selecting a strategy is a bit like reading a choose-your-own-adventure novel: specific factors determine the route taken. There are options for the speed demons and the risk-averse. Let's explore.
Software test automation is the process through which automated tools run tests that repeat predefined actions, comparing a developing program's expected and actual outcomes. If expectations and outcomes align, the program is behaving appropriately. If the two don't align, the automation script has identified an issue that needs further work. Testers analyze the code, alter it, and continue to run tests until the actual and expected outcomes align. Automating tests requires extra effort up front, but the related benefits quickly pay off.
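As a minimal sketch of that expected-vs-actual loop, consider the script below. The function under test (`apply_discount`) and its expected values are hypothetical, chosen only to illustrate how an automated check flags mismatches:

```python
# Sketch of automated testing: run predefined actions, compare actual vs. expected.
# apply_discount and the test data are hypothetical examples.

def apply_discount(price, percent):
    """Function under test: reduce price by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def run_tests():
    # Each case pairs predefined inputs with an expected outcome.
    cases = [
        ((100.0, 10), 90.0),
        ((59.99, 0), 59.99),
        ((20.0, 50), 10.0),
    ]
    failures = []
    for args, expected in cases:
        actual = apply_discount(*args)
        if actual != expected:  # outcomes don't align: record an issue to fix
            failures.append({"inputs": args, "expected": expected, "actual": actual})
    return failures

if __name__ == "__main__":
    print(run_tests())  # an empty list means expected and actual outcomes align
```

In practice this loop would live in a test runner that executes on every commit, so a newly introduced mismatch surfaces immediately rather than at release time.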
Risk-based testing — by comparison, this leans hard on your team’s intuition about where problems have lurked throughout development and how this might impact the new features. It also relies on their ability to synthesize the change analysis from the source code and prioritize testing before the release of a solution.
The following questions play a large role in the path test teams choose:
1. When is the project delivery date?
2. Do we anticipate multiple releases over a lengthy period of time?
Answering these two questions is critically important in choosing from the top software testing techniques. Why?
For the speed demons: A case for risk-based software testing
Say the project has a limited MVP. The core objective is to deliver a clickable prototype to vet internally or share with investors. The delivery timeline is short, and multiple releases are not part of the estimated scope. Risk-based testing makes more sense in this scenario: the cost to set up automation scripts is not justified. It's more logical in this case to determine priorities and work through that checklist before launch.
For the risk-averse: A case for test automation in software
Now consider an alternate scenario. Your team is about to embark on an enterprise-grade project. You anticipate upwards of 30 sprints. There are a variety of user security issues to consider, and the tool must play well with a suite of legacy systems. You also anticipate multiple releases, and a variety of stakeholders throughout the company have an interest in the project. They will likely require detailed reports during each phase, maybe even more frequently. The amount of regression testing needed to execute a project like the one detailed above justifies the effort required to set up automation scripts. Once they're in place, they can be run often (even nightly) to ensure new releases don't interfere with previously validated functionality.
Exploring top software testing techniques
Industry veterans know teams can never test everything, and software can never be released completely bug-free. Testers and their development counterparts will always find more to validate and improve. But top teams can get close. The goal of both testing techniques is to identify the most impactful defects first and then work through the test cases in a descending order — from testing critical functionality all the way down to the nice-to-haves.
Software testing technique 1: Automation
Many software testing tasks, such as low-level interface regression testing, are too laborious and expensive to justify paying a quality engineer's rates to complete manually. Beyond the time considerations, human error plays a role: a manual approach is not always effective at finding certain classes of defects. Test automation offers the opportunity to perform these types of testing more effectively.
Once automated tests have been developed, they can be run quickly and repeatedly. For projects with a lengthy maintenance life, once the automation test scripts are in place, they can keep running as your solution scales without ramping up your software testing team. Even minor patches over the lifetime of the application can cause existing features to break. Note, though, that the goal of automation testing is to reduce the number of test cases run manually, not to eliminate manual testing altogether.
For teams a bit hesitant to make the philosophical shift to automation (less manual testing, more strategic foresight), the following benefits are often enough to sway the naysayers when comparing software testing techniques.
Test effort scalability
Automation testing allows teams to create a library of test scripts that can be maintained and scaled to continue testing known functionality even as project goals pivot and additional features are added. For testers in an Agile environment, the ability to quickly react is paramount. New test cases can be generated continuously and can be added to existing automation in parallel with the development of the software itself. Once established, these tests can be reused — good news for testing teams performing regression on constantly changing code.
Automated tests run fast and frequently. Once established, the scripts can be reused across different platforms. Automated regression tests ensure continued system stability and functionality after changes are made to the software. This means shorter development cycles coupled with better-quality software.
Even the largest software departments cannot perform a controlled web application test with thousands of users. Automated testing can simulate tens, hundreds or thousands of virtual users interacting with network or web software and applications. Lengthy tests that are often avoided during manual testing can be run unattended. The options are immense. Automated software testing can be programmed to explore an application and “view” memory contents, data tables, file contents and internal program states in determining if the product is behaving as expected. Automation can easily execute thousands of different complex test cases during every test run providing coverage impossible to replicate with manual testing. Testers, now freed from repetitive manual tests, have more time to create new automated software tests to validate complex features.
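The virtual-user idea can be sketched with ordinary threads. Everything below is illustrative: `handle_request` is a stand-in for a real network call, and a production load test would use a dedicated load-testing tool against a live endpoint rather than an in-process function.

```python
# Sketch: simulate many virtual users interacting with a system concurrently.
# handle_request is a stand-in for a real network call (an assumption);
# real load testing would target an actual endpoint with a dedicated tool.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for one virtual user's interaction with the system."""
    time.sleep(0.01)  # simulate network/service latency
    return {"user": user_id, "status": 200}

def load_test(num_users):
    start = time.perf_counter()
    # A pool of worker threads plays the role of concurrent virtual users.
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_request, range(num_users)))
    elapsed = time.perf_counter() - start
    ok = sum(1 for r in results if r["status"] == 200)
    return {"users": num_users, "ok": ok, "seconds": round(elapsed, 2)}

if __name__ == "__main__":
    print(load_test(500))
```

The same run-unattended property applies: a script like this can execute overnight at user counts no manual team could coordinate, and the summary dictionary becomes the test report.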
Less variability due to human error
Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform the same steps precisely every time they are executed and never forget to record detailed results.
In prioritizing test cases to automate, assess their value vs. the effort to create them. Test cases with high value and low effort should be automated first. Beyond the initial low-hanging fruit, we recommend automating test cases with frequent use, changes and past errors.
Helpful criteria for selecting automation candidates
- High risk – business critical test cases
- Test cases that are executed repeatedly
- Test cases that are very tedious or difficult to perform manually
- Test cases which are time-consuming
Conversely, some test cases make poor automation candidates and are better handled manually:
- Test cases that are newly designed and have not been executed manually at least once
- Test cases whose requirements change frequently
- Test cases that are executed only on an ad-hoc basis
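One lightweight way to apply the value-vs-effort rule is a simple score per test case. The weights and sample data below are illustrative assumptions, not an industry standard; the point is that high-value, low-effort cases sort to the top:

```python
# Sketch: rank automation candidates by value relative to scripting effort.
# Weights and example data are illustrative assumptions, not a standard.

def automation_priority(case):
    """Higher score = stronger automation candidate (high value, low effort)."""
    value = (
        3 * case["business_critical"]   # high-risk, business-critical flows
        + 2 * case["runs_per_release"]  # executed repeatedly
        + 2 * case["tedious"]           # tedious or error-prone when done manually
    )
    return value / case["effort_days"]  # discount by script-building effort

cases = [
    {"name": "login regression", "business_critical": 1,
     "runs_per_release": 10, "tedious": 1, "effort_days": 2},
    {"name": "one-off report layout", "business_critical": 0,
     "runs_per_release": 1, "tedious": 0, "effort_days": 3},
]

for case in sorted(cases, key=automation_priority, reverse=True):
    print(case["name"], round(automation_priority(case), 1))
```

Whatever weights a team settles on, keeping the scoring explicit makes the automation backlog defensible when stakeholders ask why a given test is still manual.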
Software testing technique 2: Risk-based testing
If your client or project team can’t afford to invest in automation testing, but can afford to run Agile and involve your quality team at the beginning of the project, risk-based testing offers another respected option.
Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide test planning, specification, preparation and execution. It lessens the test load by identifying the most important functional areas and product properties in the highest-risk parts of the product.
Take this scenario: You currently have 100 features and 1,000 possible regression test cases.
Next release, you add 10 more features. This adds another 100 regression test cases to the pool. An Agile risk-based testing strategy doesn’t treat all the regression test cases as equal. Instead, your strategy would treat the new features and their associated interactions (with another, say, 20-30 features) with much higher priority.
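That prioritization can be sketched as a simple tiering of the regression pool. The feature names and tier rules below are invented for illustration; the idea is just that new features and their interaction partners run first:

```python
# Sketch: tier regression test cases by risk instead of treating them equally.
# Feature names and the new/interacting sets are illustrative assumptions.

def risk_tier(test, new_features, interacting_features):
    """Lower tier number = run earlier in the regression cycle."""
    if test["feature"] in new_features:
        return 0  # new functionality: highest risk
    if test["feature"] in interacting_features:
        return 1  # existing features that interact with the new ones
    return 2      # stable, previously validated functionality

new_features = {"checkout-v2"}
interacting_features = {"cart", "payments"}

regression_pool = [
    {"id": "TC-101", "feature": "search"},
    {"id": "TC-204", "feature": "checkout-v2"},
    {"id": "TC-310", "feature": "cart"},
]

ordered = sorted(
    regression_pool,
    key=lambda t: risk_tier(t, new_features, interacting_features),
)
print([t["id"] for t in ordered])  # new-feature tests first, stable areas last
```

When the clock runs out mid-cycle, a tiered ordering like this means the cases left unexecuted are the lowest-risk ones.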
Consider these areas when building your risk-based testing strategy.
- Technical complexity
- Changed areas
- Incorporation of new technology, solutions, methods
- Impact of the number of people involved
- Impact of turnover
- Impact of time pressure
- Areas which needed optimizing
- Areas with many defects before
- Geographical distribution
- History of prior use
When building a risk-based testing plan, balance time and budget through prioritization to accelerate efforts. Trust your intuition. If risk analysis isn't properly completed, time can be lost on non-critical tests while critical defects slip through the cracks.
Start off on the right track. Run an early test to find defect-prone areas; the next series of tests should concentrate on them. The first test should cover the whole system but be very shallow, including typical business scenarios and a few important failure situations. When a test fails, we look at why it failed. If the problem is with the test itself, we modify the test. If not, the test has found a defect, and we log that defect.
You can then find where there was most trouble, and give priority to these areas in the next round of testing. The next round will then do deep and thorough testing of prioritized areas.
Teams considering risk-based testing over other software testing techniques like automation can save additional time in a couple of other ways:
- Prune low-risk combinations from the regression plan
- Prune low-risk features from the regression plan
Other noteworthy defect trends to track
When the weekly number of “Closed” issues exceeds the weekly number of “New” issues. This is an early-stage indication of stability before the production release.
Zero bug bounce (ZBB)
The first time the number of critical bugs reaches zero. This is a late-stage indication of stability before the production release.
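Both trends are easy to compute from weekly defect counts pulled out of an issue tracker. A sketch, with invented sample data:

```python
# Sketch: detect the two stability signals from weekly defect counts.
# The weekly sample counts below are invented for illustration.

def crossover_week(new_counts, closed_counts):
    """First week where closed issues exceed new issues (early stability sign)."""
    for week, (new, closed) in enumerate(zip(new_counts, closed_counts), start=1):
        if closed > new:
            return week
    return None  # no crossover yet: the backlog is still growing

def zero_bug_bounce_week(critical_counts):
    """First week the critical-bug count reaches zero (late stability sign)."""
    for week, critical in enumerate(critical_counts, start=1):
        if critical == 0:
            return week
    return None

new_per_week = [12, 9, 7, 4, 2]
closed_per_week = [5, 8, 10, 9, 6]
critical_per_week = [6, 4, 2, 1, 0]

print("crossover at week:", crossover_week(new_per_week, closed_per_week))
print("zero bug bounce at week:", zero_bug_bounce_week(critical_per_week))
```

The "bounce" in ZBB refers to the count typically rising again after first touching zero, so teams usually watch for it to reach zero and stay there before calling the release stable.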