Software testing in 2016 has evolved far beyond repetitive keystrokes and long hours spent checking one stream of outcomes against another. The ability to automate large chunks of testing frees your team to focus on the human elements and elevate the user experience of your solution.
Device cabinets no longer limit the form factors and operating systems on which new software can be tested. Cloud testing platforms like Xamarin Test Cloud offer options that allow for more rigorous and automated quality checks.
This isn’t an easy change for many quality practices. Now more than ever, development teams need quality analysts who philosophically embrace the capabilities offered by this new world of software testing best practices.
1. Don’t pass the buck. Employ a team (or partner) dedicated to software quality.
Without testers committed to validating the logic and experience of the solution from the outset, the role will fall to your PMs and BAs (or even your customers), who already have full plates. Fitting testing into an already full role isn’t doing their sanity or the quality of your project any favors.
Testers should be involved at the beginning of the project in a solely dedicated capacity. Rather than managing people or processes, they should focus on identifying logical gaps in development thinking to head off trouble later in the life cycle.
2. Avoid chaos. Define your test process from the beginning.
Depending on the type of project and the technical requirements of the solution, a variety of different tests may be used to assess quality.
Story-based test cases are based on positive and negative user story scenarios. Each case records the steps and actions, the expected result, the type of execution (manual/automated), the test's importance (high, medium, low), the sprint in which the case was added and the browsers for execution (IE/Fx/Chrome/tablet). This could extend to the OS as well, though that matters less now than it did 15+ years ago. Story-based testing can be used throughout the software life cycle.
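The fields above could be captured in a simple record type. A minimal sketch in Python for illustration; the class and field names are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Importance(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class StoryTestCase:
    """One story-based test case; all names here are illustrative."""
    story: str                # user story the case covers
    steps: List[str]          # steps and actions
    expected_result: str
    automated: bool           # type of execution (manual vs. automated)
    importance: Importance    # high / medium / low
    sprint_added: int         # sprint when the case was added
    browsers: List[str] = field(
        default_factory=lambda: ["IE", "Fx", "Chrome", "Tablet"]
    )

login_case = StoryTestCase(
    story="As a user, I can log into the system",
    steps=["Open login page", "Enter valid credentials", "Submit"],
    expected_result="User lands on the dashboard",
    automated=True,
    importance=Importance.HIGH,
    sprint_added=3,
)
```

Keeping cases in a structured form like this makes it easy to filter suites by importance, sprint or browser later on.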
Using this method, tests are categorized by their relative level of importance.
High. This functionality is critical for the software. If it doesn’t work, the user will not be able to operate the system. Some examples include: Logging into the system, forgotten password and adding new enrollees.
Medium. Tests classified “medium” validate functionality that is important to users, but doesn’t prevent them from operating the system. Examples: Filtering, negative scenarios and edge cases.
Low. Test cases in this category are assigned low priority. The functionality being tested rarely changes or is static, for example: contact us information.
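One way to put the priority tags to work is to select different suites for different runs, critical paths for a quick smoke run, everything above "low" for a fuller pass. A sketch, with hypothetical test names:

```python
# Hypothetical test-case names mapped to their priority tag.
PRIORITY = {
    "test_login": "high",
    "test_forgot_password": "high",
    "test_add_enrollee": "high",
    "test_filtering": "medium",
    "test_edge_cases": "medium",
    "test_contact_us_info": "low",
}

def select_by_priority(cases, minimum):
    """Return the cases at or above the given priority level."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return [name for name, prio in cases.items() if rank[prio] >= rank[minimum]]

smoke_suite = select_by_priority(PRIORITY, "high")      # critical paths only
nightly_suite = select_by_priority(PRIORITY, "medium")  # critical + important
```

In practice the same idea is often expressed with a test runner's tagging feature (pytest markers, JUnit categories) rather than a hand-rolled dictionary.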
Sprint-based testing analyzes the risks to code based on bug fixes and user stories implemented during each sprint.
The methodology for sprint testing first involves an assessment of development changes: how new development affects surrounding functionality, for example, or whether it will change the user interface. After establishing an understanding of those changes, testers follow up with the development team to discuss any potentially affected areas.
Test cases are selected based on the above determinations and executed during the sprint before code freeze. The goal? To discover any potential issues during the sprint testing, prioritize and address them before code freeze.
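The "potentially affected areas" step can be as simple as maintaining a mapping from changed modules to the test areas they touch. A sketch under assumed module and area names:

```python
# Illustrative mapping from changed modules to the test areas they may affect.
IMPACT_MAP = {
    "auth": {"login", "password_reset"},
    "billing": {"invoices", "enrollment"},
    "ui_shell": {"navigation", "login"},
}

def affected_areas(changed_modules):
    """Union of test areas touched by this sprint's changes."""
    areas = set()
    for module in changed_modules:
        areas |= IMPACT_MAP.get(module, set())
    return areas

sprint_changes = ["auth", "ui_shell"]   # from this sprint's stories and fixes
to_retest = affected_areas(sprint_changes)
```

The map itself comes out of the conversation with the development team; the code just keeps the selection repeatable from sprint to sprint.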
Feature and code freeze happen in the later stages of the software life cycle. During this phase, no further modifications related to new feature implementations will be made. Features are typically frozen one sprint before deployment to production, though the length of the freeze period depends on project complexity and size. Adding new features right up until the release represents a huge risk to quality.
Similarly, implementing new user stories during this time increases the risk of introducing regression defects your testers may miss in the haste of the release.
During the feature freeze phase, defects are retested and regression testing begins. A good strategy is to choose, from all test cases, those covering:
- Frequently used functionality
- Functionality that has shown many bugs in the past
- Complex functions
- Functionality that has changed several times during development
Test cases are then selected from the above criteria and executed before code freeze.
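The four criteria above lend themselves to a simple scoring pass over the candidate cases. A sketch, with made-up weights and case data:

```python
# Score candidate test cases against the four selection criteria.
# The weights and the case data are illustrative assumptions.
def regression_score(case):
    return (
        2 * case["usage_frequency"]     # frequently used functionality
        + 2 * case["historical_bugs"]   # many bugs shown in the past
        + case["complexity"]            # complex functions
        + case["change_count"]          # changed several times during development
    )

candidates = [
    {"name": "login",         "usage_frequency": 5, "historical_bugs": 1,
     "complexity": 2, "change_count": 0},
    {"name": "report_export", "usage_frequency": 2, "historical_bugs": 4,
     "complexity": 5, "change_count": 3},
    {"name": "contact_us",    "usage_frequency": 1, "historical_bugs": 0,
     "complexity": 1, "change_count": 0},
]

# Execute the highest-scoring cases first, before code freeze.
regression_suite = sorted(candidates, key=regression_score, reverse=True)
```

The exact weights matter less than making the selection explicit, so the same reasoning is applied release after release.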
Regardless of the testing strategy your team settles on (it could be a blend; we often use all three), establish roles and responsibilities straightaway, along with clear guidelines for testing in each environment.
3. Continuity always wins. Regression test throughout the software lifecycle.
Regression testing falls in the category of critical software testing best practices. It should run from the beginning to the end of the project, within the limits of the resources and budget available to execute it.
The only way to avoid the nightmare scenario, where a defect affecting a wide swath of functionality and requiring weeks of work to remedy is found just days before deployment, is to assess and fix identified bugs all along. Automation streamlines this process and is especially effective for reporting on regression impact daily or weekly.
4. Don’t procrastinate. Complete performance testing well before the production release.
Before jumping immediately into testing, define the success requirements and metrics. Load scenarios must also be set, defining the scenario under test and the expected number of users engaging in each interaction.
Tests should be executed in an exact copy of the production environment. After executing the scenarios, analyze the results to determine where (if at all) the system is failing or needs optimization. Then, make adjustments to the server or code as needed.
Types of performance tests
Load testing. Putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation.
Stress testing. Normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system’s robustness in terms of extreme load. It helps application administrators determine whether the system will perform sufficiently if the current load goes well above the expected maximum.
Speed/effectiveness testing. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.
5. Opening the channels of communication isn’t just necessary. It determines success.
Whether you’re working with an external partner or an internal department, the success of your project and your ability to refine processes that aren’t working depends on a keen understanding of overall project quality by all stakeholders. The best software testing teams deliver a defect trends report weekly. It should call out found and closed issues, helping your project team visualize and gauge the overall stability of development.
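A defect trends report boils down to tallying found versus closed issues per period. A minimal sketch, with made-up data standing in for an export from your issue tracker:

```python
from collections import defaultdict

# Illustrative defect log: (week, status) pairs, a stand-in for tracker data.
defects = [
    ("2016-W10", "found"), ("2016-W10", "found"), ("2016-W10", "closed"),
    ("2016-W11", "found"), ("2016-W11", "closed"), ("2016-W11", "closed"),
]

trend = defaultdict(lambda: {"found": 0, "closed": 0})
for week, status in defects:
    trend[week][status] += 1

for week in sorted(trend):
    counts = trend[week]
    net = counts["found"] - counts["closed"]
    print(f"{week}: found={counts['found']} closed={counts['closed']} net={net:+d}")
```

A chart of the weekly net figure gives stakeholders a quick read on whether development is stabilizing or accumulating defect debt.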
Especially when working with a partner, share test plans, so everyone is working from the same base knowledge.
Finding a balance between embracing the latest software testing best practices and relying on battle-tested methods is the key to managing software development project cost while transitioning to more efficient, comprehensive methods.