Understanding and prioritizing QA is one of the biggest differentiators between leading software and its less elegant counterparts. Why? A single defect can cost your business its reputation, which can translate into millions of dollars lost. To take your usability and software quality to the next level, use these six QA best practices.
1. Don’t wait until the end of your project to involve your QA team.
Quality assurance analysts bring a unique perspective to each project team. Throughout their careers, they have been intimately involved in testing complex solutions. This perspective gives them the foresight to anticipate and search for bugs and defects before the solution is even implemented. Involving your QA team even in early meetings with the client gives them the opportunity to spot logical holes in the requirements, saving time and money down the road. Even before writing test cases, executing them, and helping prioritize fixes, your QA team’s experience is its biggest asset.
2. Track the quality of your project’s development over time and report on it.
Since I joined MentorMate five years ago, we have grown our QA practice from one to approximately 58 talented analysts (80% of whom are ISTQB certified). One way we maintain QA best practices is to hold our team members and projects accountable and measure project quality multiple times a week. This allows us to track project quality over time. To do this, we look at defect trends. We classify defects so we can better prioritize issues and assess success over time. Then we report on the status and success of fixing those issues on a near-daily basis. Our detailed reports make testing transparent and help the customer make the right decisions during and at the end of the development process.
Here are some of the defect trends we watch for:
Parallel lines: An equal number of “new” issues found by the QA team and “closed” issues fixed by developers. This is an indication of a stable project during a development sprint.
Bug convergence: The daily or weekly number of “closed” issues exceeds the daily or weekly number of “new” issues. This is an early indication of stability during the feature freeze phase.
Zero bug bounce: The number of open issues reaches zero for the first time. This is a late-stage indication of stability during the feature freeze phase.
A few other trends include defects by function, cumulative bugs over time, new defects added over time, and closed defects over time.
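The trends above reduce to simple arithmetic on daily counts of “new” and “closed” issues. Here is a minimal sketch in Python; the function names and the sample data are hypothetical, not part of any particular defect-tracking tool:

```python
# Detecting defect trends from daily counts of "new" and "closed" issues.
# All names and data below are illustrative.

def cumulative_open(new_counts, closed_counts):
    """Running total of open issues at the end of each day."""
    open_issues, totals = 0, []
    for new, closed in zip(new_counts, closed_counts):
        open_issues += new - closed
        totals.append(open_issues)
    return totals

def bug_convergence_day(new_counts, closed_counts):
    """First day on which closed issues exceed new issues (early stability)."""
    for day, (new, closed) in enumerate(zip(new_counts, closed_counts)):
        if closed > new:
            return day
    return None

def zero_bug_bounce_day(new_counts, closed_counts):
    """First day the open-issue count reaches zero (late-stage stability)."""
    for day, total in enumerate(cumulative_open(new_counts, closed_counts)):
        if total == 0:
            return day
    return None

# Example: a six-day feature-freeze window.
new_issues    = [5, 4, 3, 2, 0, 0]
closed_issues = [2, 3, 5, 4, 0, 0]
```

With this data, convergence occurs on day 2 (5 closed vs. 3 new) and the bounce to zero on day 3; real trackers chart the same signals over sprints or weeks.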
Tracking the success of development happens throughout the course of the project. High-quality projects are predicated on the usability of the solution. Software is never perfect; it is continually evolving with new releases and fixes. But if the software is easy to work with and serves its users, we have achieved high quality. As an additional note, most quality measures are taken during the regression and release phases of the project cycle.
3. Understand the particularities of the frameworks being used to build your solution.
Our QA process involves the creation of test cases, regression testing, defect management, acceptance testing, and story-based testing. Beyond this practiced method, it’s important to know whether the frameworks or technologies used to build your solution require extra effort in a particular project phase. For instance, native iOS and Android development tends to carry less regression risk, while PHP/LAMP development tends to produce more regression defects. An experienced QA team adjusts project timetables to match.
4. Maximize testing time even if a part of the project is behind schedule using simulation.
When creating mobile solutions, integration testing must be performed using several different components including third party software and hardware. During an effective release cycle, two main obstacles appear:
1. It is difficult to determine which component a defect is hiding in.
2. Development of one component falls behind schedule, and testing has to wait.
We resolve the first obstacle with sniffing tools, typically Fiddler and Charles, used to debug and sniff the traffic between components. They expedite defect resolution by showing clearly which component a defect is hiding in.
To explain the QA best practices that handle the second obstacle, let’s take an example. Say your development team is building a mobile app that will talk to a server your client is tasked with building and maintaining. Bottom-up or top-down integration testing (depending on which component is completed first) removes the need to wait for both components to be finished before testing can begin, making your testing process even more efficient.
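One common way to simulate the missing component is to stub the unfinished server behind the agreed API contract, so mobile-app integration tests can start immediately. The sketch below uses Python's standard library; the `/api/user` endpoint and its payload are hypothetical stand-ins for whatever contract the teams agreed on:

```python
# A stub of the client's unfinished server, serving canned responses that
# match the agreed API contract. Endpoint and payload are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubServerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/user":
            body = json.dumps({"id": 1, "name": "Test User"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def start_stub(port=0):
    """Start the stub on a free local port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), StubServerHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# The mobile client (or its test harness) talks to the stub exactly as it
# would to the real server.
server = start_stub()
url = "http://127.0.0.1:%d/api/user" % server.server_port
user = json.load(urlopen(url))
server.shutdown()
```

When the real server ships, the same integration tests run against it unchanged; any divergence points at a contract mismatch rather than a client bug.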
5. Automate to save sanity.
As those versed in the QA process know, automation affords confidence that when you release new components of your solution, they will not “break” existing components that are already developed and functioning properly.
When testing in an Agile environment, new features are implemented every sprint, increasing the amount of regression testing needed. When the system is growing and testing is manual, there are two options:
- Keep high regression coverage and increase the time for testing
- Keep the same amount of time spent on testing and reduce the regression coverage.
With automated regression testing, there is no need to reduce testing coverage to expedite a release timeline. Automated tests are fast and can be run frequently, which is cost-effective for software products with a long maintenance life.
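A regression suite in this spirit can be as simple as a set of fast, repeatable checks run on every build. Here is a minimal sketch using Python's `unittest`; the `apply_discount` function is a hypothetical stand-in for real application logic:

```python
# A small automated regression suite run on every build, so new work cannot
# silently break existing behavior. apply_discount is illustrative only.
import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

# Run the suite programmatically (e.g. from a CI job).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTests)
)
```

Because the suite is cheap to rerun, coverage never has to be traded away against release deadlines; the same pattern scales up through tools like Selenium or Appium for UI-level regression.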
A good automation testing strategy begins with evaluating automation tools and analyzing the return on investment of automated tests. Then comes the development of a framework based on the goals that need to be achieved. Example goals might be:
- tests that manual QAs can run and maintain
- automation coverage achievable on very short deadlines
- automated tests reusable across different clients
- and many more…
Here are some of our favorite tools broken down by the type of automation they help to complete:
Web automation: Selenium WebDriver and IDE, CodedUI (Visual Studio), Sikuli
Mobile automation: Appium, Robotium, Calabash
Behavior driven testing: SpecFlow, Cucumber
Services automation: SoapUI, HTTPWebRequest
Performance: JMeter, LoadUI, Visual Studio
6. Take into consideration what is important for your app when you prioritize QA time.
Beyond just team size and deadline, the purpose of your app determines how much testing it should undergo.
Understand the necessary quality levels and determine the critical areas of the software: the parts that are most frequently used, most visible, and most important to how the software is used.
Prioritize testing in these areas, and pay extra attention to parts that have changed frequently, have had many defects in the past, or are complex enough to hide risks.
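This kind of risk-based prioritization can be made explicit with a simple weighted score. The sketch below is a hypothetical illustration: the weights, factor names, and area ratings are made up for the example, not a published formula:

```python
# Scoring software areas so the riskiest get testing time first.
# Weights and ratings (0-5 scales) are illustrative assumptions.

def risk_score(usage_freq, recent_changes, past_defects, complexity):
    """Weighted sum; a higher score means the area deserves more QA time."""
    return 3 * usage_freq + 2 * recent_changes + 2 * past_defects + complexity

areas = {
    "checkout": risk_score(usage_freq=5, recent_changes=4, past_defects=3, complexity=4),
    "login":    risk_score(usage_freq=5, recent_changes=1, past_defects=1, complexity=2),
    "settings": risk_score(usage_freq=2, recent_changes=1, past_defects=0, complexity=1),
}

# Highest-risk areas first: this ordering drives the test plan.
priority = sorted(areas, key=areas.get, reverse=True)
```

Even a rough model like this makes the trade-off visible to the whole team: when testing time is cut, everyone can see which areas lose coverage and why.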