From simple daily chores like setting your phone’s alarm before bed to colossal undertakings like launching a rocket into space, developing the software behind it all is no walk in the park. Laying out the technological web of systems and ensuring the program fulfills its ultimate purpose means knowing the sheer scope and volume of the assignment at hand. But how do you even begin to size it up? When do you determine the effort, schedule, staffing, and the myriad of other project-related metrics? And how do you then grasp the magnitude and extent of what you are about to carry out? It’s an open window onto a star cluster of variables, and uncertainty about any of them can leave anyone hanging in the air.
The solution to this overwhelming knot of unknowns is planning.
Meeting today’s volume of demand for software services takes more than a handful of steps. Cutting corners to breeze through a project’s goals, deliverables, tasks, costs, and deadlines is not a viable option either; the price for that is far too high, and a much more thorough approach is called for. Then again, the first strides toward final-day success are fairly straightforward. The first stepping stone is estimation.
Evaluation and Planning at the Heart of the Software Development Cycle
Although there is no pre-determined, uniform methodology for defining the estimation of each and every project, the general rule is that its first two steps – evaluation and planning – are your bare minimum essentials. So how does that work?
By definition, estimation means finding an estimate, or approximation, of a value that, even if genuinely hypothetical, provides a much-needed reference point for a certain purpose – be it cost, investment, scheduling, or any other figure tangible to the success of the project. It can be carried out regardless of the completeness, certainty, or stability of the input data. Keep in mind that the estimate will vary with the specifics of each project, but it ultimately sets the direction for the development of the entire project.
Formulating such an estimate to kick-start the testing process means making an educated guess at a figure reasonably close to the eventual value. Your safest bet for determining it is the pool of project information at your disposal. Before immersing yourself in any further project evaluation, however, it is paramount that whoever performs the testing has a justified, fair assessment of all the project phases – no workload is sweeter than one where you are familiar with every step of the way. Once that is guaranteed, general QA practice says you can proceed with both manual and automated testing. Then, as the cloud of unknowns slowly clears, the automation estimation enters its step-by-step life cycle of factors that help calculate the project’s outcome and feasibility.
The Six Checkpoints in the Test Automation Estimation
1) Scope of the Automation
Hold off on manipulating the application just yet – the first task is to find the right test cases for automation. We do that by following the general rule of thumb: split the application into smaller chunks, or modules, and break those down further into sub-modules. Each module then clearly differs from the rest in its individual functionality. This is a de facto ‘divide and rule’ strategy, put to use to help assess each module and pinpoint all the test cases fit for automation.
On a related note, a realistic estimate can be provided based on already defined test suites. But don’t turn a blind eye here, as their final selection should still undergo scrutiny. It is often the case that even QAs mistakenly select inappropriate test cases for automation, and such faults can be costly – they may result in the failure of the entire automation testing project.
With no ready-made guidelines for a standard procedure of picking the right test cases, producing a good automation solution can be a grueling objective. The deciding factor that tips the scales between successful and unsuccessful automation is the Return on Investment (ROI) coefficient. To calculate it, examine all test cases, analyze each of them carefully, and apply the ROI formula, which weighs cost savings, increased efficiency, and quality. Do that and you will be left with only the top-scoring cases – the candidates for your automation.
Here are further step-by-step clues that make identifying these cases – the ones bound to indicate a positive ROI – even more trouble-free:
Step #1 – Keep in check the parameters on which you will base your judgment of a test case as a candidate for automation. These will normally vary with your AUT (Application Under Test) and be affected by the following considerations:
Cases executed with different browsers, devices, environments, sets of users or data
Cases that have any dependencies or complex preconditions
Cases that require special data and complex business logic
Cases with a low or high frequency of execution
Step #2 – Once the test case parameters have been accounted for, the next step is to break your AUT into smaller functional modules. Remember – we split them up to better assess their integrated functionalities. Based on these parameters, the output is the set of identified test cases we deem fit for automation. Naturally, this list of test cases will vary with the purpose, functions, and overall goal of each project.
[Table: per-module analysis of test cases against the Step #1 parameters, each case marked Y (Yes / high frequency) or N (No / low frequency)]

The table above illustrates how the data for each test case is verified. We analyze whether a given test case meets the set of requirements and mark each one deemed appropriate for automation. This procedure is applied to all modules.
Step #3 – We then consolidate, per module, all the test cases that are ready for automation. This gives us better grouping of modules and better test case visibility, and its purpose is to quantify the measurements and establish the rough estimate that concludes this manual analysis.
Step #4 – Finally, we take these estimates into greater detail and lay them out in a schedule. Depending on the level of information about each AUT and its capabilities, we can then determine, in rough approximation, the execution time for any given test case.
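The four steps above can be sketched as a simple scoring pass over the test inventory. Everything below – the case names, fields, and thresholds – is a hypothetical illustration, not a standard formula:

```python
# Sketch: flag test cases as automation candidates using the Step #1
# parameters (execution frequency, configuration variants, preconditions).
# All names, fields, and thresholds are hypothetical examples.

def is_candidate(case):
    """A case qualifies if it runs often or across many configurations,
    and is not blocked by fragile preconditions."""
    repetitive = case["runs_per_cycle"] >= 3 or case["browser_variants"] > 1
    return repetitive and not case["fragile_preconditions"]

cases = [
    {"id": "login_valid", "runs_per_cycle": 5, "browser_variants": 3,
     "fragile_preconditions": False, "est_minutes": 4},
    {"id": "captcha_flow", "runs_per_cycle": 5, "browser_variants": 1,
     "fragile_preconditions": True, "est_minutes": 6},
    {"id": "one_off_report", "runs_per_cycle": 1, "browser_variants": 1,
     "fragile_preconditions": False, "est_minutes": 10},
]

candidates = [c for c in cases if is_candidate(c)]
manual_minutes = sum(c["est_minutes"] for c in candidates)
print([c["id"] for c in candidates])  # cases that move on to ROI analysis
print(manual_minutes)                 # rough manual-execution baseline
```

The consolidated list and the summed execution time are exactly the Step #3 and Step #4 outputs, just computed rather than tabulated by hand.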
At last we have the right cases at our disposal, and it’s high time to calculate the ROI for each one. Remember, though, that a case should only go forward if its ROI is positive and it meets the project’s specifications.
But conducting this measurement with a simple ROI calculator has one major flaw. It takes the hours saved and multiplies them by the hourly rate – measuring the savings from displaced manual effort – while the missing piece remains the cost of production. Why does that matter? Because of what it factors in: money, effort, human and technological resources, time, risks incurred, delivery dates, and other considerations. Automation isn’t free, and the following attributes also help determine your ROI:
- Ownership and license cost of the automation tools.
- Approximate time to develop and maintain the scripts.
- Approximate time to analyze the results.
- Approximate time and cost to equip the resources.
- Approximate time for setting up test environments.
- Time for developing from scratch or customization of an existing test framework.
- Handling of management overheads.
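A minimal ROI sketch that folds the production costs listed above into the calculation might look like this – every figure is a made-up placeholder, not project data:

```python
# Sketch: ROI that accounts for the investment side, not just hours saved.
# All numbers below are hypothetical placeholders.

def automation_roi(hours_saved_per_cycle, cycles, hourly_rate,
                   tool_license, script_dev_hours, maintenance_hours,
                   env_setup_hours):
    """ROI = (gain - investment) / investment."""
    gain = hours_saved_per_cycle * cycles * hourly_rate
    investment = (tool_license +
                  (script_dev_hours + maintenance_hours + env_setup_hours)
                  * hourly_rate)
    return (gain - investment) / investment

roi = automation_roi(hours_saved_per_cycle=40, cycles=12, hourly_rate=50,
                     tool_license=2000, script_dev_hours=200,
                     maintenance_hours=60, env_setup_hours=40)
print(round(roi, 2))  # positive -> the suite is worth automating
```

A case with a negative result here is exactly the kind of candidate the section above tells you to drop, however tempting it looked on frequency alone.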
2) The Complexity of the Application
A good reference point for determining the complexity of any given application is its size. We estimate that by considering the complexity of the AUT, the test scope, the business logic implemented in the software, and the number of features the application handles. These variables indicate the level of effort needed to automate the testing: as you may have guessed, the larger and more elaborate they are, the more time-consuming and complex the entire automation process becomes.
But how do we define what constitutes a small, medium, or large application? Is there a guideline to help classify our system? The closest thing to certainty is looking at the size and number of the test suites that are part of our automation process, regardless of the type of AUT, and at how complex the comprising modules and functional elements are.
Going by the character of the suites, 300 to 500 test cases to automate may be considered a relatively small application, while an application with 1,500 cases could be assumed comparatively complex. Yet the factors mentioned above – test scope, business logic, and number of features – can render these estimations misleading, since the considerations for different applications vary greatly and don’t allow for one uniform judgment criterion. In any case, a greater number of modules and functional elements means a greater amount of case automation, and therefore a demand for more time.
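Using the indicative counts above as rough cut-offs, a sizing helper could be as simple as the sketch below; the thresholds come from the article’s examples and the “medium” band is an assumption filled in between them:

```python
# Sketch: rough application sizing by automated-case count, using the
# article's indicative thresholds (300-500 = small, ~1500 = complex).
# The "medium" band between them is an assumed interpolation.

def size_by_case_count(case_count):
    if case_count <= 500:
        return "small"
    elif case_count < 1500:
        return "medium"
    return "large"

print(size_by_case_count(400))   # small
print(size_by_case_count(900))   # medium
print(size_by_case_count(1500))  # large
```

As the surrounding text warns, this label is only a starting point – scope, business logic, and feature count can shift a “small” suite into large-effort territory.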
To evaluate the application against such approximations, we are obliged to get acquainted with all AUT components, functions, requirements, dependencies, and features before passing the final verdict. The review should also cover the expected results of each case and the relative need to optimize them for the demands of our project. This list of preparatory actions includes simple executions and assertions against the application – navigation, dropdown taps, mouseovers, and typing – as well as more manifold actions such as connecting to a database, API calls, processing Flash or AJAX, interacting with tables, Google Maps integrations, the use of Oracle Forms, and more.
With all these actions in your arsenal, you then evaluate the development time for each by spreading them across the test cases, arriving at a crude estimate of the time needed to automate all of them. The tests that follow will take less development time each, thanks to code reuse and the experience you accumulate along the way.
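The reuse effect described above can be modeled with a simple decay on per-case scripting cost; the base hours, discount rate, and floor below are invented assumptions, not measured values:

```python
# Sketch: total scripting time with a reuse discount -- later tests get
# cheaper as shared helpers and page objects accumulate.
# The base cost, discount, and floor are hypothetical figures.

def total_dev_hours(base_hours_per_case, case_count, reuse_discount=0.8):
    """Each successive case costs reuse_discount times the previous one,
    floored at 30% of the base cost (some work is never reusable)."""
    total = 0.0
    cost = base_hours_per_case
    for _ in range(case_count):
        total += cost
        cost = max(cost * reuse_discount, base_hours_per_case * 0.3)
    return total

print(round(total_dev_hours(4.0, 10), 1))  # well under the naive 40 hours
```

The point of the model is the gap between the naive estimate (cases × base hours) and the discounted total: that gap is the value of code reuse the paragraph refers to.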
3) Test Automation Tools
The bulk of a successful test automation process relies heavily on selecting the most appropriate testing tools. They need to tick the boxes of being both relevant to the project and available on the market. Sifting through all the tools to find the most relevant ones for the task at hand may prove a tad too time-consuming, but putting long hours into this one-time identification exercise benefits your project in the long term. Here are the criteria to help you finalize your tool selection.
- Supported platforms – Be it Windows, macOS, Unix, Linux, web, mobile or yet another platform, your tool should ideally integrate fully with the software on which your system runs. The execution of your tests depends entirely on this decision.
- Testing requirements – What types of testing does your project really necessitate? From load testing alone to a combination of Web UI, Mobile, and API testing, it’s always wise to pick a tool or a combination of tools that support the widest range of test types.
- Project environment and technologies – Oftentimes the selected tool may struggle to identify all objects within the application. That is why your go-to tool should support all elements of the AUT, regardless of the technology used in the project’s creation.
- Scripting languages – The greater and more flexible the scripting options, the more freedom testing teams have to write scripts in the programming languages they favor.
- Support – Is the tool actively maintained, upgraded, and supported? An attentive community for sharing and resolving problems may just be the backing you need.
- Usability – Find out whether your tool is reliable, robust and efficient. Preferably, it should also be easy to understand and work with from the starting line of your testing, as more advanced functionalities will soon be needed.
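One common way to turn the criteria above into a decision is a weighted scoring matrix; the weights and the 0–5 ratings below are hypothetical and should reflect your own project’s priorities:

```python
# Sketch: weighted scoring of candidate tools against the selection
# criteria above. Weights and per-tool ratings are hypothetical.

CRITERIA_WEIGHTS = {
    "platform_support": 3,
    "test_types": 3,
    "aut_technology": 2,
    "scripting_languages": 2,
    "support": 1,
    "usability": 1,
}

def weighted_score(ratings):
    """ratings: criterion -> 0..5 score for one tool."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

tool_a = {"platform_support": 5, "test_types": 4, "aut_technology": 4,
          "scripting_languages": 5, "support": 3, "usability": 2}
tool_b = {"platform_support": 3, "test_types": 3, "aut_technology": 5,
          "scripting_languages": 2, "support": 5, "usability": 5}

print(weighted_score(tool_a), weighted_score(tool_b))  # higher score wins
```

The design choice here is the weighting: a team whose AUT uses an exotic UI technology would weight `aut_technology` far higher, which can flip the outcome.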
4) Implementing the Framework
As you can probably imagine, a single tool cannot build the entire automation solution. The daunting number of requirements on the way to that solution – writing test automation scripts, reading and inputting data from files, creating reports, tracking test results, logging, Setup and TearDown scripting, scheduled test execution, test environment management – requires a framework: a structure that accounts for all these parameters. Think of the framework as a puzzle whose jigsaw pieces need to be put together in a particular order. No wonder it is time-consuming to structure in itself. The characteristics it should have for any project’s needs are as follows:
- Portable, extendable and reusable across and within projects.
- Ability to be loosely coupled with the test tool wherever possible.
- Logging and error feedback that provide easier debugging and customized script reporting.
- Smooth data-driven testing of the scripts, with loose coupling between data and test logic.
- Easier control and configuration of the test scope for every test run.
- Simple integration of the test automation components with test management, defect tracking and configuration management tools.
Note that these variables range widely from framework to framework; their size and scope depend largely on the nature of the application, its size and complexity. A helpful tip is to shape the framework to serve your initial needs, then customize and fine-tune it as the project’s demands grow.
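To make one of the traits above concrete – controlling the test scope of every run – here is a minimal sketch of config-driven test selection. The registry, tags, and suite names are all hypothetical:

```python
# Sketch: controlling the test scope per run from a config, one of the
# framework traits listed above. Tags and test names are hypothetical.

RUN_CONFIG = {"tags": {"smoke"}, "exclude": {"flaky"}}

TEST_REGISTRY = [
    {"name": "test_login", "tags": {"smoke", "auth"}},
    {"name": "test_checkout", "tags": {"regression"}},
    {"name": "test_search", "tags": {"smoke", "flaky"}},
]

def select_tests(registry, config):
    """Keep tests matching any requested tag, minus excluded ones."""
    return [t["name"] for t in registry
            if t["tags"] & config["tags"]
            and not t["tags"] & config["exclude"]]

print(select_tests(TEST_REGISTRY, RUN_CONFIG))  # the scoped test run
```

In practice this is what mature tools already offer (e.g. marker-based selection in test runners); the sketch only shows why the framework, not individual scripts, should own that decision.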
Also, take the duration of the project into account. Whether a short- or long-term solution is on the horizon depends on how fast you are pushed to produce results. Rushing into a quick-results strategy has its merits if your commitment calls for a short deadline, but it may require greater spending on maintenance and refactoring of the framework in the long run. Invest in an up-front design – framework architecture, third-party libraries, test structure, and design pattern selection – and you are assured long-term productivity, minimizing maintenance and rework on extensions. There is no right or wrong approach as long as the goals and circumstances of the project are clearly outlined.
5) Learning & Training
Tools can also be split into two further categories, based on the learning or training effort they demand. The two types fall on the track of either a robust automation tool like Selenium WebDriver or a record-and-playback tool like Selenium IDE. This grouping can be made for any testing tool; what most often decides in favor of one or the other is the tool’s capabilities and the level of skill required to use it fully.
The second type – record-based tools – is by default considerably quicker to get started with, thanks to the record-and-playback functionality. It is also effortless to install and easy to learn, which explains its suitability for simpler, short-term testing projects. On the con side, these tools cannot be used for more complex application automation, multi-browser and parallel testing, combining different types of testing, and so on.
Robust automation tools, or code-based tools, on the other hand require substantially greater effort to engage with: they ask for an understanding of a general-purpose programming language. Yet what makes them particularly practical is native support for interacting with almost all browsers, performing conditional and looping operations, and covering a wide range of test types and platforms. You generally benefit from them for advanced test scripting in long-term projects.
Take automating with Java, for instance. The greater your know-how of the complementary technologies – building (Ant/Maven), test structuring (TestNG/JUnit), logging (Log4j) or reporting (ReportNG) – the less time it takes to get a grasp of that list of utilities and plug-ins on top of mastering the framework’s complexities.
6) Setting Up the Environment
There is a cluster of configurable environments to pick and choose from as we delve into the specifics of our project. Selecting the most appropriate one is a continuous-delivery process called environment provisioning. The concept is simple: you cannot build, test and deploy application code without an underlying application environment.
Three main areas help determine the environment:

- Reliable infrastructure – An environment that is well defined and fairly predictable is one whose infrastructure requires no routine maintenance. Assume for a moment a hypothetical environment defined by test beds, or slots, configured as virtual machines: the sheer cost of keeping one ready for use at all times may prove too high, mainly due to the resource expenses on the VMs – be it hardware or money.
- CI/CD integration – A less demanding setup integrates the tests with CI/CD.
- Setup scripts – Facilitated resource utilization and optimization allows setup scripts to autonomously create and configure the test environment before each test cycle is executed.
The essential tip here is this: take a moment to sit down with your team, review your test process, and define exactly what your test automation needs. The planning, design, and preparation of all the environments in use also calls for managing that team – namely, designating the DevOps engineers, system admins, developers, and QAs. Once established, the team should bear in mind yet another set of aspects of environment provisioning, including:
- Test data preparation, creation and loading within the test environment.
- Creating proper files for tracking the environments and credentials used by your tests.
- Preparing Setup and TearDown scripts according to the environment’s needs.
- Developing configuration management scripts, if needed.
- Integrating automated testing with a continuous integration server, if needed.
Now that all the stages toward the automation solution have been completed, you need to ensure it keeps working. What’s more, given today’s dynamic technological advancement, client demand may call for new features and increased efficiency at any point, which in turn requires altering a number of the project’s specifications. These alterations can range from marginal to sizeable. In any case, you should take preemptive measures to meet such requests without breaking or disturbing your automation scripts.
To ensure that, we need to preserve the characteristics of our framework and tests – namely their reusability, scalability, and resistance to change within the AUT. One instance is UI tests, which should be developed to withstand minor changes in the AUT’s front end. Then again, some tests are inherently complex, their structure consisting of a dozen steps and multiple assertions, which asks you to break the test into smaller modules. Labeling these tests with corresponding names also helps you navigate the process and brings new members of the test development team up to speed more expeditiously. Including code review practices has high benefit value too, as it helps identify automation issues that emerge as inconsistencies in test results, stability and the level of maintenance effort.
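Breaking a long scenario into smaller, labeled modules, as suggested above, can be sketched like this; the step names and scenario logic are hypothetical:

```python
# Sketch: composing a long scenario from small, named steps so a failure
# points at the exact step. Step names and logic are hypothetical.

def step(name):
    """Decorator that tags a function as a named test step."""
    def wrap(fn):
        fn.step_name = name
        return fn
    return wrap

@step("open product page")
def open_product(state):
    state["page"] = "product"

@step("add to cart")
def add_to_cart(state):
    state["cart"] = ["sku-1"]

@step("assert cart badge")
def check_badge(state):
    assert len(state["cart"]) == 1

def run_scenario(steps):
    state, executed = {}, []
    for s in steps:
        s(state)                  # an exception here names the failing step
        executed.append(s.step_name)
    return executed

print(run_scenario([open_product, add_to_cart, check_badge]))
```

Because each step carries its label, a failure report can name the exact module that broke instead of a line number deep inside one monolithic test.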
During each release cycle, it is paramount that the new functionalities added to the AUT are reviewed so the automation scripts themselves can be updated, analyzed and refactored. This maintenance is integral to keeping the scripts effective. Being familiar with the characteristics of your project, you should also estimate the likelihood that existing features will be implemented, updated or suspended, given your project experience.
All of these suggestions should be discussed at a very early stage – preferably before the framework design and script development begin. Spend an appropriate amount of time on planning and design early on, and you will avoid the far less satisfying time spent refactoring when the project moves on to new rework specifications. Do this and you will secure a more accurate and precise estimation.
What We Learned
Unfortunately or not, there is no solution that applies to each and every automation estimation effort. Our only point of reference is observing how the global development community, in its sub-groups, deals with its projects as members share and exchange experiences. Thus, there is no crude estimate for the time you will spend planning your ideas or compiling the code and structuring the tests. Why? Because unlike, say, assembly-line manufacturing, with its fairly monotonous regime devoid of change, here we deal with abstract software concepts that take an exceptional amount of adjustment and modification to meet any client’s set of requests.
Once again, let’s go over the expanded list above with a summarized, step-by-step checklist for a realistic effort approximation in automation estimation:
- Knowledge transfer
- Test automation environment
- The chosen tool’s compatibility with AUT
- Test automation framework
- Test cases to be automated
- Size of the automation
- Test data requirements
- Types of testing required
- Supporting automation
- Version control and test automation builds