Peer Into the Initiation of Automation Estimation

Find out more about the seven steps of the testing and development process in the automation estimation cycle.


From simple daily chores like setting your phone’s alarm before you go to sleep to colossal undertakings like launching a rocket into space, developing the software to support it all is no walk in the park. To lay out the technological web of systems and ensure each fulfills its purpose, you first need to know the sheer scope and volume of the assignment at hand. But how do you even begin to size it up? When do you determine the effort, schedule, staffing, and the myriad other project-related metrics? And how do you grasp the magnitude and extent of what you’re about to carry out? It’s an open window onto a star cluster of variables, and uncertainty about any of them can leave you hanging in the air.

The solution to this overwhelming knot of unknowns is planning.

Meeting today’s demand for software services takes more than a handful of steps, and cutting corners to breeze through a project’s goals, deliverables, tasks, costs, and deadlines is not a viable option. The price for that is far too high; a much more thorough approach is required. Fortunately, the first strides toward final-day success are fairly straightforward, and the first stepping stone is estimation.

Evaluation and Planning at the Heart of the Software Development Cycle

Although there is no predetermined, uniform methodology for estimating each and every project, the general rule is that its first two steps – evaluation and planning – are your bare-minimum essentials. So how does that work?

At its core, estimation means finding an estimate, or approximation, of a value that, however hypothetical, provides a much-needed reference point for a given purpose – be it cost, investment, scheduling, or any other figure relevant to the success of the project. That can be done regardless of the completeness, certainty, or stability of the input data. Keep in mind that the estimate will vary with the specifications of each project, but it ultimately sets the direction for the development of the entire project.

Formulating such an estimate to kick-start the testing process means making an educated guess at a figure that approximates the eventual value. Your safest bet for determining it is the pool of project information at your disposal. Before immersing yourself in any further project evaluation, however, it is paramount that whoever performs the testing has a fair, well-founded assessment of all the project phases. No workload is sweeter than one where you are familiar with every step of the way. Once that is guaranteed, general QA practice says you can proceed with both manual and automation testing. Then, as the cloud of unknowns slowly clears, the automation estimation enters its step-by-step life cycle of factors that help gauge the project’s payoff and feasibility.

The Seven Checkpoints in Test Automation Estimation

1) Scope of the Automation

Hold on before you start manipulating the application – the first task is to find the right test cases for automation. We do that by following the general rule of thumb: split the application into smaller chunks, or modules, and break those down further into sub-modules. It quickly becomes apparent that each module differs from the rest in its individual functionality. This is a de facto ‘divide and rule’ strategy that helps us assess each module and pinpoint all the test cases that are a fit for automation.

On a related note, a realistic estimate can be produced from already-defined test suites. But don’t turn a blind eye here: the final selection should still undergo scrutiny. Even experienced QAs sometimes select inappropriate test cases for automation, and such mistakes can be critical – they could cause the entire automation testing project to fail.

With no ready-made guidelines for a standard procedure to determine the right test cases, producing a good automation solution can be a grueling objective to reach. The deciding factor that tips the scales between successful and unsuccessful automation is the Return on Investment (ROI) coefficient. To calculate it, carefully analyze each test case and apply the ROI formula, which weighs cost savings, increased efficiency, and quality. Do that, and you’ll be left with only the top-scoring results – the candidates for your automation.

Here are further step-by-step clues to make identifying these cases – the ones likely to show a positive ROI – even more trouble-free:

Step #1 – Track the parameters on which you will judge a test case as a candidate for automation. Normally, these vary by AUT (Application Under Test) and are affected by the following considerations:

  • Cases executed with different browsers, devices, environments, sets of users or data
  • Cases that have dependencies or complex preconditions
  • Cases that require special data and complex business logic
  • Cases with a low or high frequency of execution

Step #2 – Once the test case parameters have been accounted for, break your AUT into smaller functional modules. Remember – we split them up to better assess their integrated functionality. Based on these parameters, the output is the set of identified test cases we deem fit for automation. Naturally, this list will vary with the purpose, functions, and overall goal of each project.

(Table: each test case checked against the automation criteria)

Y – Yes / High frequency
N – No / Low frequency

The table above walks through an example of verifying the data for each test case. We analyze whether a given test case meets a certain set of requirements and mark each one deemed appropriate for automation. This procedure is generally applied to all modules.

Step #3 – Next, we consolidate all the automation-ready test cases for each module. This improves the grouping of modules and the visibility of the test cases, and it lets us quantify the measurements and establish a rough estimate that concludes this manual pass.

(Table: consolidated automation-ready test cases per module)

Step #4 – Finally, we refine these estimates in greater detail and lay them out as shown below. Depending on how much information we have about the AUT and its capabilities, we can then roughly approximate the execution time for any given test case.

At last, we have the right cases at our disposal, and it’s time to calculate the ROI for each one. Remember that a case should only move forward if its ROI is positive and it meets the project’s specifications.

But measuring this with a simple ROI calculator has one major flaw. It takes the hours saved and multiplies them by the hourly rate – capturing the savings from displaced manual effort – but the missing piece is the cost of production. Why does that matter? Because of what it factors in: money, effort, human and technological resources, time, risks incurred, delivery dates, and other considerations. Automation isn’t free, and the following attributes also feed into your ROI (a minimal calculation sketch follows the list):

  • Ownership and license cost of the automation tools.
  • Approximate time to develop and maintain the scripts.
  • Approximate time to analyze the results.
  • Approximate time and cost to equip the resources.
  • Approximate time for setting up test environments.
  • Time to develop a test framework from scratch or to customize an existing one.
  • Handling of management overheads.
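
To make the trade-off concrete, here is a minimal Java sketch of how such a calculation might be wired together. The attribute breakdown, hours, and rates are illustrative assumptions, not figures prescribed by this article:

public class AutomationRoi {

    // Savings from displaced manual effort over a period:
    // manual runs avoided * hours per manual run * hourly rate.
    static double manualEffortSavings(int runsAvoided, double hoursPerRun, double hourlyRate) {
        return runsAvoided * hoursPerRun * hourlyRate;
    }

    // Cost of production: tool licenses plus the hours spent on script
    // development, maintenance, result analysis, and environment setup.
    static double automationCost(double licenseCost, double devHours, double maintenanceHours,
                                 double analysisHours, double setupHours, double hourlyRate) {
        return licenseCost + (devHours + maintenanceHours + analysisHours + setupHours) * hourlyRate;
    }

    public static void main(String[] args) {
        double savings = manualEffortSavings(100, 4.0, 60.0);   // 100 regression runs avoided
        double cost = automationCost(2_000, 120, 40, 20, 30, 60.0);
        double roi = (savings - cost) / cost;                   // positive ROI -> keep the candidate
        System.out.printf("Savings: $%.0f, Cost: $%.0f, ROI: %.2f%n", savings, cost, roi);
    }
}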
2) The Complexity of the Application

A good reference point for determining the complexity of any given application is its size. We estimate that by weighing the complexity of the AUT, the test scope, the business logic implemented in the software, and the number of features the application handles. These variables indicate the level of effort required to test-automate. As you may have guessed, the larger and more elaborate they are, the more time-consuming and complex the entire automation process becomes.

But how do we define what constitutes a small, medium, or large application? Is there a guideline to help us classify our system? The only reliable answer lies in the size and number of the test suites in our automation scope. Regardless of the type of AUT, we also examine how complex its constituent modules and functional elements are.

Depending on the character of the suites, 300 to 500 test cases to automate may indicate a relatively small application, while an application with 1,500 cases can be assumed to be comparatively complex. Yet the factors mentioned above – test scope, business logic, and number of features – can render these estimates misleading, since considerations vary greatly between applications and don’t allow for one uniform judgment criterion. In any case, more modules and functional elements mean more cases to automate, and therefore a demand for more time.

To evaluate the application against such approximations, we must get acquainted with all AUT components, functions, requirements, dependencies, and features before reaching a final verdict. The review should also cover the expected results of each case and any need to optimize them to the demands of the project. This list of preparatory actions includes simple executions and assertions against the application – navigation, tapping dropdowns, mouseover, typing – as well as more involved actions such as connecting to a database, making API calls, processing Flash or AJAX, interacting with tables, Google Maps integrations, using Oracle Forms, and more.

With all these actions in your arsenal, you then evaluate the development time for each by spreading them across the test cases, arriving at a crude estimate of the time needed to automate them all. Later analysis will require less development time per test, thanks to code reuse and the experience you accumulate along the way.
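
As a rough illustration of spreading per-action development times across test cases, here is a small Java sketch. The action names, hour figures, and reuse factor are assumptions for demonstration only:

import java.util.Map;

public class EffortEstimate {
    public static void main(String[] args) {
        // Estimated hours to automate each action type for the first time.
        Map<String, Double> hoursPerAction = Map.of(
                "navigation", 0.5,
                "form input", 1.0,
                "database check", 2.0,
                "API call", 1.5);

        // How many test cases exercise each action type.
        Map<String, Integer> casesPerAction = Map.of(
                "navigation", 40,
                "form input", 25,
                "database check", 10,
                "API call", 15);

        double reuseFactor = 0.6; // later cases reuse code, costing ~60% of the first

        double total = 0;
        for (var e : hoursPerAction.entrySet()) {
            int cases = casesPerAction.get(e.getKey());
            // The first case pays full price; the rest benefit from reuse.
            total += e.getValue() + (cases - 1) * e.getValue() * reuseFactor;
        }
        System.out.printf("Crude development estimate: %.1f hours%n", total);
    }
}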

3) Test Automation Tools

Successful test automation relies heavily on selecting the most appropriate testing tools. They need to check two boxes: relevance to the project and availability on the market. Sifting through every tool to find the most suitable ones may prove time-consuming, but the long hours put into this one-time exercise benefit your project in the long term. Here are the criteria to help you finalize your tool selection:

  • Supported platforms – Be it Windows, macOS, Unix, Linux, web, mobile, or another platform, your tool should integrate fully with the software your system runs on. The execution of your tests depends entirely on this decision.
  • Testing requirements – What types of testing does your project actually need? From load testing alone to a combination of Web UI, mobile, and API testing, it’s wise to pick a tool, or a combination of tools, that supports the widest range of test types.
  • Project environment and technologies – A tool’s ability to identify all objects within the application is often obstructed. Your go-to tool should support every element of the AUT, regardless of the technology used to build the project.
  • Scripting languages – The more numerous and flexible the scripting options, the more freedom testing teams have to write scripts in their preferred programming languages.
  • Support – Is the tool actively maintained, upgraded, and supported? An attentive community for sharing and resolving problems may be just the backing you need.
  • Usability – Confirm that your tool is reliable, robust, and efficient. It should also be easy to understand and use from the starting line of your testing, as you will soon need its more advanced functionality.
4) Implementing the Framework

As you can probably imagine, a single tool cannot build the entire automation solution. The daunting number of requirements – writing test automation scripts, reading and inputting data from files, creating reports, tracking test results, logging, Setup and TearDown scripting, scheduled test execution, test environment management – calls for a framework: a structure that accounts for all these parameters. Think of the framework as a puzzle whose jigsaw pieces need to fit together in a particular order; no wonder it is time-consuming to structure. For any project’s needs, its characteristics are as follows (a minimal skeleton is sketched at the end of this section):

  • Portable, extendable, and reusable across and within projects.
  • Loosely coupled with the test tool wherever possible.
  • Logging and error feedback that make script debugging easier, plus customized reporting facilities.
  • Data-driven tests that feed scripts smoothly while staying loosely coupled from them.
  • Easy control and configuration of the test scope for every test run.
  • Simple integration of the test automation components with test management, defect tracking, and configuration management tools.

Note that these characteristics range widely from framework to framework; their size and scope depend largely on the nature, size, and complexity of the application. A helpful tip: shape the framework to serve your initial needs, then customize and fine-tune it as project demands grow.

Also take into account the duration of the project. Whether a short- or long-term solution is on the horizon depends on how fast you are pushed to produce results. Rushing toward quick results has its merits when your commitment calls for a short deadline, but it may mean spending more on maintenance and framework refactoring in the long term. Invest in an up-front design – framework architecture, third-party libraries, test structure, and design pattern selection – and you are assured long-term productivity, minimizing maintenance and rework on extensions. There is no right or wrong approach, as long as the goals and circumstances of the project are clearly outlined.
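
To picture how those puzzle pieces fit together, here is a minimal skeleton assuming Selenium WebDriver and TestNG (both discussed later in this article) are on the classpath. The URL, locators, and data values are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

import java.util.logging.Logger;

// Setup/TearDown, a data-driven test, and logging in one small frame.
public class LoginTests {
    private static final Logger LOG = Logger.getLogger(LoginTests.class.getName());
    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new ChromeDriver();              // swap in any WebDriver implementation
        driver.get("https://example.com/login");  // hypothetical AUT entry point
        LOG.info("Environment ready, browser started");
    }

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        // Data-driven input, loosely coupled from the test logic;
        // in a real framework this would be read from a file.
        return new Object[][] {
                {"alice", "correct-password", true},
                {"alice", "wrong-password", false},
        };
    }

    @Test(dataProvider = "credentials")
    public void loginBehavesAsExpected(String user, String password, boolean shouldSucceed) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        boolean dashboardShown = !driver.findElements(By.id("dashboard")).isEmpty();
        Assert.assertEquals(dashboardShown, shouldSucceed);
    }

    @AfterMethod
    public void tearDown() {
        LOG.info("Tearing down browser session");
        driver.quit();
    }
}

Setup and TearDown, data-driven input, logging, and assertions each land in their own slot – exactly the loose coupling the list above asks for.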

5) Learning & Training

Tools can also be split into two further categories based on the learning and training effort they demand: robust automation tools like Selenium WebDriver, and record-and-playback testing tools like Selenium IDE.

This grouping can be applied to any testing tool. What most often decides in favor of one or the other is the tool’s capabilities and the level of skill required to use it fully.

Creating tests with the second type – record-based tools – is by default considerably quicker thanks to the record-and-playback functionality. These tools are also effortless to install and easy to learn, which makes them appropriate for uncomplicated, short-term testing projects. On the con side, they cannot handle more complex application automation, multi-browser and parallel testing, combining different types of testing, and so on.

Robust automation tools, or code-based tools, on the other hand, require substantially greater effort to engage: they demand an understanding of a general-purpose programming language. What makes them particularly practical is native support for almost all browsers, for conditional and looping operations, and for a wide range of test types and platforms – the kind of logic sketched below. They generally pay off in advanced test scripting on long-term projects.
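
As a quick illustration, here is the kind of conditional and looping logic a code-based tool supports natively and a record-and-playback tool cannot express. The page and locators are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.List;

public class TableAudit {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/orders"); // hypothetical AUT page
            // Loop over every row of a results table...
            for (WebElement row : driver.findElements(By.cssSelector("table#orders tbody tr"))) {
                List<WebElement> status = row.findElements(By.cssSelector(".status"));
                // ...and branch on its content.
                if (!status.isEmpty() && status.get(0).getText().equals("FAILED")) {
                    System.out.println("Investigate order: " + row.getText());
                }
            }
        } finally {
            driver.quit();
        }
    }
}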

Take automating with Java, for instance. Mastering the framework’s complexities goes hand in hand with know-how in the complementary technologies – builds (Ant/Maven), test structure (TestNG/JUnit), logging (Log4j), and reporting (ReportNG). The more of these utilities and plug-ins you have a grasp of, the sooner you can put them to practical use.

6) Setting Up the Environment

There is a cluster of configurable environments to pick from as we delve into the specifics of our project. Selecting and standing up the most appropriate one is a continuous delivery practice called environment provisioning. The concept is simple: you cannot build, test, and deploy application code without an underlying application environment.
Three main areas help determine the environment:

  • Infrastructure
  • Configuration
  • Dependencies

A well-defined, fairly predictable environment is one whose reliable infrastructure requires no routine maintenance. Suppose for a moment that our hypothetical environment consists of test beds, or slots, configured as virtual machines. The cost of keeping one ready to use at all times may prove too high, mainly because of the resources the VMs consume – whether hardware or money. A less demanding setup integrates the tests with CI/CD. Setup scripts are another factor to account for: with good resource utilization and optimization, they can autonomously create and configure the test environment before each test cycle executes.

The essential tip here: sit down with your team and review your test process, and define exactly what your test automation needs. Planning, designing, and preparing all the environments in use also calls for managing that team – knowing who the designated DevOps engineers, system admins, developers, and QAs are. Once established, the team must bear in mind another set of environment provisioning aspects (a small setup sketch follows the list), including:

  • Test data preparation, creation and loading within the test environment.
  • Creating proper files for tracking the environments and credentials used by your tests.
  • Preparing Setup and TearDown scripts according to the environment’s needs.
  • Developing configuration management scripts, if needed.
  • Integrating automated testing with a continuous integration server, if needed.
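
As a small sketch of the Setup/TearDown and environment-tracking items above, here is one way to load environment details with TestNG suite hooks. The file name and property keys are assumptions; real credentials should come from a secret store:

import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class EnvironmentSetup {
    static Properties env = new Properties();

    @BeforeSuite
    public void provisionEnvironment() throws IOException {
        // Track environments and credentials in one config file per environment.
        try (FileInputStream in = new FileInputStream("qa-environment.properties")) {
            env.load(in);
        }
        System.out.println("Testing against: " + env.getProperty("base.url"));
        // Test data preparation and loading would happen here, if needed.
    }

    @AfterSuite
    public void tearDownEnvironment() {
        // Clean up any data or services the suite created.
        System.out.println("Environment teardown complete");
    }
}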
7) Maintenance

Now that all stages of building the automation solution are complete, you need to keep it continuously working. What’s more, given today’s rapid technological advances, client demand may call for new features and greater efficiency at any point, which in turn requires altering some of the project’s specifications. These alterations can range from marginal to sizeable. In any case, you should take preemptive measures to meet such requests without breaking or disturbing your automation scripts.

To ensure that, we need to preserve the characteristics of our framework and tests – namely their reusability, scalability, and resistance to change within the AUT. UI tests, for instance, should be developed so they withstand minor changes in the AUT’s front end; one common way to achieve this is the Page Object pattern, sketched below. Some tests are inherently complex, composed of a dozen steps and multiple assertions, and should be broken into smaller modules. Giving tests descriptive names also helps you navigate the process and brings new members of the test development team up to speed more quickly. Code review practices are highly beneficial too, helping identify automation issues that surface as inconsistencies in test results, stability, or maintenance effort.
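
Here is a minimal sketch of the Page Object pattern mentioned above, with hypothetical locators and page. Because a test calls logInAs() instead of touching raw locators, a front-end change only requires updating this one class:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    // Locators live here; if the front end changes, update them once.
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Encapsulates the whole interaction behind one intention-revealing call.
    public void logInAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}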

During each release cycle, it is paramount to review the new functionality added to the AUT so that the automation scripts are themselves updated, analyzed, and refactored. This maintenance is integral to keeping the scripts effective. Being familiar with the characteristics of your project, you should also estimate the likelihood that existing features will be extended, updated, or suspended, drawing on your project experience.

All of these suggestions should be discussed at a very early stage – preferably before framework design and script development begin. Spend enough time planning and designing early on and you will avoid the far less satisfying hours of refactoring when the project shifts to reworked specifications. Do this, and you will secure a more accurate and precise estimate.

What We Learned

Unfortunately or not, there is no one solution that applies to each and every automation estimation effort. Our only point of reference is observing how the global development community, in all its sub-groups, handles its projects as members share and exchange their experiences. So there is no ready-made estimate for the time you will spend bringing your ideas under the umbrella of planning, or on compiling code and structuring tests. Why? Because unlike, say, assembly-line manufacturing, with its fairly monotonous regime devoid of change, here we deal with abstract software concepts that take an exceptional amount of adjustment and modification to meet any client’s set of requests.

Once again, let’s condense the expanded list above into a summarized, step-by-step checklist for realistically approximating the automation estimation effort:

  • Knowledge transfer
  • Test automation environment
  • The chosen tool’s compatibility with AUT
  • Test automation framework
  • Test cases to be automated
  • Size of the automation
  • Test data requirements
  • Types of testing required
  • Supporting automation
  • Version control and test automation builds
  • Reviews
  • Maintenance

Tags
  • Quality Assurance
  • Development