POSTED : January 13, 2018
BY : Concentrix Catalyst
Categories: Automation & Operations, Intelligent Automation
In this article, I describe a test automation metric I invented that can be used to address some key operational challenges.
Test automation initiatives are super exciting, especially when they are aligned with DevOps principles or iterative development processes. We have seen how SDETs and Automation Engineers can quickly build a base framework and start automating test scripts. Automation managers, however, struggle to allocate an operating budget for test automation and, more importantly, to track what has been invested on a real-time, value-driven basis.
Needless to say, lightweight open-source tools and frameworks like Selenium and Protractor, built on the principles of low maintenance and high reusability, have transformed test automation. Newer commercial products like HP's LeanFT and CA's DevTest have joined the bandwagon, offering the ability to build rapid automation scripts that are highly reusable and carry less of a maintenance nightmare.
Over the last few years, I have been managing the delivery of some key automation initiatives that share a few recurring operational challenges: Which tests are automated, and which are currently being executed? How many tests are automated, and how many are executed? How often are the automated tests that were added to the baseline test suite regressed? And how do I value my ONGOING automation investment before my release (well before measuring ROI)?
Drilling deeper, the biggest challenge turned out to be keeping the tests that are executed in step with the tests that are built.
For example, say you have X test cases to automate in a sprint or iteration. As the team begins automating, chances are that the tests are not yet in a form that can be executed as a batch on a periodic basis, for various reasons such as dependencies between tests. By the time you have built a reasonable number of tests to execute in batch, the focus tends to split between monitoring batch execution results, fixing the failed tests, and keeping up with new test development.
While you have a backlog of test cases being cranked out as test scripts, you also have a growing backlog of tests that cannot run: some lack the right test data, some lack the correct test configuration, some are missing a test environment spec, some fail in ways that must be distinguished from application defects, and some fail for reasons you simply cannot explain.
As this maintenance backlog grows, there are few metrics for tracking what was built, what was executed, and which tests need tweaking or maintenance. Typically, I ask SDETs not to call a test script baselined unless it has executed successfully 10 times in both run and debug modes. Even with such reliability measures, you hit roadblocks like test data and test environments. I have also come across SDETs claiming that some tests need developer clarification.
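To make that gate concrete, here is a minimal sketch of how it could be tracked. The class, field names, and the interpretation of "10 times" as consecutive successes are my illustrative assumptions, not part of any specific framework:

```python
# Minimal sketch of a baselining gate: a script counts as baselined only
# after 10 successful executions in BOTH run and debug modes. Names and
# the consecutive-success interpretation are illustrative assumptions.
from dataclasses import dataclass, field

BASELINE_THRESHOLD = 10  # successful executions required per mode

@dataclass
class ScriptRecord:
    name: str
    streaks: dict = field(default_factory=lambda: {"run": 0, "debug": 0})

    def record_result(self, mode: str, passed: bool) -> None:
        # A failure resets the streak for that mode.
        self.streaks[mode] = self.streaks[mode] + 1 if passed else 0

    @property
    def baselined(self) -> bool:
        return all(n >= BASELINE_THRESHOLD for n in self.streaks.values())
```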
To overcome this, I invented a metric called the Built to Execute ratio, denoted Built:Execute and expressed either as raw test case counts or as percentages. Assume you have 100 test cases for the first release, delivered over 5 sprints, with built and executed counts tracked cumulatively across the sprints.
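Since per-sprint numbers will vary, here is a minimal sketch of the bookkeeping in Python. The cumulative counts below are hypothetical, chosen only so that the release ends at 100 built and 80 executed as described next, and the 80% floor anticipates the recommendation discussed later in this post:

```python
# Hypothetical cumulative (built, executed) counts per sprint for a
# 100-test release; only the release-end totals match the example.
sprints = {
    "Sprint 1": (20, 8),
    "Sprint 2": (45, 28),
    "Sprint 3": (65, 47),
    "Sprint 4": (85, 66),
    "Sprint 5": (100, 80),
}

TARGET = 0.80  # recommended floor: keep 80% of built tests executing

for sprint, (built, executed) in sprints.items():
    ratio = executed / built
    flag = "" if ratio >= TARGET else "  <- below 80%: growing debt"
    print(f"{sprint}: Built:Execute = {built}:{executed} ({ratio:.0%}){flag}")
```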
Finally, for the first release, you would still spend time manually validating the 20 test cases that were never executed as a batch and whose results remain unknown. Fortunately, with this tracking in place, you know your technical debt; in the automation context, call it Automation Debt. It can be translated into hours fairly easily with one assumption.
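For example, assuming (purely for illustration) that each unexecuted test costs about 45 minutes of manual validation per regression cycle:

```python
# Hypothetical assumption: ~0.75 hours of manual validation per
# unexecuted test, per regression cycle.
unexecuted_tests = 100 - 80   # Automation Debt, in test cases
hours_per_test = 0.75         # assumed manual effort per test
print(f"Automation Debt: {unexecuted_tests} tests "
      f"~ {unexecuted_tests * hours_per_test:.0f} hours per cycle")  # ~15 hours
```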
Automation managers should start using a similar metric to raise the percentage of tests executed relative to the percentage built, and to focus on extracting maximum benefit from the automation dollars already invested.
More importantly, if you perform a root cause analysis on the Built:Execute ratio, you also surface the pattern of issues and roadblocks you typically hit when starting out with automation. Although the target can vary from project to project, I recommend keeping at least 80% of built tests executing during the initial sprints; anything less means you are accumulating more technical debt.
As you run more development and test sprints, the execute side of the ratio tends to drop much lower. Hence it is important to track it on a regular basis and reprioritize the automation team's effort toward getting more of the built tests executed successfully.
If you use open-source frameworks like Selenium, or any xUnit-style framework with reporting and Jenkins integration, you can configure Jenkins to produce a summary report at the desired frequency, even nightly. This can help in making prioritization decisions for the automation team's effort. You can also configure HP ALM's live analysis reports to produce this metric for you.
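As one possible way to automate that summary (my own sketch, not something a particular tool prescribes), a short script can parse the JUnit-style XML reports a Jenkins job archives and compare executed tests against checked-in scripts; the directory layout and file patterns below are placeholders:

```python
# A possible nightly summary: count executed tests from JUnit-style XML
# reports and compare against the number of checked-in test scripts.
# Directory names and file patterns here are assumptions.
import glob
import xml.etree.ElementTree as ET

def count_executed(report_dir: str) -> int:
    """Count testcases that actually ran (not skipped) across all reports."""
    executed = 0
    for path in glob.glob(f"{report_dir}/**/*.xml", recursive=True):
        root = ET.parse(path).getroot()
        for case in root.iter("testcase"):
            if case.find("skipped") is None:  # skipped tests did not execute
                executed += 1
    return executed

def count_built(script_dir: str) -> int:
    """Approximate 'built' as the number of checked-in test scripts."""
    return len(glob.glob(f"{script_dir}/**/test_*.py", recursive=True))

built = count_built("tests")          # assumed script directory
executed = count_executed("reports")  # assumed report archive directory
if built:
    print(f"Built:Execute = {built}:{executed} ({executed / built:.0%})")
```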
This metric also assumes discipline in the team around periodically checking in completed test scripts. Without such discipline, you won't be able to measure Automation Debt. That discipline also helps ensure a meaningful automation investment and gives a true measure of an MVP in test automation.
Now that you’ve grasped the concept of the Built: Execute ratio, learn how to pick the best tests for automation.
Tags: DevOps, Integration, Intelligence, Test Automation