Continuous performance testing
POSTED: January 11, 2018
BY: Concentrix Catalyst


Following in the footsteps of Continuous Integration, Continuous Delivery, Continuous Deployment, and Continuous Testing, Continuous Performance Engineering (CPE) and Continuous Performance Testing (CPT) are emerging as key areas of focus for solving some of the core DevOps problems and helping teams build and deliver software products to market faster.

Is Continuous Performance Testing new? Not entirely. Since the evolution of performance-centered lifecycle management suites from HP, IBM, CA, and others, and the integration of cloud-based performance testing solutions, CPT/CPE has significantly helped performance and release engineers get ahead of their release curves. The purpose of this blog is to highlight some key CPE/CPT challenges and how to overcome them.

  1. Continuous Integration brings more agility and more frequent changes to application functionality. Is there a regression in performance?
  2. What happens to baseline metrics when new functionality is added on top of the previous baseline metrics/application performance?
  3. What does DevOps mean for Continuous Performance Engineering? Can environments be automatically provisioned and bootstrapped based on my performance and scalability test scenario requirements?
  4. How does cloud integration help Continuous Performance Engineering?
  5. What do we do with the logs generated by monitoring in Continuous Performance Engineering?

Agility in performance testing

With the need to release software more frequently, testing strives to keep up with demand by applying principles such as testing early and often. We are seeing increased demand for highly efficient regression testing approaches and frameworks that keep scripts current with minimal to zero maintenance. Performance testing follows the same regression cycles, so the need for agility in performance testing becomes unavoidable.

As with functional regression, performance regression is needed based on product and development process criteria, including but not limited to how significantly the application has changed since previous releases: simple UI changes versus architectural tweaks, database calls, and changes to the application or server configuration. Agility in performance testing can be addressed by building full and partial performance regression suites. The performance testing pipeline can be integrated into the delivery pipeline using existing CI techniques, such as forking and merging full or partial performance regression suites. To validate the quality of builds before a partial performance regression run, performance sanity or smoke tests can be integrated into the delivery pipeline.
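As a concrete illustration, here is a minimal sketch of how a pipeline step might choose between a smoke test, a partial regression suite, or a full regression suite based on what changed in the build. The path prefixes and suite names are assumptions for illustration, not part of any specific tool.

```python
# Hypothetical gate deciding full vs. partial performance regression in a
# delivery pipeline step; change categories and suite names are illustrative.
import subprocess
import sys

ARCHITECTURAL_PATHS = ("src/core/", "db/", "config/server/")   # assumed layout

def changed_files(base_ref: str = "origin/main") -> list:
    """List files changed since the last mainline build (requires git)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def pick_suite(files: list) -> str:
    """Map the scope of the change to a performance regression suite."""
    if any(f.startswith(ARCHITECTURAL_PATHS) for f in files):
        return "perf-regression-full"      # architecture/DB/config changes
    if files:
        return "perf-regression-partial"   # UI or isolated code changes
    return "perf-smoke"                    # nothing risky: sanity check only

if __name__ == "__main__":
    print(pick_suite(changed_files()))     # the CI job reads this and triggers the suite
    sys.exit(0)
```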

Just as white box and grey box testing bring agility to regression and reduce the stress on end-to-end testing, unit-level performance testing (although debatable) can reduce the amount of end-to-end or full performance regression testing needed. By increasing the focus on unit-level performance, integration with tools like Jenkins and Sonar can help track “performance debt,” enabling developers to collaborate, communicate, and bring overall predictability to the release process.
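A minimal sketch of what a unit-level performance check could look like, assuming pytest as the test runner; the function under test and its time budget are hypothetical, and in practice the measured numbers would also be published to the dashboard that tracks performance debt.

```python
# Toy unit-level performance check; the budget and the function under test
# are assumptions, not part of any real codebase.
import time

PRICE_LOOKUP_BUDGET_S = 0.05   # assumed per-call budget for this unit

def lookup_price(sku: str) -> float:
    """Stand-in for the unit under test."""
    return 9.99 if sku else 0.0

def test_lookup_price_within_budget():
    start = time.perf_counter()
    for _ in range(1000):
        lookup_price("SKU-123")
    elapsed_per_call = (time.perf_counter() - start) / 1000
    # Fail the build (a "performance debt") if the unit slips past its budget.
    assert elapsed_per_call < PRICE_LOOKUP_BUDGET_S
```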

Baselining and re-baselining performance metrics

As the number of performance test cycles increases within a short duration of the delivery cycle, it becomes challenging to keep up with the performance baseline metrics.

Use the master baseline as the foundation for tracking the performance metrics of major release quality, and use “Architecture Control Limits” as upper and lower control limits to evaluate where the delta metrics stand against that baseline. For example, if the master response time baseline is 3.5 seconds, as baselined on a major release, architecture control limits can be set (depending on the regression criteria) to evaluate the performance of the application with the delta. In this example, +/-10% control limits would mean your response time can vary between 3.15 and 3.85 seconds and you can still let the code change be released to production.

The term “Architecture Control Limits” may be new, but it is derived from the management limits/control limits used in goal-based management techniques.
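A small sketch of this release gate, using the numbers from the example above; the function names are illustrative, not an existing API.

```python
# Illustrative check of a measured response time against "Architecture
# Control Limits" derived from the master baseline.
def control_limits(baseline_s: float, tolerance: float = 0.10) -> tuple:
    """Upper and lower control limits as a +/- tolerance around the baseline."""
    return baseline_s * (1 - tolerance), baseline_s * (1 + tolerance)

def within_limits(measured_s: float, baseline_s: float, tolerance: float = 0.10) -> bool:
    low, high = control_limits(baseline_s, tolerance)
    return low <= measured_s <= high

if __name__ == "__main__":
    low, high = control_limits(3.5)          # 3.15 .. 3.85 seconds
    print(f"control limits: {low:.2f}s - {high:.2f}s")
    print("release gate passes:", within_limits(3.6, 3.5))   # True
    print("release gate passes:", within_limits(4.0, 3.5))   # False: block release
```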

The challenge comes when it is time to re-baseline. Suppose that in one of your major releases you observe a better response time, say a mean of 3.2 seconds. The question is: would you re-baseline your response time goal to 3.2 seconds, or continue with 3.5 seconds? Statistical techniques can assist in making that judgment call. Where the mean across all services and UI is observed at 3.2 seconds, I recommend two techniques: ANOVA to analyze the variance and percentiles to examine the central tendency and the tail of the distribution. Along with this, use guideline-based re-baselining criteria built from common-sense questions: Are there critical architectural improvements in this release, and how stable are they? Has any code or plug-in been replaced? Has a new design pattern been introduced, or an existing one optimized? Has there been any change to server configuration that affects performance? Based on such guidelines, the goal can be re-baselined.
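A rough sketch of that judgment call using percentiles and a one-way ANOVA, assuming NumPy and SciPy are available; the sample data is synthetic, standing in for per-request response times from two comparable test runs.

```python
# Percentiles and one-way ANOVA as inputs to the re-baselining decision.
# The synthetic samples mimic the 3.5s baseline and 3.2s candidate example.
import numpy as np
from scipy import stats

baseline_run = np.random.normal(loc=3.5, scale=0.3, size=500)   # previous major release
candidate_run = np.random.normal(loc=3.2, scale=0.3, size=500)  # new major release

# Percentiles: is the improvement visible at the median *and* in the tail?
for label, sample in (("baseline", baseline_run), ("candidate", candidate_run)):
    p50, p90 = np.percentile(sample, [50, 90])
    print(f"{label}: median={p50:.2f}s  p90={p90:.2f}s")

# One-way ANOVA: is the difference in means larger than run-to-run variance?
f_stat, p_value = stats.f_oneway(baseline_run, candidate_run)
print(f"ANOVA p-value={p_value:.4f}  re-baseline candidate: {p_value < 0.05}")
# A significant, stable improvement (plus the architectural checklist above)
# would support re-baselining the goal from 3.5s to 3.2s.
```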

DevOps and CPE

Now that we are ready to run a pipeline of multiple parallel performance regression tests, full or partial, the unique challenge posed is the readiness of environments as well as the readiness of the performance test setup. As complexity grows, creating and maintaining these environments by hand becomes nearly impossible. More importantly, manual processes can introduce long wait times, defeating the purpose and breaking the continuity of the performance test pipelines.

DevOps principles have already unlocked the ability to do periodic releases and brought the agility needed in your infrastructure to keep up with builds, tests, and releases seamlessly. Adopting DevOps principles (tools and techniques) for continuity in performance testing could mean:

  • Bootstrapping and provisioning performance test environments using “copy-paste infrastructure,” i.e., reusable environment provisioning tools such as Ansible, Chef, or Puppet (see the sketch after this list)
  • Configuring a performance test pipeline and using a CMDB to tweak and manage the configurations needed for performance test requirements, including scalability requirements such as the number of VUsers
  • Managing the complex set of environments that deliver the performance test pipeline using release orchestration tools such as XL Release or CA Release Automation
  • Using ITSM tools like ServiceNow or ServiceOne and stitching a custom performance test pipeline workflow into the delivery pipeline workflow, which helps ensure end-to-end automation and brings visibility into the overall CPE/CPT process
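For the first bullet, a hypothetical sketch of how a pipeline step might bootstrap a performance test environment by calling Ansible; the playbook name, inventory path, and extra variables are assumptions for illustration.

```python
# Hypothetical pipeline step that provisions a performance test environment
# via the ansible-playbook CLI; playbook, inventory, and variables are assumed.
import subprocess

def provision_perf_env(scenario: str, vusers: int) -> None:
    """Provision and bootstrap a performance test environment via Ansible."""
    subprocess.run(
        [
            "ansible-playbook",
            "perf-test-env.yml",                 # assumed playbook
            "-i", "inventories/perf",            # assumed inventory
            "--extra-vars", f"scenario={scenario} vusers={vusers}",
        ],
        check=True,   # fail the pipeline step if provisioning fails
    )

if __name__ == "__main__":
    # e.g. spin up an environment sized for a partial regression run
    provision_perf_env(scenario="partial-regression", vusers=500)
```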

Cloud integration and continuous performance engineering

Cloud integration with established providers like AWS EC2 or Azure not only enables pre-baked DevOps solution integration into continuous performance engineering but also helps reduce the total cost of performance quality (COPQ).

Additionally, a cloud platform like IBM Bluemix not only helps set up auto-scaling of the application under test based on performance requirements, but also helps auto-scale the performance test setup itself, for example bursting to thousands, if not millions, of VUs to evaluate holiday-sale footfall load scenarios, high-volume bulk processes, batch processes, etc.

If carefully set up, using concepts such as a high-availability proxy, similar methods can be applied to scale virtual users up and down and to resize the performance test environment, alongside auto-scaling of the actual application under test.
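As one possible approach, the sketch below resizes a fleet of cloud load generators before and after a burst test, assuming the generators sit in an AWS Auto Scaling group; the group name and per-generator VU capacity are assumptions.

```python
# Scale a pool of cloud load generators to match the requested virtual users;
# the Auto Scaling group name and the 500-VUs-per-generator figure are assumed.
import boto3

VUSERS_PER_GENERATOR = 500
autoscaling = boto3.client("autoscaling")

def scale_load_generators(target_vusers: int, group: str = "perf-load-generators") -> int:
    """Resize the load generator fleet to carry the requested virtual users."""
    instances = -(-target_vusers // VUSERS_PER_GENERATOR)   # ceiling division
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group,
        DesiredCapacity=instances,
        HonorCooldown=False,
    )
    return instances

if __name__ == "__main__":
    print(scale_load_generators(25_000))   # burst for a holiday-sale scenario
    # ... run the test, then scale back down to control cost:
    print(scale_load_generators(0))
```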

The biggest advantage I see is that you may be able to keep costs far lower by running partial performance regression tests in the cloud in parallel with full performance regression tests, saving money on the VM or hardware costs of creating and maintaining a farm of performance test resources.

Monitoring in continuous performance engineering context

Now that we can run multiple performance tests across different versions of the application in the form of a pipeline, the last challenge to deal with (and sometimes we don’t know what to do with it) is the ever-growing data from the logs.

This challenge can be addressed by adopting data analytics. Using the analytics modules of monitoring tools like Splunk and AppDynamics, or taking raw data from tools like Introscope and integrating it with the BPM or APM infrastructure, you now have access to versioned performance data across multiple cycles of your performance tests. Graphs can then be generated to show response time trends against the architecture control limits, variance analysis, and more.
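A toy sketch of that kind of trend analysis, assuming the per-release mean response times have already been exported from the monitoring tooling; the release numbers and values are made up.

```python
# Trend versioned performance results against the architecture control limits;
# the per-release data below is invented for illustration.
BASELINE_S, TOLERANCE = 3.5, 0.10
results = {                      # release -> mean response time in seconds
    "2017.10": 3.48,
    "2017.11": 3.62,
    "2017.12": 3.95,             # outside the upper control limit
    "2018.01": 3.30,
}

low, high = BASELINE_S * (1 - TOLERANCE), BASELINE_S * (1 + TOLERANCE)
for release, mean_rt in sorted(results.items()):
    status = "OK" if low <= mean_rt <= high else "OUTSIDE CONTROL LIMITS"
    print(f"{release}: {mean_rt:.2f}s  {status}")
```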

Applying techniques like machine learning may help identify potential failures, surfacing as poor performance metrics, aligned to the release cadence, its content, and its quality. More importantly, we can build a pattern of failure and use it for root cause analysis.

The topics above address most, if not all, of the key challenges of Continuous Performance Engineering. Additionally, the test data process needs to be adapted to feed any delivery pipeline in an automated fashion, not just the performance test pipeline. A versioning strategy for test data can help with such challenges.

Performance testing is no longer the last lap of your release marathon. Architects are not just testing performance but engineering it. More so, the industry is moving toward a super-fast, phased development-to-release process. Moving toward Continuous Performance Testing will help the development team keep building and making periodic releases to Performance QA for earlier and more frequent performance evaluation.

Learn more about how to pick the best tests for automation.

