How many times have you come across websites (or applications) that falter when a large number of users are using them? Such an experience can be a huge turn-off for the end users of your product. Consider the scenario of an e-commerce company that is scaling at a rapid pace but starts throwing 404 errors, page lags, and other issues when the Christmas sale is on 🙁
Since customers have plenty of alternatives to choose from, it is essential to invest in building a scalable, reliable, secure, and robust product. In many cases, a microservices architecture might reap more benefits than a typical monolithic architecture. An Agile approach to software development and testing will help in improving the product quality from all perspectives.
Application performance testing is of prime importance since these tests are useful in ensuring that the AUT (Application Under Test) meets the desired criteria in terms of robustness and responsiveness. Testing in an environment that mimics production helps in unearthing issues that your users would otherwise encounter in the shipped product.
The QA (or test) team can then triage the issues that are witnessed in the near-production environment. Once the issues are fixed by the development team, performance tests can be run in a CI (Continuous Integration) pipeline, thereby ensuring that a product that performs at scale is released to its intended users.
In this particular blog, we deep dive into the important aspects of application performance testing. We will also cover the key differences between Application Performance Management and Application Performance Monitoring. Partnering with an outsourced QA vendor with experience in providing quality performance testing services can be beneficial in the short term as well as the long term.
What is Application Performance Testing?
As the name indicates, application performance testing is a form of non-functional testing that is primarily used to test the application (or website) from a speed, scalability, and responsiveness standpoint.
By devising performance tests, you can unearth issues that hamper the performance and robustness of the application. Transfer speed, CPU utilization, GPU utilization (where applicable), memory utilization, concurrent users, database access time, etc. are some of the important factors that are evaluated (or analyzed) using performance tests.
JMeter, Gatling, Locust, Taurus, and Goad are some of the most popular application (or web) performance testing tools.
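Most of these tools let you express load as code. As a rough illustration of the underlying idea (not a substitute for any of the tools above), here is a minimal sketch using only Python's standard library; the throwaway local server and the user/request counts are purely illustrative assumptions:

```python
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Spin up a throwaway local server so the sketch is self-contained;
# in a real test you would point the URL at your staging environment.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    # Measure end-to-end response time for one request.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Simulate 20 concurrent users issuing 100 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(timed_request, range(100)))

print(f"mean latency: {statistics.mean(latencies) * 1000:.2f} ms")
server.shutdown()
```

Dedicated tools add the parts this sketch omits: ramp-up schedules, distributed load generation, and rich reporting.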
Also Read – [Guide] Best Practices in Performance Testing
What is Application Performance Management (APM)?
Application Performance Management (or Application Performance Monitoring) is the process of monitoring and managing the performance of system resources. It extends the observability of the AUT beyond mere system availability.
In contrast to performance testing, APM continuously looks into the availability and performance of the AUT. APM tools make extensive use of telemetry data to achieve this.
APM can be used to monitor and manage the performance and availability of websites, mobile applications, or any other piece of software. TraceView, DataDog, New Relic, and Application Insights are some of the most popular APM tools.
Different aspects like services, hosts, networks, logs, etc. are monitored as well as managed from the lens of application performance, availability, and management.
Key Metrics Of Application Performance Management (APM)
Now that we have touched upon the basic aspects of APM, let’s look at some of the key software metrics that must be a part of APM.
Application response times under varied loads, usage of computing resources, and uptime (or availability) of the application are some of the key performance metrics tracked as part of APM.
Mentioned below are some of the integral dimensions of APM as per the Gartner research report on the APM Conceptual Framework (Source):
- E2E (End-to-End) observability [Active & passive]
- Complete application monitoring
- Endpoint monitoring
- Detailed profiling of user (or business) transactions
- Detailed reporting with in-depth analytics
- Automation powered via Integration with third-party tools (and frameworks)
Application Performance Monitoring vs. Application Performance Management
There are cases where Application Performance Management and Application Performance Monitoring are used interchangeably. However, there is a thin line of difference between the two.
As the name indicates, a monitoring solution simply collects data (or information) from different data points (i.e. software, hardware, servers, etc.). For instance, if 1,000 systems are being monitored and each of them has close to 10,000 metrics, the APM tool will collect 10 million (1,000 × 10,000) data points. However, the data derived from these metrics can be put to best use only with a management solution.
An application performance management solution not only collects the data from the different data points discussed earlier but also analyzes it and identifies problems so that there is minimal (or zero) impact on business continuity.
Now that we have looked into the integral aspects of performance testing, performance monitoring, and performance management, let's deep dive into the best practices of application performance testing.
Also Read – Different Forms of Performance Testing
Consider performance testing a priority
Gone are the days when companies followed the traditional waterfall model for software development, where testing would start only after the entire development was complete. Fast-forward to today: the agile approach has completely taken over the industry. DevOps, CI (Continuous Integration), Continuous Deployment (CD), cloud, etc. have now become standard norms.
Just like any other form of testing, it is recommended to run performance tests at every stage of the product life cycle. In fact, unit tests and integration tests, when run at a large scale, can be used for checking the readiness of the AUT (Application Under Test).
Automated performance testing tools, when integrated with the CI pipeline, can help in isolating performance bottlenecks even before the code reaches staging (or production).
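As a sketch of what such a CI gate could look like: the step below checks latency samples (hard-coded here; in a real pipeline they would be parsed from the load-test tool's results file) and fails the build via a non-zero exit code when a p95 budget is exceeded. The 200 ms budget and the sample values are illustrative assumptions:

```python
import sys

# Hypothetical latency samples (in ms) from a prior load-test step;
# in a real pipeline these would come from the tool's results file.
samples_ms = [110, 125, 98, 143, 132, 101, 118, 127, 139, 105]

P95_BUDGET_MS = 200  # assumed service-level budget, not a standard value

def p95(values):
    """Return the (approximate) 95th-percentile value of the samples."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]

observed = p95(samples_ms)
print(f"p95 = {observed} ms (budget: {P95_BUDGET_MS} ms)")

# A non-zero exit code fails the CI stage and blocks the merge.
if observed > P95_BUDGET_MS:
    sys.exit(1)
```

Because the script exits non-zero on a breached budget, any CI system (Jenkins, GitHub Actions, GitLab CI, etc.) will mark the stage as failed without extra configuration.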
Run varied types of performance tests
Though there are different types of performance testing, you should choose the tests that suit the project requirements. Performance issues often surface only when a large number of users are simultaneously using the app. To set the right benchmarks for comparison, it is better to first run single-user tests, since they give an idea of the best possible performance and response times.
This data can act as the performance benchmark for comparison in subsequent tests. Load tests should then be run to evaluate system performance and reliability under different load conditions. Varying load and user concurrency will provide deeper insights into the behavior of the system.
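One way to operationalize that comparison is sketched below, with an assumed 2x degradation budget against the single-user baseline (the factor and the numbers are illustrative, not a standard):

```python
def regressed(baseline_ms, current_ms, allowed_factor=2.0):
    """Return True when the current mean response time degrades
    beyond the allowed multiple of the single-user baseline.
    The 2x default factor is an illustrative assumption."""
    return current_ms > baseline_ms * allowed_factor

# Assumed single-user benchmark run: 120 ms mean response time.
baseline = 120.0

# Compare subsequent load-test runs against the baseline.
print(regressed(baseline, 180.0))  # within the 2x budget
print(regressed(baseline, 300.0))  # degraded beyond 2x the baseline
```

The same pattern extends naturally to percentiles and error rates; the key point is that every load-test result is judged relative to a recorded baseline rather than in isolation.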
Apart from the tests documented above, you should also consider running soak tests, stress tests, and peak load tests. Availability tests are extremely critical since they help in testing the system's behavior when there is any kind of failure under certain loads. The intent must be to maximize the uptime of the AUT so that end users are not impacted in a negative way.
Document performance criteria
Akin to other software requirement documents, a good practice is to have crystal-clear documentation of the performance requirements. Depending on the type of application, you should prioritize the right set of performance goals.
Some of these goals could be maximum concurrent user sessions, maximum response times, user scalability, performance when the system is subjected to varied loads, etc.
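Such criteria are easiest to enforce when they live as data rather than prose, so test runs can be checked against them automatically. A minimal sketch, assuming illustrative thresholds (5,000 sessions, 800 ms, 99.9% uptime):

```python
from dataclasses import dataclass

@dataclass
class PerformanceCriteria:
    # All field names and threshold values here are illustrative assumptions.
    max_concurrent_sessions: int
    max_response_time_ms: int
    target_uptime_percent: float

    def violations(self, sessions, response_ms, uptime):
        """Return human-readable breaches of the documented criteria."""
        issues = []
        if sessions > self.max_concurrent_sessions:
            issues.append("concurrent sessions exceeded")
        if response_ms > self.max_response_time_ms:
            issues.append("response time exceeded")
        if uptime < self.target_uptime_percent:
            issues.append("uptime below target")
        return issues

criteria = PerformanceCriteria(5000, 800, 99.9)
print(criteria.violations(5200, 750, 99.95))
```

Keeping the thresholds in one reviewable place also makes it obvious when a requirement changes between releases.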
Consider Average Response Times and beyond
Average response time (i.e. total response time divided by the number of requests) is one of the key metrics (KPIs) from an app performance standpoint. However, considering only the average response time can sometimes turn out to be misleading.
The solution to this particular problem is to look at the standard deviation alongside the average. Relying on the average alone can be loosely compared to a false positive (or false negative), since it too distorts the end result of the test. In performance tests, runs with a higher deviation deserve closer attention, as ironing out those inconsistencies effectively improves the user experience.
Triage Issues in a timely manner
Isolating issues is important in any form of testing. In fact, it is the first step that needs to be carried out once a performance issue is identified in the system. Issues must be prioritized (or triaged) so that the more severe issues can be resolved first.
Issue resolution can be delayed if the right stakeholders (e.g. developers, testers, product owners, etc.) are not involved at the right stages of the discussion. Whether the issue lies on the client side, the server side, or the infrastructure side, a proper RCA (Root Cause Analysis) must be done and the issue assigned to the right stakeholder at the earliest.
Issue (or defect) triage is an art that must be executed with utmost perfection. An experienced performance testing company can prove to be pivotal in scenarios where the internal team members do not have sufficient experience to handle and triage performance issues that come along with large scale.
The features, interface, and other integral aspects of the application (or website) count for little if it fizzles out when used at scale. This is why application performance monitoring, application performance management, and application performance testing can come in super handy.
Performance can make or break the end-user experience of the application. Hence, it is important to invest in performance tests at the early stages of the product development cycle. An even better practice is to have performance tests run as part of the CI pipeline.
KiwiQA, a performance testing company in the UK, has proven expertise in providing a range of performance testing and performance monitoring (and management) services to a wide range of businesses. The time is always ripe to start performance testing, else it might be a losing battle for your product (or application).