Why Top Software Is Built on a Performance Testing Strategy

MentorMate
Dec 6, 2017

You’ve built an application for your business. The ideas you brought into your kickoff meetings have been executed in the form of an MVP. It looks great, and your few beta testers respond well to its proposed utility, as well.

But can it stand up to unexpected demands like periods of high user traffic or limited internet access?

The best software is built on a thorough performance testing strategy. Without it, product owners can only guess at the quality of the solution for their business.

Building a rigorous performance testing strategy, and the appropriate personnel to oversee it, into your project scope can eliminate weaknesses in software.

Development work that has been debugged and verified to thrive in extreme conditions (high traffic levels) lends a better experience to end users. Performance testing positions businesses to build software that scales not only with end users’ needs and usage patterns, but also with the needs of the business itself.

Why Do Businesses Need Performance Testing?

Performance Testing Helps QAs Validate System Speed

Performance tests measure parameters such as response time, throughput, and stability under load. Knowing these parameters helps testers make informed decisions about which parts of the system can be improved.

They also provide stakeholders with information about the quality of the product or service that is tested. The analytics in reports can help business stakeholders understand how to tailor development strategy in order to achieve better outcomes for their users and the business alike.

Performance testing eliminates bottlenecks

Testers can identify weaknesses in the system and how they slow down overall functionality by asking questions such as:

  • Is the application prone to crashes during extended periods of high user traffic?
  • Are there a lot of errors in one or a few functions?
  • Is the time needed to execute a request more or less than anticipated?

By answering these and similar questions, performance testing teams can collect the information that will help developers remedy the identified issues.

It creates a baseline for future regression testing

With an initial round of performance testing and the subsequent development tweaks, performance testers establish a baseline from which to make effective comparisons of software versions as modifications to features are made.

It determines compliance with performance goals and requirements

There is no defined window during which teams should implement performance testing efforts — it varies from project to project.

But performance testing phases can be tied to other project milestones or scheduled as different requirements are built out. After each iteration or development phase, testers can compare QA results to the client’s expectations or to the goals QAs set for themselves.

Performance testing can provide another layer of validation in software development that aligns product owner vision with developers’ responsibilities.

How to Measure Software Performance Under Specific Load Volumes

An ecommerce site’s polished UI might guide the shopper effortlessly from browsing to checkout — but what happens when there are 10,000 other shoppers trying to do exactly that?

Those who have invested in performance testing know that their product will hold up to high traffic. Those who haven’t? Their users will have found other, better places to buy those concert tickets.

Asking the following questions can help performance testers pinpoint the key criteria the software must meet in order to be successful in business settings. They will also determine which problems need to be fixed in future development sprints:

  • Can the system handle the anticipated number of concurrent users during peak load in production?
  • Is the system error-free under traffic from 100 concurrent users?
  • Are there any requests that take significantly more time with this number of users than before?
  • Does the application suffer from memory leaks under heavy user traffic for extended periods of time? To find out, QAs need to monitor the server’s memory and CPU parameters.

As software is subjected to varying amounts of user traffic, QAs can determine if the system functions well or if further improvements should be made.

Test cases should cover the upper and lower limits determined by the business and use case scenarios. That way testers can determine at what number of concurrent users the system’s performance no longer meets the requirements that are crucial to running business functions effectively.

Microsoft provides additional information on how to evaluate systems for better performance testing.

Why Business Stakeholders Should Work With QAs for Better Performance Testing Strategies

A comprehensive understanding of the business, its customers, its software ecosystem, and the challenges it seeks to overcome is essential not only to developing excellent software but also to crafting thorough QA strategies.

Will the software be used by a company’s customers, or only by internal administrators? Product owners and QAs must meet halfway to answer these questions, and many others:

  • Do users come from one country or continent, or many?
  • Is the project still unreleased, or is the product already in use and several versions old? If the latter, existing analytics can inform how the performance strategy should take shape.

Business logic is also very important to building and refining effective software.

  • What does the product owner need to achieve?
  • How are the features related to each other, and what are their dependencies?
  • What key features does the user need?

High-priority features might include registration or login functions, while low priority ones could include “edit profile” and “change password.”

When product owners work with performance testers to identify the key requirements and features, testers can devote the bulk of their resources to improving them.

This also spares QAs from running performance tests for features that are rarely used. Time wasted on little-used features barely benefits end users and cuts into the time the performance testers could spend validating the functionality of more important features.

Bringing developers, QAs, and the product owners together helps teams weigh whether a performance testing strategy benefits the project and its goals.

Teams can make an informed decision to proceed with performance testing if they determine it is important for achieving project goals outlined earlier.

Any information about performance problems uncovered at this point also helps teams build a proper strategy for improving the functionality of the features in question.

How Should You Design Your Performance Testing Strategy?

Identify the App’s Key Features

For existing software, product owners and their technical teams should review traffic analytics that reveal typical usage patterns. These patterns can serve as starting points for the performance testers’ work.

If the application has yet to be released to the general public, testers should discuss with the client which key features will benefit most from performance testing.

Identify the Anticipated User Load

As a next step, testers should identify the predicted user load. If the project has already seen production releases, statistics should be gathered that show how many concurrent users access the application and what the peak level of user traffic is.

Analytics that are readily available will speed this process, but the product owner’s involvement can provide further insight as to how the user count may change over time.

If no previous analytics exist, testers should discuss with the product owner how many concurrent users she predicts.

Here, though, a fine balance should be achieved.

  • On one hand, the client may have unrealistically high expectations of what the predicted concurrency of users will be.
  • On the other hand, there should be a buffer, meaning the software should be able to support more than the expected user load.

If the predicted load is very high, teams should consider seeking the product owner’s approval to add several web servers behind a load balancer, so the performance tests yield more accurate results.

The testing process should include tests using just one web server and tests with several web servers using a load balancer.

Collaborate to Create a Performance Strategy

Product owners looking to build software that supports their business needs must include strategic testers within the ranks of their technical teams.

The best testing teams will come prepared with performance strategy documentation. Starting these efforts with the first build provides a baseline understanding of the software’s functionality. Going forward, teams can test against those first results and gain meaningful indications of improvement.

Having a strategy defined beforehand also helps the team to organize and distribute responsibilities more effectively as the software development and QA processes unfold.

The following is an example that shows how teams can outline their performance testing strategy:

  • Introduction — Preliminary description of the project, its necessities, objectives, assumptions, and scope.
      • Project Description — Basic information about the project. What is the team trying to accomplish, and what milestones will anchor progress?
      • Premise — The current situation within the project and the targets.
      • Purpose and Objective — The end goal of completing the performance testing.
      • Analysis and Assumptions — The assumptions made before starting the performance testing and the analysis that can be made regarding its goals.
      • Scope — What is within the scope of the performance testing and what will not be tested.
  • Requirements — A short list that details the system’s current setup.
      • Hardware — Current hardware specifications of the performance and production environments.
      • Software — Current software specifications of the performance and production environments, plus additional details such as the type of load balancer.
  • Automation Tool — Brief description of the performance tool that will be used.
  • Metrics — A list of metrics that will be followed during the performance testing process.
  • Approach — A step-by-step work plan for what will be done.
      • Script Development — Explanation of how the test scenarios and test scripts will be developed.
      • Workload Criteria:
          • Business Function Overview — Overview of the tests that will be run, from a business perspective.
          • Workload Scenarios and Test Plan — Test cases, steps, and explanations of what each test case accomplishes.
      • Test Execution — The process of test execution.
          • Execute Pre-Test Checklist — Steps of the pre-test checklist that will be executed before performance testing.
          • Execute Scenario — How the test cases will be executed.
          • Capture and Deliver Test Results — How the results will be collected and how they will be presented.

Metrics to Follow

When running a performance test, QAs need to take measurements that can be summarized at the end of the testing. These measurements can be taken both on the application side and the server side.

Testers frequently derive metrics from the following sources to monitor a system’s response time, throughput, and resource utilization:

  • CPU
  • RAM
  • Disk I/O
  • Network I/O

Understanding Response Time Metrics in Performance Testing

When analyzing response time, testers should not rely only on the average response time.

Instead, they should focus on the 90th, 95th, and 99th percentiles. These percentiles tell us that 90%, 95%, or 99% of the requests had response times up to the value of that percentile.

If the application and database are hosted on different servers — an effective way to distribute the load so the system works faster — QAs need to monitor CPU, RAM, disk I/O, and network I/O utilization on each server.

If testers look at each of these percentile lines in isolation, everything may seem fine. But if 90% of the requests finish in 2 seconds while the 95th percentile is 5 seconds, that gap points to a slow tail of requests that requires further investigation.
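
To make the percentile point concrete, here is a minimal sketch (written in Python with NumPy, using made-up sample data) of how the same percentile lines can be computed from a set of recorded response times:

    import numpy as np

    # Hypothetical response times (in milliseconds) collected for one request label.
    response_times_ms = [120, 135, 140, 150, 180, 210, 230, 400, 950, 2400]

    for p in (90, 95, 99):
        print(f"{p}th percentile: {np.percentile(response_times_ms, p):.0f} ms")

    # A large jump between neighboring percentiles (for example 2 s at the 90th
    # line but 5 s at the 95th) points to a slow tail of requests worth investigating.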

Measuring Response Time is Not Enough to Identify All Software Bugs

Product owners can guide testing teams to see beyond this metric to ensure their software is of the highest quality.

Testers also need to keep an eye on the CPU and RAM usage on the server and look at the network and data I/O as well. That will lead to the detection of potential memory leaks or areas in the application to optimize further. Memory leaks can cause software to slow down dramatically or cease working entirely. Even worse, they can result in a denial-of-service attack from malicious parties.

Usually the CPU load should not exceed 70%-80% for more than a few seconds. If it is consistently over 80%, the software architecture needs further fine-tuning. Measuring the network load shows testers whether requests are too large and take too much time to reach the server. Throughput tells them how many requests are handled per second.
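
As one illustration, the sketch below is a hypothetical monitoring snippet (it assumes the third-party psutil package is installed on the machine being watched) that flags sustained CPU utilization above the 80% guideline mentioned above:

    import time

    import psutil  # third-party system-monitoring library, assumed installed

    THRESHOLD_PERCENT = 80.0   # guideline from the text above
    SUSTAINED_SECONDS = 5

    over_since = None
    for _ in range(60):                          # sample roughly once per second for a minute
        cpu = psutil.cpu_percent(interval=1)
        if cpu > THRESHOLD_PERCENT:
            over_since = over_since or time.time()
            if time.time() - over_since >= SUSTAINED_SECONDS:
                print(f"CPU above {THRESHOLD_PERCENT}% for {SUSTAINED_SECONDS}s or more: investigate")
        else:
            over_since = None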

Software that is expected to support a great number of users for short periods of time typically requires more attention from testers to reduce the burden of large amounts of server requests. Examples could include some of the following:

  • Ecommerce websites
  • Course registration software for universities
  • Flight booking websites

Steps Your Performance Testing Teams Should Follow

1. Identify the Order and the Execution of the App Requests

Product owners may need to encourage testing teams to think like UX designers — it helps them validate software functionality more thoroughly.

In order to measure the optimal user traffic of your application, performance testers need to see how the app works when each service call is being made. They also need to identify when to add think times in the tests and how long these think times should be.

A think time is an artificial delay added to the requests that testers use to simulate more realistic test scenarios where the user actually “thinks” before clicking on a feature.

When your QAs can account for user journeys during performance testing, their results will more accurately measure software functionality when it’s used by real people.

  • No think time needed between requests: a mobile app sends all of its requests on startup and then sends nothing more until it is reopened.
  • Add think time between requests: a request is sent each time the user opens a new page of the application (a minimal sketch of this case follows below).
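
Here is a minimal sketch of the second case, written in Python with the third-party requests library; the base URL and endpoints are placeholders for whatever application is under test:

    import random
    import time

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"  # hypothetical application under test

    def think(min_s=2.0, max_s=8.0):
        """Pause to simulate a user reading a page before the next click."""
        time.sleep(random.uniform(min_s, max_s))

    with requests.Session() as session:
        session.get(f"{BASE_URL}/products")      # the user lands on the listing page,
        think()                                  # reads it for a few seconds,
        session.get(f"{BASE_URL}/products/42")   # then opens a product detail page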

2. Organize Request Types Into Groups to Save Time

While performance testers account for a wide variety of test scenarios, many of these scenarios test similar paths and functionalities across different software.

Separating and reusing requests by type can spare testers from rewriting code several times.

Modularizing tests allows teams to isolate one or more requests in a separate module that can be reused in other testing scenarios. This neat arrangement allows testers to optimize existing scenarios, since any change made to a module is automatically reflected in every scenario that uses it.

By organizing request types, teams can locate the best tools for the task at hand faster and divert their efforts to more important tasks, saving themselves time and trimming the cost of validating software quality over the duration of a project.

Modularizing in jMeter can be accomplished by using test fragments.

There aren’t any rules for placing requests in a given test fragment. However, it is always a good idea to combine the requests for one action in a single test fragment. For example, the steps for logging into an application go into one test fragment, and the steps for searching go into another.
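
The article’s examples use jMeter Test Fragments; as a language-neutral sketch of the same idea, the hypothetical Python snippet below keeps each action in its own reusable function (the URL and endpoints are placeholders):

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"  # hypothetical application under test

    def login_fragment(session, user, password):
        """Reusable 'login' steps, kept in one place like a jMeter Test Fragment."""
        session.post(f"{BASE_URL}/login", data={"user": user, "password": password})

    def search_fragment(session, term):
        """Reusable 'search' steps."""
        return session.get(f"{BASE_URL}/search", params={"q": term})

    # Any scenario that needs these actions calls the fragments instead of
    # repeating the individual requests.
    with requests.Session() as session:
        login_fragment(session, "test-user", "secret")
        search_fragment(session, "concert tickets")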

3. Organize Request Groups into Test Cases

Once testers have created the test fragments, it will be easier to create the required test scenarios. QAs just need to add the test fragments in the necessary order in a Thread Group (a rough sketch of this composition follows the example below). This not only saves time creating the load scenarios, but maintaining them, too.

  • If the login feature in the software is somehow changed, then a corresponding change only needs to be reflected in a single Test Fragment. All other tests will continue to run normally.
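
As a rough, self-contained analogue of composing request groups into a Thread Group, here is a hypothetical Python sketch (the endpoints and the user count are assumptions for illustration) that runs one composed user journey across many concurrent workers:

    from concurrent.futures import ThreadPoolExecutor

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"  # hypothetical application under test
    CONCURRENT_USERS = 50                  # assumed load level for illustration

    def user_journey():
        """One 'test case': an ordered sequence of reusable request groups."""
        with requests.Session() as session:
            session.post(f"{BASE_URL}/login", data={"user": "test-user", "password": "secret"})
            session.get(f"{BASE_URL}/search", params={"q": "concert tickets"})

    # The rough equivalent of a Thread Group: run the same journey on N concurrent users.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        for _ in range(CONCURRENT_USERS):
            pool.submit(user_journey)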

4. Streamline Passing Data Between Requests

Often performance testers are required to log in to the software being tested in order to perform some necessary action. In these cases they need to ensure that, after the login request, they remain logged in for the rest of the actions. This can be achieved by adding a cookie manager.

The cookie manager automatically stores the cookie that is returned by the first request and uses it in all the requests that follow.

A more common option is using an authentication token as a request header. In this case QAs can extract that header, save it in a variable, and use it in all subsequent requests.

Performance testers might also use a variable when they create a record and want to modify it in subsequent requests. If the service returns the ID of the record, they can save it to a variable and reuse it instead of hard-coding an ID.

The advantage of this approach is that if a change is needed, performance testers only have to make it in one place. The rule of thumb is that anything used more than once should be saved as a variable, to spare confusion and wasted time later on.
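
Here is a minimal sketch of both techniques in Python with the third-party requests library (the endpoints, field names, and header are placeholders): the session object keeps cookies between requests, while the token and record ID are captured once and reused:

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"  # hypothetical application under test

    with requests.Session() as session:        # the Session keeps cookies automatically,
        resp = session.post(                   # much like a cookie manager
            f"{BASE_URL}/api/login",
            json={"user": "test-user", "password": "secret"},
        )

        # Extract the authentication token once, save it in a variable,
        # and reuse it in every subsequent request.
        auth_headers = {"Authorization": resp.headers.get("Authorization", "")}

        created = session.post(f"{BASE_URL}/api/records",
                               json={"name": "demo"}, headers=auth_headers)
        record_id = created.json()["id"]       # keep the returned ID instead of hard-coding one

        session.put(f"{BASE_URL}/api/records/{record_id}",
                    json={"name": "demo, updated"}, headers=auth_headers)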

What to Remember Before Performance Testing Starts

Configure Test Server Hardware and Software Parameters to Resemble Those of Your Production Servers

During performance testing, successful teams use a test server that has the same software and hardware parameters as the production environment — that is, the one that the business will use when it launches the new software.

The results achieved during testing will then more accurately reflect how the application will behave in production. Different hardware parameters, whether more or less powerful, will skew the data coming out of the test server and mislead performance testers.

For example, suppose the processor in the test environment is 5% slower than the one in the production environment. QAs cannot simply conclude that performance in production will be 5% better, because performance rarely scales linearly with hardware.

1. Database records count should be as close to production records as possible.

The more records there are in the database, the slower the response times will be. Having many more or many fewer records in the test database than in production can skew test results, leaving performance testers unable to make an informed decision about whether the application is fit for production.

But if QAs test against an almost empty database, they achieve deceptively positive performance results. After launch, product owners will find that the system performs poorly and fails to execute user requests.

The opposite is also true. If tests run against a database packed with more data than what’s in the production environment, testers might receive especially poor results. This could lead them to recommend unnecessary optimizations that add to the client’s costs.

2. Update Test Environment Before Performance Testing Begins

This is important because an environment that has been tested over and over again can perform too slowly. The results will be deceptively poor and can trigger unnecessary extra effort to refine software that is otherwise fine.

3. Warm Up System Before Conducting Performance Testing

Once the system is freshly updated, it is best to have it warmed up so performance testers avoid the impact that a ‘cold start’ can have on the results.

During a cold start, the system is very slow, which leads to performance testing results that are deceptively negative.

To avoid this, QAs can add a ramp-up period to the Thread Group so that 5 users are added before the real test begins.
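
Outside of jMeter, the same warm-up can be approximated with a few staggered requests; the sketch below is a hypothetical Python example using the third-party requests library, with the URL and timings as placeholders:

    import threading
    import time

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"  # hypothetical application under test
    WARMUP_USERS = 5                       # users added before the real test begins
    RAMP_UP_SECONDS = 10                   # window over which they are started

    def warmup_user():
        requests.get(f"{BASE_URL}/", timeout=30)

    threads = []
    for _ in range(WARMUP_USERS):
        t = threading.Thread(target=warmup_user)
        threads.append(t)
        t.start()
        time.sleep(RAMP_UP_SECONDS / WARMUP_USERS)  # stagger the starts

    for t in threads:
        t.join()
    # Only after this warm-up should the measured performance run begin.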

What to Do During Performance Testing

Use Reports When Running the Load Test

Performance testing requires listeners. Listeners allow QAs to track the results from web services amassed over the course of the performance testing process, which helps identify where issues are located. Based on the results, performance testers can provide recommendations for improvement.

Using jMeter for Performance Tests

  • jMeter’s Aggregate Report — where QAs can get a lot of useful data. Among all the data, the most important datapoints testers can track include the following (a sketch of computing the same figures from raw results follows this list):
      • Average — The average elapsed time for a set of results.
      • 90% Line — 90% of the samples took no more than this time; the remaining samples took at least this long (the 90th percentile).
      • Min — The shortest time for the samples with the same sample name.
      • Max — The longest time for the samples with the same sample name.
      • Error % — The percentage of requests with errors.
      • Throughput — Measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
  • jMeter’s Aggregate Graph — QAs can draw a chart based on the data compiled from the Aggregate Report. This is very useful because performance testers can give both clients and developers a visual representation of the work done, offering tangible proof of what is and isn’t working properly.
  • jMeter’s Throughput vs Threads — shows how throughput changes with the number of simultaneous threads.
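
As promised above, here is a minimal Python sketch of rebuilding the core Aggregate Report figures from a saved results file. It assumes jMeter’s default CSV result columns (label, elapsed, success); the file name is a placeholder:

    import csv
    from collections import defaultdict

    import numpy as np

    samples = defaultdict(list)
    errors = defaultdict(int)

    with open("results.jtl", newline="") as f:          # hypothetical results file
        for row in csv.DictReader(f):
            samples[row["label"]].append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                errors[row["label"]] += 1

    for label, times in samples.items():
        print(label,
              f"avg={np.mean(times):.0f}ms",
              f"p90={np.percentile(times, 90):.0f}ms",
              f"min={min(times)}ms",
              f"max={max(times)}ms",
              f"errors={100 * errors[label] / len(times):.1f}%")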

There are many more listeners that could be used based on the data that is needed.

But teams should only use listeners that gather data that is absolutely necessary, or use a limited number of listeners at a time. The more listeners testers enable, the more overhead they add and the more they distort the test results.

Use Detailed Reports Only for Debugging Purposes

While executing performance tests, it is highly recommended to turn off all listeners that are not related to performance, as they consume resources and skew the test results.

The best practice is to leave all listeners on when the normal tests are executed, to see that everything functions normally and without bugs.

When the real performance tests start, all listeners not related to performance results should be turned off.

Remove Assertions…

Assertions are an element of performance testing that verifies the results gained through testing are what the testers initially expected to see.

Testers compare expected results against actual results. Were their assumptions true or false? If the answer is false, there are issues in the software that need to be solved.

Like listeners, assertions should be fully removed — or removed as much as possible — during performance testing. Otherwise they too can skew the results in a negative direction, since they require extra processing time from the computer executing the tests, impacting the accuracy of the end results.

… But Leave Small Assertions About Status Codes and the Most Critical Parts of Tests

The best practice is to leave all assertions on when the normal tests are executed, to see that everything is okay. When the full performance tests start, all assertions should be turned off, except for some response code assertions and those covering the most critical parts of the tests.

Run Test From the Console Instead of the Testing Tool’s UI

When running performance tests, it is good practice to launch them from the console instead of from the UI of a tool like jMeter. The graphical UI burdens the computer that runs the performance tests and can throw off the accuracy of the results.
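
For reference, a non-GUI jMeter run can be launched from a script; the sketch below is a minimal Python example, with the test plan and results file names as placeholders:

    import subprocess

    # -n selects non-GUI (console) mode, -t points to the test plan,
    # and -l collects the raw results.
    subprocess.run(
        ["jmeter", "-n", "-t", "checkout_load_test.jmx", "-l", "results.jtl"],
        check=True,
    )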

Run Tests on Several Computers

If a system is tasked with supporting traffic from more than 400 users, testers should use distributed loading from several physical computers against the server being tested. A single computer cannot simulate that many users by itself and may crash, rendering the performance tests invalid.

Also, distributing the load among more computers makes the performance tests more accurate since the test environment is closer to the real scenario that will be expected in the final version of the software.

For example, jMeter offers a master/slave configuration with one master computer and as many slaves as needed. All slaves execute the tests simultaneously and report back to the master computer.

But the drawback is that many computers need to be booked in the test lab for this task, which means their owners cannot use them while the performance tests run. Using those machines while the tests run would skew the results.

Run Tests on Virtual Machines

Similar to distributed testing with physical computers, there is an option to conduct distributed performance testing on virtual machines. To pursue this method, administrators need to set up virtual machines and provide the IPs and access credentials for them. Then performance tools can be installed and set. Once the master/slave configuration is ready, performance testing can begin.

Using virtual machines means that users of actual local computers are not displaced during intensive performance testing. Instead, programmers can continue work on their machines as normal while tests run on as many virtual machines as the team wishes or is able to pay for.

Finding enough vacant machines in one location can be difficult, so virtual machines are an obvious solution for many performance testing teams. They also allow teams to be more versatile as they explore strategies in executing performance testing with product owners.

Run Tests on VMs in AWS

Due to hardware limitations, testers can simulate about 400–500 simultaneous users on one computer. Usually this is not enough user traffic for a thorough test, so performance testers must use several computers.

A better solution is to create several VMs in AWS and run the tests there. Given the cost of AWS services, this is usually the most cost-effective option.

Take Server Caching Into Consideration

Product owners and their teams are left with imprecise test results when QAs send multiple requests with the same parameters to one server. The results show functionalities working faster than they would in reality. Normally, when every user uses the web services via different parameters, it takes more time for the system to process all of the unique requests.

For example, if endpoints are filtered by different parameters, testers should parametrize each test to invoke a different set of filters as often as possible. That way the server cannot anticipate the data, cannot serve cached responses, and produces slower but more realistic measurements.
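
A minimal sketch of this kind of parametrization, written in Python with the third-party requests library (the endpoint and filter values are placeholders):

    import random

    import requests  # third-party HTTP client, assumed available

    BASE_URL = "https://shop.example.com"            # hypothetical application under test
    CATEGORIES = ["books", "music", "tickets", "electronics"]
    PRICE_LIMITS = [10, 25, 50, 100, 250]

    # Vary the filter parameters on every request so the server cannot simply
    # return a cached response for a single, repeated query.
    for _ in range(100):
        params = {
            "category": random.choice(CATEGORIES),
            "max_price": random.choice(PRICE_LIMITS),
            "page": random.randint(1, 20),
        }
        requests.get(f"{BASE_URL}/api/products", params=params, timeout=30)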

Server caching itself can also be worth testing, so it is acceptable to keep some tests whose requests send the same parameters.

That way the caching behavior can be measured, and testers can compare the results against the runs where the server could not anticipate the parameters sent to it.

Configure Test Machines Carefully

While running tests on each physical machine, there are certain rules that should be followed in order to have more accurate and unbiased test results.

  • Restart the machine — Restarting avoids clogging the machine’s resources with processes started before the performance test begins.
  • Kill unnecessary processes — Even after a restart, some processes still run in the background and could endanger the accuracy of the performance results. QAs need to identify all redundant processes and kill them.
  • Don’t use the machine during the test run — Testers need to make sure that they do not do anything else on the machine while the performance tests are running. The same goes for the other physical machines involved, so warn their owners as well.

Reporting Best Practices

In terms of reporting, a separate document should be created and appended to the performance strategy. That document should consist of the following sections:

  • Tables with results — Each table with the metrics from each test should be listed and named.
  • Graphical representation of the results — Based on the tables, charts of the most valuable metrics should be presented, such as error %, 90% line, average response time, and throughput.
  • Graphical representation of the server load during the performance testing period — Developers can help by providing testers with screenshots of how the web server and database servers behaved during the tests.
  • Conclusions — Based on the above results, testers can draw conclusions about what looks good and what could be improved.
  • Recommendations — The areas for improvement and how they could be addressed. Developers can help with this section as well.

What Do Businesses Lose Without Software Performance Testing?

Modern businesses need software solutions that pull their weight. Enterprises rely on increasingly sophisticated digital tools to understand and react to the shifting landscape of markets and their users’ needs.

When businesses need software that’s reliable and scalable, performance testing is an essential component of the development lifecycle. Product owners who seek thorough teams that are knowledgeable in performance testing best practices are better equipped to build solutions that are responsive to all of their users’ needs — especially during instances where the software must accommodate high levels of user traffic.

Software that works best when it serves fewer users isn’t so much an asset as it is a liability. Performance testing can help teams understand existing weaknesses in a solution and provide stakeholders a meaningful baseline from which to measure improvement throughout the development lifecycle.

Original post can be found here.

Innovate with us. Click here to access all of our free resources.
Authored by Stefan Shopov.

Stefan excels in growing MentorMate’s ability to deliver excellent software to clients. In his day-to-day, Stefan interviews and trains new members of the QA practice. He has also played a vital role in overseeing the establishment of an office in Ruse, where staff members are dedicated to supporting some of the company’s biggest clients.

Stefan previously worked as an SEO Specialist and Team Lead at Oriann. Certified in Cisco’s Routing and Switching Program, he maintains a hands-on approach to leading teams to solve clients’ problems successfully.

When he isn’t busy expanding MentorMate’s QA practice, Stefan is an avid traveller who likes to play tennis with his colleagues and in Sofia’s amateur league.
