Lowest Cost Cloud Load Testing Tool

Test Graphs & Key Performance Metrics

Below are screenshots of the two graphs that appear for each load test. Mouse over any line to see the actual value of that metric.

Volume Graph

The first one shows the volume of traffic hitting your server. It shows three lines representing these metrics:

  • Concurrent Users
  • Requests per Second
  • Throughput



Requests per Second

RPS is the measurement of how many requests are being sent to the target server. It includes requests for HTML pages, CSS stylesheets, XML documents, JavaScript libraries, images and Flash/multimedia files.

RPS is affected by how many resources are called from the site's pages. Some sites have 50-100 images per page, and as long as those images are small (e.g. <25 KB), RPS will be higher than for long text pages with few images that are dynamically generated from database queries. The reason is that images and other static resources are served directly by the web server or a Content Delivery Network, with virtually no expensive processing before the resource is sent to the client (i.e. the browser, or in a load test, LoadStorm).
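As a rough sketch of the arithmetic behind this metric (not LoadStorm's actual implementation), average RPS is just the number of requests divided by the elapsed time over which they were sent:

```python
def requests_per_second(request_timestamps):
    """Average requests per second: total requests divided by elapsed time.

    request_timestamps is a list of send times in seconds (any epoch).
    """
    if len(request_timestamps) < 2:
        return float(len(request_timestamps))
    elapsed = max(request_timestamps) - min(request_timestamps)
    if elapsed == 0:
        return float(len(request_timestamps))
    return len(request_timestamps) / elapsed

# Five requests spread over four seconds -> 1.25 RPS
```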

Concurrent Users

Concurrent users is the most common way to express the load being applied during a test. This metric measures how many virtual users are active at any particular point in time. It does not equate to RPS because one user can generate a high number of requests, and no vuser generates requests constantly.

A virtual user does what a "real" user does as specified by the scenarios and steps that you have created in the load testing tool. If there are 1,000 vusers, then there are 1,000 scenarios running at that particular time. Many of those 1,000 vusers may be spawning requests at the same time, but there are many vusers that are not because of "think time". Simply put, think time is the pause between vuser actions that simulates what happens with a real user as he or she reads the page received before clicking again.
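The think-time idea can be sketched in a few lines. This is a minimal illustration, not LoadStorm's scheduler; the step names and the 5-15 second pause range are assumptions chosen for the example:

```python
import random

def simulate_vuser(steps, think_time_range=(5.0, 15.0), seed=None):
    """Lay out one virtual user's actions on a timeline.

    Between steps the vuser pauses for a random "think time", simulating
    a real user reading the page before clicking again.
    """
    rng = random.Random(seed)
    clock = 0.0
    timeline = []
    for step in steps:
        timeline.append((clock, step))
        clock += rng.uniform(*think_time_range)  # pause before the next action
    return timeline
```

Because each vuser spends most of its time "thinking," 1,000 concurrent users do not translate into 1,000 simultaneous requests.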

Throughput

Throughput is the measurement of bandwidth consumed during the test. It shows how much data is flowing back and forth from your servers.

Throughput is measured in kilobytes per second, and it is a good measure of the load test's impact on the network. Bottlenecks can form at the network interface card (NIC) if the amount of data flowing surpasses the NIC's ability to move it into and out of the server.
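The unit conversion is straightforward; a sketch, assuming 1 KB = 1024 bytes:

```python
def throughput_kb_per_sec(total_bytes, duration_seconds):
    """Bandwidth consumed: kilobytes transferred per second of test time."""
    return (total_bytes / 1024.0) / duration_seconds

# 300 KB moved in 60 seconds -> 5.0 KB/s
```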


Response Graph

The second graph shows how your server is performing. Response data is captured for each request that is generated, and this graph is a representation of what the users of your site will be experiencing - response times and errors.

It shows three lines representing these metrics:

  • Average Response Time
  • Peak Response Time
  • Error Rate



Average Response Time

When you measure every request and every response to it, you have data for the full round trip: what is sent from the browser and how long the target web application takes to deliver what was needed.

For example, one request will be a web page...let's say the home page of the web site. The load testing system will simulate the user's browser in sending a request for the "home.html" resource. On the target's side, the request is received by the web server, it makes further requests of the application to dynamically build the page, and when the full HTML document is compiled, the web server returns that document along with a response header.

The Average Response Time takes into consideration every round trip request/response cycle up until that point in time of the load test and calculates the mathematical mean of all response times.

The resulting metric is a reflection of the speed of the web application being tested - the BEST indicator of how the target site is performing from the users' perspective. The Average Response Time includes the delivery of HTML, images, CSS, XML, Javascript files, and any other resource being used. Thus, the average will be significantly affected by any slow components.

LoadStorm measures response time as Time to Last Byte: the clock stops only when the entire resource has been returned. The response time metrics therefore include the full download duration, giving the complete response time as the user experiences it - the delivery of the full payload from the server. A user wants to see the HTML page, which requires receipt of the full document.
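The calculation itself can be kept as a running aggregate, updated as each response arrives. A minimal sketch (not LoadStorm's internals):

```python
class ResponseTimeTracker:
    """Running mean of round-trip times, updated as each response arrives."""

    def __init__(self):
        self.count = 0
        self.total_ms = 0.0

    def record(self, elapsed_ms):
        """Record one request/response round trip, measured in milliseconds."""
        self.count += 1
        self.total_ms += elapsed_ms

    @property
    def average_ms(self):
        """Mathematical mean of every round trip seen so far."""
        return self.total_ms / self.count if self.count else 0.0
```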

Peak Response Time

Similar to the previous metric, Peak Response Time measures the round trip of a request/response cycle. However, the peak tells us the LONGEST cycle up to this point in the test.

For example, if we are looking at a graph that is showing 5 minutes into the load test that the Peak Response Time is 12 seconds, then we now know one of our requests took that long. The average may still be sub-second because our other resources had speedy response.
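A small worked example shows how the two metrics diverge (the numbers are illustrative):

```python
# 299 responses at 250 ms plus one 12-second outlier
times_ms = [250.0] * 299 + [12000.0]

average = sum(times_ms) / len(times_ms)  # about 289 ms: still sub-second
peak = max(times_ms)                     # 12000 ms: exposes the slow request
```

One slow request barely moves the average, but the peak makes it impossible to miss.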

The Peak Response Time shows us that at least one of our resources is potentially problematic. It can reflect an anomaly in the application where a specific request was mishandled by the target system. More often, though, an "expensive" database query is involved in fulfilling a certain request - such as building a page - that makes it take much longer, and this metric is great at exposing those issues.

Typically images and stylesheets are not the slowest resources (although they can be when a mistake is made, such as using a BMP file). In a web application, dynamically building the HTML document from application logic and database queries is usually the most time-intensive part of the system. Less common, though it occurs more often with open source apps, are very slow JavaScript files caused by their enormous size. Large files can produce slow responses that will show up in Peak Response Time, so be careful when using big images or calling big JS libraries. Many times you really only need less than 20% of the JavaScript inside those libraries; lazy coders won't take the trouble to clean out the other 80%, and that hurts their system performance.

Error Rate

It is to be expected that some errors may occur when processing requests, especially under load. Most of the time you will see errors begin to be reported when the load has reached a point that exceeds the web application's ability to deliver what is necessary.

The Error Rate is the mathematical calculation that produces a percentage of problem requests to all requests. The percentage reflects how many responses are HTTP status codes indicating an error on the server, as well as any request that never gets a response.

The web server returns an HTTP status code in the response header. Normal codes are usually 200 (OK) or something in the 3xx range indicating a redirect on the server. A common error code is 500, which means the web server knows it has a problem fulfilling that request. That doesn't tell you what caused the problem, but at least you know the server detected a definitive technical defect somewhere in the system.

It is much trickier to measure something you never receive, so an error code can be reported by the load testing tool for a condition not indicated by the server. Specifically, the tool must wait for some period of time before it quits "listening" for a response; it must decide when to "give up" on a request and declare a timeout condition. A timeout produces no code from the web server, so the tool must choose a code, such as 408, to represent the timeout error.

Other errors can be hard to describe because they do not occur at the HTTP level. A good example is when the web server refuses a connection at the TCP network layer. There is no way to receive an HTTP Status Code for this, thus the load testing tool must choose some error code to use for reporting this condition back to you in the load testing results. A code of 417 is what LoadStorm reports.
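Putting the pieces together, a sketch of the Error Rate calculation, treating 5xx server codes plus the tool-assigned 408 (timeout) and 417 (refused connection) as errors:

```python
def error_rate(status_codes):
    """Percentage of problem requests among all requests.

    Counts server errors (5xx) and the codes the tool assigns itself when
    no response arrives: 408 for a timeout, 417 for a refused connection.
    """
    errors = sum(1 for code in status_codes if code >= 500 or code in (408, 417))
    return 100.0 * errors / len(status_codes)

# [200, 200, 500, 408] -> 50.0 percent
```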

Error Rate is a significant metric because it measures "performance failure" in the application. It tells you how many failed requests are occurring at a particular point in time of your load test. The value of this metric is most evident when you can easily see the percentage of problems increase significantly as the higher load produces more errors. In many load tests, this climb in Error Rate will be drastic. This rapid rise in errors tells you where the target system is stressed beyond its ability to deliver responses adequately (performance failure).

No one can define the tolerance for Error Rate in your web application. Some testers consider less than 1% Error Rate successful if the test is delivering greater than 95% of the maximum expected traffic. However, other testers consider any errors to be a big problem and work to eliminate them. It is not uncommon to have a few errors in web applications - especially when you are dealing with thousands of concurrent users.



Storm on Demand - Pay Per Test

Users       Cost
1,000       $39.90
5,000       $199.50
10,000      $399.00
25,000      $997.50
50,000      $1,995.00
100,000     $3,990.00
200,000     $7,980.00

It's easy. You can be load testing in 15 minutes.

  1. Click the "Free Account" button.
  2. Enter your name & email address.
  3. Click the confirmation link in an email.
  4. Create a test scenario for your site.
  5. Run a load test.
  6. Analyze the test results.
  7. Send us a testimonial because you are amazed!

Customers love our load testing tool

“We needed an easy & cost effective way to load test our Windows Azure solution. Thanks to LoadStorm - highly recommended!” - Jonas Stawski, Microsoft MVP

"LoadStorm is a very useful tool." Alan Cheung, Manager - Technical Services, Dow Jones Publishing Company

"It has been a pleasure to work with LoadStorm." - Mike Compton, V.P. of I.T., Hearst Business Media

"Load-testing in the cloud was a great solution and LoadStorm a dream partner. " - Julie Hansen, COO, Publisher, The Business Insider

"There was no risk because I knew what the tool would provide before spending a dime. LoadStorm is a great tool." - Richard Ertman, QA/Release Manager, PETA

"I am definitely a fan of LoadStorm. I like its ease-of-use and the way in which the solution scales." - Darin Creason, Sr. Software Engineer, TransCore Corp

Want a Live Demo? Have Questions?

Please feel free to contact us:

(970) 389-1899

We are eager to help you with LoadStorm in any way that you need.