Gathering our benchmarking data was about more than just establishing a solid basis of comparison for our upcoming tests; it also led to some interesting discoveries about our existing system. After running 324 tests, we were left with 303 successful results, and we ended up using 297 of those after throwing out six as outliers. Let’s break this down and show you what we found.
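Before diving in, a quick note on that outlier pruning: a simple standard-deviation cutoff, sketched below in Python, is enough for this kind of filtering. The two-sigma threshold here is illustrative, not necessarily the exact rule we applied to our data.

    from statistics import mean, stdev

    def drop_outliers(load_times_ms, cutoff=2.0):
        """Keep only load times within `cutoff` standard deviations of the mean.

        The 2-sigma default is an illustrative threshold, not necessarily
        the exact rule applied to our benchmark data.
        """
        mu, sigma = mean(load_times_ms), stdev(load_times_ms)
        return [t for t in load_times_ms if abs(t - mu) <= cutoff * sigma]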

Performance Tests

Our performance tests were all run through www.webpagetest.org and covered a wide variety of locations and browsers. Every test used the “Cable (5/1 Mbps 28ms RTT)” connection setting.
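As a side note, runs like these can also be scripted: WebPageTest exposes a REST API that packs location, browser, and connection profile into a single location string. Here’s a rough Python sketch; the API key, target page, and location strings are illustrative placeholders rather than our actual setup.

    import requests

    API_KEY = "YOUR_WPT_API_KEY"       # placeholder, not a real key
    PAGE = "https://www.example.com/"  # placeholder target page

    # Location strings look like "Dulles:Chrome.Cable": test agent,
    # browser, and connection profile in one value.
    for location in ["Dulles:Chrome.Cable", "Amsterdam:Firefox.Cable"]:
        resp = requests.get("https://www.webpagetest.org/runtest.php", params={
            "url": PAGE,
            "location": location,
            "runs": 3,     # repeat runs per configuration
            "f": "json",   # return the queued test info as JSON
            "k": API_KEY,
        })
        result = resp.json()
        # statusCode 200 means the test was queued successfully.
        print(location, result["statusCode"], result["data"]["testId"])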

Here are the results organized by location. As you can see, Sydney, Tokyo, and Sao Paulo had the highest load times by far. Interestingly, Amsterdam had a 37% faster average load time than Sao Paulo, despite being much farther away. I’m tempted to blame this on latency, but according to www.speedtest.net, the difference in ping time from where I am in Michigan to each of those locations is only about 80 ms (129 ms to Amsterdam versus 207 ms to Sao Paulo).

The other location we tested, St. Petersburg, Russia, often had load times between 15 and 22 seconds. We never determined why they were so high, but because our tests kept failing from that location, we ultimately scrapped those results.

Our homepage kept coming up with the highest load times, about a half-second above the other two pages, but that’s probably because it has over 18% more requests than the other pages and almost 70% more data to download.

Chrome turned out to be the fastest of the three browsers we tested, but the spread between them was less than half a second (0.375 seconds between the slowest and the fastest).

What’s interesting, though, is that IE9 made the fewest requests: 14% fewer than Firefox and 11% fewer than Chrome. As mentioned in a previous post, this seems to be due to a much higher rate of failed requests being counted in Chrome and Firefox. It’s ironic, then, that IE9 was still slower than Chrome and not much faster than Firefox, even with significantly fewer requests.

Load Tests

In addition to the hundreds of tests with WebPageTest, we also ran a couple of load tests using our own software to gauge the current capacity of our website. Here are the results of a load test that ramped up to 5000 users:

[Figures: load test summary, request graph, and response graph]

To find the maximum capacity of our website, we looked for the point where things started to fail. We got our first time-out error at the 8-minute mark, with 1601 concurrent users. This shows up in the response graph as a sudden spike in peak response time to 35 seconds. The decline is also visible in the request graph, where throughput and requests per second started leveling out even as the number of users continued to rise.

By the time we hit 10 minutes and just over 2000 users, the site was regularly timing out at 35 seconds, indicating that our server was failing with no sign of recovery. From these two points, we can deduce that a comfortable upper bound on capacity for LoadStorm’s website is around 1600 concurrent users.
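In code terms, the failure test we applied looks something like this sketch (the sample format here is hypothetical, purely for illustration):

    # Find the first sample where peak response time hits the timeout
    # ceiling: our working definition of "starting to fail."
    TIMEOUT_MS = 35_000

    def first_failure(samples):
        """samples: e.g. [{"users": 1601, "peak_response_ms": 35000}, ...],
        assumed ordered by elapsed time."""
        for s in samples:
            if s["peak_response_ms"] >= TIMEOUT_MS:
                return s["users"]
        return None  # the site never saturated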

What’s next?

Iterating through optimizations is next. Now that we have a baseline, we can see exactly how much each optimization improves (or degrades) performance. Since it doesn’t make sense to run 324 tests for every little change, we’ve decided to limit further tests to a single location and browser. We chose Dulles as the location, since it’s closest to the LoadStorm servers in Ashburn, VA, and Chrome as the browser, since it was the fastest we tested on average. To keep these comparisons fair, we’re only going to use the Dulles/Chrome results from the benchmark, which are as follows:

                                       Total      Features   Home       Pricing
Average of Load Time (ms)              1634.50    1982.17    1499.50    1421.80
Standard Deviation of Load Time (ms)   456.43     607.69     122.40     320.73
Average of TTFB (ms)                   246.06     256.50     223.33     258.33
Average of Bytes In                    268743.44  214765.80  371976.80  219488.00
Average of Requests                    34.06      31.00      38.83      32.33
Average of Time To Start Render (ms)   1123.50    1388.33    870.83     1111.30
Average of Throughput (KB/s)           160.57     105.81     242.25     150.75
Average of RPS                         20.84      15.64      25.90      22.74

Throughput and RPS are both calculated fields based on the other data provided. Throughput is calculated as:

Throughput = (Bytes In / 1024) / (Load Time / 1000)

RPS is calculated as:

RPS = Requests / (Load Time / 1000)
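For concreteness, here are both formulas as code, sanity-checked against the Home column of the table above:

    def throughput_kb_per_s(bytes_in, load_time_ms):
        """Throughput in KB/s: bytes to KB, milliseconds to seconds."""
        return (bytes_in / 1024) / (load_time_ms / 1000)

    def rps(requests, load_time_ms):
        """Requests per second over the full page load."""
        return requests / (load_time_ms / 1000)

    # Home column: 371976.80 bytes in, 1499.50 ms load time, 38.83 requests
    print(round(throughput_kb_per_s(371976.8, 1499.5), 2))  # 242.25
    print(round(rps(38.83, 1499.5), 2))                     # 25.9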

Well, I think that’s about enough build-up. Next week, we’ll be getting into the meat of these experiments as we start analyzing optimization results.
