A couple weeks ago, we looked at some of the best Web performance articles written in 2011. There was so much good material out there that we couldn’t stop with just a single round-up!
Selecting Tools and Designing Tests
Jason at Performance Testing Professional published a couple of great guides last year on choosing the right testing framework to fit your company’s needs. In his first article, Jason discussed some guiding principles for selecting a performance testing vendor. Most companies only adopt a performance testing framework after experiencing a major issue in production, so Jason gears several of his 11 questions towards investigating this instigating incident. His other questions focus on factors such as how your company plans to integrate performance testing into its existing QA cycle, and how often new system builds are deployed into production. He also lays out seven factors for making a final decision, including initial price, ongoing maintenance cost, the platform’s scripting capabilities, and its overall learning curve.
Continuing his analysis in a separate article, Jason reviews the three major types of load testing platforms – freeware, commercial, and cloud-based – and discusses which tools tend to suit which types of projects. He also bemoans that no one has yet pulled together the existing open source tools into a freeware platform that provides a workable alternative to the commercial standards. (As we will see below, though, at least a few people are thinking about how to do this in the mobile performance space.)
Meanwhile, over at the dynaTrace blog, Alois Reitbauer published an epic post about common errors made in measuring response times. For example, rather than using averages, which hide the pain felt by the unluckiest users, Reitbauer urges developers to follow the performance experts and measure percentiles, which show how many users are experiencing subpar response times. He also counsels against measuring only server-side response times and against mixing different transaction types together in one’s measurements. Performance tests, argues Reitbauer, should be as close to a real-world end user’s experience as possible.
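To make the contrast concrete, here is a minimal sketch (the response times are invented, not Reitbauer’s data) comparing an average with a nearest-rank 95th percentile; two slow outliers barely move the average but dominate the percentile.

```java
import java.util.Arrays;

public class ResponseTimeStats {
    public static void main(String[] args) {
        // Invented response times in milliseconds: most requests are quick,
        // but the two slow outliers are exactly the pain real users notice.
        double[] times = {120, 130, 125, 140, 135, 128, 132, 2400, 2600, 138};

        double average = Arrays.stream(times).average().orElse(0);

        double[] sorted = times.clone();
        Arrays.sort(sorted);
        // Nearest-rank 95th percentile: the smallest value with at least
        // 95% of the samples at or below it.
        int index = (int) Math.ceil(0.95 * sorted.length) - 1;
        double p95 = sorted[index];

        // Prints: average = 605 ms, 95th percentile = 2600 ms
        System.out.printf("average = %.0f ms, 95th percentile = %.0f ms%n", average, p95);
    }
}
```

The average looks merely sluggish, while the percentile makes the worst-case experience impossible to ignore.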
Elsewhere, the folks at Web Performance tackled an age-old question: how do you know how many concurrent users to simulate in your tests? Michael Czeiszperger provides some basic formulas for calculating average hourly loads based on daily traffic, and the company also offers a simple online calculator for estimating virtual users.
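The article walks through the actual formulas; as a rough illustration of the underlying idea, here is a back-of-the-envelope, Little’s-Law-style estimate using made-up traffic figures (substitute your own analytics data):

```java
public class VirtualUserEstimate {
    public static void main(String[] args) {
        // Made-up figures for illustration only.
        double visitsPerDay = 50_000;           // total visits per day
        double peakHourShare = 0.15;            // fraction of daily traffic in the busiest hour
        double avgVisitLengthSeconds = 300;     // average time a visitor spends on the site

        double visitsInPeakHour = visitsPerDay * peakHourShare;   // 7,500 visits
        // Little's Law style estimate: concurrency is roughly arrival rate times time in system.
        double concurrentUsers = visitsInPeakHour * avgVisitLengthSeconds / 3600.0;

        // Prints: Estimated concurrent virtual users: 625
        System.out.printf("Estimated concurrent virtual users: %.0f%n", concurrentUsers);
    }
}
```

The takeaway is that concurrency depends on visit length as much as on raw visit counts, which is why daily page views alone are not enough.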
A Pair of Innovative Ideas
Steve Souders was responsible for some altogether excellent content in 2011. We mentioned Steve in Part 1 of our round-up, pointing readers to his presentation on mobile performance. A few months after giving that talk, Steve created a great application called LoadTimer, a Web-based framework for measuring the performance of pages on mobile devices. Initially created to address the lack of good testing options for the Kindle Fire’s Silk browser, LoadTimer works on a variety of mobile tablet platforms. To confirm this, Steve used it to compare the Web browsing performance of the Kindle Fire, iPad 2, and Samsung Galaxy Tab. Souders suggests that, combined with pcapperf and the Mobile Perf bookmarklet, LoadTimer provides a solid foundation for a mobile performance measurement toolkit.
Earlier in the year, Souders announced another innovation: the HTTP Archive, an active repository of performance information gathered from around the Web. Souders’ goal in collecting this data is to provide snapshots of current and past trends in Web data delivery. For example, Souders has used the collection to calculate how much data is downloaded for the average Web page, and how much is downloaded for each type of page asset (HTML, images, JavaScript, stylesheets, etc.). Souders also produces trending charts that show how data usage on the Web is increasing over time.
Garbage Collection and the Perils of Performance Hacking
When you’re testing Java applications, the virtual machine you use can greatly impact performance. Another winning article from dynaTrace last year focused on the different garbage collection behavior in the industry’s top three Java Virtual Machine environments: Sun’s HotSpot JVM, Oracle JRockit, and the IBM JVM. Each JVM uses different garbage collection algorithms, and author Michael Kopp discusses how each can be tuned for maximum performance.
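Kopp’s article covers the collector-specific tuning details; as a small starting point of our own (not taken from the article), the standard java.lang.management API will report which collectors the running JVM has selected and how much time they have consumed, which should work on any compliant JVM:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInspector {
    public static void main(String[] args) {
        // Ask the running JVM which garbage collectors it is using and how much
        // work they have done so far. Collector names differ between HotSpot,
        // JRockit, and the IBM JVM, so this is a quick way to see what you are tuning.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```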
And finally, a cautionary tale. Sometimes, when we’ve found a performance problem, it’s tempting to resort to a quick hack to fix it. Cliff Click provides an object lesson in the dangers of ad hoc optimization, as he discovers that a 32-bit cast he wrote 20 years ago now malfunctions when the code runs under 64-bit JVMs.
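We don’t have Click’s original code, but the general failure mode is easy to reproduce: a narrowing cast that was harmless while values fit in 32 bits silently corrupts them once they don’t.

```java
public class TruncationDemo {
    public static void main(String[] args) {
        // A value that fits in 32 bits survives the cast unchanged...
        long smallOffset = 1_000_000L;
        System.out.println((int) smallOffset);   // 1000000

        // ...but once values outgrow 32 bits (say, offsets into a heap that is
        // now 64-bit addressed), the same cast silently drops the high-order
        // bits and yields a wrong, possibly negative, result.
        long largeOffset = 5_000_000_000L;
        System.out.println((int) largeOffset);   // 705032704, not 5000000000
    }
}
```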
Conclusion
We saw a lot of great content in 2011 pertaining to mobile performance, server performance, and the selection of load testing platforms. We expect that mobile performance will continue to be a hot topic in 2012, and we also expect to see more content focused on performance testing of applications that live in the cloud.