Today I saw a tweet that led me to download a document published by WebPagetest.org’s development team containing proposed changes to the way web performance testing is conducted. The following is a summary of the document with a little commentary. The focus of the document is NOT on load testing; rather, it primarily deals with individual web page analysis. Thus, the definition of performance testing used here amounts to driving a browser to a page and analyzing the response metrics for that single page.

Load times of each resource such as images, CSS, HTML, Flash, XML, and JavaScript files are a key measurement. The speed of DNS lookups, initial connection, content download, start render, and document complete are other important measurements in the type of performance testing involved in this proposal. Patrick Meenan, Sadeesh Kumar Duraisamy, Qi Zhao, and Ryan Hickman are the authors of this piece, and they refer to the scope of their proposal as “ad-hoc performance testing”. They submit four main points (after the list I sketch what one page’s test result might look like as a data structure):

  1. Current state of web performance testing
  2. Proposed changes
  3. Use cases
  4. Making it happen
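To make the measurements above a little more concrete, here is a minimal sketch of what a single-page test result might look like as a data structure. The field names and shapes are my own illustration, not anything defined in the WebPagetest proposal.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ResourceTiming:
        """Timings for one resource (image, CSS, JavaScript file, etc.), in milliseconds."""
        url: str
        dns_lookup: float
        initial_connection: float
        content_download: float

    @dataclass
    class PageTestResult:
        """Page-level metrics for a single test run against a single URL."""
        page_url: str
        start_render: float       # when the browser first painted anything
        document_complete: float  # when the onload event fired
        resources: List[ResourceTiming] = field(default_factory=list)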

Current Web Performance Testing

Their document has bullet points without much explanation, so I must read between the lines and offer my thoughts. The first bullet is “Monolithic Solutions”. Yep, I think I understand that one. Most of the performance testing solutions on the market are well known to developers because those tools have been around for a long time. Until recently, there have only been a few players such as Mercury, Rational, and Borland. Consolidation in the past 5 years caused the names to change to HP, IBM, and Micro Focus, but the software tools are the same monolithic solutions created in the 1990s.

The vast majority of market share has been controlled by the large corporations with deep pockets for advertising and PR. Their sales relationships with the IT Managers and CTOs of Fortune 1000 companies have assured them of deals through being part of a suite or through the FUD factor. Big companies must buy software from big companies. Otherwise, the CTO would be exposing himself or herself to ridicule and contempt by the vendors. This contempt would be delivered to other executives like CFOs that don’t know software, but they do understand risk mitigation. Along this line of thinking, the pitch from the entrenched big vendors is that monolithic solutions are safer. Why? Mainly because they are too big to fail. Oh my goodness! How could a rational (pun intended) IT executive swallow that crap? There is plenty of IBM software that is dead and gone, which cost many companies millions of dollars to replace or convert to newer products. Reminder: GM was too large to fail as well…see the fallacy?

Most of these same characteristics of today’s web performance testing environment are shared by the second bullet: Full stack from individual vendor/solution. Yes, the theory is that if all of my testing suite comes from one place, it must all work together beautifully. The implication, and erroneous conclusion, is that one vendor umbrella will make your development team work together more efficiently. Not necessarily. Many times the software is pieced together from multiple acquisitions, and it isn’t truly integrated at all. It’s a kluge. It’s a bogus sales job that doesn’t hold up to the scrutiny of smart software programmers, project managers, and web architects. The authors refer to this as “Hope one vendor meets all of your needs”.

Specialization of functionality has become the driving force for global economies. That’s why China is excelling at manufacturing. That’s why India is so popular for offshore coding. That’s why Ireland has turned into a huge data center hub. That’s why Japan has dominated the automobile industry for the past 10-15 years. That’s why there are a million boutique retailers, each targeting a different demographic.

Specialization in web software is a good thing. Monolithic solutions in a full stack from one place are going to give you mediocrity in one basket. It will be a detriment to your development cycle. Just ask your team. Agile teams understand the value of smaller, more responsive vendors. Let them share stories with you. Integration is currently vendor-specific, supporting only application programming interfaces that are inbred with the vendor’s other tools. The authors are advocating an open approach to interfacing.

Proposed Changes

Improving the situation involves the authors’ proposal to “split the functionality into services with standardized, easy-to-consume APIs.” These services include testing, storage, and reporting.

Testing services should be separated from the application logic. They present a nice graphic that shows a small cloud of testing services built on several types of browsers, which a web performance application can call through an API. The service would then actually make the requests of the target web site and return the measurements back through the same API. This would allow for more specialization and, therefore, better efficiency. The services would be available to hosted solutions, commercial products, and self-managed implementations.
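As a rough illustration of the idea, here is a minimal sketch of how a web performance application might drive such a testing service over HTTP. The service URL, endpoints, parameters, and response fields are hypothetical placeholders of my own, not part of the proposal or of any actual WebPagetest API.

    import time
    import requests

    TESTING_SERVICE = "https://testing.example.com/api"  # hypothetical service URL

    def run_page_test(target_url: str, browser: str = "chrome") -> dict:
        """Submit a single-page test to the testing service and poll until it finishes."""
        # Ask the service to load the target page in the requested browser.
        submit = requests.post(f"{TESTING_SERVICE}/tests",
                               json={"url": target_url, "browser": browser})
        submit.raise_for_status()
        test_id = submit.json()["test_id"]  # hypothetical response field

        # Poll for completion; the service does the actual page load and measurement.
        while True:
            result = requests.get(f"{TESTING_SERVICE}/tests/{test_id}")
            result.raise_for_status()
            body = result.json()
            if body["status"] == "complete":
                return body["measurements"]  # e.g. start render, document complete, per-resource timings
            time.sleep(5)

    # measurements = run_page_test("https://www.example.com/")

The application never touches a browser itself; it only speaks the API, which is exactly what lets the testing side specialize.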

Storage can also be a specialized solution that is reusable by many consumers or tools. All of the individual test results can be retained at the detailed request/response level. The summary data aggregated from many tests can be available for higher-level analysis of common metrics. The storage mechanism would be abstracted from the web performance application through a standardized API. Thus, any app could retrieve whatever measurements are useful to its customers without having any knowledge of where or how the results are physically stored. Once again, the efficiency of the overall system can be greatly improved by allowing specific providers to do what they do best and handle the aspects of storage which cannot be managed as well by “amateurs”. I don’t mean “amateurs” to be a derogatory or insulting label, but we can’t all be storage experts. If we are good at web app development, or if we are performance engineers, we should admit that we have limitations to our abilities with moving data in and out of a storage facility. Storage costs also tend to go way down if you utilize one of the newer cloud solutions. Save money, speed up delivery of code, and eliminate tuning headaches. Win, win, win.
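To show what that abstraction might look like from the application’s side, here is a minimal sketch of a storage interface. The class and method names are my own invention; the point is only that the application talks to an API and never knows whether results land in a local database or a hosted cloud store.

    from abc import ABC, abstractmethod
    from typing import List

    class ResultStore(ABC):
        """Abstract storage API; concrete providers hide where and how results are kept."""

        @abstractmethod
        def save_result(self, test_id: str, result: dict) -> None:
            """Persist the detailed request/response data for one test run."""

        @abstractmethod
        def get_result(self, test_id: str) -> dict:
            """Fetch the full detail for a single test run."""

        @abstractmethod
        def query_summary(self, page_url: str, metric: str) -> List[float]:
            """Return aggregated values of one metric across many runs of a page."""

    class HostedCloudStore(ResultStore):
        """One possible provider; a local database could sit behind the same API."""
        def save_result(self, test_id: str, result: dict) -> None:
            ...  # push to the provider's storage service

        def get_result(self, test_id: str) -> dict:
            ...  # fetch from the provider's storage service

        def query_summary(self, page_url: str, metric: str) -> List[float]:
            ...  # run the aggregate query on the provider's side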

Using a web service for reporting and visualization also reaps the same type of benefits for the same reasons. Whether your web performance application wants to produce elaborate waterfall graphs, time-series charts, OLAP analysis with drill-down, or a complex five-dimensional database of raw relational tables, it stands to reason that the front-end tool needs to be specialized. Many times tools are best when they are industry-specific. For example, I’ve seen tremendously valuable OLAP tools that focus on marketing analytics. The calculations are unique. The way marketing managers and analysts think is different from, say, an insurance actuary’s. Healthcare measurements and visualizations are also going to be specific to the way medical practitioners need to see them in order to be useful. Operational dashboards for manufacturing executives need to have metrics presentations and measurements that only they understand. By putting control of the calculations and presentation of reporting data into processes separate from the actual storage or testing tools, end users get a much richer experience with higher usability in their domain of expertise.
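Here is a minimal sketch of that separation: the front end asks a reporting service for a domain-specific view of already-stored results, identified only by a test ID. The service URL, endpoint, and parameters are hypothetical placeholders of my own.

    import requests

    REPORTING_SERVICE = "https://reporting.example.com/api"  # hypothetical service URL

    def fetch_report(test_id: str, view: str = "waterfall") -> bytes:
        """Ask the reporting service to render a view (waterfall, time series, etc.)
        of a stored test result, without touching the storage layer directly."""
        resp = requests.get(f"{REPORTING_SERVICE}/reports/{test_id}",
                            params={"view": view, "format": "svg"})
        resp.raise_for_status()
        return resp.content  # the rendered chart, ready to embed in a dashboard

    # chart = fetch_report("abc123", view="waterfall")

A marketing-analytics dashboard and a healthcare dashboard could both sit on top of the same call, each applying its own calculations and presentation.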

End of Part 1

I guess I have gotten a bit verbose in my analysis, but I like this stuff. This is getting a bit too long for one blog post. I’ll work on part 2 tomorrow. It will cover the Use Cases and Making It Happen sections of WebPagetest’s proposal. I’ll also try to find any other analysis of this proposal. Google backs it, so I bet they have something to say about it.

To be continued…
