I see something becoming more frequent and it’s increasingly concerning me. The current method of performance testing – capture network-level traffic, inspect it, correlate, and then replay – is becoming increasingly complex. In the ‘good old days’ we had simple GET and POST requests. Then Ajax and Web 2.0 came along, making things a little trickier. I’m now seeing a lot of requests being generated dynamically within JavaScript – and it’s nearly impossible to understand the JS code and replicate its logic in whatever performance tool you happen to use.
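
To make that concrete, here is a contrived TypeScript sketch of the kind of browser-side request I mean – the endpoint, parameter names and signing scheme are invented for illustration, not taken from any particular application. A network-level recording captures one concrete nonce and signature, but replaying those bytes fails because the server expects freshly computed values, so the tool has to reimplement the client logic rather than replay the traffic.

```typescript
// Hypothetical browser code: the URL and headers are computed at runtime,
// so a recorded request cannot simply be replayed.
async function fetchBasket(sessionToken: string): Promise<Response> {
  const nonce = crypto.randomUUID();        // different on every invocation
  const ts = Date.now().toString();
  const payload = `${sessionToken}:${nonce}:${ts}`;

  // Client-side signature: the load tool must replicate this logic,
  // not just resend the captured bytes.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(payload)
  );
  const signature = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");

  return fetch(`/api/basket?nonce=${nonce}&ts=${ts}`, {
    headers: { "X-Signature": signature },
  });
}
```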

We also have CDNs and ‘magic boxes’ (e.g. Yottaa, Strangeloop) increasingly being sandwiched between the client and the server – what was a static named resource can easily become dynamic from one moment to the next. Do we want to test the CDNs or not? (In the majority of cases the answer is no.)

I see performance engineers having to compromise on the traffic they replicate using their tools.  You can argue this has always been the case, but I would say that the approximation is diverging more and more from the actual traffic.

Performance Engineering is Getting More Difficult

Traditional Performance Engineering is at heart a development activity – inspecting the logic of others, decomposing it, simplifying it and then replicating it. Good Performance Engineers also understand key business flows, architectures, risk and the metrics produced. But as browser logic becomes more complex, the traditional approach is becoming more problematic.

Take a look at the HTML5 specification – specifically items such as WebSockets. These will introduce a whole new level of complexity. With this kind of asynchronous communication, the messages sent to and from the browser will become increasingly complex (encoded, compressed and driven by user events) as the sophistication of the browser app increases.
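
As a rough illustration – the endpoint and frame layout below are invented, not taken from any real application – this is the sort of event-driven, binary WebSocket traffic I mean. There is no static request/response pair to capture and replay; the bytes on the wire depend on user actions and on whatever the server chooses to push.

```typescript
// Hypothetical browser code: compact binary frames, driven by user events
// and by server pushes rather than by page loads.
const socket = new WebSocket("wss://app.example.com/live");
socket.binaryType = "arraybuffer";

document.addEventListener("click", (e: MouseEvent) => {
  if (socket.readyState !== WebSocket.OPEN) return;
  // Encode the user action as three 16-bit integers: [opcode, x, y].
  const frame = new Int16Array([0x01, e.clientX, e.clientY]);
  socket.send(frame.buffer);
});

socket.addEventListener("message", (msg: MessageEvent<ArrayBuffer>) => {
  // The server pushes state deltas whenever it likes; the client reacts.
  const view = new DataView(msg.data);
  console.log("server opcode", view.getUint16(0));
});
```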

The advanced features of SPDY, if implemented, will also cause a headache… and I think this is just the beginning. HTML clients are going to get more and more sophisticated – they are becoming full-blown apps. This ultimately means that simulated performance behaviour will become harder to mimic and will diverge increasingly from the actual behaviour of the system. If performance engineers want to mimic this behaviour accurately (and capture the issues), they will have to replicate the logic implemented in the client. That simply isn’t sustainable as a way forward. Performance testing in this form is climbing an ever-steeper mountain, and eventually it’s going to hit the wall.

So what is the answer?

I’ll attempt to outline a potential solution, because it’s never great to highlight a problem without suggesting a possible way out. LoadStorm and HP’s TruClient, while viewed as crude by hardcore performance engineers with years of legacy tool experience (e.g. LoadRunner, Rational), are a sustainable scripting approach for the future. The two major issues are memory footprint and filtering of hosts, but I think a combination of the two approaches will overcome these obstacles. Having small proxy agents in front of a group of client browsers to filter requests (and return dummy 200s) would resolve the host-filtering issue and provide more accurate measurements.
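
A minimal sketch of what such a proxy agent might look like is below, assuming plain HTTP and a Node.js process sitting between the load-generating browsers and the network. The host names, port and stub response are invented for illustration; a real agent would also need to handle HTTPS (CONNECT) and record timings.

```typescript
// Hypothetical proxy agent: forward requests for the system under test,
// answer everything else (CDNs, analytics, third-party tags) with a dummy 200.
import * as http from "node:http";

const TARGET_HOSTS = new Set(["app.example.com"]); // hosts we actually test

const proxy = http.createServer((clientReq, clientRes) => {
  const host = (clientReq.headers.host ?? "").split(":")[0];

  if (!TARGET_HOSTS.has(host)) {
    // Third-party traffic: stub it out locally so it is neither hit nor measured.
    clientRes.writeHead(200, { "content-type": "text/plain" });
    clientRes.end("stubbed by proxy agent");
    return;
  }

  // Forward the request to the real system under test.
  const url = new URL(clientReq.url ?? "/", `http://${host}`);
  const upstream = http.request(
    {
      host,
      port: 80,
      method: clientReq.method,
      path: url.pathname + url.search,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream);
});

proxy.listen(8080, () => console.log("proxy agent listening on :8080"));
```

The browsers in the load group would simply be configured to use this agent as their HTTP proxy, keeping CDN and third-party hosts out of both the test and the measurements.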

Piping all TCP/IP traffic generated in the cloud directly through the firewall may let you leverage the benefits of the elastic cloud (memory and CPU) for testing required inside the firewall. Complex, but solvable, even with the latency issues. The benefit of the LoadStorm approach is that the time to script key business scenarios is extremely short, the scripting required is less involved and more accessible to a wider technical audience – and the cost is significantly lower.

Predicting the Future of Performance Testing

Traditional scripting becomes harder and more specialized. Larger entrenched companies will need to keep hold of their performance engineers as more knowledge becomes locked up inside them (e.g. Betfair/Expedia). Some companies with switched-on engineers will use the traditional approach and target risk points whilst accepting its limitations (e.g. Facebook).

As the limitations of the traditional approach become more apparent (time to develop, accuracy of simulated behaviour), someone will refine and merge the two very different approaches that currently exist into a workable solution for the lion’s share of the market – which I believe to be companies wishing to simulate fewer than 10k users, with very tight deadlines.

These newer methods will become more prevalent, relevant, easier to leverage and accepted as companies move from internal environments to PaaS/Cloud environments.

Performance Testing isn’t dead – it’s simply going to get more complex…and then change.

Jason Buksh is an independent Performance Testing consultant. He runs and contributes regularly to the popular and independent load and performance testing site perftesting.co.uk.
