Traditional Performance Testing is Reaching Its Limits
I see something happening more and more frequently, and it increasingly concerns me. The current method of Performance Testing – capture network-level traffic, inspect it, correlate it, and then replay it – is becoming increasingly complex. In the ‘good old days’ we had simple GET and POST requests. Then Ajax and Web 2.0 came along, making things a little trickier. I’m now seeing a lot of requests being generated dynamically within JavaScript – and it’s nearly impossible to understand the JavaScript code and replicate its logic in whatever performance tool you happen to use.
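To make this concrete, here is a minimal, hypothetical sketch of the kind of client-side logic that defeats record-and-replay (the endpoint, parameter names and checksum are invented purely for illustration): the URL is assembled at runtime, so a captured request is already stale by the time you try to replay it.

```typescript
// Hypothetical browser-side logic: the request URL is assembled at runtime,
// so a URL captured yesterday will not replay correctly today.
function buildSearchUrl(term: string): string {
  const nonce = Math.random().toString(36).slice(2); // changes on every call
  const ts = Date.now();                             // changes on every call
  // A simple (illustrative) checksum the server might validate
  const checksum = Array.from(term + ts)
    .reduce((acc, ch) => (acc * 31 + ch.charCodeAt(0)) >>> 0, 0);
  return `/api/search?q=${encodeURIComponent(term)}&ts=${ts}&nonce=${nonce}&sig=${checksum}`;
}

console.log(buildSearchUrl("flights to lisbon"));
// e.g. /api/search?q=flights%20to%20lisbon&ts=1700000000000&nonce=k3j9x...&sig=123456789
```

A protocol-level tool that simply replays the recorded URL will send a stale timestamp, nonce and signature, and the server will (quite rightly) reject it – unless the engineer reverse-engineers and re-implements this logic in the test script.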
We also have CDNs and magic boxes (e.g. Yottaa, Strangeloop) increasingly being sandwiched between the client and the server – what was a statically named resource can easily become dynamic from one moment to the next. Do we want to test the CDNs or not? (In the majority of cases the answer is no.)
I see performance engineers having to compromise on the traffic they replicate with their tools. You could argue this has always been the case, but I would say the approximation is diverging further and further from the actual traffic.
Performance Engineering is Getting More Difficult
Traditional Performance Engineering is at heart a development activity – inspecting the logic of others, decomposing it, simplifying it and then replicating it. Good Performance Engineers also understand key business flows, architectures, risk and the metrics produced. But as browser logic becomes more complex, the traditional approach is becoming more problematic.
Take a look at the HTML5 specification – specifically items such as WebSockets. This will introduce a whole new level of complexity. With this kind of asynchronous communication, the messages sent to and from the browser will become increasingly complex (encoded, compressed and driven by user events) as the sophistication of the browser app increases.
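A rough, hypothetical sketch of what that traffic might look like from the browser’s side (the endpoint and message layout are invented for illustration): the messages are application-defined, binary-framed and triggered by user actions rather than page loads, so there is no request URL for a protocol-level tool to capture.

```typescript
// A minimal sketch of event-driven WebSocket traffic. Endpoint and framing
// are assumptions, not taken from any real application.
const socket = new WebSocket("wss://example.com/quotes");
socket.binaryType = "arraybuffer";

// Encode an application-specific message: [1-byte type][4-byte id][UTF-8 payload]
function encodeSubscribe(instrumentId: number, filter: string): ArrayBuffer {
  const payload = new TextEncoder().encode(filter);
  const buf = new ArrayBuffer(5 + payload.length);
  const view = new DataView(buf);
  view.setUint8(0, 0x01);              // 0x01 = SUBSCRIBE (app-defined, no standard)
  view.setUint32(1, instrumentId);
  new Uint8Array(buf, 5).set(payload);
  return buf;
}

socket.onopen = () => {
  // The message only exists because of a user action; there is no URL to replay.
  socket.send(encodeSubscribe(42, "bid,ask"));
};

socket.onmessage = (event: MessageEvent) => {
  const view = new DataView(event.data as ArrayBuffer);
  console.log("server message type:", view.getUint8(0));
};
```

To simulate this at the protocol level, the performance engineer would have to re-implement that framing and the event logic that drives it – which is exactly the unsustainable duplication of client code I am describing.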
The advanced features of SPDY, if implemented, will also cause a headache… and I think this is just the beginning. HTML clients are going to get more and more sophisticated – they are becoming fully-blown apps. This ultimately means that simulated performance behaviour will become harder to produce and will diverge increasingly from the actual behaviour of the system. If performance engineers want to mimic this behaviour accurately (and catch the issues), they will have to replicate the logic implemented in the client. That simply isn’t sustainable as a way forward. Performance testing in this form is climbing an ever-steeper mountain, and eventually it’s going to hit the wall.
So what is the answer?
I’ll attempt to outline a potential solution, because it’s never great to highlight a problem without suggesting a possible way out. LoadStorm and HP’s TruClient, while viewed as crude by hardcore performance engineers with years of legacy tool experience (e.g. LoadRunner, Rational), are a sustainable scripting solution for the future. The two major issues are memory footprint and filtering hosts, but I think a combination of the two approaches will overcome these obstacles. Having small proxy agents in front of a group of client browsers to filter requests (and return dummy 200s) would resolve the host-filtering issue and provide more accurate measurements.
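As a sketch of what such a proxy agent could look like (a simplified HTTP-only example using Node’s built-in http module; the filtered host names are assumptions): requests bound for the filtered hosts get an immediate dummy 200, and everything else is forwarded untouched so the measured timings still reflect the system under test.

```typescript
import * as http from "http";
import { URL } from "url";

// Hosts we do not want to load test (e.g. CDNs) – names assumed for illustration.
const FILTERED_HOSTS = new Set(["cdn.example.com", "static.example.net"]);

http.createServer((req, res) => {
  // When used as a forward proxy, req.url is an absolute URL.
  const target = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);

  if (FILTERED_HOSTS.has(target.hostname)) {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("stubbed by load-test proxy"); // dummy 200, no CDN traffic generated
    return;
  }

  // Forward everything else unchanged so timings reflect the system under test.
  const upstream = http.request(
    {
      host: target.hostname,
      port: target.port || 80,
      path: target.pathname + target.search,
      method: req.method,
      headers: req.headers,
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(8080, () => console.log("filtering proxy listening on :8080"));
```

Point the browser-based load generators at this proxy and the CDN question answers itself: the hosts you don’t want to test never receive traffic, and the measurements exclude them.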
Piping all TCP/IP traffic generated from the cloud directly through the firewall could bring the benefits of the elastic cloud (memory and CPU) to testing that has to happen inside the firewall. Complex, but solvable, even with the latency issues. The benefits of the LoadStorm approach are that the time to script key business scenarios is extremely short, the scripting skill required is less specialised and more accessible to a wider technical audience – and the cost is significantly lower.
Predicting the Future of Performance Testing
Traditional scripting becomes harder and more specialised. Larger, entrenched companies will need to keep hold of their performance engineers as more knowledge becomes locked up inside them (e.g. Betfair/Expedia). Some companies with switched-on engineers will use the traditional approach and target the points of risk whilst accepting its limitations (e.g. Facebook).
As the limitations of the traditional approach become more apparent (time to develop, accuracy of simulated behaviour), someone will refine and merge the two very different approaches that currently exist into a workable solution for the lion’s share of the market – which I believe to be companies wishing to simulate fewer than 10k users, with very tight deadlines.
These newer methods will become more prevalent, more relevant, easier to adopt and more widely accepted as companies move from internal environments to PaaS/cloud environments.
Performance Testing isn’t dead – it’s simply going to get more complex...and then change.
Jason Buksh is an independent Performance Testing consultant. He runs and contributes regularly to the popular and independent load and performance testing site perftesting.co.uk.
Comments
The benefits of the LoadStorm
The benefits of the LoadStorm approach are that the time to script key business scenarios is extremely quick, the scripting level required is less involved and more accessible to a wider technical audience - and the cost is significantly lower. It's good for my team!
Jason, Have you worked with
Jason,
Have you worked with the Winsock protocol? It all depends on what you define as the 'good old days' - if you look a few more years back, to scripting client-server applications, today's AJAX scripting is relatively easy.
See, for example, my old CMG paper at http://alexanderpodelko.com/docs/Workload_Generation_CMG05.pdf about the challenges of the 'good old days' and some answers.
Although, I guess, the message is the same: the record-on-the-protocol-level/playback approach is not the only method of generating workload, and you need to use whatever method works.
However, I don't quite agree that recording on the UI level/playback is the answer. Definitely not yet - unfortunately, you may have a lot of issues on the UI level too. But it is an answer in some cases.
Actually I answered to quite a similar post at http://applicationperformanceengineeringhub.com/is-the-current-model-of-...
Alex
Hi Alex, I am specifically
Hi Alex, I am specifically referring to HTML clients. I've seen increasing complexity in web front ends across an increasing number of clients, particularly in front of the firewall. For example, I've seen several large client sites determine URL parameters dynamically within the JavaScript. Very often the performance engineer will not have the time to decode this logic. There is also a time-latency issue at play, and it is specific to a particular profile of client - they are about to go live and make initial contact for performance testing about 5 days in advance. The delivery team is often sufficiently detached from the developers not to know this level of detail. Even if it is possible to find out, replicating the logic is time-consuming in the programming language of choice. And this is just one aspect - I'm not even including the dynamic nature of the requests being made here: the resources requested, POSTs and GETs, can vary because of magic boxes and CDNs.
HTML5 and WebSockets are a particular concern, and where I think performance engineers are really going to 'hit the wall'. I can only begin to imagine the complexity of the traffic and events developers will invent to send to and from the server. Whereas with the current method there are crude ways of approximating traffic, with an HTML5 app using WebSockets the engineer will need to know and replicate the code behind the messages sent within this pipe – and the format of those messages will not have to conform to any standard. Whereas previously there have been easy hacks that allow crude duplication of traffic, this won't be possible ... and explaining that to clients who have tight deadlines will become less acceptable. I think the UI approach has many benefits (scripting/ease of use) and its current inherent issues will be ironed out. It's not a silver bullet - but it's going to become more appropriate. I also think traditional performance testing will continue - but it will become a smaller part of the market, reserved for specialised in-house teams whose product is well known and understood. GUI testing in one form or another will become more prevalent. I think I will write a post explaining the specific concerns with the current method of performance testing, with concrete examples.
Jason Buksh
PS I think the summary of your paper is still spot on. I always assess the problem domain and then choose the most relevant solution for the customer.
All true - but the UI
All true - but the UI approach has its own challenges (both scripting and scalability). It may work in some cases, but it is not a silver bullet (at least with the current products). And it could quite well be that you won't be able to do load testing in 5 days, and that you would just need more time (maybe much more time). Maybe that is a new notion for those who are used to working with plain HTML - but those who have worked with corporate applications have probably seen it before.
Alex
Exceptional post Jason. Thank
Exceptional post Jason. Thank you for sharing such a well-articulated post on performance testing. I agree this is going to get more complex and will require a special skill set and expertise. Keep sharing such informative posts.