Performance Requirements

The aim of this post is to outline how to determine and prioritise the key performance requirements within a project. I’ve already covered how important it is to have good performance requirements: they are the items that drive and determine the quality of the performance testing. But how do we best manage, identify and assess them?

Managing Performance Requirements

Let’s take a step back first. I’ve often found that the person who best defines the performance requirements is usually the performance tester, rather than the business analyst or the stakeholders. Why? A number of reasons, the main ones being time and accuracy.

Here’s a typical conversation:

PT: What is the load you want me to replicate on the system?
BA: 1000 users

PT: 1000 users doing what?
BA: 1000 users doing actions A,B,C,……X,Y and Z

PT: That’s too many – which are the most critical? (Discussion ….)
….. BA goes to find out and confirm

PT: And at what transaction rate?
BA: All at the same time.

PT: No, (explains transaction rate) – can you tell me the rate of actions X, Y and Z over an hour … peak and normal?
BA: No – I have to get back to you ……

….. two days later

BA: Such and Such says we need to simulate X,Y and Z at this rate …
PT: Can you explain to me precisely what X does?

BA: I’m not quite sure, I will have to find that out from such and such

The above discussion is hypothetical but not untypical of the conversations I have. It illustrates the time lag and third-party knowledge involved in gathering this information.

If I leave a BA or any other stakeholder to write a document containing performance requirements and collate all the statistics, it is likely to be inaccurate, imprecise and structured incorrectly, and to take a long time to deliver. The fundamental issue here is that performance testing is specialised: the performance analyst knows exactly what is required and how they want it delivered, whereas the BA doesn’t. So I take control. I talk to the BA, I write up a summary document and I ask them for the contacts I need to speak to (notice how many times I used ‘I’). This means I can get a more precise set of requirements directly from the source, and much more quickly. It also means I can build up a list of valuable contacts, so when I have a query I can bypass unnecessary layers, and I can start building an understanding of the business requirements while validating the performance requirements. Taking control of the document and being able to engage directly with the stakeholders is a key aspect.

Identifying Performance Requirements

So how do we identify a good performance requirement? An example: I had a small team of performance testers on a client site. We had 140 developers in 10 Scrum teams delivering into a single product release (once a month), and each team had a Technical Project Manager (TPM) with varying degrees of technical ability.

Here are two examples of some of the requests from TPMs:

  • We have changed an icon on this screen – we want you to performance test it (honestly, I’m not kidding)
  • We have a new drop down box on this screen – please performance test it

Some TPMs requested a performance test for every new piece of functionality. So I sent out this set of guidelines and questions to filter the requests entering my team:

  • Does the newly introduced functionality bring significant architectural change?
  • At what estimated rate per hour will the new functionality be executed?
  • How business critical is the new change?
  • Can this be performance tested in isolation, or do we require the whole system to be built together in order to performance test it?

Using the above guidelines we could speedily identify changes entering the system that were subject to performance risk, then prioritise the ones that required further investigation and warranted performance testing.
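To make the filter concrete, here is a minimal sketch in Java of how the four questions might gate incoming requests. The ChangeRequest fields, the class name and the 50-transactions-per-hour threshold are all illustrative assumptions, not values from the project:

    // Hypothetical triage gate for incoming performance-test requests.
    // Field names and the 50/hour threshold are illustrative assumptions.
    public class PerformanceTriage {

        public static class ChangeRequest {
            boolean significantArchitecturalChange; // guideline question 1
            int expectedTransactionsPerHour;        // guideline question 2
            boolean businessCritical;               // guideline question 3
            boolean needsWholeSystem;               // guideline question 4
        }

        // Returns true if the change should go to the performance team for investigation.
        public static boolean warrantsPerformanceTesting(ChangeRequest cr) {
            if (cr.significantArchitecturalChange) return true; // architectural risk
            if (cr.businessCritical) return true;               // business risk
            // High-volume changes carry load risk even when architecturally simple.
            if (cr.expectedTransactionsPerHour > 50) return true;
            // Changes that need the whole system cannot be de-risked by a developer alone.
            return cr.needsWholeSystem;
        }
    }

A change that trips none of these conditions stays with its Scrum team; anything else lands on the performance backlog for the prioritisation described in the next section.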

Assessing Performance Requirements

When assessing performance requirements I prioritise according to the following broad categories:

  • Business Criticality: How business critical is the flow to be executed?
  • Frequency: How frequently is the flow enacted over a typical period?
  • Architecturally: How complex is the flow ‘under the bonnet’?
  • Isolation: Can a developer test this in isolation, without the whole system?

It’s worth saying a little about each of these.

Business Criticality: Always identify those items that are critical and key to the business – just because something is business critical doesn’t mean it needs to be performance scripted. If a critical business flow is not performance tested, we should be able to evidence that it was considered for testing and show the reasons it wasn’t tested.

Frequency: How frequent are the actions? If a flow is enacted only a small number of times, we can consider testing it manually while generating load on the system. Sometimes manual is faster, more convenient and easier.

Architecturally: Talk to the developers, architects and DBAs about how complex a flow is under the bonnet. There have been numerous occasions when an item has been de-risked because it is similar to another piece of functionality or carries no perceived technical risk. This is the category most often overlooked. An analogy is the indicator on a car: it is critical, but mechanically simpler than turning the ignition. Assessing things architecturally enables more intelligent and targeted performance testing.

Isolation: Can a developer test this in isolation, without the need for specialist performance tools? For example, can a JUnit test be run in parallel before the change enters performance testing? Where possible, performance testing should take place before the system integration phase, and by developers if possible; this significantly reduces project risk. Everyone is responsible for performance testing: guidelines can be given where required, and the performance team can sign off.
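As a rough sketch of the kind of developer-level check meant here, the JUnit 5 test below drives a single action from twenty threads and asserts a latency bound. The OrderService class, the thread count, the iteration count and the 200 ms threshold are all assumptions invented for the example, not part of the original project:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.junit.jupiter.api.Test;

    class OrderServicePerformanceTest {

        // Hypothetical system under test, standing in for the real action.
        static class OrderService {
            long placeOrder() {
                long start = System.nanoTime();
                // ... the real work would happen here ...
                return System.nanoTime() - start; // latency in nanoseconds
            }
        }

        @Test
        void placeOrderStaysUnder200msAtTwentyThreads() throws Exception {
            OrderService service = new OrderService();
            ExecutorService pool = Executors.newFixedThreadPool(20); // assumed load level
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int i = 0; i < 200; i++) {
                tasks.add(() -> service.placeOrder()); // each task reports its own latency
            }
            long worstNanos = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) { // runs all tasks, waits for completion
                worstNanos = Math.max(worstNanos, f.get());
            }
            pool.shutdown();
            // The 200 ms bound is illustrative; agree real thresholds with the performance team.
            assertTrue(worstNanos < 200_000_000L, "worst-case latency was " + worstNanos + " ns");
        }
    }

A test like this won’t replace a proper load test, but it catches gross regressions long before system integration.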

By taking a combination of these factors, you can then begin to prioritise which performance requirements are going to be targeted and delivered within a build.
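One way to combine the four factors is a simple weighted score. The sketch below is purely illustrative: the 1-to-5 scales, the weights and the halving for isolatable items are assumptions chosen to show the shape of the prioritisation, not figures from any real project:

    // Illustrative prioritisation score combining the four assessment categories.
    // The 1-to-5 scales and the weights are assumptions for the sketch.
    public class RequirementPriority {

        public static double score(int businessCriticality, // 1 (minor) .. 5 (core flow)
                                   int frequency,           // 1 (rare)  .. 5 (constant)
                                   int architecturalRisk,   // 1 (trivial) .. 5 (novel/complex)
                                   boolean isolatable) {    // can a developer test it alone?
            double s = 0.35 * businessCriticality
                     + 0.25 * frequency
                     + 0.40 * architecturalRisk; // the factor most often overlooked
            // Items a developer can cover in isolation need the dedicated team less.
            if (isolatable) s *= 0.5;
            return s;
        }

        public static void main(String[] args) {
            // A critical, frequent, architecturally novel flow scores high ...
            System.out.println(score(5, 4, 5, false)); // ~4.75
            // ... while a cosmetic change a developer can cover alone scores low.
            System.out.println(score(1, 1, 1, true));  // ~0.5
        }
    }

Whatever the exact weights, the value is in making the trade-off explicit, so the team tests what carries risk rather than whatever shouts loudest.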

Key Takeaways

  1. Taking control of gathering performance requirements enables the Performance Tester to quickly get a more accurate picture of what needs to be tested.
  2. Getting involved with performance requirements also helps you gain an understanding of the business, which means incoming requirements can be validated.
  3. Validation is more important than verification.
  4. Always prioritise according to a combination of the four categories, and don’t overlook architectural complexity.
  5. Look for load test items that can be pushed earlier into the SDLC.

Note: It may seem a little odd that I haven’t talked about metrics and SLAs. Metrics fall out naturally while the performance requirements are being assessed; they are a by-product of the overall process. SLAs are artificial and subjective, and spending too much time attempting to define SLAs around metrics is wasteful. Report the metrics, then decide with the stakeholders whether the product is fit for release.
