We posted an article on the LoadStorm blog last week titled Software Testing is Detective Work. It covered roughly 50 questions worth asking yourself when engaging in a testing process. Those questions were generic to the world of software testing, and some were just intended to be funny, but they weren’t specific to load testing.
The following questions are particularly helpful as you begin assessing and planning a web application load testing, performance testing, and/or stress testing effort. I recommend that you intentionally set aside a few hours to seek detailed answers to these questions and document your findings.
The load testing process should always be iterative, so don’t get stuck if some of these questions have no answers (yet). A common example I’ve experienced is a Product Manager who has no idea what to expect for a typical concurrent user load. Just do the best you can with what you do know. And of course, keep asking.
I suggest that you invest energy in building relationships with as many of your team members and people in the user community as possible. Not only will they help you answer these questions, but they will also be the best source of accurate data from which to create load test scenarios.
- Does our Product Manager (or insert your boss’s title here) have an expectation for the peak number of concurrent users?
- What are the expectations of other stakeholders – especially the senior management representing users?
- Has performance/load test planning been included since the beginning of the project? Is this an afterthought?
- What does the spec say about response times?
- Has a list of objectives been written for the load test?
- How many active connections are expected during a typical hour?
- What are the peak times of usage?
- Are there occasions of heavy spikes in user activity? If so, what are the characteristics of a spike?
- What scenarios are most important for simulating heavy volume?
- Does it make sense that 90% of users will just be anonymous browsers while 10% are logging in and generating business transactions? (fine-tune this simple example to find a good mix of user scenarios; see the sketch after this list)
- How are errors currently captured in normal production?
- Is a frequency distribution of error types/codes available for analysis before testing?
- Have we run performance tests before? If so, do we have historical performance testing data for comparison to this set of tests?
- What parts of the system can be “crippled” during a typical processing day?
- Are there certain resources such as load balancers or RAID arrays that can be disabled during testing?
- How should we exploit the infrastructure weaknesses to stress the system?
- What are the statistics on system memory consumption patterns?
- Can we get metrics for historical CPU usage?
- Which devices will be important to monitor during the performance testing?
- Do we have a network schematic of routers, firewalls, app servers, web servers, database servers, etc.?
- What key performance indicators (KPIs) will be useful to the developers for tuning?
- Can we apply the Pareto Principle to this load test and choose the best 20% of the scenarios to get 80% of the testing coverage we need?
- What is acceptable performance for this web app? (e.g. 95% of all pages have a response time under 1 second; see the percentile check after this list)
- How many testing iterations should we run to feel comfortable with the results?
- Should we run performance and load tests monthly or quarterly regardless of our release schedule?
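To make the scenario-mix question concrete, here is a minimal sketch of a 90/10 traffic split using Locust, one popular open-source load testing tool. The endpoint paths and credentials are hypothetical placeholders; swap in the pages and transactions your real users actually hit.

```python
# Minimal sketch: weight anonymous browsing vs. logged-in transactions 9:1.
# /products, /login, and /checkout are hypothetical endpoints.
from locust import HttpUser, task, between

class SiteVisitor(HttpUser):
    wait_time = between(1, 5)  # think time (seconds) between user actions

    @task(9)  # ~90% of actions: anonymous browsing
    def browse(self):
        self.client.get("/products")

    @task(1)  # ~10% of actions: login plus a business transaction
    def transact(self):
        self.client.post("/login", data={"username": "test", "password": "secret"})
        self.client.post("/checkout", json={"item_id": 42})
```

Run it with something like `locust -f mix.py --host https://staging.example.com`, then adjust the task weights as your interviews with the user community refine the mix.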
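Similarly, a performance target such as “95% of pages under 1 second” is only useful if you actually compute the percentile. Here is a quick sketch in plain Python; the sample timings are invented, so feed in whatever response-time data your load testing tool exports.

```python
# Minimal sketch: check a "95% of pages respond in under 1 second" target.
import statistics

# Invented sample data; replace with measured response times in seconds.
response_times = [0.31, 0.42, 0.55, 0.61, 0.73, 0.88, 0.95, 1.10, 1.40, 0.50]

# statistics.quantiles(n=100) returns the 1st..99th percentile cut points.
p95 = statistics.quantiles(response_times, n=100)[94]

target = 1.0  # seconds
status = "PASS" if p95 <= target else "FAIL"
print(f"95th percentile: {p95:.2f}s -> {status} (target {target:.1f}s)")
```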
Asking questions is a good thing for anyone attempting to test software. Getting outside of the box is beneficial. Just testing to the checklist or building a script to address each item in a spec will probably result in missing something.
This is only a partial list to get us thinking. Can you please offer some of your own? I appreciate your willingness to comment on this post and contribute to the list for the benefit of all.