In the spirit of New Year’s resolutions, I’m going to try to provide as many tips of the week as I can. There are weeks when I might get too busy, but I’ll sincerely try to post a small hint or trick that will be helpful to web developers. Tips may be about load testing, performance testing, performance tuning, stress testing, or anything related to web application development.
Performance Reference Point
When embarking on a performance engineering project, it is wise to understand the target web application’s existing level of performance. I have heard this called “establishing a benchmark” or “creating a baseline”. Whatever you want to call it, you should put a stake in the ground with actual metrics.
I’m partial to metrics such as average response time, concurrent users, requests per second, and error rates. Certainly it is advisable to track any performance data called for in the test plan or project requirements; as long as the metrics are clearly identified, they become the yardstick for measuring your performance improvement.
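To make that concrete, here’s a minimal sketch in Python of how you might compute those four metrics from a log of individual requests. The record structure and field names are my own assumptions for illustration, not the export format of any particular tool:

```python
from statistics import mean

# Hypothetical raw results: one record per request captured during a test run.
# (Field names are assumptions for illustration, not a specific tool's export.)
requests = [
    {"response_time": 0.42, "ok": True},
    {"response_time": 0.55, "ok": True},
    {"response_time": 1.30, "ok": False},
    {"response_time": 0.61, "ok": True},
]
test_duration_seconds = 60.0
concurrent_users = 50  # set by the test scenario, not derived from the log

baseline = {
    "avg_response_time": mean(r["response_time"] for r in requests),
    "concurrent_users": concurrent_users,
    "requests_per_second": len(requests) / test_duration_seconds,
    "error_rate": sum(1 for r in requests if not r["ok"]) / len(requests),
}
print(baseline)
```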
Start by running a relatively small load test. David Makogon recommends running a test with one virtual user, repeating it three times, and averaging the results. That’s a good approach, but if you already know your stress point is about 500 users, then perhaps you run your benchmark tests at 50 users (roughly a tenth of that load).
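Continuing the sketch above, and assuming you’ve computed one such metrics dictionary per run (the numbers here are made up), averaging across the repeats might look like:

```python
from statistics import mean

# Metrics from three identical runs (hypothetical numbers for illustration).
runs = [
    {"avg_response_time": 0.61, "requests_per_second": 84.2, "error_rate": 0.010},
    {"avg_response_time": 0.58, "requests_per_second": 86.9, "error_rate": 0.008},
    {"avg_response_time": 0.64, "requests_per_second": 83.5, "error_rate": 0.012},
]

# Average each metric across the runs to get a single baseline figure.
baseline = {metric: mean(run[metric] for run in runs) for metric in runs[0]}
print(baseline)
```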
It’s important to use identical test scenarios every time, so save them and don’t be tempted to monkey with them halfway through the tuning project. I set these scenarios aside and use them only for comparison against baseline performance.
Performance engineering is an iterative process. Test, tune, test, tune, test…
After you have a baseline, make a small number of application improvements or configuration changes. Then run another performance test with the same amount of load used to establish the baseline.
The reason I suggest making a small number of changes is that you want to pinpoint what is working and what is not. You may be spot-on with all your tuning, but we mortals make mistakes. Sometimes the needle moves in the opposite direction from what I expected. I like to keep the variables limited so I know how much a tweak helped (or didn’t). Consider what is possible with three application adjustments: you might see little change in performance because one has a big positive impact while the other two have a moderate negative impact. Even if all three are improvements, don’t you want to know how much each one helped?
I have received some pushback for advocating so much iteration. Some people can’t afford to run load tests over and over. That’s not a problem if you are using LoadStorm on a subscription plan. Unlimited testing for a monthly subscription is how we wanted it to be; that’s how we would use it, so we priced it that way.
At every iteration of test and tune, you should re-establish your performance reference point. Raise the bar, so to speak. It isn’t necessary to compare every possible data point available on your reports; however, you do want to evaluate all of the significant metrics used when establishing the baseline.
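For the comparison itself, here’s a minimal sketch in the same hypothetical style, where lower is better for response time and error rate, and higher is better for throughput:

```python
# Compare an iteration's metrics to the baseline. For response time and
# error rate, lower is better; for requests per second, higher is better.
LOWER_IS_BETTER = {"avg_response_time", "error_rate"}

def compare_to_baseline(baseline, current):
    report = {}
    for metric, base_value in baseline.items():
        delta = current[metric] - base_value
        improved = delta < 0 if metric in LOWER_IS_BETTER else delta > 0
        report[metric] = {"baseline": base_value, "current": current[metric],
                          "delta": delta, "improved": improved}
    return report

# Hypothetical numbers: the baseline versus the second tuning iteration.
baseline = {"avg_response_time": 0.61, "requests_per_second": 84.8, "error_rate": 0.010}
iteration_2 = {"avg_response_time": 0.47, "requests_per_second": 91.3, "error_rate": 0.010}

for metric, row in compare_to_baseline(baseline, iteration_2).items():
    print(f"{metric}: {row['baseline']} -> {row['current']} "
          f"({'better' if row['improved'] else 'worse or unchanged'})")
```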
Lastly, take the metrics from each iteration and create a beautiful progress report that shows off your brilliant engineering work. Managers and clients love that type of stuff. Graphs and charts are easy to create from the raw data in a spreadsheet, and it’s worth the extra effort. Plus, you should review the progress report thoroughly yourself because it is a fantastic learning tool. You want to always be improving your skills, right? Well, think of it like journaling: reviewing successes and failures is the cornerstone of learning to be a good performance engineer.
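If you’d rather generate the charts straight from the raw numbers instead of a spreadsheet, a few lines of Python with matplotlib will do it. The figures below are placeholders, not real test results:

```python
import matplotlib.pyplot as plt

# Hypothetical average response times (in seconds) after each tuning iteration.
iterations = [0, 1, 2, 3, 4]  # iteration 0 is the baseline
avg_response_time = [0.61, 0.52, 0.55, 0.41, 0.33]

plt.plot(iterations, avg_response_time, marker="o")
plt.xlabel("Tuning iteration")
plt.ylabel("Average response time (s)")
plt.title("Performance progress across iterations")
plt.grid(True)
plt.savefig("progress_report.png")
```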
Good luck and never give up!