I’ll probably take some heat for this post. Most professional testers get deep into the arcane details of the science of load testing because that is their job. It’s important to them, they have studied it for years, and they are immersed in it. I understand. It helps differentiate them from other testers who are not as knowledgeable; thus, it is a competitive advantage to incorporate as many performance variables as possible. Consultants certainly need to show off the advanced skills gained from decades of load testing so that the customer can be assured of getting good value. Again, I understand. I respect these highly trained testers and engineers.
That said, I’m of the opinion that professionals often spend 80% of their time building load test scenarios that have only a 20% impact on the performance metrics. Implementing these nuances into scripts and test plans makes the project more thorough, but from a business ROI perspective, getting that extra 20% of accuracy is NOT worth the 80% of effort. The following paragraphs discuss some examples of considerations that I recommend you ignore when building your test scenarios.
- Abandonment
- Time of day
- Browser Varieties
- User Connection Speeds
Abandonment
The rate at which users “go away” from your site is referred to as abandonment. Some testers insist on writing scripts that simulate users who don’t wait until the first page has completely downloaded before requesting the next one.
The argument is twofold. First, some users may be able to select their next page prematurely because they may know where they want to go as soon as the Navigation bar appears. Second, some users will leave your site if they have to wait too long.
I just don’t see the value in accounting for these behaviors during a load test. Sure, it more accurately reflects possible real-world user actions, but the frequency is so low that the overall mathematical effect on metrics like Requests per Second or Average Response Time is negligible on a percentage basis.
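To put a rough number on that claim, here is a back-of-envelope sketch. The 2% abandonment rate and the response times are made-up illustration values, not measurements from any real test:

```python
def avg_response_time(normal_ms, abandon_ms, abandon_rate):
    """Weighted average response time across users who stay and users
    who abandon early (abandoners stop after a shorter partial load)."""
    return (1 - abandon_rate) * normal_ms + abandon_rate * abandon_ms

baseline = avg_response_time(800, 800, 0.0)    # nobody abandons
realistic = avg_response_time(800, 300, 0.02)  # 2% bail out at ~300 ms
shift_pct = 100 * (baseline - realistic) / baseline
print(f"Average Response Time shifts by {shift_pct:.2f}%")
```

Even with a generous abandonment rate, the aggregate metric moves by barely a percent, which is noise next to the gains available from server-side tuning.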
Time of Day
It is common for a site to experience large swings in usage at certain times of day. For example, a retail B2C site will usually get more hits in the evenings (6:00 p.m. EST to 10:00 p.m. PST), while B2B sites will usually get more hits during regular working hours (9:00 a.m. EST to 5:00 p.m. PST). LoadStorm gets heavier usage in the evening hours because that’s when many project managers want to run load tests. Content sites with current news will see larger volume in the mornings as people read up on the top stories and events to start their day.
Thus, some testers want to schedule tests that will run over the web at those times of the day or week to reflect issues such as network latency. It just doesn’t seem worth it to me. My experience tells me that the performance of your database delivering query results to the application layer is thousands of times more valuable to you as a web developer than the latency in the network. Even if the latency is problematic, what can you do about it?! Focus on what’s behind your web server – that’s where the big performance gains will be made from testing and tuning.
Browser Varieties
The types of browsers used by your customers and prospects can cause your application to process dynamic content differently for Firefox than for Internet Explorer. This can put slightly different burdens on your system to produce the content, and thus some testers say it is necessary to reflect it in the test scenarios. Yes, I can see that. Building test scenarios to incorporate different browsers does add realism.
However, I haven’t seen many good coders lately put that much emphasis on writing several versions of their web app. Most toolsets nowadays strive to isolate browser eccentricities in stylesheets. Your web server will easily deliver a specific CSS file to the requester at no cost to speed – those are static resources, and CSS files are almost always in cache because they don’t change frequently. Therefore, I see no reason to waste time making test scenarios include browser-specific scripting.
User Connection Speeds
The speed of a user’s Internet connection can unquestionably vary greatly. It’s possible some users are still on dial-up connections, while many will be on ISDN, cable, or T1, and in today’s world much web traffic comes from smartphones and iPads.
Some professional software testers believe that this consideration for connection speed must be accommodated in the load test scenarios. Their reasoning is based on needing to know the actual performance from the user’s point of view. I agree that it is good to know that when analyzing the performance of a given page. Yet I see no value in creating test script deviations that generate different connection speeds coming into the target web servers. Rather, I would like to have all virtual user traffic at a steady speed in order to better isolate any issues with the web server, application code, or database.
Remember, we are talking about load testing here. The goal is to apply large volume to see how the application delivers what is requested under heavy user activity. The connection speed has no bearing on the load. If users are on a dial-up line, they are already accustomed to slow response and will not penalize your site unless it is much slower than others. Their connection speed is a level playing field for all apps. It simply is not important to load testing.
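The “steady speed” approach I prefer can be sketched as fixed pacing: every virtual user issues requests on a constant cadence instead of simulating per-user bandwidth. The function below is a hypothetical illustration, not the API of any particular load testing tool:

```python
import time

def run_virtual_user(request_fn, pacing_s, duration_s):
    """Drive one virtual user at a fixed cadence: issue a request, then
    sleep for whatever remains of the pacing interval. Because every
    user applies load at the same steady rate, differences in measured
    results point at the server side, not at client connection speed."""
    responses = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        t0 = time.monotonic()
        responses.append(request_fn())
        spent = time.monotonic() - t0
        time.sleep(max(0.0, pacing_s - spent))  # hold the cadence
    return responses
```

With pacing held constant, the offered load is simply the number of users divided by the pacing interval, which makes Requests per Second easy to reason about independent of anyone’s connection speed.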
Summary – Realism is Good, Don’t Go Overboard
Building test scenarios can be complex, but many testers go overboard. Making the requests match real-world traffic patterns as closely as possible can create better load tests that produce more accurate performance results.
Achieving this realism requires taking into account user types, volume allocations of users, think times, and as many of the previously mentioned concerns as possible. Some aspects of the test scenario’s load profile can impact metrics by a thousand percent, while others may only affect the test results by a few percent.
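As a concrete sketch of those high-impact pieces, here is a minimal load profile covering user types, volume allocation, and think times. The user types, shares, and ranges are invented for illustration only:

```python
import random

# Hypothetical profile: (user type, share of virtual users, think-time range in seconds)
PROFILE = [
    ("browser",  0.70, (5, 15)),
    ("searcher", 0.20, (3, 8)),
    ("buyer",    0.10, (10, 30)),
]

def allocate_users(total_users, profile):
    """Split the virtual user pool across scenario types by traffic share."""
    counts = {name: int(total_users * share) for name, share, _ in profile}
    counts[profile[0][0]] += total_users - sum(counts.values())  # rounding leftovers
    return counts

def think_time(user_type, profile):
    """Draw a randomized think time for one user of the given type."""
    for name, _, (low, high) in profile:
        if name == user_type:
            return random.uniform(low, high)
    raise ValueError(f"unknown user type: {user_type}")
```

Getting these allocations and think times close to production behavior moves the needle far more than browser mixes or simulated connection speeds.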
I strongly recommend ignoring the insignificant ones in order to invest your time and energy wisely into the parts of testing that will make the biggest difference. Focus on getting your test profile set up to model the real world conditions that your web application will encounter in production. The objective is to have more reliable test results which will guide your performance tuning efforts.
Experiment in your test plans to see which scenarios have the largest effect on results. All architectures have unique components, so try several of the things I’ve described. If you find any significant impact from an item I said is insignificant, please feel free to beat me up about it. I just don’t think you will find much.