The following is an interview with David Makogon, in which he shares his thoughts on load testing.

David is a Senior Consultant (web development, Azure, Silverlight) with RDA Corp. He has several years of deep load testing experience: not only has he been a tester, he has also written a load testing tool. He is a recognized leader in other areas as well (e.g. application architecture, cloud computing, Silverlight). His background is explained in more detail during the interview.

Please visit David’s blogs at rdaarchitecture.blogspot.com and www.davidmakogon.com. He is also on Twitter and can be followed at @dmakogon.

We extend our thanks to David for investing his time to answer these questions and share his insights into system performance. Here is the interview:

What is your technical background?

I’ve been building software since the mid-80’s. I cut my teeth on the good stuff – assembler, C, Pascal, a bit of dBase… I spent considerable time with C++, including embedded work, and then moved on to Python. Then I really got into distributed and large-scale systems – getting all those moving pieces to work in harmony is always a fun challenge. Nowadays, I hang my hat at RDA (www.rdacorp.com), where I focus on system architecture, with .NET as my platform of choice.

Around 1998, I got into website testing, using both off-the-shelf tools and a custom-built hosted test tool. This really helped change my perspective on scalable systems. It seemed that, whenever a system was pushed beyond its limits, the failing component was never the one predicted.

Do you consider yourself more of a software developer or QA professional?

I wear the “software developer” hat most of the time, so I guess that’s what I’d consider myself. However, I spent several years focused on testing. In the late 90’s, I co-founded a company that specialized in web application performance testing. My QA hat fit much better then…

When and why did you get into this industry?

My dad worked for IBM and bought one of the first PCs (4.77MHz of pure processing power!). I loved the challenge of figuring out how to write programs, especially considering the dearth of technical books. The challenge grew into my hobby, then I found out that someone was willing to pay me to spend my days hobbying. That sealed the deal.

What is your specialty? Why?

This answer might seem more dev-oriented, but it has a definite testing slant…

These days, I’m really focused on architecting the user experience (UX) piece of the software puzzle. There are so many ways to approach the same problem, and I think that can lead to lots of challenges when it comes to maintainability and testability. When WinForms and ASP.NET became popular, it became almost trivial to get a great-looking UI up and running very quickly, but it was even easier to raise the long-term pain factor by burying all the code in easy-to-reach code-behinds.

With the advent of Silverlight and WPF, there are even newer and better ways to make a mess of things. So I focus on cleaning that up with separated presentation patterns, testing with mocks, etc.

What do you believe to be the most critical elements of web application testing? Why?

It probably seems obvious that the test planning itself is critical – building representative scenarios, establishing reasonable and justifiable performance goals, etc.

Maybe less obvious is the execution of the actual testing. It’s easy to produce invalid results. For instance, a .NET web application is compiled on-demand, so the first time code is executed, it runs slower. When testing, these details must be taken into consideration.
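A warm-up pass like that is easy to script. Below is a minimal Python sketch of the idea (an illustration, not David’s tooling; the URLs are hypothetical placeholders): hit each page once and throw the timings away, so on-demand compilation and cache priming don’t pollute the real measurements.

```python
# Minimal warm-up sketch (illustrative; hypothetical URLs).
import urllib.request

WARMUP_URLS = [
    "http://test-server.example.com/",
    "http://test-server.example.com/login",
    "http://test-server.example.com/search?q=warmup",
]

def warm_up(urls):
    """Issue one throwaway request per URL; discard all timings."""
    for url in urls:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()  # drain the body so the server does the full work

if __name__ == "__main__":
    warm_up(WARMUP_URLS)
    print("Warm-up complete; subsequent measurements reflect steady state.")
```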

How have you seen development and testing evolve over the past few years? How do you see them evolving over the next few years?

Through the late 90’s, there weren’t many players in the web-test space. The “big guns,” Mercury and RadView, required expensive licenses and large testing farms (not to mention the considerable expertise required to craft the test scripts). There were a few alternatives (Microsoft had its Stress tool, and Apache produced JMeter). However, these still required customer-supplied hardware and network infrastructure.

Right around the Y2K mark, before the term cloud computing was in vogue, there were a few hosted testing products that started popping up (including one I co-wrote, StressMy.com). The idea was simple: let the user create their test scenarios via a web interface, without any software to purchase or install. Then, run load tests against those scenarios, without any hardware to purchase to drive the load.

Now, cloud computing is quite pervasive, and with offerings from Amazon, Microsoft, and others, I see cloud-based testing solutions becoming more and more popular over the next few years.

Is there anything commonly overlooked in web application testing?

I think the simple stuff is overlooked most often. For instance: before crushing a site with some huge amount of traffic, it’s extremely helpful to run a test with one virtual client, let it run through several iterations against the same scenario, and then repeat the test three times and average the results.

This “kickoff” step is so often overlooked, yet the results are critical for setting an accurate baseline.
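To make that kickoff step concrete, here is a rough Python sketch of the baseline David describes – one virtual client, several iterations against the same scenario, three runs averaged (illustrative only; the scenario URL is a hypothetical placeholder):

```python
# Single-virtual-client baseline: average iterations per run, then average runs.
import time
import urllib.request
from statistics import mean

SCENARIO_URL = "http://test-server.example.com/checkout"  # hypothetical

def run_iteration(url):
    """One scenario iteration; returns elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def baseline(url, iterations=10, runs=3):
    """One virtual client: average the iterations in each run, then the runs."""
    run_averages = []
    for _ in range(runs):
        timings = [run_iteration(url) for _ in range(iterations)]
        run_averages.append(mean(timings))
    return mean(run_averages)

if __name__ == "__main__":
    print(f"Baseline response time: {baseline(SCENARIO_URL):.3f}s")
```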

How much involvement do you have with load or performance testing?

Years ago, I spent most of my time either load testing or writing load testing software. Right now I’m more focused on the development side, including automated unit testing and mocks. I still do some load testing from time to time, though.

What would you say is the difference between load testing and performance testing?

Hmm… I usually talk about these two things as inseparable parts, sort of like Queen’s We Will Rock You and We Are the Champions. I see load testing as the hunt for the “knee in the curve.” That is, start a test, driving minimal transactional traffic to the site, then raise the load level over time, measuring results, until at some point, the response time takes a jump. Maybe a site is responding with sub-second response times, with only marginal response time increases, as transactional load is increased from 5 virtual clients to 10, then 20. But maybe at 25, the response time soars to 10 seconds. At that point, the website developers take over, trying to identify the moving part (or parts) that causes this.
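The hunt for the knee can be automated once the measurements are in hand. Here is a hedged Python sketch (the sample numbers echo David’s example above; the jump-factor threshold is an assumption): flag the first load level where response time grows disproportionately faster than the load.

```python
# Post-hoc knee detection over (virtual clients, avg response seconds) samples.
SAMPLES = [  # illustrative numbers echoing the example above
    (5, 0.42), (10, 0.47), (20, 0.55), (25, 10.2),
]

def find_knee(samples, jump_factor=3.0):
    """Return the first load level where response time grew more than
    jump_factor times faster than the load did."""
    for (c1, t1), (c2, t2) in zip(samples, samples[1:]):
        if (t2 / t1) > jump_factor * (c2 / c1):
            return c2
    return None

if __name__ == "__main__":
    knee = find_knee(SAMPLES)
    print(f"Knee detected at {knee} virtual clients" if knee
          else "No knee in the sampled range")  # prints 25 for this data
```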

Performance testing looks at results from a different perspective. I see it as more of a user-experience test. The user will have expectations about response times, graphics downloading, page navigation times, etc. It’s helpful to know what type of traffic the website can handle, while still producing an acceptable user experience.

Do you see much difference in load testing for web applications versus traditional software?

By “traditional software,” I’m assuming you mean a locally-installed application? If so, then as long as we’re still talking about a non-monolithic application, there’s not much difference at all. The local client would still need to communicate with a service tier; the service tier would still need to manipulate data in a database, or possibly collaborate with external services. The same performance bottlenecks would apply to these components.

With a locally-installed client, the UI rendering costs and related network latency would obviously be eliminated.

As far as creating and executing tests go, web-based applications tend to have a large degree of commonality when it comes to generating HTTP GET/POST calls, so website testing tools are not limited to applications written with, or hosted in, a specific environment. Locally-installed clients might require more specialized tools to inject simulated user actions into the UI’s input mechanism. So, when testing a locally-installed client, it’s probably more beneficial to test against the supporting services, since that layer should be easily accessible via standard web services protocols (or even platform-specific transport/messaging protocols such as Windows Communication Foundation’s netTcpBinding).
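That commonality is worth illustrating: because a web scenario boils down to a sequence of HTTP GET/POST calls, a generic runner can drive any site or service tier. A small sketch (an illustration, not David’s tooling; the endpoints and payload are hypothetical):

```python
# Generic scenario runner: each step is (method, url, body).
import urllib.request

SCENARIO = [
    ("GET",  "http://svc.example.com/api/catalog", None),
    ("POST", "http://svc.example.com/api/orders", b'{"item": 42, "qty": 1}'),
]

def run_scenario(steps):
    for method, url, body in steps:
        req = urllib.request.Request(url, data=body, method=method)
        if body is not None:
            req.add_header("Content-Type", "application/json")
        with urllib.request.urlopen(req, timeout=30) as resp:
            resp.read()
            print(method, url, "->", resp.status)

if __name__ == "__main__":
    run_scenario(SCENARIO)
```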

What are the KPIs you track for testing?

I talked about looking for the knee in the performance curve. To do that, I’m typically watching connection latency, response time per call, and time to complete an iteration of a scenario.
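Those three KPIs can be captured with nothing more than the standard library. A hedged Python sketch (illustrative; the host and paths are hypothetical) that separates connection latency from per-call response time and tracks the whole scenario iteration:

```python
# Capture connection latency, per-call response time, and scenario time.
import time
import http.client

HOST = "test-server.example.com"           # hypothetical SUT
SCENARIO_PATHS = ["/", "/login", "/report"]

def timed_call(host, path):
    """Return (connection_latency, response_time) in seconds for one GET."""
    conn = http.client.HTTPConnection(host, timeout=30)
    t0 = time.perf_counter()
    conn.connect()                         # connection latency ends here
    t1 = time.perf_counter()
    conn.request("GET", path)
    conn.getresponse().read()              # response ends when body is drained
    t2 = time.perf_counter()
    conn.close()
    return t1 - t0, t2 - t1

if __name__ == "__main__":
    scenario_start = time.perf_counter()
    for path in SCENARIO_PATHS:
        connect_s, response_s = timed_call(HOST, path)
        print(f"{path}: connect {connect_s*1000:.1f} ms, "
              f"response {response_s*1000:.1f} ms")
    print(f"Scenario iteration: {time.perf_counter() - scenario_start:.3f}s")
```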

I also watch performance characteristics for each server in the System Under Test (SUT). However, I typically don’t set these up immediately. First I get the test running and prove that I can actually find a knee in the curve. Sometimes, the knee triggers some secret signal to one of the developers who then scurries off to make a fix. It’s at this point that I’ll set up additional performance counters more closely related to the system that’s been tweaked (such as a database server that just had an index added).

How do you use them?

I run tests with a transactional slant. That is, I build my scripts with zero “think-time” – I just care about how many transactions I can push through the server, one after the other. I can always extrapolate this data to predict the number of simultaneous “real people” that could be visiting a site at any given time, considering real-world think-time between clicks.
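The extrapolation itself is back-of-the-envelope arithmetic. One common way to do it (an illustration with made-up numbers, not David’s figures) is Little’s Law: concurrent users ≈ throughput × (response time + think time).

```python
# Little's Law extrapolation from a zero-think-time transactional test.
throughput = 50.0     # transactions/sec sustained with zero think-time
response_time = 0.8   # seconds per transaction under that load
think_time = 15.0     # assumed seconds a real person pauses between clicks

concurrent_users = throughput * (response_time + think_time)
print(f"~{concurrent_users:.0f} simultaneous real users supported")  # ~790
```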

Having said that, I run tests in a particular way:

1. Establish a baseline time, with a single virtual client generating load. Let the system cook for a few minutes to make sure all components are compiled and cached, log files are created, etc.

2. Increase the load to 2 virtual clients. Again, run for the same amount of time. This helps smooth out the performance curve and makes sure the system continues to operate with concurrent requests.

3. Depending on the SUT and established baselines, I will then run the VC count up to maybe 5 or 10. I’ll continue to observe the KPIs: connection latency, roundtrip time, scenario time.

4. Repeat #3 with ever-increasing load until a knee shows up (a rough sketch of this stepped approach follows).
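Under stated assumptions (threads standing in for virtual clients, a hypothetical URL, a crude jump heuristic for the knee), the stepped procedure above might be sketched like this in Python:

```python
# Stepped ramp: run each load level for a fixed window, stop at the knee.
import time
import threading
import urllib.request
from statistics import mean

URL = "http://test-server.example.com/"   # hypothetical SUT
STEP_SECONDS = 120                        # run each load level for 2 minutes

def virtual_client(url, stop, timings):
    """Loop requests until told to stop, recording per-call response times."""
    while not stop.is_set():
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)  # append is thread-safe

def run_step(url, clients):
    """Drive `clients` concurrent loops for STEP_SECONDS; return avg response."""
    stop, timings, threads = threading.Event(), [], []
    for _ in range(clients):
        t = threading.Thread(target=virtual_client, args=(url, stop, timings))
        t.start()
        threads.append(t)
    time.sleep(STEP_SECONDS)
    stop.set()
    for t in threads:
        t.join()
    return mean(timings)

if __name__ == "__main__":
    previous = None
    for clients in [1, 2, 5, 10, 20, 25, 50]:    # ramp schedule
        avg = run_step(URL, clients)
        print(f"{clients:3d} clients -> {avg:.3f}s avg response")
        if previous and avg > 5 * previous:       # crude knee heuristic
            print(f"Knee near {clients} clients; over to the developers.")
            break
        previous = avg
```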

How does cloud computing affect the future of automated testing?

As I mentioned before, load testing typically involves having a plentiful supply of hardware that’s capable of generating adequate load for a given test. The question is, what’s adequate? Each SUT will vary in its characteristics.

With a cloud-based load-generation platform, there’s no need to bulk up on hardware that will sit mostly idle. Instead, a particular test session can take advantage of the cloud to allocate as much horsepower as necessary for a given test, then scale the allocation back down to a minimal footprint, incurring only an on-demand cost directly related to the test.

Any thoughts on how the global economic downturn will specifically affect the application development and testing business? (e.g. less testing, more offshoring, etc.)

All too often, testing gets the axe when budgets are squeezed. With performance/load testing, the value is not always obvious when a new product is being pushed out the door. It almost always works on the demo box when showing it off to the internal product and sales teams, so it’s easy to assume it’ll work when the general public hits it.

What resources do you typically remove/restrict from a system to perform a stress test?

I like to test on an isolated network, with SUT servers configured with only the components necessary to run the application. I’ll leave things such as SNMP, event logs, etc. running, as they’d also be running in production. Oh, and I disable 3D screensavers, a known CPU-killer (what’s up with people running 3D Pipes, anyway?).

Do you see any intersection points between usability and load testing?

I try to avoid usability topics when load testing. It’s one thing to have the developers change directly-related parameters between test runs, such as switching a database connection from a named pipe to a socket. It’s another thing entirely to have a new page-navigation flow rolled out to avoid a network hop.

If you could make a career from one of your favorite hobbies, what would it be?

When I’m not writing code, I’m taking photos. That would be my ideal career-shift. I shoot local sports games, and an occasional portrait session or wedding. I’m waiting for someone to fly me out to some exotic location to cover an event. Still waiting…
