Performance Testing Links on Christmas Eve

We have about 3 feet of new snow here from the storm earlier in the week. It makes for a picturesque Christmas holiday for my family.

Today I’m just going to share a summary of two performance testing articles I read this morning; then I’m taking off early to enjoy time with my wife, mom, and daughters.

Performance Testing in Agile Framework

Dishit D posted an interesting article yesterday on his blog, The Performance Testing Zone, that outlines six challenges of making performance testing work well in a typical agile development project.

From my perspective, I just don’t think many agile teams are going to drink the Kool-Aid and really invest in performance testing early and often. Testing early is a defensible position, and I am a big believer in the benefits of load testing and web performance tuning, but belief and adoption are two different things. Dishit addresses the following challenges of fully committing to a performance team that is involved from the beginning and runs tests every sprint:

  1. Stakeholder buy-in
  2. Project management vision
  3. Definition of the SLAs
  4. Unstable builds
  5. Development and execution of test cases
  6. Highly skilled performance team

When you have been around software development projects for 25+ years, you have probably seen how the stability of builds can vary wildly – even on the same team. My belief is that instability is the biggest reason performance testing will struggle to gain traction early in the lifecycle. I like the idea of testing each sprint, but with so many pieces of the code changing, it is going to be difficult to show a manager the value of perf metrics on an immature product that contains less than half the functionality it will eventually have.
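That said, a sprint-level check doesn’t have to be a full-blown performance project. For a team that wants to try it, here is a minimal sketch of a lightweight smoke test in Python; the endpoint, request counts, and concurrency level are placeholders I’ve invented for illustration, not anything from Dishit’s article.

```python
# Hypothetical per-sprint smoke test: hit one endpoint with a little
# concurrency and report basic latency statistics.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://staging.example.com/"  # placeholder endpoint
REQUESTS = 100
CONCURRENCY = 10

def timed_get(_):
    # Time a single GET request and return (status code, elapsed seconds).
    start = time.perf_counter()
    resp = requests.get(TARGET_URL, timeout=10)
    return resp.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_get, range(REQUESTS)))

latencies = sorted(elapsed for _, elapsed in results)
errors = sum(1 for code, _ in results if code >= 500)
p95 = latencies[int(len(latencies) * 0.95) - 1]

print(f"median: {statistics.median(latencies):.3f}s")
print(f"p95:    {p95:.3f}s")
print(f"errors: {errors}/{REQUESTS}")
```

Something this small won’t replace a real load test, but it is cheap enough to run every sprint, and charting the numbers over time at least gives a manager a trend to look at.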

Top Ten Failures in Performance Testing

LoadRunner has been the premier software package for performance testing since the early nineties. It probably has more features than any other tool in the industry. It is also the most expensive in every respect; licensing and maintenance costs are so high that they are treated as a closely guarded secret throughout the LR ecosystem. “The Price is Right for LoadRunner?” provides a breakdown of the costs.

That LR ecosystem has some interesting traits. First, many companies specialize in nothing but LoadRunner projects for large-company performance testing. LR has a steep learning curve, so in some cases it makes sense to bring in outsiders who understand the tool. Second, some offerings involve renting the software. Apparently James Pulley has figured out how to make his team’s LR expertise cost-effective through a matrix of services priced by the hour. He wrote a post yesterday about the Top 10 Failures in LR Performance Testing.

Here are his points:

  1. Failure to define performance requirements
  2. Inadequate LoadRunner performance tool skills
  3. Failure to use enough performance test data
  4. Failure to properly estimate the duration of all tasks
  5. Failure of management to understand what you do, while telling you how to do it and how long it should take
  6. Failure to effectively analyze bottlenecks
  7. Hiring based upon price rather than skills
  8. Failure to effectively interview performance test engineers
  9. Failure to allow a performance test engineer to specialize and focus on performance testing
  10. Failure to train

Numbers 1 & 6 are true of any performance testing project. However, what stands out is how all of the other failures are somehow tied to needing “more money” or “more skills”. They scream out to me about big, expensive, complex, time-consuming, and cumbersome.
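Failure #1 is also the cheapest one to fix, because a performance requirement can be written down as an executable check instead of a paragraph in a document. Below is a rough sketch of that idea in Python; the thresholds and sample data are invented for illustration and are not from Pulley’s post.

```python
# Hypothetical SLA gate: turn a written performance requirement into an
# automated pass/fail check. Thresholds are illustrative placeholders.
import statistics
import sys

# Example requirement: 95% of requests under 800 ms, mean under 300 ms.
SLA_P95_SECONDS = 0.8
SLA_MEAN_SECONDS = 0.3

def check_sla(latencies):
    """Return a list of human-readable SLA violations (empty list = pass)."""
    ordered = sorted(latencies)
    p95 = ordered[max(0, int(len(ordered) * 0.95) - 1)]
    mean = statistics.fmean(ordered)
    violations = []
    if p95 > SLA_P95_SECONDS:
        violations.append(f"p95 {p95:.3f}s exceeds {SLA_P95_SECONDS}s")
    if mean > SLA_MEAN_SECONDS:
        violations.append(f"mean {mean:.3f}s exceeds {SLA_MEAN_SECONDS}s")
    return violations

if __name__ == "__main__":
    # Samples would normally come from your load tool's results file;
    # these are made-up numbers for the sketch.
    samples = [0.21, 0.34, 0.29, 0.95, 0.41, 0.27, 0.33, 0.25, 0.38, 0.30]
    problems = check_sla(samples)
    for p in problems:
        print("SLA violation:", p)
    sys.exit(1 if problems else 0)
```

Wire a check like that into the build, and the requirement gets enforced on every run instead of debated at the end of the project.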

I suspect his team knows their stuff when it comes to using LR. What caught my eye was his statement, “Excellent performance engineering staffs are always encountering leading edge technologies in use by developers.” That seems ironic, since the leading edge technologies in the performance testing field are cloud tools. LoadRunner experts have voiced their dissatisfaction with how HP has not significantly improved the product in many years. I’ve read blog posts about customers who are unhappy that the new “cloud” version of LR is just a hosted version of software written in the 1990s.

It’s natural: software architected in the ’80s and ’90s cannot be enhanced and maintained into an “innovative technology”. It just can’t be done. And frankly, I’m glad. That leaves the innovation to the little guys like us.

You don’t run into many of the failures mentioned above if you are using a cloud load testing tool like LoadStorm. The software isn’t complex. Nor is it expensive. Nor does it take a bunch of training. It doesn’t spawn an industry of consultants whose skills are tied to a single product.

In fact, managers are always happy when they call me to discuss their projects. They love the price. They love the simplicity. They love the speed at which testing can be accomplished.

The only ones upset with LoadStorm are the LoadRunner consultants and tool jockeys. Sorry guys, but you can’t ride that old-school train much longer. Innovation is passing you by because it provides real business value – and it doesn’t hinge on having the longest feature list.
