LoadStorm™

Forgive me for stating the obvious, but web applications are a critical part of global business in 2011. I see no alternative to ever-greater dependence by companies everywhere on web software and Internet infrastructure. In my opinion, all business trend data points to greater overall web usage, more complex application architectures, and tremendous spikes in traffic volume.

Critical Applications, Yet They Aren’t Getting the Investment Needed

ComputerWorld last week made a definitive statement regarding the critical nature of web applications:

Those who are unprepared are vulnerable to service outages, customer dissatisfaction and trading losses – and often when it hurts the most. Successful businesses understand the need to assure service and application availability if they want to retain customers, deliver excellent service and take maximum advantage of the opportunity their market offers.
This is not a theoretical problem – just look at the recent challenges for the London 2012 Olympics and Ticketmaster. Just when everyone wants to do business with you, you’re not available.

The London Olympics site was overwhelmed by high demand for tickets and many buyers received the message, “We are experiencing high demand. You will be automatically directed to the page requested as soon as it becomes available. Thank you for your patience.”

That’s a failure, even if the site’s representatives said it had not crashed. A performance failure, pure and simple, for the whole world to see.

Examples of performance failure like this seem to occur weekly, if not daily, somewhere in the global business universe of websites.

Transformative Moment? When Global Retailers Fail!

Recently Target.com crashed under extreme user volume. They cut a deal with a designer line of knitwear (Missoni) and promoted a special sale on the morning before the products were sold in stores. By 8:00 a.m. EDT, the site was crashing. The Boston Globe went so far as to say:

”…the Missoni mess could be a transformative moment in the relatively brief history of e-commerce. Retail analysts say it shows that even though online shopping has made major strides since Victoria’s Secret’s website famously faltered during a 1999 webcast, companies still may not always have the technological muscle to meet consumer demand for such frenzied promotions.”

The aim of this post is to outline how to determine and prioritise the key performance requirements within a project. I’ve already covered how important it is to have good performance requirements. These are the items that drive and determine the quality of the performance testing – but how, in practice, do we best identify, assess and manage performance requirements?

Managing Performance Requirements

Let’s take a step back first. I’ve often found that the person who best defines the performance requirements is usually the performance tester, rather than the business analysts or the stakeholders. Why? For a number of reasons – the main ones being time and accuracy.

Here’s a typical conversation:

When will retailers learn? When will marketing departments start to consider the technical ramifications of their campaigns and launches? When will the IT department escalate performance engineering to a high priority? When will we stop reading about sites crashing under heavy volumes of traffic?

Hopefully never! Because these stories are great examples of why you need LoadStorm.

Target Inc.’s website crashed yesterday due to a special promotion. Apparently, the discount retailer cut an exclusive arrangement with an Italian luxury designer called Missoni, and it would seem that the online sale of Missoni knitwear generated enough buyers to bring the site down.

I sure would like to know how many concurrent users killed it. I wonder how many requests per second the Target site was handling with less than a five-second response time.

Can there really be more than a few hundred knitwear aficionados that would hold Missoni goods in such high esteem? What are the odds that those few hundred would all be anxiously awaiting the online sale and access the site simultaneously?

Perhaps it was 5,000 or 50,000. The result is the same – lost revenue, bad press, unhappy customers, and brand devaluation.

Welcome to the fourth part of our Web Performance Optimization series. It is our hope that these suggestions and best practices will assist you in improving the speed of your site. We view performance engineering as an iterative process whereby developers and testers will run load tests, analyze measurements, and tune the system incrementally. Our goal is to help you make your site faster and handle higher traffic.

We’ve talked extensively on this blog about how server applications can use caching to improve Web server performance, often dramatically. In addition to caching, Web site developers and administrators can employ other techniques to reduce the size of wire transmissions and increase document delivery speed.

File Consolidation

Web servers can reduce the number of requests generated by the client by reducing the number of separate files the client must fetch. Servers can facilitate this by combining separate files of the same type. From a maintainability standpoint, it often makes sense for a Web site developer to store the Cascading Style Sheet (CSS) code for her site in several separate files. A Web browser, however, doesn’t care if the CSS code is contained in four small files, or one monstrous file; all of the elements share the same namespace once they’re loaded into the browser.

According to the Exceptional Performance Team:

“80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.”

Minify is a PHP5 application that combines multiple JavaScript and CSS files into a single file. This simple utility can eliminate anywhere from 2 to 12 HTTP requests for a single page. Minify goes the extra mile and applies gzip compression and cache control headers to these unified files for maximum performance.
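If a PHP utility doesn’t fit your stack, the same consolidation idea works fine as a simple build step. Here is a minimal sketch in Python – the css/, js/, and dist/ paths are assumptions for illustration, not part of any particular tool – that concatenates files of the same type into one bundle and writes a gzipped copy alongside it:

```python
# build_assets.py - a minimal sketch of build-time file consolidation.
# The css/, js/, and dist/ paths are assumptions; adjust them to your project.
import gzip
from pathlib import Path

BUNDLES = {
    "dist/site.css": sorted(Path("css").glob("*.css")),
    "dist/site.js": sorted(Path("js").glob("*.js")),
}

def build():
    for target, sources in BUNDLES.items():
        out = Path(target)
        out.parent.mkdir(parents=True, exist_ok=True)
        # Combine all files of the same type into one bundle (one HTTP request each).
        combined = "\n".join(src.read_text() for src in sources)
        out.write_text(combined)
        # Also write a precompressed copy so the web server can send gzip directly.
        with gzip.open(f"{out}.gz", "wt") as gz:
            gz.write(combined)
        print(f"{target}: {len(sources)} files -> 1 bundle, {len(combined)} bytes")

if __name__ == "__main__":
    build()
```

Point your link and script tags at the bundled files, and the browser makes one request per asset type instead of a dozen.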

Another key technique for consolidating multiple files is to take advantage of CSS sprites. This technique puts multiple images into one composite image that can be used by the browser for many different individual parts of a page. CSS background positioning is used to display only the image you need from the sprite.

The number of HTTP requests is greatly reduced because one request replaces potentially hundreds. I have seen some customers’ ecommerce pages that contain over 300 images. A sprite could produce a 300-to-1 reduction in requests for that page. Multiply that overhead savings by, say, 10,000 concurrent users, and the result is a tremendous performance improvement. Most modern browsers support CSS background images and positioning, which has allowed developers to adopt this performance technique.
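As a minimal illustration of the technique (the sprite file name, icon sizes, and class names below are hypothetical), the CSS serves many page elements from one composite image and uses background positioning to crop out the region each element needs:

```css
/* A hypothetical composite image, icons-sprite.png, serves many page elements
   with a single HTTP request. */
.icon {
  background-image: url("/images/icons-sprite.png");
  background-repeat: no-repeat;
  display: inline-block;
  width: 16px;
  height: 16px;
}

/* Background positioning selects which region of the sprite is shown. */
.icon-cart   { background-position:  0      0;     }
.icon-search { background-position: -16px   0;     }
.icon-user   { background-position:  0     -16px;  }
```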

A rumor has swept around the world this week through Twitter and Facebook that Jason Buksh, performance consultant, is going to be an ongoing guest blogger for LoadStorm. That rumor has apparently been substantiated! You heard it here first. We look forward to many useful insights from Jason.

So, please welcome our newest performance test expert blog contributor here at LoadStorm.com.

As his first contribution (other than his insightful interview), he sent me this funny cartoon about stress testing.

We are excited to announce a partnership with CopperEgg because they provide a product that is a perfect complement to our load testing tool. RevealCloud is a web monitoring tool that can be used for performance engineering or operations.

Many of our customers have asked if we have a tool to monitor server-side metrics during a test. We don’t, so we now recommend their RevealCloud Pro product. It helps busy app developers monitor the performance and health of web applications in the cloud.

It’s easy to use and installs quickly. Please give it a try.

Here’s a cool excerpt from their site that I think concisely explains their value proposition:

If you want to focus on developing great apps instead of wasting time troubleshooting why that app is running slowly, then get RevealCloud Pro.

  • Installs in less than 10 seconds
  • Sends alerts via email or text
  • Updates every few seconds
  • Monitors the health, load, performance and capacity
  • Provides in-depth insight into performance including historical tracking
  • Supports Linux and FreeBSD, physical, virtual and cloud
  • Is viewable from ANY browser (laptop, iPhone, iPad or Android)

Jason Buksh is a Technical Project Manager and Performance Consultant in London, England. Jason has extensive experience with performance testing at many companies including HSBC and Siemens. He is skilled with tools such as Rational, Grinder, and Performance Studio. His certifications include LoadRunner.

We appreciate his time to share some good thoughts with us about a topic that gets us excited. Here is his interview with us.

What is your technical background?

I learned to program when I was 13 – it was a VIC-20, and I then swiftly moved on to the 6502 for the BBC Micro. University, studying computer science, was an obvious and easy progression for me. My first job was writing rendering engines (C++) for virtual reality simulators. I would describe myself as a techie at heart – I’m genuinely interested in how things work. I think having a strong and long background in IT enables me to grasp new concepts easily – which is great when I have to go into different companies and need to understand their systems quickly. I have a 2:1 in Computer Science, and I’m an ISEB Practitioner and a Certified Scrum Master.

Do you consider yourself more of a software developer or QA professional?

We should get out of the habit of separating them so readily. I feel strongly that every software developer should QA their own work. It’s not good enough to code and then relinquish QA to another team. It’s lazy, increases delivery time, wastes effort and increases cost. Everyone should be a QA professional within their own field. I think a large dedicated QA team is a good measure of the inefficiency of an IT project. I’m going to write a post on this very topic.

How much involvement do you have with load and performance testing?

My career is built on it. I’ve performance tested many mission critical and highly transactional systems. Companies like Expedia have extremely large volumes of traffic, and the performance of such a system is paramount. My experience at global financial institutions has taught me a great deal about trading platforms and the importance of milliseconds in response time.

What is the biggest change you have witnessed in the way people conduct load testing?

There is a quiet move away from LoadRunner, and it’s going to become an avalanche. It’s been underdeveloped and overpriced for a long time.

If you are hosting your web application in Windows Azure, here are some tips regarding monitoring your servers and application during a LoadStorm test, provided to me by a Microsoft software engineer:

Sharing with you the steps for performance testing/bottleneck identification. Attaching all the requisite documents and counters.

Explanation of perfmon counters – http://msdn.microsoft.com/en-us/library/aa905154(SQL.80).aspx

Other tools we use:

Ways to Use Perfmon Counters Config File via Command

In the previous installments of our Web performance series, we’ve examined how developers can employ Web server caching and application caching to speed up their Web applications. In this installment, we’ll see how caching entire data sets can increase platform efficiency.

What is Data Caching?

Data caching is a species of application caching that caches the result of one or more database queries in the application server’s memory. For our purposes, we use application caching to refer to caching any component in an application server’s memory, and data caching to refer to caching a collection of data for future querying and sorting.

There are two main approaches to data caching:

  1. Data set caching. Data returned from a database is stored in a data set, an in-memory representation of data that mimics the column/row structure of a relational database.
  2. In-memory databases. An in-memory database is a relational database that operates completely in a server’s RAM, as opposed to storing data to a hard drive. In-memory databases can be used for fast access to data previously retrieved from a traditional, disk-based RDBMS.

Let’s dig deeper into each of these.
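As a rough sketch of the first approach – the products table, query, and five-minute expiration below are made up for illustration, and a real implementation would also need to invalidate the cache when the underlying data changes – the idea is to pull a data set from the database once, hold it in memory, and then query and sort it without further round trips:

```python
# data_set_cache.py - a minimal sketch of data set caching.
# The products table, query, and TTL are made up for illustration.
import sqlite3
import time

_CACHE = {}           # query text -> (timestamp, rows)
TTL_SECONDS = 300     # refresh the cached data set every five minutes

def get_data_set(conn, query):
    """Return the rows for a query, hitting the database only when the cache is stale."""
    entry = _CACHE.get(query)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                        # served from memory, no database hit
    rows = conn.execute(query).fetchall()      # one round trip to the database
    _CACHE[query] = (time.time(), rows)
    return rows

# Usage: later querying and sorting happen against the in-memory rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("hat", 20.0), ("scarf", 35.0), ("gloves", 15.0)])
rows = get_data_set(conn, "SELECT name, price FROM products")
cheapest_first = sorted(rows, key=lambda r: r[1])  # sort in memory, not in SQL
print(cheapest_first)
```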

If you’re serious about web performance optimization, then you must run an efficient database. One necessary skill is the ability to analyze a database slow query log and optimize the most expensive, slowest queries. While one relatively slow query may not be the end of the world, many slow queries will add up quickly and frustrate users.

Setting Up a Slow Query Log

Running a slow query log is not a default setting in MySQL, so the first thing you’ll need to do is set one up. Open the server’s configuration file and enter “log-slow-queries” as well as an output file. On the next line, enter “long_query_time=x”. The value x represents a certain number of seconds, and any query taking longer than that number will appear in the output log.

Although needs will vary, a value of 1 is generally recognized as a good starting point. For heavily used servers, a value of 1 may result in a slow query log which is too large, creating additional performance problems.

In addition, you may want to add a third line which reads “log-queries-not-using-indexes”. (Evidently the creators of MySQL did not realize that the plural of “index” is actually “indices.”) This will instruct MySQL to log any query which does not use an index, whether or not the query time exceeds the value listed on the second line. If you only want to log queries not using indices, you can set the value on the second line very high.
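Putting those three directives together, the relevant section of the server’s configuration file might look something like the sketch below. The option names match the older MySQL syntax described above (newer releases renamed them to slow_query_log and slow_query_log_file), and the log path and one-second threshold are only examples:

```
# my.cnf - slow query log settings (older MySQL option names, as described above)
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes
```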

In our first installment of this series, we examined how Web server caching is implemented. Caching stores critical pieces of information in memory or on a local hard drive for subsequent rapid retrieval. In this installment, we’ll look at how Web application environments can use application caching to reduce load and increase performance.

What is Application Caching?

Application caching stores calculated components in memory for future use, either by the same user or multiple users. Because of the complex architecture of modern Web applications, application caching can produce huge gains in Web site response times.

A Web application is usually composed of multiple architectural layers. At the simplest level, Web applications consist of the client, which runs the application (usually a series of HTML pages and associated technology), and the server, which processes data and generates output. When we zoom in on the server side, however, we find that most non-trivial applications consist of several additional layers. Typically, these include the UI layer, which generates the user interface (usually HTML); the business logic layer, which implements business rules; and the data layer, which stores and retrieves data from one or more data sources.
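Here is a minimal sketch of the idea in Python – render_product_page() is a hypothetical stand-in for whatever expensive work your business logic and UI layers do – showing how the first request pays the full cost while later requests for the same component are served from memory:

```python
# app_cache.py - a minimal sketch of application caching for a rendered component.
# render_product_page() is a hypothetical stand-in for expensive server-side work.
import time

_FRAGMENT_CACHE = {}   # cache key -> (timestamp, rendered HTML)
FRAGMENT_TTL = 60      # seconds before a cached fragment is considered stale

def render_product_page(product_id):
    """Pretend business logic + data access + HTML generation."""
    time.sleep(0.5)    # stand-in for database queries and template rendering
    return f"<html><body>Product {product_id}</body></html>"

def get_product_page(product_id):
    key = ("product_page", product_id)
    entry = _FRAGMENT_CACHE.get(key)
    if entry and time.time() - entry[0] < FRAGMENT_TTL:
        return entry[1]                          # cache hit: no recomputation
    html = render_product_page(product_id)       # cache miss: do the work once
    _FRAGMENT_CACHE[key] = (time.time(), html)
    return html

# The first call is slow; repeat calls within the TTL return instantly, for any user.
print(get_product_page(42))
print(get_product_page(42))
```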

In 1948, Indian Motorcycles asked my father if he wanted to be a dealer for them. The rest is history. I grew up riding dirt bikes, racing at field events, and rebuilding a few classic cycles. My dad always wore a helmet and made sure I did too. Sometimes the helmet he gave me wasn’t very cool, but I was sure it was best for me because he told me stories of guys that didn’t wear them.

Sad news caught my eye today about a guy protesting the helmet law in New York. Unfortunately, while he was riding in a rally he lost control of his Harley, flipped over the handlebars, and hit his head on the pavement. He didn’t survive the crash. State troopers determined that he would not have died of a cracked skull if he had been wearing his helmet.

Two things come to mind:

  1. Legislation doesn’t always work.
  2. He made a choice that was costly in the end.

How does this motorcycle helmet situation relate to load testing? It’s simple:

Every day web developers make the decision that load and stress testing is NOT necessary for their site or application.

Yeah, and you can ride without a helmet too. It’s just a bad idea. The risk is too great.

Performance of your site has a direct correlation to your success. Slow sites lose revenue. Sites crash under heavy traffic every day because they got a favorable review on Slashdot. Unexpected volume comes from unlikely sources and blindsides your company. Digg, Reddit, Twitter, and hundreds of other social media sites can immediately pour tens of thousands of users onto your URL. How will your site handle it?

Back in 1986, I had no idea who H. Ross Perot was. Nor did I really care. However, he not only played a unique role in American politics back in the 1990s, but he also was instrumental in the forming of the global technology powerhouse known as CustomerCentrix, LLC. By extension, he was critical to the development of LoadStorm because CustomerCentrix is our parent company.

Therefore it is only fitting that today we honor the 80th birthday of one H. Ross Perot, founder of information technology giant Electronic Data Systems (EDS).

EDS was purchased by General Motors in the eighties because the IT outsourcing industry was booming and because GM had a poor track record with information technology. As a student in Nashville, TN in those days, I remember it was huge news when GM announced the creation of Saturn – a new car company meant to remake all the old manufacturing paradigms encumbering the US Big Three. The first plant was planned for a small town just south of Nashville. It was going to be an enormously beneficial investment for the area and for the whole country. We all believed the hype! So I determined that my career as a computer scientist should start with the biggest revolution in American business, and I set about getting a job on the computer side of Saturn.

As we’ve discussed previously, Web site optimization directly affects a company’s bottom line. A sudden traffic spike that swamps a website’s capacity can cost a company thousands or even tens of thousands of dollars per hour. Web servers and Web applications should be built and deployed from day one with performance at the forefront of everyone’s mind.

Web site administrators and web application developers have a host of tricks and techniques they can employ to deliver Web pages more quickly. From my experience, I’ve seen 1,000% performance improvements from simple web application optimization techniques. Caching is the #1 tuning trick in the web developer’s kit. Customers ask me weekly what I recommend for speeding up their app. I always start with “caching, caching, and more caching”. It’s like magic for a site.

A ViSolve white paper shows a return on investment of $61,000 for a $20,000 total cost of ownership of only two caching servers!

In this article we’ll look at how two different but related forms of caching are used to conserve server resources and reduce network latency, thus greatly improving your customers’ experience with your site.

What is Caching?

Caching refers to any mechanism that stores previously retrieved content for future use. As I learned it in college back in the VAX/VMS operating systems class, it is temporarily putting something into memory that you will use again in order to avoid hitting the hard drive. Computer scientists today are less concerned about saving every byte like we were back then. Still, web applications are constantly re-using data and files; so why in the world would we want to make an expensive hit to the database? Hard drives can be 10,000 times slower than memory because they are mechanical and must move to the correct position and spin to the exact spot where data exists. Memory moves at the speed of electricity.

The goal of caching is to increase the speed of content delivery by reducing the amount of redundant work a server needs to perform. Putting a file in memory to re-use it can save millions of drive accesses; thus, the speed of getting the browser what the user needs increases by orders of magnitude. Caching and performance go hand-in-hand. It’s a no-brainer.

The Miami Herald published a clothing ad congratulating the Miami Heat on a 2011 NBA championship they didn’t actually win. Perhaps no one noticed. #FAIL

Whoops, I guess the editor wasn’t watching the game when the Dallas Mavericks closed out the NBA Finals and took home the trophy. Congrats to the Mavs. Dirk earned the right to be in the greatest player discussions – especially when it applies to comebacks in the 4th quarter.

The newspaper’s huge mistake makes me think about how web performance failure is a similarly obvious and tremendous error. Perhaps a site owner or web application product manager can ignore the performance aspect, but their users will not. The speed and scalability of an online system have been shown statistically to directly affect its effectiveness.

Load and performance testing your website is important. Tuning it can have more than a 1,000% return on investment.

Some of the case studies referenced below show us that we can improve revenue 219% just by improving the performance of our site. Other data confirms that the average business loses $4,100 per hour when their site slows down under load. An outage costs $21,000 per hour on average. Retailers can lose $100,000 per hour.

It’s possible to have a successful ad campaign or a wonderful Slashdot day that your site can’t handle – and that can send 46.9% of your traffic to your competitors. Or worse, cost you 150,000 customers.

Take Web Performance Seriously

At the beginning of 2009, Denny’s made a bold marketing move. At a time when many Americans were out of work, and those with jobs were struggling to make ends meet, the restaurant chain cut an ad offering every American a free breakfast. The ad only aired once, during a little event called the Super Bowl.

And that’s when Denny’s troubles began.

Within minutes of the ad airing, customers who attempted to access Denny’s Web site to get their free meal coupons found that they couldn’t get through. The company planned the commercial and the $3 million ad buy perfectly. What it didn’t plan for was the ensuing deluge of traffic, which Internet marketing expert Rob Kmiec estimates represented anywhere between a 434% and 1,700% spike in the daily traffic to dennys.com. Had the company planned for the additional attention and invested in a Content Delivery Network (CDN) or cloud network to handle the ensuing load, Kmiec argues, it would have reaped nearly a 1,000% return on that investment. Instead, the company lost the opportunity to serve an additional 153,300 customers.


Rodney Bone is a performance consultant who works for Revolution IT in Brisbane, Australia. He has graciously invested his time to share insights about performance testing with us. Please follow him on Twitter (@rod_bone) and tweet your thanks for his interview.

That’s Rod in the picture at the right, taken when he did some reserve duty during the Brisbane floods over Christmas. The slow sign is just ironic.

What is your technical background?

I started as a software engineer with Accenture, where I was involved in the entire end-to-end SDLC, including experience in the BA space.

Do you consider yourself more of a software developer or QA professional?

Now, definitely QA. I think today’s generic developer is only one part of a larger picture, and Accenture’s model of exposing developers to the whole SDLC is one that should be encouraged by all companies.

How do you determine the load to apply to the target app during load testing?  

Ask the business, and ask as many people as applicable. Users and BAs have different opinions, so talk to them all. Once you nail down the processes and the number of transactions per hour, run with that. Back it up with log data, as historical information is the best source of information – though with new applications this is not always available.

Do you prefer using requests per second (RPS), concurrent users, or some other metric to define load?

They answer different questions. You can sometimes hit a server with the right RPS using one virtual user, but that doesn’t tell you whether the web server is up to serving the required number of concurrent connections. I have tested an app where they plugged the Apache load balancer in out of the box and it only served 20 concurrent connections. FAIL! And you wouldn’t pick this up without concurrent users.
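One way to see why the two metrics answer different questions is Little’s Law: concurrent connections are roughly the request rate multiplied by the average response time. A quick back-of-the-envelope sketch (the numbers are made up) shows how a modest RPS target still implies a concurrency requirement that an out-of-the-box configuration may not meet:

```python
# littles_law.py - back-of-the-envelope link between RPS and concurrency.
# The numbers below are made up purely for illustration.
requests_per_second = 50     # arrival rate the server must sustain
avg_response_time_s = 0.4    # average time each request is "in" the server

# Little's Law: concurrent requests = arrival rate * average time in system
concurrent_connections = requests_per_second * avg_response_time_s
print(concurrent_connections)  # 20.0 - right where an out-of-the-box limit would bite
```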

Over at his blog Spoot!, Nicolas Bonvin recently posted two summaries of the great work he’s done benchmarking how well various open-source and free Web servers dish up static content under high loads. Bonvin, a PhD student at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland, specializes in high-volume distributed data systems, and brings considerable expertise and real-world experience to bear in designing his tests.

First Round of Performance Testing

In his first post, Bonvin laid out the evidence he’d accumulated by running benchmark tests against six Web servers: Apache MPM-worker, Apache MPM-event, Nginx, Varnish, G-WAN, and Cherokee, all running on a 64-bit Ubuntu build. (All Web servers used, save for G-WAN, were 64-bit.) This first set of benchmarks was run without any server optimization; each server was deployed with its default settings. Bonvin measured minimum, maximum, and average requests per second (RPS) for each server. All tests were performed locally, eliminating network latency from the equation.

On this initial test battery, G-WAN was the clear winner on every conceivable benchmark, with Cherokee placing second, Nginx and Varnish close to tied, and both strains of Apache coming in dead last. As Bonvin notes, it wasn’t even close. G-WAN, a small Web server built for high performance, completed 2.25 times more requests per second than Cherokee (its closest competitor), and served a whopping 9 to 13.5 times more requests per second than the two versions of Apache.

It seems like every week we hear of some very high profile website performing poorly or going down completely. Are site outages reported somewhere centrally? If so, I haven’t found it. This one caught my eye because I’m a sports fan: Saturday night both ABC.com and ESPN.com were poster children for website outages.

Many people on Twitter were complaining and tweeting jokes about the performance of both sites.

Stress tests are good for finding out how many people can be on your website at the same time. A good test plan for an e-tailing site includes scenarios that represent people browsing your product pages and searching for specific items. It should also have realistic traffic for buyers going through the shopping cart experience and purchasing products with a credit card.
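As a rough sketch of what such a plan might look like – the scenario names and percentages below are hypothetical and should really come from your own analytics – a simple weighted mix can be used to split a target concurrency across the scenarios:

```python
# scenario_mix.py - a hypothetical traffic mix for an e-commerce load test plan.
SCENARIOS = {
    "browse_product_pages": 0.55,  # visitors clicking through categories
    "search_for_items":     0.25,  # shoppers using the search box
    "add_to_cart":          0.15,  # building a cart but not always buying
    "checkout_with_card":   0.05,  # full purchase flow, the most critical path
}
assert abs(sum(SCENARIOS.values()) - 1.0) < 1e-9  # weights must cover all traffic

def users_per_scenario(total_concurrent_users):
    """Split a target concurrency across the scenarios by weight."""
    return {name: round(total_concurrent_users * share)
            for name, share in SCENARIOS.items()}

print(users_per_scenario(10_000))
```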

I had a call yesterday from a customer that wanted to stress test his e-commerce site with about 100,000 concurrent users. He explained that their marketing department is expecting significant growth of their sales because of a combination of the economic outlook, increased advertising, and a cool new unique channel technology that is going into production.

Thanks to some data from Gallup, Experian, and Pew Research, Marketing Charts produced some great graphs showing how U.S. economic and financial indices have improved over the past year or so. Consumers are spending more online with e-tailing sites, perhaps because they are more confident that their financial situation is improving.

Online Spending is Healthy

In January, 58% of survey respondents said 2011 would be better than 2010, 20% said it would be worse, and 21% said it would stay the same. As the mood has improved, so has online spending, which hit a record $43.4 billion in the fourth quarter of 2010. That’s up from $32.1 billion in Q3 2010, and it marks the fifth consecutive quarter of positive year-over-year growth and the second quarter of double-digit growth rates in the past year. So, online spending is one of the most vigorous parts of our economy right now. Is your site ready for growth?
