When December rolled around, I had been with the Web Performance Lab for four months, and it was time for another realistic experiment. Until then, my focus had primarily been AWS management and scaling Magento, in addition to my blogging duties. For this project, I was asked to act as if I were working with one of our valued clients: one of our software engineers played an e-commerce vendor who happened to use Magento. He laid out his requirements, we discussed them, and I went to work. I was excited about this project because it put my systems administration skills to use and gave me a chance to hammer my test environment with VUsers while keeping scalability in mind.

Goal

When I approached our “client,” he stated that he needed a store that could handle 5,000 concurrent VUsers! That raised the challenge a bit, and I was determined to do the best I could. So here’s the main goal as I wrote it up in the statement of work:

The customer is expecting a Magento website that will not reach a fail-like state until at least 5,000 concurrent VUsers are reached while running a load test from the Stress account on the LoadStorm PRO [testing] system.

The “fail-like” state I’m referring to is a peak response time of 35 seconds at any point during the test.
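To make that pass/fail criterion concrete, here is a minimal sketch of how exported results could be checked against the threshold. This is not part of the LoadStorm tooling itself, and the CSV column names are assumptions for illustration:

```python
import csv

FAIL_THRESHOLD_SECONDS = 35.0  # peak response time that counts as a "fail-like" state


def test_failed(results_csv_path):
    """Return True if any sampled peak response time crosses the threshold.

    Assumes a CSV export with 'concurrent_vusers' and 'peak_response_time_s'
    columns -- hypothetical names; adjust to match the real export format.
    """
    with open(results_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["peak_response_time_s"]) >= FAIL_THRESHOLD_SECONDS:
                print("Fail-like state at %s concurrent VUsers" % row["concurrent_vusers"])
                return True
    return False


if __name__ == "__main__":
    # Usage example with a hypothetical results export from the load test.
    print("PASS" if not test_failed("loadstorm_results.csv") else "FAIL")
```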

Planning and Approach

I did a bit of digging for good preliminary articles, and here’s one that proved extremely useful and that I ended up following. Its topology was within my ability and the project scope, yet still promised scalability. Since the project was limited to one work week, I spent half of that time on preparation. I broke the project into two testing iterations: a baseline environment (Test Environment #1) and a scaling environment (Test Environment #2).

Project Scope

In general, I was constrained to using Amazon Web Services for my infrastructure. Due to AWS pricing, we imposed a strict limit of $25 per test hour. That budget still allowed for three of the largest instances, which was more than enough for our needs, but financial constraints are an important part of any project and worth stating up front.
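To keep the budget visible during planning, a rough back-of-the-envelope estimate like the one below is enough. Every hourly rate in it is an illustrative placeholder rather than a quoted AWS price, so plug in current on-demand pricing for your region:

```python
# Rough per-test-hour cost estimate for the scaled environment (Test #2).
# All rates below are illustrative placeholders, not actual AWS pricing.
EC2_C3_8XLARGE_HOURLY = 1.68       # assumed on-demand rate per application server
RDS_MYSQL_HOURLY = 1.00            # assumed rate for the shared RDS MySQL instance
ELB_AND_CLOUDFRONT_HOURLY = 0.50   # assumed combined load balancer + CDN cost

BUDGET_PER_TEST_HOUR = 25.00
APP_SERVERS = 3

estimate = (APP_SERVERS * EC2_C3_8XLARGE_HOURLY
            + RDS_MYSQL_HOURLY
            + ELB_AND_CLOUDFRONT_HOURLY)

print("Estimated cost per test hour: $%.2f" % estimate)
print("Within budget: %s" % (estimate <= BUDGET_PER_TEST_HOUR))
```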

Test Environment #1

Figure 1. Topology for Test #1

I started with a single server as my baseline benchmark. This server was also paired with a CloudFront distribution; since we had grown comfortable with CDNs in past work, it made sense to take advantage of one in the benchmark as well.
Here’s what’s going on in the diagram (a quick provisioning sketch follows the list):

  • Orange box with cylinder: Amazon c3.8xlarge EC2 instance running Magento with a local MySQL database.
  • CloudFront distribution: Our CDN, provided by Amazon CloudFront, which serves most of our static content and eases the traffic hitting the Magento server.
  • VUsers: Traffic generated by the LoadStorm engines; 5,000 of them will be hitting the EC2 instance and the distribution.
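For anyone who wants to reproduce the baseline environment, here is a minimal boto3 sketch of launching the single server. It is not the exact procedure I followed: the AMI ID, key pair, and security group are placeholders, Magento and MySQL still have to be installed on the instance afterward, and the CloudFront distribution is configured separately:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch the single c3.8xlarge application/database server (Test Environment #1).
# ImageId, KeyName, and SecurityGroupIds are placeholders for illustration only.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # hypothetical base AMI
    InstanceType="c3.8xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="magento-lab",             # hypothetical key pair
    SecurityGroupIds=["sg-xxxxxxxx"],  # hypothetical security group allowing HTTP
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched Magento/MySQL server:", instance_id)
```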

Test Environment #2

Figure 2. Topology for Test #2

This topology is more involved. In general, an Elastic Load Balancer distributes traffic across three Magento server nodes, CloudFront still serves the static content, and a centralized MySQL database runs on an Amazon RDS instance.
Here are the components in detail (a provisioning sketch follows the list):

  • Elastic Load Balancer: Balances traffic across the three application servers using Amazon’s routing algorithm.
  • Orange boxes: Amazon c3.8xlarge EC2 instances hosting three identical Magento servers, all configured to point to the same MySQL database instance.
  • Purple box (magento-db): A shared Amazon RDS instance running MySQL, hosting the database for the Magento store.
  • CloudFront distribution: Serves static content on behalf of the three Magento nodes.
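And here is a similar boto3 sketch for wiring up the scaled environment, under stated assumptions: the three application servers are presumed to be running already, and the RDS instance class, storage, credentials, and availability zone below are placeholders rather than the actual values from this project:

```python
import boto3

REGION = "us-east-1"  # assumed region
app_server_ids = ["i-aaaaaaaa", "i-bbbbbbbb", "i-cccccccc"]  # the three Magento nodes (placeholders)

# Shared MySQL database on Amazon RDS (the "magento-db" purple box).
# Instance class, storage size, and credentials are illustrative placeholders.
rds = boto3.client("rds", region_name=REGION)
rds.create_db_instance(
    DBInstanceIdentifier="magento-db",
    Engine="mysql",
    DBInstanceClass="db.m3.xlarge",   # assumed class; not specified in the write-up
    AllocatedStorage=100,
    MasterUsername="magento",
    MasterUserPassword="change-me",   # placeholder credential
)

# Classic Elastic Load Balancer spreading HTTP traffic across the three nodes.
elb = boto3.client("elb", region_name=REGION)
elb.create_load_balancer(
    LoadBalancerName="magento-elb",
    AvailabilityZones=["us-east-1a"],  # assumed availability zone
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
)
elb.register_instances_with_load_balancer(
    LoadBalancerName="magento-elb",
    Instances=[{"InstanceId": iid} for iid in app_server_ids],
)
print("Scaled environment wired up: ELB -> 3 Magento nodes -> magento-db (RDS)")
```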

We’ll see you soon for part two of the experiment!
