Web Performance News of the Week

It’s a great week on the internet! This week in web performance, the preservation of net neutrality and new announcements from Google and Apple make headlines.

FCC votes to preserve net neutrality, classifying broadband as a utility

Yesterday the Federal Communications Commission voted 3-2 to approve the proposed net neutrality rules for both wireless and fixed broadband. The proposed rules will disallow paid prioritization, as well as the blocking and throttling of lawful content and services. After overwhelming public outcry, this win for advocates of net neutrality is being called “the free speech victory of our times” and “an even bigger win than SOPA”. But the debate looks to be far from over.

Response from Verizon came in both Morse code and typewriter font, saying the rules were “written in the era of the steam locomotive and the telegraph.” In addition, a group of 21 Republicans sent a response to FCC chairman Tom Wheeler threatening legislation that would “ensure the antitrust laws are the preferred enforcement method against anticompetitive conduct on the Internet” and that “may include a restriction on the FCC’s ability to regulate the Internet.”

Apple to spend $1.9 billion on European data centers powered by renewable energy

In what will be Apple’s biggest investment in Europe to date, Apple announced plans to build and operate two new data centers in Denmark and Ireland. Running entirely on renewable energy, the data centers will power several of Apple’s online services for European customers, including the iTunes Store®, App Store℠, iMessage®, Maps and Siri®. The operations are expected to launch in 2017 and will include initiatives to restore native trees to Derrydonnell Forest, provide an outdoor education space for local schools, and create a walking trail for the community. “We believe that innovation is about leaving the world better than we found it, and that the time for tackling climate change is now,” said Lisa Jackson, Apple’s vice president of Environmental Initiatives.

Apple releases new Playgrounds

Xcode 6.3 beta 2 contains improvements to Swift playgrounds, including inline results, stylized text, and a resources folder. The new playgrounds were designed to be useful for authors and educators.

Google introduces a new open source HTTP/2 RPC Framework

Google has introduced a new open source (BSD-licensed) cross-platform library for making remote procedure calls. Built on the recently finalized HTTP/2 specification, gRPC allows for bidirectional streaming, flow control, header compression, multiplexing of requests over a single TCP connection, and more. In addition to gRPC, Google has released a new version of Protocol Buffers, an open source binary serialization protocol intended to allow easy definition of services and automatic generation of client libraries. The project supports several programming languages (C, C++, Java, Go, Node.js, Python, and Ruby), with libraries for several others (Objective-C, PHP, and C#) in development. Google indicated that it has begun using gRPC internally as part of its transition to HTTP/2.
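
To make that concrete, here is a minimal sketch of what calling a gRPC service from Python might look like. It follows gRPC’s canonical hello-world example, so the Greeter service, helloworld.proto, and the generated helloworld_pb2 modules are borrowed from that example rather than anything specific to Google’s announcement.

```python
# Hypothetical gRPC client sketch, based on the canonical hello-world
# example. The service is defined once in helloworld.proto:
#
#   service Greeter { rpc SayHello (HelloRequest) returns (HelloReply); }
#
# and protoc generates the helloworld_pb2 / helloworld_pb2_grpc modules.

import grpc
import helloworld_pb2
import helloworld_pb2_grpc

# A single channel rides one HTTP/2 connection; calls are multiplexed over it.
channel = grpc.insecure_channel("localhost:50051")
stub = helloworld_pb2_grpc.GreeterStub(channel)

reply = stub.SayHello(helloworld_pb2.HelloRequest(name="world"))
print(reply.message)
```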

A look ahead: Barcelona will host Mobile World Congress

The first week of March brings the exciting 2015 Mobile World Congress, held in Barcelona, Spain. The four-day event is like the TED Talks of mobile tech, with thought-leadership keynotes from Mark Zuckerberg and Tom Wheeler, numerous panel discussions, and 1,900 technology and product exhibitors. The event will feature the Global Mobile Awards and App Planet, an opportunity for the mobile app community to come together to learn and network. In addition, all attendees will gain access to 4 Years From Now, a three-day event focused on startups and corporations, led by globally recognized entrepreneurship and innovation experts.

AWS Aurora – The Game Changer

Last November, Amazon Web Services (AWS) announced a new database service, named Aurora, which appears to be a real challenger to commercial database systems. AWS will offer this service at a very competitive price, which they claim is one-tenth that of leading commercial database solutions. Aurora has a few drawbacks, some of which are temporary, but the many benefits far outweigh them.

Benefits

Using hardware specially built for Aurora, AWS has come up with a service that tightly integrates the database with the hardware: Aurora delivers over 500,000 SELECTs/sec and 100,000 updates/sec. That is five times higher than MySQL 5.6 running the same benchmark on the same hardware. This new service utilizes a sophisticated redundancy method of six-way replication across three availability zones (AZs), with continuous backups to AWS Simple Storage Service (S3), to maintain 99.999999999% durability. In the event of a crash, Aurora is designed to recover almost instantaneously and continue serving your application data by performing an asynchronous “crash” recovery on parallel threads. Because of the amount of replication, disk segments are easily repaired from other copies that make up the cluster. This ensures that the repaired segment is current, which avoids data loss and reduces the odds of needing to perform a point-in-time recovery from S3. If a point-in-time recovery is needed, the S3 backup can restore to any point in the retention period up to the last five minutes. Aurora also has a survivable cache, which means the cache is maintained after a database shutdown or restart, so there is no need for the cache to “warm up” from normal database use. It also offers a custom feature to input special SQL commands that simulate database failures for testing.

Aurora consists of two types of instances: a primary instance, which handles reads and writes, and Aurora replicas, which serve read traffic and can act as failover targets.

Aurora replicas are very interesting. In terms of read scaling, Aurora supports up to 15 replicas with minimal impact on the performance of write operations, while MySQL supports up to 5 replicas with a noted impact on write performance. Aurora automatically uses its replicas as failover targets with no data loss, while MySQL replicas require a manual failover, with potential data loss. Since these replicas share the same underlying storage as the primary, they lag behind the primary by only tens of milliseconds. For many use cases this might be as good as synchronous replication.

In terms of storage scalability, I asked AWS how smoothly Aurora will grant additional storage in the event that an unusually large amount of it is being consumed, since they’ve stated it will increment 10GB at a time up to a total of 64TB. I wanted to know where the threshold for the autoscaling was, and whether it was possible to push data in faster than space could be allocated. According to the response I received from an AWS representative, Aurora begins with an 80GB volume assigned to the instance and allocates 10GB blocks for autoscaling when needed. The instance has a threshold to maintain at least an eighth of the 80GB volume as available space (this is subject to change). This means whenever the volume reaches 10GB of free space or less, it is automatically grown by another 10GB block. This should provide a seamless experience to Aurora customers, since it is unlikely you could add data faster than the system can increment the volume. Also, AWS only charges you for the space you’re actually using, so you don’t need to worry about provisioning additional space.
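
To make the growth rule concrete, here is a small sketch of the logic as the representative described it. The constants and function simply restate the rule above; this is an illustration, not AWS code.

```python
# Illustrative model of Aurora's described storage autoscaling: keep at
# least an eighth of the original 80 GB volume (10 GB) free, growing the
# volume in 10 GB blocks up to the 64 TB ceiling.

INITIAL_GB = 80
BLOCK_GB = 10
THRESHOLD_GB = INITIAL_GB // 8   # 10 GB must remain available
MAX_GB = 64 * 1024               # 64 TB ceiling

def grow_if_needed(volume_gb: int, used_gb: int) -> int:
    """Return the (possibly grown) volume size after new data lands."""
    while volume_gb - used_gb <= THRESHOLD_GB and volume_gb < MAX_GB:
        volume_gb += BLOCK_GB
    return volume_gb

volume = INITIAL_GB
for used in (60, 70, 75):
    volume = grow_if_needed(volume, used)
    print(f"used={used} GB -> volume={volume} GB")  # 80, then 90, then 90
```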

Aurora also uses write quorums to reduce jitter: it sends out six writes and waits for only four to come back. This isolates the outliers while remaining unaffected by them, keeping your database at a low and consistent latency.
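
The pattern is easy to model: issue the write to six nodes and acknowledge after the first four succeed, so the one or two slowest nodes never set your latency. The sketch below is a toy illustration of that idea, not Aurora’s implementation.

```python
# Toy 4-of-6 write quorum: latency is set by the 4th-fastest node,
# while the stragglers finish in the background.

import concurrent.futures
import random
import time

def write_to_node(node_id: int) -> int:
    time.sleep(random.uniform(0.01, 0.5))  # simulated per-node latency
    return node_id

def quorum_write(nodes: int = 6, quorum: int = 4) -> list[int]:
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=nodes)
    futures = [pool.submit(write_to_node, n) for n in range(nodes)]
    acked = []
    for fut in concurrent.futures.as_completed(futures):
        acked.append(fut.result())
        if len(acked) >= quorum:
            break  # acknowledge the write now
    pool.shutdown(wait=False)  # let the slow outliers complete on their own
    return acked

print(quorum_write())  # IDs of the four fastest nodes
```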

Pricing

For the time being, AWS Aurora is free if you can get into the preview, which is like a closed beta test. Once it goes live, there are a few things to keep in mind when considering the pricing. You pay an hourly price for each instance (primary and replicas). Storage is $0.10 per GB-month used, and I/Os are $0.20 per million requests.
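
As a back-of-the-envelope example using those rates (the hourly instance rate below is an assumption for illustration; see the pricing page for real numbers):

```python
# Rough monthly cost for one Aurora instance, 100 GB of data, and
# 50 million I/O requests. The hourly rate is hypothetical.

instance_hours = 730            # one instance running a full month
hourly_rate = 0.29              # assumed; varies by instance class
storage_gb = 100
io_requests = 50_000_000

cost = (instance_hours * hourly_rate
        + storage_gb * 0.10                 # $0.10 per GB-month
        + io_requests / 1_000_000 * 0.20)   # $0.20 per million I/Os
print(f"${cost:,.2f}/month")                # -> $231.70/month
```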

Backups to S3 are free up to the current storage being actively used by the database, but historical backups that are not in use will have standard S3 rates applied. Another option is to purchase reserved instances, which can save you money if your database has a steady volume of traffic. If your database has highly fluctuating traffic, then on-demand instances tend to be the best option, so you only pay for what you need.

For full details, please visit their pricing page.

Drawbacks

Currently, the smallest Aurora instances start at db.R3.large and scale up from there. This means that once the service goes live there will be no support for smaller instances like those offered for other RDS databases. Tech startups and other small business owners may want to use those more inexpensive instances for testing purposes, so if you want to test out the new Aurora database for free, you had better apply for access to the preview going on right now. AWS currently does not offer cross-region replicas for Aurora, so all of the AZs are located within a single region. On the other hand, that does mean that latency is very low.

Aurora only supports the InnoDB storage engine; tables from other storage engines are automatically converted to InnoDB.

Another drawback is that Aurora does not support multiple tablespaces, but rather one global tablespace. This means features such as compressed or dynamic row format are unavailable, and it affects data migration. For more info about migrations, please visit the AWS documentation page.

Temporary Drawbacks

During the preview, Aurora is only available in the AWS Northern Virginia data center. The AWS console is the only means of accessing Aurora during the preview, but other methods such as CLI or API access may be added later. Another important thing to note is that during the preview, Aurora does not support encryption at rest, but they plan to add it in the near future (probably before the preview is over). Another feature to be added at a later date is the MySQL 5.6 memcached option, which is a simple key-based cache.

Conclusion

All in all, it sounds amazing, and I for one am very excited to see how it will play out once it moves out of the preview phase. Once it is fully released, we may even do a performance experiment to load test a site that relies on an Aurora DB to see how it holds up. If you’re intrigued enough to try and get into the preview, you can sign up for it here.

Optimizing for Mobile Traffic

Every year mobile traffic grows. This year, 41.2% of all Cyber Monday traffic came from mobile devices. It’s a trend that every business should start planning and developing for.

Options for Mobile Optimized Experiences

Based on figures provided by the mobile experts at Flurry, app usage is increasing while mobile web surfing is decreasing. With the growing trend of using apps over web browsing on mobile devices, it is a good idea to consider developing an app. However, developing your own app is usually very time consuming and expensive. For many companies, developing their own mobile app is simply not an option.

As an alternative, responsive web design is much less expensive to develop and easy to maintain. When deciding on which to use, consider your customer demographics and the type of product or service that you’re providing. Do you have analytic data that tells you how much mobile traffic you have so far? How many of your customers do you think will benefit from one of these options?

According to a study by the Aberdeen Group, responsive design websites showed a 10.9% increase in visitor-to-buyer conversion rates year-over-year, while non-responsive websites only had a 2.7% increase. So if you’re not ready to invest in developing a full-blown mobile app, the evidence supports responsive web design as a cost-effective alternative to a mobile app.

What is responsive design?

For a quick example, click and drag your browser window from full size to very small. You will notice how this website changes based upon the screen size the viewer is using. The goal of responsive web design is to give the customer an experience that is quick and easy to use on whatever size device he or she is using.

Responsive design is a combination of three techniques: fluid grids, flexible images, and media queries. Here is an excellent video tutorial to learn the fundamentals of responsive design:

Responsive Design and Web Performance


During this video tutorial, the presenter mentions that the images in his test site are fairly large. That allows them to be responsive, but being responsive is not enough: if you really want to cater to your mobile audience, you also need to improve performance by considering the impact that image quality and file size have. Smashing Magazine has a wonderful article to help guide you in making your site both responsive and a great performer over mobile broadband.

Test Your Site

To see how responsive your site is currently, you can visit Matt Kersley’s page.

Or, for a more detailed option for testing responsive design, you can use the developer tools in most browsers. These let you change the viewing experience by selecting one of many brand-name devices, like an iPhone 6 or Galaxy S4, to emulate. To do this in Chrome, open your developer tools by right-clicking a page and choosing “Inspect Element”. Then look to the top left of the tools window for a small icon that looks like a smartphone. Click the icon and the browser view will constrain the viewing area.

The device view also makes it easy to load test your mobile site using LoadStorm PRO: a recording can be made of the site while using the mobile view, which can then be uploaded and used in a load test.

Sidenote for SEO:

Google prefers that developers use responsive design, mostly because it requires fewer resources and is future-friendly. However, Google does not rank responsive design higher than other configurations.

Stress Testing Drupal Commerce

I’ve had the pleasure of working with Andy Kucharski for several years on various performance testing projects. He’s recognized as one of the top Drupal performance experts in the world. He is the Founder of Promet Source and is a frequent speaker at conferences, as well as a great client of LoadStorm. As an example of his speaking prowess, he gave the following presentation at Drupal Mid Camp in Chicago 2014.

Promet Source is a Drupal web application and website development company that offers expert services and support. They specialize in building and performance tuning complex Drupal web applications. Andy’s team worked with our Web Performance Lab to conduct stress testing on Drupal Commerce in a controlled environment. He is skilled at using LoadStorm and New Relic to push Drupal implementations to the point of failure. His team tells me he is good at breaking things.

In this presentation at Drupal Mid Camp, Andy explained how his team ran several experiments in which they load tested a Drupal Commerce Kickstart site on an AWS instance and then compared how the site performed after several well-known performance tuning enhancements were applied. They compared the performance improvements after applying Drupal caching, aggregation, Varnish, and an Nginx reverse proxy.

View the SlideShare below for a summary of the importance of web performance, and to see how they used LoadStorm to prove they were able to scale Drupal Commerce from a point of failure (POF) of 100 users to 450 users. That’s a tremendous 4.5x improvement in scalability!

Slides: “Drupal Commerce Performance Profiling and Tuning Using LoadStorm Experiments” (Drupal Mid Camp Chicago 2014), from Andrew Kucharski.
Performance Tuning: 7 Ways to Spring Clean your Website

Spring is here! Historically, this time of the year is representative of growth and renewal. It’s the perfect time for cleaning up your house, your yard, and your website.

Keeping a modern website running smoothly can be time consuming and resource intensive. Modernizing your website may never become effortless, but the process can definitely become more manageable, maybe even enjoyable. This list provides an overview of quick and easy-to-apply tricks – including information on image optimization, general code cleanup, and using GZip – to zip through cleanup and arrive at a happy site.

Before you start, choose a tool to performance test your website

To be effective in performance tuning, it’s essential to establish a baseline and measure the effect of the changes you make. Otherwise, how will you know if the changes were worthwhile? And, in the future, how will you know what to do again?

We like WebPageTest!
There are a ton of free tools out there. Our personal favorite? WebPageTest! This free, open-source tool makes it easy to analyze the most important metrics, view a waterfall of all requests and resources, and even capture video of the page rendering. Check out our previous post on performance analysis using WebPageTest.

Wherever you decide to start, focus on one task at a time.

1. Update Your Platform

Every once in a while, I get these annoying pop-ups in the center of my screen while I’m in the middle of watching YouTube videos and scrolling endlessly through Facebook. Ahem, I mean, studying. I usually ignore them. However, updating your web app to the latest version usually yields better performance and speed, plus bug fixes. For example, if you use a content management system, make sure you update it regularly.

2. Get Rid of Bad Links

There’s some debate on whether broken links on a site can harm SEO health. Either way, nobody likes clicking on something that doesn’t work. It gives the impression that the site owner, and even the company, is unprepared. There are several free tools available to help you find these broken links and fix them, including Screaming Frog and Google Webmaster Tools.

By the same token, minimize redirects. Redirects may not be as obvious to users (or even web developers, for that matter), but they cause extra delay that can be avoided. Screaming Frog can help to diagnose these occurrences as well.
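
If you want to script a quick first pass for broken links yourself, a sketch like the one below can surface obvious 404s on a single page. It assumes the third-party requests and beautifulsoup4 packages, and it is no substitute for the dedicated tools above.

```python
# Minimal broken-link check for one page: collect every <a href>,
# then flag any link that errors out or returns a 4xx/5xx status.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_broken_links(page_url: str) -> list[tuple[str, int]]:
    html = requests.get(page_url, timeout=10).text
    links = {urljoin(page_url, a["href"])
             for a in BeautifulSoup(html, "html.parser").find_all("a", href=True)}
    broken = []
    for link in sorted(links):
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = 0  # unreachable
        if status == 0 or status >= 400:
            broken.append((link, status))
    return broken

print(find_broken_links("https://example.com/"))  # URL is a placeholder
```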

3. Minimize HTTP Requests

By now we’ve all heard the popular mantra: The fastest request is one not made. A simple way to minimize requests is to combine code and consolidate multiple files into one. This can also be done by implementing CSS Sprites for multiple images. More information on implementing this strategy can be found here.
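
As a trivial illustration of the consolidation idea, even a one-off build step that concatenates stylesheets means the browser fetches one file instead of several (the file names here are examples, and order still matters for CSS):

```python
# Combine several stylesheets into a single file to cut HTTP requests.

from pathlib import Path

parts = ["reset.css", "layout.css", "theme.css"]  # example file names
combined = "\n".join(Path(p).read_text() for p in parts)
Path("combined.css").write_text(combined)
print(f"wrote combined.css ({len(combined)} bytes) from {len(parts)} files")
```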

4. Remove Unused Code, Plug-ins, and Settings

Sometimes it’s easier to just comment out the code we don’t need, but after a while, this stuff can become unnecessary clutter. This applies to code, images in a database, old plug-ins, themes, and even the settings and tables left over from an older theme or plug-in that has since been replaced. If you’re not currently using a theme or plugin, get rid of it. You can always download it again later. Chances are you’ve moved on to something sexier anyway.

5. Clean up Images

There are countless image optimization techniques that can be utilized to boost performance, some of which are more complicated than others. Some simple image tuning techniques to start with include cropping and resizing. It’s also important to serve the image at its real size, rather than adjusting it on the fly. For example, if your image is 1000 x 1000, you don’t want to use markup to display it at 100 x 100, because that means you’re just sending extra bytes. Instead, resize the image in an editor to 100 x 100 and use that smaller file. Additional image tuning techniques include reducing color depth, as well as removing any unnecessary image comments.
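
For instance, here is one way to do that resize offline with the Pillow library (my choice for illustration; any image editor works), so the 100 x 100 file is what you actually serve:

```python
# Pre-resize an image to its display size instead of scaling it in
# markup. File names are examples; Pillow is a third-party package.

from PIL import Image

with Image.open("product-1000x1000.png") as img:
    small = img.resize((100, 100), Image.LANCZOS)  # high-quality downscale
    small.save("product-100x100.png", optimize=True)
```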

6. Clean Out Your Database

This is probably the trickiest thing on the list. Have you removed old revision posts lately? What about unused images?

7. Use GZIP

The average website has grown 151% in the past three years, with an increase in both the number of requests and their size. GZip is a tool that can be used to combat this trend and reduce the weight of those requests. The easiest way to implement it is to add the compression script to the PHP in your site’s header. An in-depth explanation of using GZip can be found here.
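
Once it’s in place, it is worth verifying that responses really are compressed. A quick check with Python’s requests package (the URL is a placeholder) looks like this:

```python
# Confirm a page is served gzip-compressed by inspecting the
# Content-Encoding response header.

import requests

resp = requests.get("https://example.com/",
                    headers={"Accept-Encoding": "gzip"})
print(resp.headers.get("Content-Encoding"))  # prints 'gzip' when enabled
```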

Now that you’re done…

Ahh… Don’t you feel better? Now you can measure your results and compare them to your baseline. Even though some of these suggestions aren’t meant to drastically improve the speed of your site, making incremental improvements and keeping production organized will reap huge gains in the long run. The best thing you can gain is experience. Post the results of your optimization below! How did these tips work out for your website? Do you have any useful tips for performance tuning your site?

AppDynamics Review

The last app monitoring tool review on our tour is AppDynamics. There’s more than meets the eye with AppDynamics. In terms of features, AppDynamics is huge in both depth and breadth. The app has potential for use in large firms that want to keep track of everything going on in and with their application, because each account can have multiple users monitoring multiple web apps.

Setup

I noticed something strange when I began to install the agent: I didn’t have the option to select the PHP agent! This was unexpected because AppDynamics says they support PHP applications (Magento in our case). I had to get the PHP agent package from another AppDynamics account in order to get the agent working. The issue caused us to be late for the demonstration web meeting. Monitoring agents should be easy to install, and their support team was not as helpful as we had hoped.

Application

Once the agent was installed, we were on our way to exploring AppDynamics. The first page shows all the different applications being monitored. In our case MAGE_TEST is the group of servers housing the applications we’re testing.

After delving deeper into the MAGE_TEST application, we are greeted with the application dashboard. In the dashboard, I uncovered the Application Flow Map. While it may be superficial, it is one of my favorite features. It lays out your application network’s topology and basic metrics in a way that is easy to understand.


The Application Flow Map for our environment

Drill Down into a Web Transaction

Past all the eye candy, there is another tab that piques our interest. Clicking on Top Business Transactions neatly sorts the transactions the tool sees as most important.

We notice the business transaction catalog : product : view is taking 17.9 seconds. When we double-click it, we are shown the Application Flow Map associated with just that transaction. In this case, it’s just NODE_3 and the shared RDS database. From here, we select a transaction snapshot to “Drill Down” into. These snapshots show the slowest pages related to our transaction. Finally we get to the call graph, which is similar to New Relic’s transaction trace. The call graph shows us the slow code in the app, which can help developers pinpoint performance bottlenecks in the application layer. The Hot Spots section shows the slowest methods of that transaction. In addition, the SQL Calls tab shows queries sent to the database. This feature didn’t really exist in New Relic or AppFirst, so it was nice to see it for the first time in AppDynamics.

Other Features and Impressions

We’ve only gone over a few of the many features in AppDynamics. There are other features we liked but aren’t elaborating on here.

In general, AppDynamics is still a thorough and impressive app monitoring service. Despite the rocky start, the application remains a useful tool to anyone who wishes to optimize the performance of their web application.

AppFirst Review

Previously, the team checked out New Relic and all its capabilities. Even though we were impressed with the app monitoring and UI as-is, we missed out on some detailed server resource monitoring. We looked forward to the next player in the app and server monitoring services, AppFirst. At first glance, AppFirst is more simplified and monitors some of the same app stacks as New Relic, such as Java, PHP, and Ruby. In this post, I will go through the team’s experiences with AppFirst as a monitoring tool.

Setup

Like last time, the first thing to do was get the agent. AppFirst calls them “collectors,” but they are essentially the same thing as agents. The AppFirst support team reached out to us beforehand to make sure the collectors were configured properly. This was helpful to me because I had not set up the collector to monitor MySQL properly. All in all, installing and configuring the collector was not difficult, though later there were some confusing aspects about how servers were organized that I didn’t like.

Application

My first glimpse of the user interface was the Dashboard. Upon initially logging in, the Dashboard was empty. This should imply two things:

  1. There is a lot of customization needed.
  2. This is not for users who want data right away.

This is in contrast to New Relic, which had its graphs specialized for app performance out of the box. AppFirst, however, makes you choose those settings on your own.

The next tab over is the Workbench, which gives in-depth data of things like servers, alert statuses, and a summary table. You can dig deeper into a selected server. In our case, we are monitoring three Magento app servers. You can then check one of those servers and see alerts, running processes, and historical CPU and memory usage.

The Dataflow tab gives us insight into how AppFirst perceives our network. It does so in an interesting way. Just take a look at the screenshot of how the dataflow is represented for our 3-server Magento store.

You can hover your mouse over the nodes and get a glimpse of data transfer. I find this presentation awkward, especially when compared to how other app monitoring tools handle it. The Browse tab mostly shows running processes, which is not as useful for us at the moment. The Correlate tab, on the other hand, lets you select any two of the data types present in AppFirst and compare them.

These are the kinds of potentially useful features that the Web Performance Lab really looks for. A downside of this page is that it doesn’t give you the option to filter out unavailable data. This means you have to manually find usable data, so it’s important to make sure your collectors are getting valid data!

In Logs, you can diagnose problems with the server from the AppFirst web app. You can actually specify monitoring for any plaintext file, not just logs. The functionality here is similar to running tail -f on a log file, which is nice to have.
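
For readers who haven’t used it, tail -f streams new lines as they are appended to a file. A rough Python equivalent of what such log following does (the file path is an example) looks like this:

```python
# Follow a file as it grows, roughly like `tail -f`.

import time

def follow(path: str):
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip()
            else:
                time.sleep(0.5)  # no new data yet; poll again shortly

for entry in follow("/var/log/app/error.log"):  # example path
    print(entry)
```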

Impressions

Overall, the monitoring software is satisfying. Most pages give a hyperlink that allows you to share your monitoring data with third parties. An AppFirst account is required though, which I think defeats the purpose of having a shareable link in the first place. In addition, there are no transaction traces in AppFirst like there are in New Relic. The server scope is confusing at times, and there are annoying feedback forms that keep popping up. On the other hand, the user interface is straightforward and highly usable. One metric AppFirst has that other monitoring services lack is server cost over time. With AppFirst, you know to the dollar whether you’re meeting Service Level Agreements.

We’ve got one more application monitoring service to review. Coincidentally, their name is AppDynamics (App just seems to be a popular prefix). We’ll be checking in with them next time!

New Relic Review

As the name implies, the Web Performance Lab is all about performance optimization. We felt it was our duty to investigate server-side monitoring as part of our industry. We also needed a monitor that could serve us accurate, reliable data. Read on for a review of our experiences working with New Relic.

What is Server-side Monitoring?

In a nutshell, it is a way for you to watch your web and app servers for performance issues using a monitoring service. The lab’s first go-to was New Relic; a big player in the application performance monitoring arena. The goal was to monitor a smaller version of my scaling Magento project’s test environment.

Monitoring agents are programs that sit on the target server and collect data about the system to send to the monitor controller (New Relic). Once the agent is set up, we will execute a 5,000 VUser test and see how the system performs. This is important because we want a proof of concept that the load tests are indeed hitting our server.

Setup

After logging in, the user interface was pretty helpful in pointing me in the right direction. On the first page, I could see a red “add more” button, which I used to add test agents. There were a lot of steps required to get the agents functioning completely, because we had to use the right package file, install it, edit the php.ini file, and restart some services. However, within minutes I had an application agent up and running. Setting up the server monitoring agent was a breeze after installing the app agent. There were two setup processes for the two agents, which made me ponder: why are there even two? It would have been nice to have an all-in-one agent that could be installed once.

Navigation

I began exploring the application navigation pane after the agents were successfully registered. My first impression: Wow! How about those charts? Two vital pieces of data were the Apdex chart and Web Transactions. Apdex is a simplified service-level agreement that gives a single quantitative rating of how customers might be reacting to the site’s performance, based on response time. Find out more about Apdex here. Web Transactions allow users to go through the PHP transaction traces and pinpoint methods causing poor performance, and they can be sorted by four different conditions.
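
Coming back to Apdex: the formula behind the rating is simple, and a small sketch makes it concrete. This restates the public Apdex definition (with target time T), not anything New Relic-specific: responses at or under T count as satisfied, those up to 4T as tolerating, and the rest as frustrated.

```python
# Apdex score = (satisfied + tolerating / 2) / total samples.

def apdex(response_times_s: list[float], t: float = 0.5) -> float:
    satisfied = sum(1 for r in response_times_s if r <= t)
    tolerating = sum(1 for r in response_times_s if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times_s)

print(apdex([0.2, 0.4, 0.9, 1.6, 3.0]))  # -> 0.6
```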

Experiment

Now that the app/server agents and test environment were ready to go, it was time to begin testing. While the test was running, the New Relic team gave us a comprehensive demo of their monitoring application.

Some of the most time-consuming transactions in our test environment

If you’re not interested in drilling into the details of the application, you can always just inspect the response-versus-time graph. This helped the team see the effects of a load test on a server over long periods of time. In fact, there is some correlation between the load test results from LoadStorm and the response time graphs from New Relic.

LoadStorm test results and New Relic response time, aligned by time (approximate)

It’s not perfect because the scaling is off, but there is definitely correlation between the two data sets. This is an indicator that our system is being successfully load tested and monitored.

Conclusion

Overall, the team was impressed with the features and ease of use that New Relic offered. Their monitoring services cover both the application and the server. A further benefit is the quick-filtered transactions, which are useful for web developers and performance engineers alike. The support team was helpful in getting us well acquainted with the application. I am happy to give New Relic a positive review, and I would recommend it to anyone looking to optimize their web app because of the advanced detail provided. Subscribe to our feed and find out what we think of the next monitoring service provider: AppFirst.

The Scaling Magento Project – Part one

Introduction

When December rolled around, I had been with the Web Performance Lab for four months. By then, it was time for another realistic experiment. For months, my focus had primarily been AWS management and scaling Magento, in addition to my blogging duties. I was asked to approach this project as if I had been dealing with one of our valued clients. We had one of our software engineers act as an e-commerce vendor who happened to use Magento. He laid out some requirements, then I discussed them with him and went to work. I was pretty excited about this project because it required me to use my skills as a systems admin. This gave me a chance to hammer my test environment with VUsers while keeping scalability in mind.

Goal

When I approached our “client,” he stated that he needed a store that could handle 5,000 concurrent VUsers! This upgraded the challenge a bit for me, and I was willing to do the best I could. So here’s what I wrote up in the statement of work as our main goal:

The customer is expecting a Magento website that will not reach a fail-like state until at least 5,000 concurrent VUsers are reached while running a load test from the Stress account on LoadStorm PRO [testing] system.

The “fail-like” state I’m talking about is a 35-second peak response time at any time during the test.

Planning and Approach

I did a bit of digging to find some good preliminary articles. Here’s one that was extremely useful and that I ended up following. The topology was within my ability and project scope, yet still promised scalability. Since the time limit for the project was one work week, I spent half of that time on preparation. I broke the project into two testing iterations: a baseline environment (Test Environment #1) and a scaling environment (Test Environment #2).

Project Scope

In general, I was constrained to using Amazon Web Services for my infrastructure. Due to AWS pricing, we imposed a strict limit of $25 per test hour. This limited me to three of the largest instances, which was more than enough for our needs. Still, financial constraints are an important part of any project.

Figure 1. Topology for Test #1

I started with a single server as my baseline benchmark. This server also utilized a CloudFront distribution; since we had been so comfortable with CDNs in the past, we might as well take advantage of one in our benchmark.

Test Environment #2

Figure 2. Topology for Test #2

This topological structure is more involved. In general, an elastic load balancer distributes traffic between three Magento server nodes. CloudFront is still distributing static content, and there is also a centralized MySQL database on an RDS instance.

We’ll see you soon for part two of the experiment!

The DeliverGOOD Experience

A few months ago, shortly after joining the Web Performance Lab, I led a project on site optimization. The site in question was, as mentioned in a couple of earlier blog posts, the Magento-based web store at DeliverGOOD. There were some issues, because the DeliverGOOD web store could handle at most 100 concurrent users.

Since the Web Performance Lab is all about web performance testing and optimization, we looked into making their web store more scalable. We had a few ideas about what we could implement on their site to improve page speed and increase scalability. We had done this before with our own WordPress blog! Robb Price, the founder of DeliverGOOD, let the team and me take the wheel on this project. Our requirements were not formally defined: Robb simply expected us to make his site scale, and we expected the same, in hopes of learning from it.

The following are some personal learning experiences I gained from the project. Consider them lessons learned:

Keep Tabs

Perhaps the most important thing you can do is keep tabs on your work. Document, record your screen, handwrite; do whatever it takes to be able to come back to it later. Projects like this tend to grow in size and complexity, and you find out you need to do more and more things. Time can be your enemy too: you forget small steps or entire processes if you don’t keep track.

Documenting what you do also helps you look back and determine other steps you could have taken. What if you move the CDN link to a different configuration scope in Magento? You’ll know what worked and what completely broke the admin panel backend (this has scared us a few times). Logging helps because it shows your peers what you’re doing! It proves that work is being done, shows the results, and conveys what the project entails even if a team member is not around to explain it. It’s also valuable if a team member needs to start where you left off.

Dig Resources

Whenever you’re not writing documents, you should be reading documents. They can come in the form of blog posts, step-by-step guides, or official technical documentation. More than likely, the problem you are running into has been seen and fixed before. Referencing other people’s work allows you to save time and get more real work done. Some of the documents we were able to make use of covered linking the CDN to Magento, changing admin settings via phpMyAdmin, and creating CNAMEs.

Checklists

Checklists are a nifty tool when you deal with routine, procedural tasks. For example, I made a checklist that went through setting up an EC2 instance with Magento, then installing Varnish Cache on that instance. Making checklists is like documenting, but the process is often more rigid, since a checklist should definitely be reproducible. After all the documenting, reading, and list-making, you might still find yourself at a roadblock. When this happens, it is a good time to take a moment to stop and think. Look back at what you’ve documented. See if it makes sense, if your workflow is staying on track, and whether you are ultimately making progress. This can even be considered a break.

Think Ahead of the Curve

Maybe it’s ironic that I mention this last, but be sure to plan ahead. One of the glaring problems we had while doing site optimization was figuring out that we needed more access than Robb had initially given us. We started out just needing access to the Magento admin panel, but then we needed the web hosting console, phpMyAdmin, and eventually SSH access. It might have helped to have all access rights in the beginning, as some time was wasted just waiting for an email; better communication would have helped expedite the project. We also didn’t know what kind of beast Magento would be; a bit of preliminary research could have better prepared us for the road ahead.

Working with Robb Price and DeliverGOOD was a purely experimental procedure. We wanted to learn how optimization could be applied to Magento, a popular e-commerce platform that many people use. Using some of my background in Linux and web apps, I took away knowledge about SQL, CloudFront on AWS, and especially Magento from this project. Like with everything in life, we all look back on events and say to ourselves, “Huh… You know… I could have done that better.” Special thanks to Robb Price and the DeliverGOOD team for being patient and trusting the Web Performance Lab with their server.
