The post Load Testing Mobile Apps appeared first on LoadStorm.
As mobile app usage increases, load testing mobile apps is becoming a key part of the software development lifecycle, ensuring your application is ready for traffic. If your mobile app interacts with your application server via REST or SOAP API calls, then LoadStorm PRO can load test mobile app servers like yours. LoadStorm PRO uses HTTP Archive (HAR) recordings to simulate traffic, and we normally create these recordings using a browser's developer tools on the Network tab to record all requests sent to the target server. In this article, I'm going to introduce you to two ways to make recordings of user traffic that can be used in LoadStorm. The first involves packet capturing from your mobile device, and the second involves a Chrome app called Postman used in combination with the Chrome developer tools.
If you like to keep things simple, this method should save you the trouble of manually creating requests as shown in the next method. You'll need a packet capturing mobile app (such as tPacketCapture) that stores requests in the PCAP file format and allows you to share the PCAP file. The PCAP file generated by the app can then be converted to a HAR file and uploaded into LoadStorm for use in a load test. To do the conversion, you can use the PCAP Web Performance Analyzer for free without any setup, or you can install your own converter from the pcap2har project on GitHub.
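Before uploading, it can be handy to sanity-check what a recording actually captured. A HAR file is plain JSON following the HTTP Archive format, so a few lines of Python can list its requests; the two-entry archive below is made up for illustration:

```python
import json

# A minimal, made-up HAR structure: real files have many more fields,
# but every HAR nests its requests under log.entries.
sample_har = {
    "log": {
        "entries": [
            {"request": {"method": "GET", "url": "https://example.com/api/items"},
             "response": {"status": 200}},
            {"request": {"method": "POST", "url": "https://example.com/api/login"},
             "response": {"status": 302}},
        ]
    }
}

def summarize_har(har):
    """Return (method, url, status) for every recorded request."""
    return [
        (e["request"]["method"], e["request"]["url"], e["response"]["status"])
        for e in har["log"]["entries"]
    ]

# With a real file you would first do: har = json.load(open("capture.har"))
for method, url, status in summarize_har(sample_har):
    print(method, status, url)
```

This is just a quick inspection aid; LoadStorm itself does all the parsing once the HAR is uploaded.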
For Android devices, this method works as follows:
For iOS devices:
At this time, packet capturing mobile apps are only offered on Android devices; Apple products do not support direct packet capture services. However, if you connect your iOS device to a Mac via USB, you can use OS X-supported software (such as Cocoa Packet Analyzer) to capture the packets, as described on the Apple developer site.
To make a recording with the Postman app, follow these steps:
REST
Making RESTful GET or POST requests relies heavily on your knowledge of how your mobile app interacts with the application server. If you have a network log that shows the incoming requests from the mobile application, it can simplify reconstructing those requests in the Postman app. Postman actually offers 11 methods for interacting with an application server, but of these we'll typically only need GET and POST requests:
In some cases you will also need to add some form of authorization token in the request headers to let the application server know you have the rights to GET or POST information. Postman offers a few options to assist with authorization settings (e.g. Basic Auth, Digest, and OAuth 1.0), but you can always manually input the authorization header if you know the header's name, its value, and the format to send it in.
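As a rough sketch of what reconstructing those calls looks like outside of Postman, here is how a GET and a POST with a manually supplied authorization header might be built in Python. The endpoint, parameters, and token are all hypothetical placeholders you would replace with values from your own network log; building the `Request` objects does not actually send anything.

```python
from urllib.request import Request
from urllib.parse import urlencode

token = "example-token-123"  # hypothetical bearer token from your network log

# A GET request with the authorization header supplied by hand.
get_req = Request(
    "https://api.example.com/v1/items?category=shoes",
    headers={"Authorization": "Bearer " + token},
    method="GET",
)

# A form-encoded POST, again with the auth header added manually.
post_req = Request(
    "https://api.example.com/v1/cart",
    data=urlencode({"item_id": "42", "qty": "1"}).encode(),
    headers={
        "Authorization": "Bearer " + token,
        "Content-Type": "application/x-www-form-urlencoded",
    },
    method="POST",
)

print(get_req.get_method(), get_req.full_url)
print(post_req.get_method(), post_req.get_header("Authorization"))
```

The same header names and body format are what you would type into Postman's headers and body tabs when recreating the requests there.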
SOAP
Even though Postman is primarily designed to work with RESTful API calls, it can also work with SOAP.
To make SOAP requests follow these steps:
Additional information about creating SOAP requests can be found on the Postman blog.
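Under the hood, a SOAP call is just an HTTP POST with an XML envelope and a couple of extra headers, which is why Postman can handle it. Here is a minimal sketch; the service URL, namespace, and GetItemPrice operation are invented for illustration, and the real envelope should come from your WSDL or network log:

```python
from urllib.request import Request

# An invented SOAP 1.1 envelope for a hypothetical GetItemPrice operation.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetItemPrice xmlns="http://example.com/store">
      <ItemId>42</ItemId>
    </GetItemPrice>
  </soap:Body>
</soap:Envelope>"""

# SOAP rides on a plain POST: the XML goes in the body, and the
# Content-Type and SOAPAction headers tell the server how to route it.
soap_req = Request(
    "https://api.example.com/StoreService.asmx",
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/store/GetItemPrice",
    },
    method="POST",
)

print(soap_req.get_method(), soap_req.full_url)
```

In Postman, the equivalent is pasting the envelope into the raw body and adding those two headers by hand.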
Video Tutorial
This video is a short guide on recording a HAR using Postman in combination with the Chrome developer tools.
Also check out Ruairi Browne’s Postman tutorial covering RESTful API GETs and POSTs to Twitter as well as OAuth1.0 credentials.
To load test a mobile application using LoadStorm PRO, a HAR recording must be made to simulate the traffic. Once you've decided on a recording method, all you have to do is upload your HAR file into LoadStorm, and you'll be on your way to load testing your app and ensuring end-user satisfaction. If you have questions or need assistance, please contact us, visit our learning center, or leave a comment below.
The post Super Bowl Site Performance appeared first on LoadStorm.
But while we may love football, we all know that the game is only half of the competition. At $4.5 million apiece, Super Bowl commercials only get a few seconds to vie for our attention, beat the competition, and make a valuable impression. After previous notorious site crashes that resulted in harsh social media backlash, we wondered whether the companies willing to spend so much for 30 seconds of airtime had also invested in the site performance and scalability needed to handle the traffic those $4.5 million ads would drive. We tested each advertiser's site with 2 virtual users, requesting the home page every 30 seconds over the duration of the game. Here's what we found:
The post Introducing QuickStorm appeared first on LoadStorm.
Here at LoadStorm, we're happy to announce one of our newest free features! Are you interested in web development? Have you created your own site? QuickStorms are small, short load tests that give you initial insight into site performance, and they can also be used to benchmark performance before you make enhancements on your site. All you have to do is enter your site's URL, and LoadStorm will generate a load test against it, scaling from 1 to 10 virtual users over 10 minutes.
LoadStorm PRO automatically creates a recording of every request made on the URL you provide. Next, we simulate the traffic escalating from one to ten virtual users, targeting that URL.
The test results summary provides totals for key metrics from the test. The total amount of requests as well as the overall average response times for those requests are displayed.
Since it’s a QuickStorm, there will only be one script. All of the servers used for the URL will be displayed, so if you want to view performance by servers, you can filter for that.
What you want to see here:
In general, you want to see good average response times, peak response times that are near the average, no errors, and low throughput. In addition, low total data transferred is a good sign. It’s a good idea to avoid page bloat.
This includes all the response times data for the site. Every request included on the site is displayed, detailing the amount of time each resource was requested, the size of the request, the average amount of time it took to get a response, and the peak amount of time it took to get a response.
What you want to see here:
Good response times for requests are under 250 ms, while 500 ms is still generally acceptable. Any longer and you’ll be in trouble if the volume of traffic increases.
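Those thresholds can be summarized in a few lines; the cutoffs below simply restate the numbers above, with the function name being my own invention:

```python
def rate_response(ms):
    """Classify an average response time in milliseconds per the guidance above."""
    if ms < 250:
        return "good"
    if ms <= 500:
        return "acceptable"
    return "trouble"  # risky if traffic volume increases

print(rate_response(180), rate_response(400), rate_response(900))
```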
If any errors occurred during the test, the number and types of errors will be shown here. The Errors by Resource table will list which resources yielded errors. Usually, errors won't occur at such a low volume of traffic, but sometimes web developers are simply unaware of a problem. Eliminating these errors is an easy way to boost performance.
What you want to see here:
You want to see 0 errors here. Any errors at this low volume are completely unacceptable. You can do better than that!
In LoadStorm, pages refer to each unique web page hit. Since QuickStorms are only performed on a single URL, you can expect to see only that single page listed here. Statistics for this page include the number of times each page was requested. For QuickStorms, this number usually reaches just over 100 times. This means your page was visited by virtual users just over 100 times in ten minutes!
During a regular load test, you would typically make a recording of several different pages to mimic the traffic you expect to see on your site. For example, a blog would include a recording of a user browsing to the homepage as well as many different posts.
What you want to see here:
You want to see zero failures for your page. When it comes to average and peak completion times, you should aim for pages completing in under 5 seconds.
This is a feature you won’t be able to experience through QuickStorms, so you can expect to not see any data here. A transaction is a collection of requests that you designate in order to monitor specific site transactions. One example of this would be a search transaction. It’s important to have stellar performance in this area, and by selecting to monitor the search transaction, you can get feedback here.
What you want to see here:
Nothing. To use the transaction feature, sign up for a free account with LoadStorm PRO.
This data includes the results of the different requests, summarized by time. Here you can easily view any differences in response times between the first minute of the test (1 vuser) and the last minute of the test (10 concurrent users).
What you want to see here:
You want to see comparable response times at the beginning of the test, and the end of the test. Drastic differences can point to an underlying problem that’s just waiting to be exposed.
All of the raw test result data is available for download in a CSV file. Like the Requests by Time, this data usually becomes available within 10 minutes of test completion. If you would just like to show off your results, we also have the test statistics compiled for export as a CSV report.
What you want to see here:
Your site’s excellent results in a pretty report you can show off to your friends or enemies! Try it! It’s fun! And you get a URL you can share. Please note that QuickStorms are limited to 10 per day!
The post Cyber Monday Performance Evaluations appeared first on LoadStorm.
From Amazon to Argos, online retailers are experiencing more traffic than ever this holiday season. This Cyber Monday reached a record high of $2.68 billion! Competition is fierce, and in this game, seconds = $$$. This year, we selected 29 major e-commerce sites and used LoadStorm to run several tests comparing their performance on the Wednesday before Thanksgiving with Cyber Monday. We created scripts for each site to model typical e-commerce user activity: each script would hit the homepage, search for a product, add a product to the cart, and then visit the cart. Then we ran our performance tests for 10 minutes at a time, scaling from one to ten virtual users (vusers).
Here’s what we found:
Out of the 29 companies, 7 slowed down:
Walmart’s average page completion time increased from 2.9 seconds to 7.8 seconds. That’s huge! Amazon, on the other hand, remained consistent, with an average page completion time of just 1.6 seconds on both days. Average Page Completion Time remained nearly the same for the majority of the sites.
Six companies sped up:
Our preliminary load tests revealed zero performance errors across the board, with one exception. Monday, however, was a different story, as we saw five different companies experience performance errors. This included request read timeouts, request connection timeouts, and even some 503 (service unavailable) errors.
The exception to the increase in performance error rates was Best Buy. Interestingly, we saw Best Buy experience seven request read timeout errors on Wednesday (on product pages and search results), but none on Monday. This seems to corroborate the fact that they became overwhelmed with traffic over the weekend, but they appeared to have recovered gracefully by Cyber Monday.
Every site we tested experienced high peak completion times. Some of the best performers with the lowest peak page completion times on both Wednesday and Cyber Monday included Toys “R” Us, Brookstone (one of our customers – yay!), Ikea, and Amazon, with peak page completion times deviating only 7% from their means.
Web page tests on each company's home page were performed during testing on Wednesday and Monday as well. Surprisingly, the overall trend was a decrease in page load times. The average page load time from our web page tests decreased from 9.3 seconds to 5.6 seconds. We're impressed!
Nobody crashed while we were running our performance tests. Most sites appeared to perform reasonably well, but knowing that just a one-second delay could cost Amazon over $1.6 billion in sales over the course of a year, those few errors matter immensely. Just 250 milliseconds, either slower or faster, is the magic number for competitive advantage on the web. So while none of the sites crashed entirely, whether or not they beat out their competition is another story.
Please note that none of the companies involved were contacted nor paid for participation in our experiments. These were just for fun. Here’s the complete list of the companies we tested:
The post Tips and Tricks to Troubleshooting PRO Scripts appeared first on LoadStorm.
Overview

When a recording is first uploaded, it behaves the same as when it was recorded; that is, it makes all the same requests, with no differentiating qualities. Until you parameterize it, our application will simply repeat all of the GETs, POSTs, and occasional PUT requests the same way they were recorded, though we do handle a few things automatically, such as cookies and most form tokens.
Often new scripts contain a bundle of requests, only a few of which need parameterization to work as expected for every VUser. These problems often surface when a request's recorded status code does not match its status code from the last script execution. A problem such as a failed login request can cause subsequent requests to fail, creating a domino effect. For this reason, it's always a good idea to start from the earliest requests and work your way down, checking that requests behave as expected during a single script execution. Even after everything is fixed, a small load test is a good way to check that the script also works as expected for concurrent users.
When parameterizing a script, here are the key recommendations:
Switch to the Manage Server sub-tab and ignore all non-essential third-party servers. The goal is to test your web application, not someone else's. If you feel the service they're providing is critical, please contact us for help with server verification. After these have been ignored, execute the script again. This will make it easier to determine which requests you need to work on, because the unnecessary requests will now show a status of ignored instead of a response code.
You can make it easier to find problematic requests by using our filtering options. To quickly bring all of the apparent errors to your attention, click the All Errors radio button.
For the less obvious errors, you'll need to look for mismatched status codes by clicking the Recorded vs Last drop-down and selecting the Status Code Mismatch option. As you compare the Recorded Status column against the Last Status column for each request, disregard requests that show cached or ignored for their last status. Typically the less obvious problems are requests that changed from a status 200 to a 302, or that were meant to be a 302 and now give a status 200.
If you’re unfamiliar with status code these are some of the most common:
Another option to filter down to what could be considered your most crucial requests is changing the All mime-types drop-down and selecting the text/html option. These requests usually represent each page of text content that your users would need to read and interact with.
Once you’ve identified a request that you feel is causing problems double-click that request to open the details window. From here you can compare all of the details of a request from its recorded values with the values in the last execution. Any values that differ in the last execution will appear in red. Usually there are values that must appear in red because they need to be dynamic per user. Often these are tokens of some kind such as authentication, userIDs, sessionIDs, viewstates, incremental values, etc. If you find that something appears like a randomly generated token, but is colored black for the last execution then that indicates it was repeated as a static copy of the recorded value. This can often be the cause of problems with a request and cause a domino effect of problems with subsequent requests that rely on passing that token around or other chain of a events that need to occur. To fix this you’ll need to look at prior requests to see where this token is being set, and then modify the problematic request to grab the token from the prior request that would assign it.
You can often find important information regarding the behavior of a request in the response content. This information could let you know many things such as why your login attempt failed, if you received a stack trace from .NET, or if the page contains some dynamic tokens that are needed in later requests.
We also provide you with a hyperlink near the top-right above the last execution half of the response text. This link will let you preview the text of the last execution in a new browser tab. It is especially useful if you’d prefer to read what’s on the page instead of scrolling through HTML code.
If you’re concerned that a request is going to deliver a custom error page that shows a status code 200 which isn’t shown as an error during a load test, then fear not. We have a feature that will allow you to put in place a validation that will flag it as an error under the conditions that you specify. Let’s say you’re expecting a login to send you back to the homepage with some message at the top welcoming the user, but in this case the login failed and now the server is still giving us a status 200 delivering us to the homepage without a welcome message such as “Welcome to our store, Michael!”. You can select the homepage request that comes after the login POST request, and then click the Validate Response Content button. This will display a modal window that allows you to choose text that you expect to see, or that you do not wish to see. So following my example we would expect to see the words Welcome to our store in every response we get for this request, and that’s what you would enter in the expected string field. Now even though this request isn’t appearing as an error under normal conditions we can flag it as an error during a load test.
Whenever you identify a request that needs something changed to a dynamic value, you'll need to parameterize it using our custom data selector. Depending on the need, select the request and click the appropriate modification button to get started. In the modal window that appears, select the parameter you wish to modify, change the modification type to custom, and click the Select Data button. You'll now be in the custom data window, where we gather all kinds of data for you to parameterize with from one convenient location. I'll leave out the User Data and Generated Data tabs for now, because this is really meant to focus on dynamic data grabbed from the responses to requests in your script.
Common session tokens to look for include ASP.NET_SessionId=, JSESSION=, __VIEWSTATE=, form_id=, etc. I've also seen URL paths that are built dynamically into the anchor tags of links based on your session with the server. That could look like href="/test/user/12JzntR93/profile", in which case you would want to look for the parts just before the string. Once you find it, you'll need to define a unique start string delimiter and an ending string delimiter to let our application know what the string is between. A quick example of this would be an anchor tag like <a href="/some/path/uniquestring">link</a>, for which we would enter a Start String Delimiter of [<a href="/some/path/] and an Ending String Delimiter of [">link</a>], where I'm using the [ ] brackets to represent the input box itself.
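Conceptually, the start/end delimiter pair just marks off the substring between them in a prior response. A rough sketch of that extraction, using the anchor-tag example above:

```python
def extract_between(text, start, end):
    """Return the substring between the first occurrence of start and end."""
    i = text.find(start)
    if i == -1:
        return None  # start delimiter not found in the response
    i += len(start)
    j = text.find(end, i)
    return text[i:j] if j != -1 else None

# The response body and delimiters from the anchor-tag example.
response = '<a href="/some/path/uniquestring">link</a>'
token = extract_between(response, '<a href="/some/path/', '">link</a>')
print(token)  # uniquestring
```

LoadStorm performs this extraction for you once the delimiters are defined; the sketch is only meant to show why the delimiters must be unique within the response.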
The post 6 Facts About Cyber Monday Every E-Commerce Business Should Know appeared first on LoadStorm.
1. Cyber Monday is Growing

Adobe reported that Cyber Monday e-commerce sales in 2013 reached $2.29 billion – a staggering 16% increase over 2012. comScore reported that desktop sales on Cyber Monday 2013 totaled over $1.73 billion, making it the heaviest US online spending day in history.
With these kinds of numbers, only time will tell how long Cyber Monday will continue to grow. But one thing is certain, Cyber Monday is the single most important day of the year for e-commerce businesses.
Sites only designed for desktops are missing out on a huge chunk of the market. According to IBM’s Cyber Monday report, more than 18% of consumers used mobile devices to visit retailer sites. Even more impressive is the fact that mobile sales accounted for 13% of all online spending that day – an increase of 96% over 2012.
While making an e-commerce application mobile isn't easy, it's definitely not something you can skip anymore!
Even with the surge of traffic on Cyber Monday, web performance is absolutely critical to success. Did you know that studies have shown:
Customers don’t have any patience for slow sites and the fact is that if a site isn’t fast, they will spend their money elsewhere. The peak load time for conversions is 2 seconds and just a one second delay in load time causes:
For sources and more statistics of the impact of web performance on conversions, check out our full infographic.
Fast and scalable wins the race!
In 2012, the standout crash was finishline.com; in 2013, it was Motorola. In both cases, the heavy load of traffic slowed the websites to a crawl, caused lots of errors, and eventually crashed them completely.
According to Finish Line CEO Glen S. Lyon, Finish Line’s new website launched “November 19th and cost us approximately $3 million in lost sales . . . Following the launch, it became apparent that the customer experience was negatively impacted.” To read more about Motorola’s debacle in 2013, check out our recent blog post: Cyber Monday and The Impact of Web Performance.
For better or worse, many shoppers are sharing their experiences on social media. According to OfferPop, Cyber Monday accounted for 1.4% of all social media chatter that day last year. This could be excellent exposure, with people sharing the great deal you are offering, or a PR nightmare.
Unprepared businesses will not only lose out on sales, but unhappy shoppers will also share their bad experience with friends. Since we are using Finish Line as our example of high-profile website crashes, it seems fitting to illustrate how social media played into their painful weekend. Angry tweets and Facebook messages popped up throughout the weekend; here is a small list of some of the best, courtesy of Retail Info Systems News:
“The site is slower than slow.”
“I have had the flashing Finish Line icon running on my page for over 30 minutes now trying to confirm my order. And I have tried to refresh and nothing.”
“I have been trying for 2 days to submit an order on your site – receive error message every time.”
“Y’all’s website is down. Are you maybe going to extend your sales because of it?”
“Wait, you schedule maintenance on Cyber Monday?”
“Extremely disappointed! Boo! You are my go to store, and the one day you have huge Internet sales your website doesn’t work.”
Want to avoid being like Finish Line and Motorola? There is only one way to ensure that your website will absolutely, positively, without a doubt be fast and error free under pressure on Cyber Monday: performance testing.
Performance testing is a critical part of development, and leaving it to the last minute or testing as an afterthought is a recipe for disaster (see the examples above). Performance testing is best used as part of an iterative testing cycle: run a test, tune the application, run a test, review changes in scalability, tune the application, run a test, and so on until the desired performance is reached at the necessary scale, whether that is 300 concurrent users or 300,000. Without time to tune an application after testing, the application may very well be in hot water with no way out before the big day.
There are tons of performance testing tools to choose from (I personally recommend LoadStorm ; ) but whatever tool you use, the moral of this story is: Test early, test often.
The post LoadStorm PRO Now with Transaction Response Timing – What does this mean for you? appeared first on LoadStorm.
Today, LoadStorm published a press release announcing our new Transaction Response Timing. For many professional performance testers, especially those used to products like HP LoadRunner or SOASTA CloudTest, wrapping timings around logical business processes and related transactions is a familiar concept. For those of you who aren't familiar, I'll explain.

Transaction Response Time represents the time taken for the application to complete a defined transaction or business process.
The objective of a performance test is to ensure that the application is working optimally under load. However, the definition of “optimally” under load may vary with different systems.
By defining an initial acceptable response time, we can benchmark whether the application is performing as anticipated.
The importance of Transaction Response Time is that it gives the project team/application team an idea of how the application is performing in terms of time. With this information, they can tell users/customers the expected time for processing a request, or understand how their application performed.
The Transaction Response Time encompasses the time taken for the request to travel to the web server, be processed by the web server, and be sent to the application server, which in most instances will make a request to the database server. All of this is then repeated in reverse, from the database server to the application server, to the web server, and back to the user. Take note that the time the request or data spends in network transmission is also factored in.
To simplify, the Transaction Response Time is comprised of the following:
Processing time on Web Server
Processing time on Application Server
Processing time on Database Server
Network latency between the servers, and the client
The following diagram illustrates Transaction Response Time.
Transaction Response Time = (t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9) X 2
Note: X 2 represents factoring of the time taken for the data to return to the client.
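Plugging in some made-up segment timings makes the formula concrete; the values below (in milliseconds) are purely hypothetical stand-ins for t1 through t9:

```python
# Hypothetical one-way segment timings t1 .. t9, in milliseconds:
# network hops plus web, application, and database server processing.
segments_ms = [12, 3, 45, 5, 60, 5, 45, 3, 12]

# X 2 factors in the time taken for the data to return to the client.
transaction_response_time = sum(segments_ms) * 2
print(transaction_response_time, "ms")
```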
Measurement of the Transaction Response Time begins when the defined transaction makes a request to the application. The time is measured from there until the transaction completes, before proceeding with the next subsequent request (in terms of transactions).
Transaction Response Time allows us to identify abnormalities when performance issues surface. These show up as transactions whose response times differ significantly (or slightly) from the average Transaction Response Time. With this, we can correlate other measurements, such as the number of virtual users accessing the application at that point in time and system-related metrics (e.g. CPU utilization), to identify the root cause.
With all the data that has been collected during the load test, we can correlate the measurements to find trends and bottlenecks between the response time, and the amount of load that was generated.
Using Transaction Response Time, the project team can better relate to their users by using transactions as a common language that their users can comprehend. Users will know whether transactions (or business processes) are performing at an acceptable level in terms of time.
Users may be unable to understand the meaning of CPU utilization or Memory usage and thus using the common language of time is ideal to convey performance-related issues.
The post Steps of Performance Testing appeared first on LoadStorm.
Performance testing is typically done to help identify bottlenecks in a system, establish a baseline for future testing, or support a performance tuning effort. Some performance tests are used to determine compliance with performance goals and requirements, and/or collect other performance-related data to help stakeholders make informed decisions about the overall quality of the application being tested. In addition, the results from performance testing and analysis can help you estimate the hardware configuration and scale required to support the application(s) when you “go live” to production. Follow these best-practice steps of performance testing.

Identify the physical test environment and the production environment, including the hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.
Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.
Identify key scenarios and determine variability among representative users, such as unique login credentials and search terms. The team must also determine how to simulate that variability, define test data, and establish metrics to be collected. Then consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
Develop the performance tests in accordance with the test design best practice.
Run and monitor your tests. Validate the tests, test data, and results collection.
Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
Performance testing is a critical part of the application development process. It is very important for testing to be integrated throughout production, not just tacked on the end as an afterthought. Additionally, testing should be viewed as an iterative process of develop, test, adjust or tune, test.
The steps described above are simply a guideline, and each application will have unique needs and challenges to face when testing. Professional performance engineers, like the consultants at LoadStorm, have the knowledge and expertise to help any development team overcome these unique challenges and reach performance goals.
The post Steps of Performance Testing appeared first on LoadStorm.
You probably have been wondering why I’ve posted so infrequently over the past year. We have been bombarded with emails and phone calls demanding more blogging that includes my extremely dry, obtuse humor. So, in the interest of global stability and national security, I must acquiesce to the will of the masses. Right. That’s a joke. I’m only slightly more popular than Justin Bieber. If you are a serious tech geek, you’ll need to look up that Bieber reference.
Web performance is why you read our blog, and web performance is my life. I want our blog to contain the most interesting information about load testing, page speed, and application scalability. In order to deliver on that goal, we came up with the concept of using LoadStorm and other tools to gather actual performance data regarding as many web applications as possible.
Thus, the Web Performance Lab was born.
Amazon EC2 General Purpose Types of Server Instances
WPL (I’ll abbreviate because we geeks love acronyms) gives us a playground to satisfy our curiosity. I want to know:
There are hundreds of similar questions that I ponder with Elijah Craig. He and I are very productive in the evenings, and he helps me plan experiments to solve the riddles of web performance that appear to me in visions after a long day of work in the load testing salt mines.
U.S. Online Retail Sales 2012-2017
With over $250 billion of online sales in the U.S. alone during 2013, and with over 15% annual growth, how could we ignore ecommerce? It’s the biggest market possible for our Web Performance Lab to address. The stakes are enormous. My hope is that other people will be as interested as I am.
Cyber Monday 2013 generated $1.7 billion in sales for a single day! What ecommerce applications are generating the most money? I doubt we will ever know, nor will the WPL answer that question. However, some of you reading this blog will want to get your share of that $300 billion this year, and the $340 billion in 2015, so I’m certain that you need to understand which online retail platform is going to perform best. You need to know, right?
Cyber Monday sales data
We have been running some of these experiments during the past few months. Esteban shared some of his results in blog posts earlier. My problem with his work is that some of the conclusions aren’t as solid as I would prefer. I spent some time reviewing his results with him in a conference room recently, and I poked some holes in his logic.
Now don’t get me wrong, I am an Esteban fan. He is a friend and a high-character guy. That said, we all learn from experiments. We try, we ponder, we learn. That’s how humans gain understanding. As a child you figure out the world by putting your hand on a hot stove. You register that learning experience in your brain, and you don’t do it again. You find out the best ways to accomplish objectives by failing. Just ask Edison. He figured out 1,000 ways NOT to create a functional lightbulb before he found the correct way. So it is with WPL. We are learning by trying.
Therefore, we are beginning a new series of experiments on ecommerce platforms. We will be publishing the results more quickly and with less filtering. We hope you find it useful and interesting. Please feel free to comment and make suggestions. Also, if you disagree with our statistical approach or calculations, please let us know. Recommendations are also welcome for ways to improve our scientific method employed during the experiments.
The post Web Performance of Ecommerce Applications appeared first on LoadStorm.
I’ve had the pleasure of working with Andy Kucharski for several years on various performance testing projects. He’s recognized as one of the top Drupal performance experts in the world. He is the Founder of Promet Source, a frequent speaker at conferences, and a great client of LoadStorm. As an example of his speaking prowess, he gave the following presentation at Drupal Mid Camp in Chicago in 2014. Promet Source is a Drupal web application and website development company that offers expert services and support. They specialize in building and performance-tuning complex Drupal web applications. Andy’s team worked with our Web Performance Lab to conduct stress testing on Drupal Commerce in a controlled environment. He is skilled at using LoadStorm and New Relic to push Drupal implementations to the point of failure. His team tells me he is good at breaking things.
In this presentation at Drupal Mid Camp, Andy explained how his team ran several experiments in which they load tested a Commerce Kickstart Drupal site on an AWS instance, then compared how the site performed after several well-known performance tuning enhancements were applied: the Drupal cache, CSS/JS aggregation, Varnish, and an Nginx reverse proxy.
View the SlideShare below for a summary of why web performance matters, and to see how they used LoadStorm to prove that they were able to scale Drupal Commerce from a point of failure (POF) of 100 users to 450 users. That is a 4.5-fold improvement in scalability.
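As a rough illustration of one of those tuning layers, an Nginx reverse proxy that caches anonymous page views in front of a Drupal backend might look like the sketch below. This is a hypothetical minimal configuration, not the one Promet used; the backend address, cache sizes, and lifetimes are placeholder values, and a production setup must also make sure authenticated traffic (Drupal SESS* cookies) bypasses the cache.

```nginx
# Hypothetical sketch: cache anonymous Drupal traffic at the proxy layer.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=drupal:10m
                 max_size=1g inactive=10m;

upstream drupal_backend {
    server 127.0.0.1:8080;   # placeholder: Apache/PHP-FPM backend
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://drupal_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_cache drupal;
        proxy_cache_valid 200 5m;   # serve successful pages from cache briefly
        # Real deployments must also skip the cache for logged-in users,
        # e.g. by keying on Drupal's session cookie.
    }
}
```

The point of the experiment series is that each layer (Drupal cache, aggregation, Varnish, a proxy like this) shifts work away from PHP and the database, which is what moves the point of failure upward.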
The post Stress Testing Drupal Commerce appeared first on LoadStorm.