The post Web Performance News of the Week appeared first on LoadStorm.
This week in web performance, we take a look at the Mobile World Congress highlights and Google’s new developments in quantum computing.

Barcelona hosts Mobile World Congress
This week, the 2015 Mobile World Congress brought 93,000 attendees to Barcelona, Spain, with over 2,000 exhibitors and 3,800 analysts. Over 40 keynotes were given, featuring speakers such as Mark Zuckerberg and Tom Wheeler; GSMA hosted its Seminar Program and highlighted Connected Living innovations; and the 20th Annual Global Mobile Awards recognized outstanding industry leaders. The event reminded us that this is the Internet of Everything era, with smart cars, appliances, vending machines, and even city lights announced. Other gadgets unveiled at the MWC included new phones such as the Galaxy S6, new wearables such as the Huawei Watch, and new virtual reality headsets such as the HTC Vive.
Google improves SSL Warnings
Google has expanded its efforts to keep you safe online. New protections in Chrome, Search, and even advertising warn you away from sites offering unwanted software downloads that attempt to make undesired changes to your computer. Adrienne Porter Felt of the Google Security Team highlighted that one of the most important qualities of the SSL warnings is that the browser warns only when a connection is genuinely under attack. In addition, the team had to strategize about how best to convey the threat in a way users could understand.
Apple Pay fraud increases due to lax bank ID checks
Apple Pay fraud is on the rise, highlighting a potential problem for all mobile payment systems. With its tokenized Device Account Numbers and Touch ID fingerprint system, Apple Pay was praised for its increased security when it launched in October 2014. However, reports earlier this week indicate that criminals have successfully set up iPhones with stolen personal information, loading encrypted versions of victims’ credit cards by calling banks to authenticate a card on the new device. Banks are already responding by stepping up the measures required to verify your identity, including one-time authorization tokens, calls to customer service, and logging into your online banking.
Google Tests First Error-Correction in Quantum Computing
Researchers from the University of California, Santa Barbara and Google reported on Wednesday that they had made a significant development in quantum computing, successfully creating the world’s first error-correcting quantum circuit. The system under test was able to stabilize a fragile array of nine qubits, the quantum analogue of the traditional bit. The researchers said they had accomplished this by creating circuits in which they used additional qubits to observe the state of the computing qubits without altering it. “Quantum computing becomes viable when a quantum state can be protected from environment-induced error,” the researchers wrote in the journal Nature.
Hear of any other interesting web or tech news this week? Let us know in a comment below!
The post Web Performance News of the Week appeared first on LoadStorm.
It’s a great week on the internet! This week in web performance, the preservation of net neutrality and new announcements from Google and Apple make headlines.
FCC votes to preserve net neutrality, classifying broadband as a utility
Yesterday the Federal Communications Commission voted 3-2 to approve the proposed net neutrality rules for both wireless and fixed broadband. The proposed rules will disallow paid prioritization, as well as the blocking and throttling of lawful content and services. After overwhelming public outcry, this win for advocates of net neutrality is being called “the free speech victory of our times” and “an even bigger win than SOPA”. But the debate looks to be far from over.
Verizon’s response came in both Morse code and typewriter font, saying the rules were “written in the era of the steam locomotive and the telegraph.” In addition, a group of 21 Republicans sent a letter to FCC chairman Tom Wheeler threatening legislation that would “ensure the antitrust laws are the preferred enforcement method against anticompetitive conduct on the Internet” and that “may include a restriction on the FCC’s ability to regulate the Internet.”
Apple to spend $1.9 Billion on European data centers powered by renewable energy
In what will be Apple’s biggest investment in Europe to date, Apple announced plans to build and operate two new data centers in Denmark and Ireland. Running entirely on renewable energy, the data centers will power several of Apple’s online services for European customers, including the iTunes Store®, App Store℠, iMessage®, Maps and Siri®. The operations are expected to launch in 2017 and will include initiatives to restore native trees to Derrydonnell Forest, provide an outdoor education space for local schools, and create a walking trail for the community. “We believe that innovation is about leaving the world better than we found it, and that the time for tackling climate change is now,” said Lisa Jackson, Apple’s vice president of Environmental Initiatives.
Apple releases new Playgrounds
The new Xcode 6.3 beta 2 now contains improvements to Swift playgrounds, with inline results, stylized text, and a resources folder. The new playgrounds were made to be useful for authors and educators.
Google introduces a new open source HTTP/2 RPC Framework
Google has introduced a new open source (BSD-licensed) cross-platform library for making remote procedure calls. Built on the recently finalized HTTP/2 specification, gRPC will allow for bidirectional streaming, flow control, header compression, multiplexing requests over a single TCP connection, and more. Alongside gRPC, Google has released a new version of Protocol Buffers, an open source binary serialization protocol intended to allow easy definition of services and automatic generation of client libraries. The project supports several programming languages (C, C++, Java, Go, Node.js, Python, and Ruby), with libraries for several others (Objective-C, PHP, and C#) in development. Google indicated that it has already begun using gRPC internally as part of its transition to HTTP/2.
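As a sketch of the workflow described above, a service and its messages are declared once in a `.proto` file, and gRPC tooling generates client libraries and server stubs from it. This example is illustrative only, not taken from Google’s announcement:

```proto
syntax = "proto3";

// Hypothetical service definition; gRPC tooling generates client
// and server code from this single file for each supported language.
service Greeter {
  // A simple unary request/response call.
  rpc SayHello (HelloRequest) returns (HelloReply);
  // HTTP/2 lets replies be streamed back over one connection.
  rpc StreamGreetings (HelloRequest) returns (stream HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply { string message = 1; }
```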
A look ahead: Barcelona will host Mobile World Congress
The first week of March brings the exciting 2015 Mobile World Congress, held in Barcelona, Spain. The four-day event is like the TED Talks of mobile tech, with thought-leadership keynotes from Mark Zuckerberg and Tom Wheeler, numerous panel discussions, and 1,900 technology and product exhibitors. The event will feature the Global Mobile Awards and App Planet, an opportunity for the mobile app community to come together to learn and network. In addition, all attendees will gain access to 4 Years From Now, a three-day event focused on startups and corporations, led by globally recognized entrepreneurship and innovation experts.
Other headlines this week:
The post Super Bowl Ads = Lots of Traffic. Are Sites Ready? appeared first on LoadStorm.
Super Bowl ads crashing websites isn’t a new story. But it is one that deserves a bit of attention about this time every year.

Back in 1999, Victoria’s Secret made a big splash with their Super Bowl ad, one of the very first ever to tie together TV and web. It promoted an online lingerie fashion show, and over a million viewers logged on to watch... crashing the site.
Since then, using Super Bowl ads to drive traffic to a website has become a very popular marketing technique. Many companies use the massive marketing power of a Super Bowl ad to drive viewers to their websites to take a specific action (signing up for a promotion, voting for a favorite team, etc.). The resulting spike in traffic should be very much expected. However, every year a handful of websites crash, resulting in massive social media and PR backlash.
In 2013, a study by Yottaa found that 13 companies had Super Bowl ads that crashed their websites, including Coca-Cola, SodaStream, Calvin Klein, and Axe. Coca-Cola invited viewers to log on to CokeChase.com and vote for their favorite team, while Axe offered a sweepstakes to send someone to space. Both drove massive numbers of visitors to their sites, only to greet them with crashed websites. The result? Viewers took to social media in droves:
One notable crash of Super Bowl 2014 was from Maserati. The ad was rumored to cost between $11 million and $17 million. It announced the new Maserati Ghibli and sent masses to the website MaseratiGhibli.us, which promptly crashed.
Load testing is critical for any website expecting a rush of traffic. Whether it is a rush from a major ad campaign or a launch, it is imperative that the website be able to handle the pressure of the traffic. The only way to know for sure that your site is prepared is to test it.
As a load testing provider, we see all of these examples as avoidable problems. Load testing allows companies to simulate a large volume of traffic hitting a website or application while monitoring the site’s responses (response times, throughput, errors, etc.). Had these companies load tested, their web development teams would very likely have found the performance bottlenecks and addressed them, and the crashes would never have happened.
Will we see any big website crashes from Super Bowl ads this Sunday? Comment below if you have any guesses of which websites will fail and then check back next week as we analyze the hard data from our very own testing done on Super Bowl Sunday 2015!
Sources:
http://www.usatoday.com/story/money/business/2015/01/21/victorias-secret-super-bowl-ad/22127891/
http://www.yottaa.com/blog/bid/265815/Coke-SodaStream-the-13-Websites-That-Crashed-During-Super-Bowl-2013
http://geekbeat.tv/maseratis-ghibli-superbowl-ad-crashes-maserati-website/
The post Can load testing help stop hackers? appeared first on LoadStorm.
In light of the recent Sony hack, security should be on every web developer’s mind. This cyber attack, which Sony’s CEO called “the worst cyber attack in U.S. history,” is a perfect example of why security is something we all need to take seriously: an enormous amount of personal and financial information was exposed for millions of customers.

As we grow increasingly aware of these occurrences, we as developers need to go forward with the mindset that people will be trying to access our data. As the internet and technology permeate physical stores, our information is becoming even more vulnerable to criminals who use online hacking methods against organizations.
There are numerous ways you can be proactive in protecting your website. One practice that is often overlooked is a combination of penetration and stress testing. Stress testing is the practice of determining how well a website functions in deliberately adverse conditions. Penetration testing is actively trying to break down security methods and access forbidden information. Typical actions in this kind of testing may include:
When you evaluate and benchmark your system in this way, you can observe how your system reacts and recovers. As the array of different protocols and applications grow increasingly complex, malicious attacks can quickly bring down a site or exploit a lack of security.
A previous Sony data breach that jeopardized 77 million users was actually disguised as a DDoS attack, an attack characterized by overwhelming amounts of traffic. According to a recent study from RSA Security and the Ponemon Institute, 64 percent of IT professionals in the retail sector have seen an increase in attacks and fraud attempts during abnormally high traffic periods. By testing your site’s response to a simulated attack, this type of security gap can be reduced, which is a proactive step toward protecting your site.
LoadStorm can be used as a tool to determine your site’s breaking point, and possibly, your site’s performance at its most vulnerable point. By simulating an attack in conjunction with a load test, you can now evaluate how network and security devices perform under stress, and isolate and repair flaws. After determining the weak points of your site, get to work implementing a more secure infrastructure. The idea is to close the gap between the attack and the response to the attack.
Many companies make the mistake of launching before they are truly ready. It’s easy to get caught up in launch deadlines or the pressure to conserve time and resources that could be spent on testing. However, with the diverse competition that exists today, many customers will only give you one shot. If your site or your customer’s data has been compromised, don’t be surprised if they leave and do not return. It takes a lot of work to build trust with new users. Don’t lose the value of your hard work on a vulnerable system. As millions of transactions take place on the internet every day, it’s up to us to make sure that our systems are prepared for an attack; that security provisions like network firewalls, flood controls, intrusion detection and prevention, and application firewalls have all been tested thoroughly with realistic simulated traffic.
It’s up to us to ensure that our sites are ready for high traffic and that our data is secure.
The post WordPress Hosting Providers Study: Web Performance & Scalability appeared first on LoadStorm.
When it comes to web performance, study after study has proven: fast and scalable wins the race. But with thousands of WordPress hosting providers, how do you know which one is fast and scalable?
That is where ReviewSignal.com comes in. Their business is all about helping people identify which hosting provider is the best choice for them. Kevin Ohashi from ReviewSignal has been working with LoadStorm to run a series of load tests on some of the top WordPress hosting providers to determine which is the best for companies who need scalable websites.
Our performance engineers have teamed up with Kevin to analyze the multitude of data and provide this report of the top WordPress hosting providers for web performance. Providers included in this study are: A Small Orange, FlyWheel, GoDaddy, Kinsta, LightningBase, MediaTemple, Nexcess, Pagely, Pantheon, and WebSynthesis. These providers were included in the 2,000 user load test because they didn’t struggle with the first test of 1,000 concurrent users.
This analysis only looks at the final load test of 2,000 concurrent users, but Kevin’s article analyzes the results of both tests and looks at long term up-time reliability. Check out Review Signal’s report of the full study here.
All tests were performed on identical WordPress dummy websites hosted on 10 different hosting services. All sites were tested with the same plugins, except in cases where hosts added extra plugins. The websites used identical scripts that included browsing and login. The load tests were run in LoadStorm PRO for 30 minutes, with a linear 20-minute ramp up from 500 to 2,000 virtual users and a hold at the peak for the remaining 10 minutes.
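The ramp profile described above is easy to express as a function of elapsed minutes; a minimal sketch:

```python
def concurrent_users(minute, start=500, peak=2000, ramp_minutes=20):
    """Virtual users at a given minute: linear ramp from `start` to `peak`,
    then hold at `peak` for the rest of the test."""
    if minute >= ramp_minutes:
        return peak
    return start + (peak - start) * minute / ramp_minutes

for m in (0, 10, 20, 30):
    print(m, concurrent_users(m))
```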
In order to rank the top providers, we have broken our analysis down by the key web performance metrics:
To fairly rank the top providers, we ranked each provider by each performance metric at the 20 minute mark in the test, when all sites were under full load of 2,000 users. For each metric, the providers were ranked (1st through 10th) according to their performance and then a point value was assigned to each. Then we determined our final ranking position based on their total score, the sum of all points from all the metrics.
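The scoring scheme described above can be sketched in a few lines; the provider names and numbers here are made up for illustration, not the study’s actual data:

```python
# Hypothetical per-metric results (milliseconds, lower is better).
results = {
    "HostA": {"avg_response": 120, "peak_response": 900},
    "HostB": {"avg_response": 300, "peak_response": 700},
    "HostC": {"avg_response": 150, "peak_response": 1200},
}

def rank_providers(results):
    """Rank providers within each metric, award points (1st = most), sum them."""
    n = len(results)
    totals = {name: 0 for name in results}
    metrics = next(iter(results.values()))
    for metric in metrics:
        ordered = sorted(results, key=lambda p: results[p][metric])
        for place, name in enumerate(ordered):
            totals[name] += n - place  # 1st place earns n points, last earns 1
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(rank_providers(results))  # → [('HostA', 5), ('HostB', 4), ('HostC', 3)]
```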
To view the full test results with interactive graphs in LoadStorm PRO, click on each hosting provider below:
Error rate is probably the most important metric for businesses wanting to be certain that a website won’t crash under high traffic. High error rates mean one thing: Lost customers.
Surprisingly, we had a 7-way tie for first place with 0% error rates. Overall, this speaks volumes to the scalability of all the websites included in the study. Flywheel started to fail at around 1500 concurrent users and began returning 502 errors, which explains its high error rate.
Average Response Time is very significant because it directly affects the user experience and perceived load time. This metric measures the time each request takes “round trip” from the browser sending the request to the server, the server processing the request, and then the response from the server back to the browser. The Average Response Time takes into consideration every round trip request/response cycle for that minute interval and calculates the mathematical mean of all response times.
This metric also measures the same “round trip” that the Average Response Time does, but instead of averaging the time for all requests, Peak Response Time is simply the single longest (slowest) time for a single request.
Average Page Completion Time is a metric that measures the amount of time from the start of the first request to the end of the final request on a page.
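These three response-time metrics can be computed straight from a raw request log; the timings below are invented for illustration:

```python
# Hypothetical request log for one page: (start_time_s, response_time_s).
requests = [(0.00, 0.18), (0.05, 0.22), (0.30, 0.40), (0.31, 0.15)]

# Average Response Time: mean of every round-trip time in the interval.
avg_response = sum(rt for _, rt in requests) / len(requests)
# Peak Response Time: the single slowest request.
peak_response = max(rt for _, rt in requests)
# Average Page Completion Time: first request start to last response end.
page_completion = max(s + rt for s, rt in requests) - min(s for s, _ in requests)

print(avg_response, peak_response, page_completion)
```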
In regards to the specific times in this study, the test shows unusually fast Average Page Completion times. After investigating why the pages were loading so quickly, it turns out that some of the pages on the dummy website were very simple with very few requests each. While users with real websites on these providers would expect to see slower average page completion times, the tests are still valid because all providers had the same simple pages.
Throughput is measured by the number of kilobytes per second that is being transferred. This measurement shows how data is flowing back and forth from the server(s). High throughput is a mark of good web performance under load because it shows that there aren’t any bottlenecks blocking and slowing the data transfer. Low throughput, as seen in WebSynthesis, signifies that the server is overwhelmed and is struggling to pass data to and from the server.
Interestingly, GoDaddy pushed triple the amount of data through because its admin screen loaded more resources, which is why its average throughput is so high. Despite the extra data to process, GoDaddy still had significantly higher average response times than most of the other providers. Any time a site makes more requests, performance slows. Therefore, it is fair to say that without so much extra data, GoDaddy could possibly have been faster than all the others.
From the final point tallies, we can see that there are three clear sections.
Top Performers: Pantheon, MediaTemple, GoDaddy, and Kinsta.
Good Performers: Nexcess, LightningBase, A Small Orange, and Pagely.
Fair Performers: FlyWheel and WebSynthesis.
Overall, most of the providers did surprisingly well under the full load of 2,000 concurrent users. Even though we wanted to rank them in a definitive order, the fact is that most providers did not experience failures at all in the test. So while we were able to rank them, there were several metrics where the difference between points was negligible (e.g., a 1 ms average response time difference between GoDaddy and Kinsta) yet still counted in our scores.
Additionally, the test utilized in our report is only part of the full ReviewSignal study. ReviewSignal ran tests at 1,000 users and the providers that crashed were not included in the tests at 2,000. Therefore, all of the providers included in this ranking should be considered great choices for scalable WordPress hosting.
This level of high performance in all 10 providers was unexpected with such a heavy load and we were very impressed by the results.
The post Introducing QuickStorm appeared first on LoadStorm.
Here at LoadStorm, we’re happy to announce one of our newest free features! Are you interested in web development? Have you created your own site? QuickStorms are a great tool for getting initial insight into site performance, and they can also be used to benchmark performance before making enhancements to your site.

QuickStorms are small, short load tests that can be used to evaluate site performance. All you have to do is enter your site’s URL. LoadStorm will generate a load test against the URL, scaling from 1 to 10 virtual users over 10 minutes.
LoadStorm PRO automatically creates a recording of every request made on the URL you provide. Next, we simulate the traffic escalating from one to ten virtual users, targeting that URL.
The test results summary provides totals for key metrics from the test, displaying the total number of requests as well as the overall average response time for those requests.
Since it’s a QuickStorm, there will only be one script. All of the servers used for the URL will be displayed, so if you want to view performance by servers, you can filter for that.
What you want to see here:
In general, you want to see good average response times, peak response times that are near the average, no errors, and low throughput. In addition, low total data transferred is a good sign. It’s a good idea to avoid page bloat.
This includes all the response time data for the site. Every request made on the site is displayed, detailing the number of times each resource was requested, the size of the request, the average time it took to get a response, and the peak time it took to get a response.
What you want to see here:
Good response times for requests are under 250 ms, while 500 ms is still generally acceptable. Any longer and you’ll be in trouble if the volume of traffic increases.
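As a quick sanity check against those rules of thumb, a tiny helper (the bucket labels are our own) might look like:

```python
def grade_response_time(ms):
    """Bucket a response time using the thresholds above (labels are ours)."""
    if ms < 250:
        return "good"
    if ms <= 500:
        return "acceptable"
    return "trouble"

print(grade_response_time(180), grade_response_time(400), grade_response_time(900))
```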
If any errors occurred during the test, they will be shown here, along with their number and types. The Errors by Resource table will list which resources yielded errors. Errors usually won’t occur at such a low volume of traffic, but sometimes web developers are simply unaware of a problem. Eliminating these errors is an easy way to boost performance.
What you want to see here:
You want to see 0 errors here. Any errors at this load are unacceptable. You can do better than that!
In LoadStorm, pages refer to each unique web page hit. Since QuickStorms are only performed on a single URL, you can expect to see only that single page listed here. Statistics for this page include the number of times each page was requested. For QuickStorms, this number usually reaches just over 100 times. This means your page was visited by virtual users just over 100 times in ten minutes!
During a regular load test, you would typically make a recording of several different pages to mimic the traffic you expect to see on your site. For example, a blog would include a recording of a user browsing to the homepage as well as many different posts.
What you want to see here:
You want to see zero failures for your page. When it comes to average and peak completion times, you should aim for pages completing in under 5 seconds.
This is a feature you won’t be able to experience through QuickStorms, so you can expect not to see any data here. A transaction is a collection of requests that you designate in order to monitor specific site interactions. One example would be a search transaction. It’s important to have stellar performance in this area, and by choosing to monitor the search transaction, you can get feedback here.
What you want to see here:
Nothing. To use the transaction feature, sign up for a free account with LoadStorm PRO.
This data includes the results of the different requests, summarized by time. Here you can easily view any differences in response times between the first minute of the test (1 vuser) and the last minute of the test (10 concurrent users).
What you want to see here:
You want to see comparable response times at the beginning of the test, and the end of the test. Drastic differences can point to an underlying problem that’s just waiting to be exposed.
All of the raw test result data is available for download in a CSV file. Like the Requests by Time, this data usually becomes available within 10 minutes of test completion. If you would just like to show off your results, we also have the test statistics compiled for export as a CSV report.
What you want to see here:
Your site’s excellent results in a pretty report you can show off to your friends or enemies! Try it! It’s fun! And you get a URL you can share. Please note that QuickStorms are limited to 10 per day!
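If you do download the raw CSV, a few lines of Python are enough to pull out summary numbers. The column names below are hypothetical stand-ins; match them to the headers in your actual export:

```python
import csv
import io

# Inline stand-in for a downloaded results file (hypothetical columns).
raw = """request,avg_response_ms,errors
/,210,0
/blog,340,0
/search,780,2
"""

rows = list(csv.DictReader(io.StringIO(raw)))
slowest = max(rows, key=lambda r: int(r["avg_response_ms"]))  # worst request
total_errors = sum(int(r["errors"]) for r in rows)
print(slowest["request"], total_errors)
```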
The post The Best and Worst of Cyber Monday Web Performance appeared first on LoadStorm.
Introduction

How the big brand-name e-commerce sites handle the heavy traffic on Cyber Monday is always of great interest to our team and our readers. So this year, we decided to run a short experiment on some of the top companies to bring you the best and the worst performers this Cyber Monday.
The 28 companies we chose to test included companies that had painful Cyber Monday crashes in previous years, companies that were running fantastic online deals, and companies known to see huge volumes of online holiday shopping traffic.
We ran WebPageTest, an open source performance testing tool, on all 28 companies. All tests were run on Chrome browsers from Dulles, VA at approximately the same times. The first set of tests were run on Wednesday, November 26, 2014 and the second set of tests were run on Cyber Monday, December 1, 2014.
As we categorized the companies based on performance, the most significant factor we considered for this article was time to first byte. Stay tuned for another article where we discuss the speed index and page load times on Cyber Monday.
This article focuses on time to first byte because it is strongly tied to perceived load time. If a user waits several seconds and doesn’t see anything loading on the page, he or she is highly likely to abandon the website. However, even if the whole page takes over 10 seconds to load, as long as the user sees progress quickly, he or she can begin looking at the page and is much more likely to stay on the website.
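Time to first byte can be measured directly at the socket level. The sketch below times a request against a throwaway local server so it is self-contained; in practice you would point the measurement at a real host:

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

def measure_ttfb(host, port, path="/"):
    """Seconds from sending the request to receiving the first response byte."""
    with socket.create_connection((host, port)) as s:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        s.sendall(request.encode())
        s.recv(1)  # blocks until the first byte of the response arrives
        return time.perf_counter() - start

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb = measure_ttfb("127.0.0.1", server.server_address[1])
server.shutdown()
print(f"time to first byte: {ttfb * 1000:.1f} ms")
```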
We have ranked the companies into five categories:
Eight companies in our study showed top performance on both Wednesday and Cyber Monday. All of the companies in this group scored an impressive A or B first-byte grade, as assigned by WebPageTest, and the highest time to first byte was only 0.315 seconds. Impressive.
Seven companies had moderate performance on both Wednesday and Cyber Monday. These companies were significantly slower than the top performers, but still maintained decent speeds. These companies had scores of B’s or C’s according to WebPageTest. By our assessment, these companies maintained acceptable, but not excellent times.
Five companies in our study had notable performance failures on both Wednesday and Cyber Monday. All sites in this group had over a 0.75 second time to first byte, which WebPageTest ranks as an F in its time-to-first-byte scoring. Most of these sites had over a full second of waiting before the first byte was transferred, a sign that they were overwhelmed by the traffic load.
The level of performance in this category most likely had a significant impact on Thanksgiving and Cyber Monday sales. As we have seen proven time and again by various studies, web performance directly affects conversions. With such significant delays before seeing anything loading on the page, it is very likely that would-be customers left these websites for competitors.
This particular category is not one that we were expecting to see at all. In fact, we initially chose to test on Wednesday as a control group to measure against Cyber Monday. However, a quick poll in our office revealed that most of us had started our online shopping early. It is my theory that these particular companies had extra servers ready for Monday, but did not expect such a heavy load of traffic on Wednesday and were therefore unprepared. It is also possible that when the companies noticed performance failures on Wednesday, they made significant changes over the weekend and then were ready for the Cyber Monday rush.
I’m sure that each of these companies has their own story of WHY their performance was poor on Wednesday and then improved by Monday, but all we can tell you is that it happened.
The previous category was a bit of a surprise to our team. This category, however, was completely expected. As we see every year, there are some companies that just struggle to handle the amount of traffic that hits on Cyber Monday. Check out the differences:
Web performance is a top concern for any e-commerce business because it has been proven time and again to be directly tied to conversions. Just a one second delay in load time has been proven to cause a 7% decrease in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. With stakes so high, being prepared for the rush of traffic on Cyber Monday is a must for all e-commerce businesses.
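To put those percentages in concrete terms, a back-of-the-envelope estimate (assuming the quoted 7%-per-second conversion drop applies linearly, which is our simplification) looks like:

```python
def estimated_daily_loss(daily_revenue, delay_seconds, drop_per_second=0.07):
    """Revenue lost per day under an assumed linear 7%-per-second conversion drop."""
    return daily_revenue * drop_per_second * delay_seconds

# A store taking $100,000/day that adds a one-second delay:
print(round(estimated_daily_loss(100_000, 1)))  # prints 7000
```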
Overall, a large portion of the sites we looked at had good web performance on these important days. Even though there are always some websites with poor performance, the general trend is that most websites included in our study were prepared for the rush of traffic.
Feel free to share your Cyber Monday online shopping experiences in the comments! Did you encounter any poor performing websites?
The post Cyber Monday and the Impact of Web Performance appeared first on LoadStorm.
Cyber Monday is a pretty big deal for online retailers.

In fact, the previous statement is much too modest. Cyber Monday is the biggest day of the year for ecommerce in the United States and beyond. As the statistics show, Cyber Monday has become a billion-dollar juggernaut since 2010, and it has only continued to grow. Last year alone, Cyber Monday was responsible for over $1.7 billion spent by online consumers in the US, a shocking 18% jump from the year before!
Since its inception in 2005, the Monday after Thanksgiving has become a potential goldmine for those with an online presence. It has helped significantly boost revenue during the Christmas period for savvy businesses that have taken advantage of the promotion. The “cannot-be-missed” deals are important to any Cyber Monday campaign, but having the website ready to maintain consistent, fast performance under the traffic rush is absolutely critical.
An unprepared business might expect an increase in sales on Cyber Monday but overlook the fact that more visitors means more strain on the performance side of its website. And the more strain on a website, the more it will falter when it matters most.
During the mad rush of consumers looking to snap up some bargain deals, your website has to be prepared for the sudden visitor increase – otherwise your Cyber Monday will crumble before your eyes.
Last year, Cyber Monday website crashes cost several large companies thousands of dollars in revenue. Motorola was offering a special price on its new Moto X, but the site was not prepared for the rush of traffic the promotion would bring. Many customers experienced a painfully slow website and errors showing prices without the discount before the site crashed entirely.
In addition to losing customers who would have otherwise purchased that weekend, Motorola also had to deal with the PR aftermath. Unhappy would-be customers and the tech media took to social media, posting tweets such as:
In an effort to mitigate the damage, Motorola's CEO issued a statement:
Moral of the story? Motorola lost thousands of dollars in sales and alienated thousands of potential new customers, all of which could have been avoided with early load and performance testing. Had they load tested, Motorola would have discovered the problems, found their causes, and fixed them before real users ever experienced them.
While many companies didn't see full website crashes like Motorola, the rush of traffic still led to painfully slow websites and therefore a loss in revenue. A website must not only remain up and available, but also remain fast to navigate. Just think of the number of pages a potential customer might have to go through on your website, and now imagine a delay between each page loading. Internet users are an impatient bunch: a one-second delay can cause a 7% decrease in conversions and 11% fewer page views, and 74% of people will leave a mobile site if it takes longer than five seconds to load!
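To make those percentages concrete, here is a rough sketch in Python of what a delay could cost. The 7%-per-second figure is the conversion statistic cited above; the daily revenue number and the compounding assumption are invented for illustration:

```python
# Back-of-the-envelope estimate of revenue lost to page-load delay.
# Assumes conversions drop ~7% for each additional second of delay,
# compounding per second; all dollar figures are hypothetical.

def lost_revenue(daily_revenue, delay_seconds, loss_per_second=0.07):
    """Estimate daily revenue lost for a given page-load delay."""
    retained = (1 - loss_per_second) ** delay_seconds
    return daily_revenue * (1 - retained)

# A store doing $100,000/day whose pages slow down by 2 seconds:
loss = lost_revenue(100_000, 2)
```

Even this crude model puts a two-second slowdown on a $100,000/day store at over $13,000 forfeited per day, which is why load testing ahead of the rush tends to pay for itself.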
Clearly, ensuring your website is constantly up and stable is imperative to maximizing profits for your business this Cyber Monday. Because the last thing you want to do is miss out on the most important day of the year for ecommerce and present your competitors with an opening to snag that business.
LoadStorm wants to help you prepare for the storm of traffic with a Cyber Monday special. We are offering one free hour of consulting and one free load test of up to 5,000 users. Check out our Cyber Monday page to learn more and to request your free test today!
The post Cyber Monday and the Impact of Web Performance appeared first on LoadStorm.
The post LoadStorm PRO Now with Transaction Response Timing – What does this mean for you? appeared first on LoadStorm.
Today, LoadStorm published a press release announcing our new Transaction Response Timing. For many professional performance testers, especially those used to products like HP LoadRunner or SOASTA CloudTest, wrapping timings around logical business processes and related transactions is a familiar concept. For those of you who aren't familiar, I'll explain.
Transaction Response Time represents the time taken for the application to complete a defined transaction or business process.
The objective of a performance test is to ensure that the application is working optimally under load. However, the definition of “optimally” under load may vary with different systems.
By defining an initial acceptable response time, we can benchmark the application to determine whether it is performing as anticipated.
The importance of Transaction Response Time is that it gives the project and application teams a picture of how the application is performing in terms of time. With this information, they can tell users and customers what to expect when processing a request and better understand how the application is performing.
The Transaction Response Time encompasses the time taken for the request to reach the web server, be processed there and passed to the application server, which in most instances will in turn make a request to the database server. The response then travels back in reverse: from the database server to the application server, to the web server, and finally to the user. Note that the time the request and response spend in network transmission is also factored in.
To simplify, the Transaction Response Time is comprised of the following:
Processing time on Web Server
Processing time on Application Server
Processing time on Database Server
Network latency between the servers, and the client
The following diagram illustrates Transaction Response Time.
Transaction Response Time = (t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9) × 2
Note: the × 2 accounts for the time taken for the response to return to the client.
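As a sketch, the formula can be expressed directly in code. The nine component timings below are hypothetical placeholders for the t1 through t9 segments in the diagram:

```python
# Hypothetical one-way component timings, in seconds, covering processing
# time at each tier plus the network hops between client, web server,
# application server, and database server (t1 through t9).
timings_one_way = [0.020, 0.015, 0.010, 0.025, 0.008,
                   0.030, 0.008, 0.025, 0.010]

# The x2 factors in the return trip, per the formula above.
transaction_response_time = sum(timings_one_way) * 2
```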
Measurement of the Transaction Response Time begins when the defined transaction makes its request to the application. The time is measured for as long as the transaction is in flight, and measurement stops when the transaction completes, before the next transaction's request begins.
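A minimal sketch of that measurement in Python, assuming each step of the business process is a callable that issues one request; the sleeps below merely stand in for real HTTP calls:

```python
import time

def timed_transaction(steps):
    """Time a business transaction: the clock starts at the first
    request and stops when the final step completes."""
    start = time.perf_counter()
    for step in steps:
        step()  # each step would issue one request of the transaction
    return time.perf_counter() - start

# Hypothetical three-page "checkout" transaction; in a real script each
# lambda would perform an HTTP request with your client of choice.
elapsed = timed_transaction([
    lambda: time.sleep(0.01),  # stand-in for GET /cart
    lambda: time.sleep(0.01),  # stand-in for GET /shipping
    lambda: time.sleep(0.01),  # stand-in for GET /confirm
])
```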
Transaction Response Time allows us to identify abnormalities when performance issues surface. These show up as transaction responses that differ significantly (or even slightly) from the average Transaction Response Time. From there, we can correlate other measurements, such as the number of virtual users accessing the application at that point in time and system-level metrics (e.g. CPU utilization), to identify the root cause.
With all the data that has been collected during the load test, we can correlate the measurements to find trends and bottlenecks between the response time, and the amount of load that was generated.
Using Transaction Response Time, the project team can better relate to their users by treating transactions as a common language both sides understand. Users will know whether transactions (or business processes) are performing at an acceptable level in terms of time.
Users may be unable to interpret CPU utilization or memory usage, so the common language of time is ideal for conveying performance-related issues.
The post LoadStorm PRO Now with Transaction Response Timing – What does this mean for you? appeared first on LoadStorm.
The post Editorial on End of Net Neutrality appeared first on LoadStorm.
The FCC Proposes to End Net Neutrality
This week's web performance news hits close to home for performance engineers. The concept of "the end of net neutrality" leaked a couple of weeks ago, and the public response was very strong. Learn more about what net neutrality means to you from our blog post last week. Last Thursday (5/15), the FCC voted on a proposal that would allow internet "fast lanes" for companies willing and able to pay for them. The proposal was accepted by a 3-2 majority vote, divided down party lines, with the three Democratic commissioners voting for the proposal and the two Republican commissioners voting against. The FCC will now take comments from the public before a final vote on the proposal takes place in July. Read on for an editorial on why we all need to rally together to fight this proposal and protect net neutrality:
The principle of net neutrality is that all internet content should be treated equally by internet service providers. An open environment and the free exchange of information between people are regarded as sacred; if this proposal were enacted, internet service providers would be able to give some content and websites preferential treatment.
The proposal made by Tom Wheeler, the chairman of the FCC, would allow internet service providers (Comcast, Time Warner, and Verizon) to offer "fast lanes" to companies who pay for them. Many argue that allowing ISPs to determine which sites and content are delivered at what speed could lead to both censorship and monopolization of the internet. Censorship becomes possible because ISPs would have the legal ability to give some content faster speeds than others; an ISP could keep controversial content from reaching a large audience by denying it access to the "fast lanes." A form of monopolization could occur as large corporations buy the fast highways while small businesses and startups are left with just the bumpy back roads. These small businesses would have the slowest (and therefore poorest-performing) websites and could easily be squelched by large corporations that can afford to pay for the "fast lanes."
Optimization requires understanding what you're changing and why, as well as knowledge of useful testing tools and the metrics that matter. Each website has a different framework and serves a different purpose. The innovation and research that go into attaining great performance would lose their relevance, and the progress that has been made would come to a standstill. The piece of web development this proposal affects, how ISPs handle the delivery of information to end users, is a truly critical one.
For example, if two companies have equally good infrastructure and back-end performance, and Company A can afford to pay for the "fast lane," its end users will continue to see excellent web performance. However, if Company B cannot or will not pay the ISPs for the "fast lane," its end users will see markedly worse performance. So prepare as you might, developing a truly stellar web application loses much of its value when large corporations can simply pay the ISPs for preferred service.
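A crude, hypothetical illustration of the gap: the same 3 MB page delivered over a paid "fast lane" versus a throttled path. All figures here are invented, and latency and rendering time are ignored:

```python
# Transfer time alone for one page load: size in megabytes converted
# to megabits, divided by throughput in megabits per second.
def transfer_seconds(page_size_mb, bandwidth_mbps):
    return (page_size_mb * 8) / bandwidth_mbps

fast_lane = transfer_seconds(3.0, 50)  # Company A, paid fast lane
slow_lane = transfer_seconds(3.0, 5)   # Company B, throttled path
```

Under these made-up numbers, identical engineering yields a sub-half-second transfer for one company and nearly five seconds for the other, approaching the five-second threshold at which the mobile abandonment statistic cited earlier kicks in.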
Quality development and internet competition are at risk. A popular example of the importance of equal internet opportunity is the replacement of MySpace by Facebook. This change would most likely not have been possible without net neutrality, as Facebook would not have been able to compete financially. Likewise, new ecommerce sites and social media platforms will not stand a competitive chance, regardless of the benefits they may bring to market.
Nearly 75,000 people have petitioned the White House to protect and maintain net neutrality. You can sign it here now. In addition, the FCC has offered an email address for people to voice their thoughts on the neutrality plans. Please take a minute to email [email protected], which has been set up by the FCC to take public comments on this issue. We want to know what you think about net neutrality too! Share your thoughts in the comments below!
The post Editorial on End of Net Neutrality appeared first on LoadStorm.