Web Performance Optimization

“My name is Jovana.”

Usually when I receive an email that starts out like the sentence above, I figure I’m about to be solicited for something to which my wife would strongly object. But upon further reading, I realized this email was legitimate, and I became intrigued by the request.

“I found your article extremely interesting and would like to spread the word for people from Ex Yugoslavia.”

That was very cool. Everyone enjoys hearing that their writing is valuable to someone – even better if they want to pass it on to their friends!

Faster sites make more money. It’s a fact supported by objective studies and data from organizations such as Google, Amazon, Aberdeen, and Gartner. Here are two data points from Akamai’s new Slideshare presentation, “Performance Implications of Mobile Design”:

  • 83% of consumers expect a website page to load in 3 seconds or less
  • 25% of visitors abandon a page after 4 seconds or less

(source: Akamai)

As we’ve noted in previous articles, Web site developers and system administrators are paying greater attention to mobile performance these days. Wading into these waters requires a new approach to Web performance measurement, as well as new tools to support it. In this article, we will review the differences between mobile testing and desktop testing, the tools (both free and commercial) currently available for measuring mobile performance, and, finally, some tips and tricks for developing Web sites that perform well on mobile operating systems such as iOS, Android, Windows Phone, and BlackBerry.

Client-Side vs. Server-Side Performance

The first factor in improving mobile performance is figuring out exactly how much improvement your site needs. This requires an accurate measurement of the current state of performance. But what do you measure, and how? As Web Performance Today recently pointed out, over 90% of performance gains on mobile devices are to be found on the mobile client’s front end, and not on the server side. This suggests the need for tools that measure mobile performance on the device side.

Measuring Mobile Performance

Given the importance of client-side performance for mobile sites, how do developers capture client performance data? This isn’t an easy task, given the plethora of devices and operating systems (iOS, Windows, and multiple flavors of Android) in today’s market.

The pcapperf toolkit goes a long way to meeting the need for a mobile performance measurement tool. Using pcapperf, developers can upload PCAP files captured from a private wireless network and analyze them using the online tool to pinpoint performance issues.
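To get a feel for what this kind of analysis involves, here is a minimal sketch of offline PCAP inspection in Python. It assumes the third-party scapy library and a hypothetical capture file name, and it only counts plain-text HTTP GETs; pcapperf itself performs a far deeper analysis.

```python
# Rough sketch of offline PCAP inspection, in the spirit of pcapperf.
# Assumes the third-party scapy library; "capture.pcap" is a hypothetical file.
from scapy.all import rdpcap, Raw, TCP

packets = rdpcap("capture.pcap")

# Collect timestamps of outgoing plain-text HTTP GET requests.
request_times = sorted(
    float(pkt.time)
    for pkt in packets
    if pkt.haslayer(TCP) and pkt.haslayer(Raw)
    and pkt[Raw].load.startswith(b"GET ")
)

if request_times:
    print(f"HTTP requests observed: {len(request_times)}")
    print(f"Capture span: {request_times[-1] - request_times[0]:.2f} seconds")
```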

Development teams can also take inline measurements using LoadTimer, an IFRAME-based test harness that can be used from your mobile Web browser of choice. Besides capturing page load measurements, LoadTimer also supports recording and submitting data to creator Steve Souders’ live database of crowdsourced mobile performance results.

A third option for measuring mobile performance is Mobitest, an online tool from Blaze.io that runs a given URL through a variety of mobile operating system devices. This sophisticated but simple tool gathers performance information for iOS 5.0, a number of Android versions, and BlackBerry OS 6.0. The downside is that Blaze.io has a limited number of devices available for testing, so users may have to wait a few minutes for their results.
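Even without these tools, a first rough number is easy to collect by hand: timing a plain fetch of a page reveals approximate time-to-first-byte and total download time. Below is a minimal sketch using only the Python standard library; the URL is a placeholder.

```python
# Crude single-URL timing: approximate time-to-first-byte and total fetch time.
# Standard library only; the URL is a placeholder.
import time
import urllib.request

url = "https://www.example.com/"

start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    first_byte = response.read(1)        # returns once the first body byte arrives
    ttfb = time.perf_counter() - start
    body = first_byte + response.read()  # drain the rest of the body
total = time.perf_counter() - start

print(f"TTFB:  {ttfb * 1000:.0f} ms")
print(f"Total: {total * 1000:.0f} ms ({len(body)} bytes)")
```

Note that this measures from wherever the script runs, typically over a wired connection; real mobile measurements must account for radio latency, which is exactly why the device-side tools above exist.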

A couple of weeks ago, we looked at some of the best Web performance articles written in 2011. There was so much good material out there that we couldn’t stop at just a single round-up!

Selecting Tools and Designing Tests

Jason at Performance Testing Professional published a couple of great guides last year on the subject of choosing the right testing framework for your company’s needs. In his first article, Jason discussed some guiding principles for selecting a performance testing vendor. Most companies only adopt a performance testing framework after experiencing a major issue in production, so Jason gears several of his 11 questions toward investigating this instigating incident. His other questions focus on factors such as how your company plans to integrate performance testing into its existing QA cycle, and how often new system builds are deployed into production. He also lays out seven factors for making a final decision, including initial price, overall maintenance cost, and the platform’s scripting capabilities and overall learning curve.

Continuing his analysis in a separate article, Jason reviews the three major types of load testing platforms – freeware, commercial, and cloud-based – and discusses which tools tend to suit which types of projects. He also bemoans that no one has yet pulled together the existing open source tools into a freeware platform that provides a workable alternative to the commercial standards. (As we will see below, though, at least a few people are thinking about how to do this in the mobile performance space.)

Both webmasters and web designers need to keep a close eye on website loading times. A slow response time will result in fewer visitors and lower profits. Load testing is done to ensure that a website remains responsive under heavy loads. Most webmasters don’t perform load tests, and so they discover that their website cannot handle a sudden influx of visitors at the worst possible moment: when it actually occurs.

Many a website has been “slashdotted”. Getting featured on a popular site like Slashdot or going viral on a social network like Twitter should be a moment of triumph. But if that traffic increase causes a website to slow down or even go down temporarily, its webmaster will have the heartbreak of watching both the extra exposure and the extra profits disappear down the drain.

Such a scenario can be avoided by load testing a website before a torrent of real visitors tests it for you. Load tests let web developers simulate those visitors, showing webmasters what level of traffic begins to degrade their websites’ response times. Testing thereby drives a process of web performance optimization and allows the website to deliver a superior user experience to its visitors.
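As a sketch of what the simplest possible load test looks like, the following Python script fetches one URL at increasing levels of concurrency and reports the median response time at each level. The URL is a placeholder; commercial and cloud-based tools add realistic user scenarios, ramp-up schedules, and reporting on top of this basic idea.

```python
# Minimal load-test sketch: fetch one URL at increasing concurrency levels
# and watch the median response time degrade. The URL is a placeholder;
# only test sites you own or have permission to test.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

URL = "https://www.example.com/"

def timed_fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

for workers in (1, 10, 50):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        times = list(pool.map(timed_fetch, range(workers * 4)))
    print(f"{workers:>3} concurrent users: median {median(times) * 1000:.0f} ms")
```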

Speed Beats Glamor

In the late 1990s, when Adobe Flash was first emerging as a favorite web technology, it seemed the Internet would soon be full of Flash websites. But this didn’t happen. Impressive as those websites might be visually, it was soon found that most web surfers didn’t have the patience to sit through Flash websites’ extended loading times. Users wouldn’t wait, and they are growing more impatient every day.

The need for balance between response time and new website technologies continues to this day. Just as engineers must ensure a bridge won’t fall down under the strain of extra cars and lorries, web developers must ensure their websites remain responsive under a flood of visitors.

The fact is that a snappy user experience beats a splendid one hands down. For the most part, users want to engage with a website’s content, not admire its fancy animations and appearance. They are unwilling to wait for a great website design.

The web performance firm Strangeloop offers several astonishing statistics about how a website’s loading time affects its visitors’ behavior:

  • Just a three-second wait was enough for 57% of web surfers to turn away from a website, and 80% of those will never return!
  • A one-second delay in page load time led to 11% fewer page views and a 7% loss in conversions.
  • By speeding up its average page load time from 6 seconds to 1.2 seconds, Shopzilla increased its revenues by a whopping 12%!
  • A 100-millisecond improvement in responsiveness at Amazon.com increased revenue by 1%.
  • Yahoo! reported that a 400-millisecond slowdown in page loading resulted in 9% less traffic.
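To make the conversion figure concrete, here is a back-of-the-envelope estimate based on the one-second/7% statistic above. The revenue and delay inputs are invented for illustration only.

```python
# Back-of-the-envelope impact estimate using the "7% conversion loss per
# second of delay" figure cited above. Revenue and delay inputs are
# invented for illustration.
annual_revenue = 5_000_000          # hypothetical yearly online revenue ($)
loss_per_second = 0.07              # from the Strangeloop figure above
delay_seconds = 2                   # hypothetical slowdown vs. a tuned site

lost_fraction = 1 - (1 - loss_per_second) ** delay_seconds
print(f"Estimated revenue at risk: ${annual_revenue * lost_fraction:,.0f} per year")
# -> roughly $675,500 for a 2-second delay on $5M in annual revenue
```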

While we’ve touched upon client-side caching in our series on Web performance, we haven’t discussed how client caching has grown richer and more useful over the years. In the initial days of the Web and the HTTP/1.0 protocol, caching was mostly limited to a handful of headers, including Expires, If-Modified-Since, and Pragma: no-cache. Since then, client caching has evolved to embrace greater granularity. Some newer technologies even permit the deployment of offline-aware, browser-based applications.

Browser Request Caching

The oldest and most common type of client-side caching is browser request caching. Built into the HTTP protocol standard, browser request caching allows the server to control how often the browser requests new copies of files from the server. We discussed the major aspects of browser request caching in part 1 of our series. Over time, Webmasters have taken to using different headers to improve caching on their sites, including:

Pragma: no-cache. This legacy directive dates from HTTP/1.0 and instructs a client that a specific response’s contents should never be cached. It is typically used for highly dynamic content that is apt to change from request to request.

Expires. Supported since HTTP/1.0, this header specifies an explicit expiration date for cached content. It can be superseded by the value of the Cache-Control header. For example, if Cache-Control: no-cache is sent in a response, this will take precedence over any value of the Expires header.

If-Modified-Since. Since the HTTP/1.0 protocol, clients have been able to use this header to request that the server only send data if the resource has changed since the specified date. If there have been no changes, the server returns an HTTP 304 Not Modified response.
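These headers are easy to observe in action. The sketch below makes one request to note the caching headers a server returns, then repeats the request with If-Modified-Since to provoke a 304. Standard library only; the URL is a placeholder.

```python
# Observing browser-style conditional requests by hand.
# Standard library only; the URL is a placeholder.
import urllib.error
import urllib.request

URL = "https://www.example.com/"

# First request: note the caching headers the server returns.
with urllib.request.urlopen(URL) as resp:
    last_modified = resp.headers.get("Last-Modified")
    print("Cache-Control:", resp.headers.get("Cache-Control"))
    print("Expires:      ", resp.headers.get("Expires"))
    print("Last-Modified:", last_modified)

# Second request: send If-Modified-Since. For an unchanged resource the
# server should answer HTTP 304 Not Modified with an empty body.
if last_modified:
    req = urllib.request.Request(URL, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            print("Status:", resp.status)   # 200: the resource has changed
    except urllib.error.HTTPError as err:
        print("Status:", err.code)          # 304: the cached copy is still fresh
```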

In our past installments on Web performance optimization, we’ve seen how caching, server configuration, and the use of Content Delivery Networks (CDNs) can increase a Web site’s responsiveness and improve Web performance metrics. Most of the techniques we’ve reviewed have focused on configuring the Web server or optimizing server applications. Unfortunately, a Web page that downloads quickly but is slow to parse or execute on the client will appear just as slow to a user as if the Web server were on its last megabyte of memory. In this article, we’ll discuss some ways that Web page content can be streamlined for an optimal client-side experience.

Streamline JavaScript Includes

JavaScript abounds on the Web. From jQuery to Dojo, the Web is full of JavaScript libraries that can easily be dropped into a Web application. And any site whose developers are actively adding features is going to accrue its own storehouse of .js files. Unless a site’s JavaScript is carefully managed, its Web pages could end up making a dozen or more separate requests for scripts. As we’ve already discussed in our article on web performance optimization non-caching strategies, the more requests your site makes, the slower it will load.

Tip of the hat to TechAttitude.com for the graphic showing how Web page sizes and the number of objects per page have grown tremendously over the past 16 years. In that interesting article they state, “…the average size of a web page has increased by more than five times since 2003” and “the use of multimedia is increasing by 100% each year”.

Follow these guidelines to manage and reduce the burden of JavaScript on your Web pages.
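One of the most effective of these guidelines is simply concatenating script files, turning a dozen requests into one. Below is a minimal sketch of such a build step in Python; the file names are hypothetical, and a production build would typically also minify the result.

```python
# Minimal build step: concatenate many script files into one bundle,
# turning a dozen <script> requests into a single request.
# File names are hypothetical; a real build would also minify.
from pathlib import Path

scripts = ["jquery.js", "plugins.js", "site.js"]   # load order matters
bundle = Path("bundle.js")

with bundle.open("w") as out:
    for name in scripts:
        out.write(f"/* --- {name} --- */\n")
        out.write(Path(name).read_text())
        out.write(";\n")   # guard against files missing a trailing semicolon

print(f"Wrote {bundle} ({bundle.stat().st_size} bytes)")
```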

So far in our series on Web Performance Optimization, we’ve focused on how to reduce the number of requests between client and server through caching, and how to make requests more efficient by managing server resources. Another strategy in Web optimization is intelligent distribution of resources across the Internet, which can greatly reduce request latency by locating redundant copies of Web content on multiple servers spread across the Internet. In this installment of our series, we focus on content delivery networks (CDN), a technology that increases throughput by bringing content closer to the people requesting it.

What is a CDN?

In the simplest Web site configuration, a single Web server services requests from multiple clients. While this is often good enough for the simplest, lowest traffic Web sites, complex Web sites that need to scale to thousands or millions of visitors require more processing power. This is why many sites have resorted to using Web server farms, which are clusters of multiple Web servers offering redundant copies of a site’s content. Web farms use load balancing software to monitor the amount of load on any one server. They can also use this information to route requests to the server with the least load at a given point in time.
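The routing rule itself can be tiny. Here is a toy sketch of least-load selection; the server names and connection counts are invented, and a real load balancer tracks these numbers continuously.

```python
# Toy least-load routing: send each request to the server currently
# handling the fewest active connections. Names and counts are invented.
active_connections = {"web1": 12, "web2": 3, "web3": 8}

def pick_server(loads):
    # The server with the smallest connection count wins.
    return min(loads, key=loads.get)

print("Routing request to", pick_server(active_connections))   # -> web2
```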

A CDN is a type of Web farm or server cluster, except that instead of using a single farm or cluster, the servers are spread out over the Internet in multiple geographical locations. These are called edge servers, because they are located at the extremes, or edges, of the Internet, instead of all being located off of a central Internet backbone link. The goal of a CDN is to decrease the time it takes to deliver content to a specific user based on that user’s location.

Let’s say, for example, that a company based in New York receives a request for Web content from a user in Seattle, WA. In a traditional setup, the Seattle computer’s request would have to find the most efficient route on the Internet to New York, usually via a busy backbone link. In a CDN configuration, the CDN could tell the Seattle client that its nearest edge server is on a subnet in Portland, Oregon. By obtaining the content from a server closer on the network, the client greatly reduces request latency.
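A simplified sketch of that routing decision appears below: the client is matched to the geographically nearest edge server by great-circle distance. Real CDNs weigh network topology, congestion, and server load as well; the coordinates and edge locations here are illustrative.

```python
# Nearest-edge selection by great-circle (haversine) distance. A
# simplification: real CDNs also weigh network topology and load.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # mean Earth radius ~6371 km

edges = {
    "New York": (40.71, -74.01),
    "Portland": (45.52, -122.68),
    "Dallas":   (32.78, -96.80),
}
client = (47.61, -122.33)   # Seattle

nearest = min(edges, key=lambda name: haversine_km(*client, *edges[name]))
print(f"Serving client from the {nearest} edge")   # -> Portland
```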

In recent years, the good folks at WordPress have made it easier to use their free software not just as a blog, but as the hub of a rich content management system (CMS), complete with static content and custom data types. Given that, it’s no surprise that Webmasters and businesses around the world are increasingly basing entire sites around the platform. (And did we mention the “free” thing?)

While WordPress runs decently out of the box, site operators who employ a few tweaks and follow a few rules of thumb will achieve much better performance in the long run. In this article, we look at the best practices that will keep your WordPress site humming efficiently.

Limit the Number of Plugins

WordPress plugins are great. With a few clicks, ordinary users can add complex functionality to WordPress that otherwise might have taken hundreds or thousands of hours of programming.

But plugins also present a performance danger. Each active plugin in the WordPress plugins directory must be loaded every time a new request is made, so every plugin you install lengthens every page load. This performance “gotcha” particularly affects sites on shared virtual hosting services (e.g., Dreamhost, HostGator). Performance degradation can be dramatic on shared hosts; in some situations, user requests may never complete.

The number of plugins that will cause a site to slow down varies with many factors. The author has been told by representatives of HostGator that they encourage customers on their virtual hosting service to limit the number of installed plugins to seven. Webmasters should select the WordPress plugins they employ carefully, and use a load testing service to measure the performance impact of any new plugin they add to their system.
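Absent a full load testing service, a rough before-and-after comparison already says a lot: measure the median response time of a page, activate the plugin, and measure again. A minimal sketch follows; the URL is a placeholder for your own WordPress site.

```python
# Rough before/after check of a plugin's cost: compare median response
# times with the plugin inactive and active. The URL is a placeholder
# for your own WordPress site.
import time
import urllib.request
from statistics import median

def median_response_ms(url, samples=10):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        times.append((time.perf_counter() - start) * 1000)
    return median(times)

print(f"Median response: {median_response_ms('https://www.example.com/'):.0f} ms")
# Run once before and once after activating the plugin; the difference is
# the plugin's per-request overhead under light load.
```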

In our last article on performance tuning, we examined how to squeeze the most performance out of an Apache server. In this installment, we’ll take a look at how to apply some of these same principles to Microsoft’s Internet Information Server (IIS), which ships as part of Windows Server.

While its share of the Web server market has declined in recent years relative to Apache, IIS remains the second most widely deployed Web server on the Internet. Its deep integration with Windows and its host of management utilities make it a great choice for anyone hosting Web content and applications in a Windows environment. With a little performance tuning (aided, of course, by load testing), an IIS machine can perform just as well as an equivalent Apache configuration under high load.

The Usual Suspects: Compression and Connection Management

Many of the techniques a team might use to enhance IIS performance are similar to the techniques used on Apache, as they involve configuration of the HTTP protocol.
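One quick check applies to IIS and Apache alike: confirm that the server actually compresses responses when the client offers to accept them. A minimal sketch using the Python standard library; the URL is a placeholder.

```python
# Quick compression check: does the server honor Accept-Encoding?
# Works against IIS or any other HTTP server; the URL is a placeholder.
import urllib.request

req = urllib.request.Request(
    "https://www.example.com/",
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    encoding = resp.headers.get("Content-Encoding", "none")
    print("Content-Encoding:", encoding)   # "gzip" means compression is enabled
```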

Your website is a little slow – so what? Well, it is probably costing you money. I have been researching published facts about web performance because we are always trying to understand our industry better. This post should help you realize that improving your web application performance can directly impact your bottom line by 10% or more. Don’t believe me? Read on…
