Welcome to the fourth part of our Web Performance Optimization series. We hope these suggestions and best practices help you make your site faster and handle higher traffic. We view performance engineering as an iterative process: developers and testers run load tests, analyze the measurements, and tune the system incrementally.

We’ve talked extensively on this blog about how server applications can use caching to improve Web server performance, often dramatically. In addition to caching, Web site developers and administrators can employ other techniques to reduce the size of wire transmissions and increase document delivery speed.

File Consolidation

Web servers can reduce the number of requests a client makes by reducing the number of separate files the client must fetch, typically by combining files of the same type. From a maintainability standpoint, it often makes sense for a Web site developer to store the Cascading Style Sheet (CSS) code for her site in several separate files. A Web browser, however, doesn’t care whether the CSS code is contained in four small files or one monstrous file; all of the rules share the same namespace once they’re loaded into the browser.

According to Yahoo!’s Exceptional Performance team:

“80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.”

Minify is a PHP5 application that combines multiple JavaScript and CSS files into a single file. This simple utility can eliminate anywhere from 2 to 12 HTTP requests for a single page. Minify goes the extra mile and applies GZip compression and cache control headers to these unified files for maximum performance.
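To illustrate the idea (this is not Minify’s actual PHP internals), here is a minimal Python sketch of the same consolidation step, assuming four hypothetical stylesheet files:

    import gzip
    from pathlib import Path

    # Hypothetical stylesheets that would otherwise cost four separate requests.
    css_files = ["reset.css", "layout.css", "typography.css", "theme.css"]

    # Concatenate in a deliberate order, since later CSS rules override earlier ones.
    combined = "\n".join(Path(name).read_text() for name in css_files)

    # Pre-compress the bundle, as Minify does, so it can be served GZip-encoded.
    Path("site.min.css.gz").write_bytes(gzip.compress(combined.encode("utf-8")))

The page then references the single bundled file, turning four requests into one.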

Another key technique for consolidating multiple files is to take advantage of CSS sprites. This technique combines multiple images into one composite image that the browser can reuse for many different parts of a page. CSS background positioning then displays only the portion of the sprite you need.

HTTP requests are greatly reduced because one request replaces what could be hundreds. I have seen some customers’ ecommerce pages that contain over 300 images; a sprite could produce a 300-to-1 reduction in requests for such a page. Multiply that overhead savings by, say, 10,000 concurrent users, and the result is a tremendous performance improvement. Most modern browsers support CSS backgrounds and positioning, which has allowed developers to adopt this technique widely.

Additionally, the combined sprite often has a smaller file size than the sum of the individual images, because the overhead of the multiple color tables and formatting information required by separate images is reduced or eliminated. That efficiency is gained even though whitespace has been added between the images in the sprite.
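As a rough illustration, the following Python sketch (using the Pillow imaging library, with hypothetical icon files) stacks several small images into one sprite and prints the CSS background-position rules that select each slice:

    from PIL import Image  # Pillow; pip install Pillow

    icons = ["cart.png", "search.png", "user.png"]  # hypothetical 32x32 icons
    sprite = Image.new("RGBA", (32, 32 * len(icons)))  # stack the icons vertically

    for i, name in enumerate(icons):
        sprite.paste(Image.open(name), (0, 32 * i))
    sprite.save("sprite.png")

    # Emit CSS that shifts the sprite so only the wanted icon shows through.
    for i, name in enumerate(icons):
        cls = name.rsplit(".", 1)[0]
        print(".icon-%s { background: url(sprite.png) 0 -%dpx; "
              "width: 32px; height: 32px; }" % (cls, 32 * i))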

Connection Keep-Alive

Before we proceed, it should be noted that a major performance enhancement is already built into the HyperText Transfer Protocol (HTTP) itself. By design, HTTP is a stateless protocol: each request is independent of every other request. If a Web page references a total of 12 scripts and images, for example, an HTTP client must make 12 separate requests, and without further measures it must open and tear down 12 separate TCP/IP connections.

To reduce this inefficiency, clients and Web servers that implemented the HTTP/1.0 protocol began to use the Connection: Keep-alive header to indicate that a single TCP/IP connection should be reused for subsequent file requests. HTTP/1.1 made persistent connections the default. Clients and servers can exert control over this behavior by using the HTTP header Connection: close to force the shutdown of a persistent connection.
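A quick way to see persistent connections at work is Python’s standard http.client, which reuses one TCP connection across requests by default under HTTP/1.1 (the host and paths below are placeholders):

    import http.client

    # One TCP connection, reused for several requests (the HTTP/1.1 default).
    conn = http.client.HTTPConnection("example.com")
    for path in ["/", "/style.css", "/logo.png"]:  # hypothetical resources
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # drain the body before reusing the connection
        print(path, resp.status, len(body), "bytes")
    conn.close()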

Clients can gain a further speed advantage with persistent connections by pipelining requests, i.e., issuing multiple requests without waiting for the response to each previous one. Pipelining only applies to idempotent requests such as GET and HEAD; POST requests cannot be pipelined. Unfortunately, only the Opera browser fully supports pipelining. Firefox supports it through a configuration option that is not enabled by default, while neither Google Chrome nor Internet Explorer 8 supports it in any form.
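Because mainstream HTTP libraries don’t pipeline either, demonstrating it takes a raw socket. The sketch below sends two GET requests back-to-back before reading anything; whether the server actually honors pipelining varies, so treat this as an experiment rather than a production technique:

    import socket

    HOST = "example.com"  # a placeholder host
    request = ("GET / HTTP/1.1\r\n"
               "Host: %s\r\n"
               "Connection: keep-alive\r\n\r\n" % HOST).encode("ascii")

    sock = socket.create_connection((HOST, 80))
    sock.sendall(request * 2)  # two GETs sent back-to-back, no waiting in between

    sock.settimeout(2.0)
    chunks = []
    try:
        while True:  # read until the server goes quiet; fine for a demo
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    except socket.timeout:
        pass
    sock.close()

    # Each response starts with a status line, so count them crudely.
    print(b"".join(chunks).count(b"HTTP/1.1 "), "responses on one connection")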

HTTP Compression

Compression is another HTTP/1.1 performance technology NOT enabled by default. HTTP compression uses one of several standard compression algorithms (usually GZip or Deflate) to compress data at the server prior to transmission to the client. To use compression, a client sends an Accept-Encoding HTTP header to the server, listing the compression methods it supports. In its response, the server includes a Content-Encoding header that specifies which compression algorithm was used.
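The negotiation is easy to observe from Python’s standard library; urllib does not decompress automatically, which makes both headers visible (the URL is a placeholder):

    import gzip
    import urllib.request

    req = urllib.request.Request("http://example.com/",
                                 headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        encoding = resp.headers.get("Content-Encoding")
        body = resp.read()
        if encoding == "gzip":  # the server chose to compress the response
            body = gzip.decompress(body)
    print("Content-Encoding:", encoding, "| decoded size:", len(body), "bytes")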

All major Web browsers and Web servers implement HTTP compression, and it can have a marked impact on performance. As Jeff Atwood noted long ago, compression can reduce the size of plain text by up to 75%.
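A back-of-the-envelope check of that figure takes a few lines; HTML is repetitive, so it often compresses even better than 75% (the sample below is synthetic, and real pages will vary):

    import gzip

    html = ("<!DOCTYPE html><html><body>"
            + "<p>Repetitive markup compresses extremely well.</p>" * 200
            + "</body></html>")
    raw = html.encode("utf-8")
    packed = gzip.compress(raw)
    print("original: %d bytes, gzipped: %d bytes (%.0f%% smaller)"
          % (len(raw), len(packed), 100 * (1 - len(packed) / len(raw))))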

Some browsers use their own custom compression schemes. For example, Google Chrome implements the Shared Dictionary Compression over HTTP (SDCH) protocol, a proposed extension to HTTP that uses delta compression to reduce the transmission of shared elements such as headers and footers.

Image Size Reduction

As good as HTTP compression is, it only applies to text. Image data is already compressed, so further compression produces meager gains. For images, Web site administrators are better off reducing the total size of the images themselves. The folks at the Yahoo! Developer Network offer some killer tips for reducing image size, including the following (a sketch applying two of these tips appears after the list):

  • GIF color palette reduction;
  • Using PNGs instead of GIFs; and
  • Using lossless compression with JPEG images.
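Here is a small Python sketch of the first two tips using the Pillow imaging library (the file names are hypothetical):

    from PIL import Image  # Pillow; pip install Pillow

    img = Image.open("banner.gif")  # hypothetical source image

    # Tip 1: reduce the color palette (here to 64 colors).
    reduced = img.convert("RGB").quantize(colors=64)
    reduced.save("banner-64.gif")

    # Tip 2: save the same palette image as a PNG, which usually encodes smaller.
    reduced.save("banner-64.png", optimize=True)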

Another tool available through the Yahoo! Developer Network for image performance optimization is Smush.it. This tool uses optimization techniques specific to each image format to remove unnecessary bytes from image files. It is a “lossless” tool, which means it optimizes the images without changing their look or visual quality. After Smush.it runs on a Web page, it reports how many bytes would be saved by optimizing the page’s images and provides a downloadable zip file with the minimized image files.
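Smush.it is a hosted service, but the same lossless idea can be approximated locally. As one example, Pillow’s optimize flag re-encodes a PNG with better compression settings while leaving the pixels untouched (again, the file names are hypothetical):

    import os
    from PIL import Image  # Pillow; pip install Pillow

    def optimize_png(path, out_path):
        """Re-encode a PNG with optimized settings; the pixels are unchanged."""
        Image.open(path).save(out_path, optimize=True)
        before, after = os.path.getsize(path), os.path.getsize(out_path)
        print("%s: %d -> %d bytes (%d saved)" % (path, before, after, before - after))

    optimize_png("product.png", "product-smushed.png")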

Summary

There are many ways to improve the speed of your site. This Web Performance Optimization series, of which this is the fourth part, is designed to give you practical suggestions to try in your system environment.

Our load testing tool is great at measuring the response times and errors from your site, but it doesn’t tell you how to tune the components of your architecture to improve performance. These tips and tricks will hopefully assist you in taking the metrics LoadStorm provides and making tangible changes.

By tweaking an aspect of your Web application, load testing to get performance results, and repeating this cycle, we know your site will reach the high-volume objectives of your CEO and marketing department!

Please submit your comments, whether your feedback is positive or negative. If you have a good performance tip, we beg you to share it with us. I’ll put 500 Storm on Demand users in your account if you provide some useful web performance optimization techniques! Thank you in advance for contributing. 😉
