“My name is Jovana.”
Usually when I receive an email that starts out like the sentence above, I figure I’m about to be solicited for something to which my wife would strongly object. But upon further reading, I realized this email was legit, and I became intrigued by the request.
“I found your article extremely interesting and would like to spread the word for people from Ex Yugoslavia.”
That was very cool. Everyone enjoys hearing that their writing is valuable to someone – even better if they want to pass it on to their friends!
As Web sites grow in complexity, the amount of content they host from third party sources continues to climb. “Third party content” refers to any content hosted by a separate company and integrated into a Web site using server-side HTML injection, IFRAME hosting, or a client-side AJAX include.
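As a concrete illustration of the last of those mechanisms, a client-side include, here is a minimal TypeScript sketch of loading a third party widget script asynchronously. The widget URL and container id are hypothetical placeholders, not any real vendor’s API.

```typescript
// Minimal sketch of a client-side third party include (hypothetical URL and ids).
function loadThirdPartyWidget(src: string, containerId: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true; // download without blocking HTML parsing
  script.onerror = () => {
    // If the third party host is slow or down, the rest of the page still renders.
    console.warn(`Third party widget failed to load: ${src}`);
  };
  document.getElementById(containerId)?.appendChild(script);
}

// Example call with a made-up widget host, not a real endpoint.
loadThirdPartyWidget("https://widgets.example.com/follow.js", "sidebar-widget");
```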
This trend has its advantages and disadvantages. On the one hand, hosting third party content lets sites add functionality that might otherwise take months or years to build themselves. On the other hand, that additional functionality comes at a performance cost. The more third party content a site hosts, the greater its risk of sluggish load times, request timeouts, and client-side parsing errors. The graphic above is a simple illustration of the potential performance hit from adding a single third party widget to your site; the numbers come from my own measurements after adding a Quora “follow me” widget. More below…
In this post, we’ll:
- Examine the types of third party content typically hosted on a Web site
- Assess the performance impact of each type of third party content
- Explore ways sites can manage their third party content for optimal performance
Welcome to the fourth part of our Web Performance Optimization series. We hope these suggestions and best practices will help you improve the speed of your site. We view performance engineering as an iterative process in which developers and testers run load tests, analyze the measurements, and tune the system incrementally. Our goal is to help you make your site faster and able to handle higher traffic.
We’ve talked extensively on this blog about how server applications can use caching to improve Web server performance, often dramatically. In addition to caching, Web site developers and administrators can employ other techniques to reduce the size of wire transmissions and increase document delivery speed.
File Consolidation
Web servers can reduce the number of requests generated by the client by reducing the number of separate files the client must fetch. Servers can facilitate this by combining separate files of the same type. From a maintainability standpoint, it often makes sense for a Web site developer to store the Cascading Style Sheet (CSS) code for her site in several separate files. A Web browser, however, doesn’t care whether the CSS code is contained in four small files or one monstrous file; all of the elements share the same namespace once they’re loaded into the browser.
According to The Exceptional Performance Team:
“80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.”
Minify is a PHP5 application that combines multiple JavaScript and CSS files into a single file. This simple utility can eliminate anywhere from 2 to 12 HTTP requests for a single page. Minify goes the extra mile and applies GZip compression and cache control headers to these unified files for maximum performance.
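To make the idea concrete, here is a rough TypeScript (Node.js) sketch of the same general approach, not Minify itself: concatenate several stylesheets, crudely strip whitespace, and pre-compress the result with gzip. The file names are hypothetical, and the whitespace stripping is only a stand-in for real minification.

```typescript
// Combine several CSS files into one, so the browser makes one request instead of four.
import { readFileSync, writeFileSync } from "node:fs";
import { gzipSync } from "node:zlib";

const files = ["reset.css", "layout.css", "theme.css", "print.css"]; // hypothetical file names

const combined = files.map((f) => readFileSync(f, "utf8")).join("\n");

// Very crude "minification": collapse whitespace around CSS punctuation.
const minified = combined
  .replace(/\s+/g, " ")
  .replace(/\s*([{}:;,])\s*/g, "$1");

writeFileSync("site.min.css", minified);
writeFileSync("site.min.css.gz", gzipSync(minified)); // serve with Content-Encoding: gzip
```

In production you would also send far-future Expires or Cache-Control headers with the combined file, which is exactly what Minify’s cache control handling is for.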
Another key technique for consolidating multiple files is to take advantage of CSS sprites. This technique puts multiple images into one composite image that can be used by the browser for many different individual parts of a page. CSS background positioning is used to display only the image you need from the sprite.
HTTP requests are greatly reduced because one request replaces potentially hundreds. I have seen some customers’ ecommerce pages that contain over 300 images; a sprite could produce a 300-to-1 reduction in requests for such a page. Multiply that overhead savings by, say, 10,000 concurrent users, and the result is a tremendous performance improvement. Most modern browsers support CSS backgrounds and positioning, which has allowed developers to adopt this performance technique.
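Here is a minimal sketch of the positioning idea, written as a small TypeScript helper that applies the same background-image and background-position values a stylesheet rule would. The sprite path, icon names, and pixel offsets are made up for illustration.

```typescript
// One composite image holds every icon; only the background offset changes per icon.
const spriteOffsets: Record<string, [number, number]> = {
  cart: [0, 0],        // hypothetical offsets measured from the sprite sheet
  search: [-32, 0],
  user: [-64, 0],
};

function showIcon(el: HTMLElement, name: string): void {
  const [x, y] = spriteOffsets[name];
  el.style.width = "32px";
  el.style.height = "32px";
  // Equivalent to a CSS rule such as: background: url("/img/icons.png") -32px 0 no-repeat;
  el.style.background = `url("/img/icons.png") ${x}px ${y}px no-repeat`;
}

showIcon(document.getElementById("cart-icon") as HTMLElement, "cart");
```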
In the previous installments of our Web performance series, we’ve examined how developers can employ Web server caching and application caching to speed up their Web applications. In this installment, we’ll see how caching entire data sets can increase platform efficiency.
What is Data Caching?
Data caching is a form of application caching that stores the results of one or more database queries in the application server’s memory. For our purposes, we use application caching to refer to caching any component in an application server’s memory, and data caching to refer to caching a collection of data for future querying and sorting.
There are two main approaches to data caching:
- Data set caching. Data returned from a database is stored in a data set, an in-memory representation of data that mimics the column/row structure of a relational database.
- In-memory databases. An in-memory database is a relational database that operates completely in a server’s RAM, as opposed to storing data to a hard drive. In-memory databases can be used for fast access to data previously retrieved from a traditional, disk-based RDBMS.
Let’s dig deeper into each of these.
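First, here is a minimal TypeScript sketch of data set caching. The Product row shape, the fetchProducts() stand-in for the real database call, and the one-minute TTL are all illustrative assumptions, not part of any particular framework.

```typescript
// Cache whole query results (a "data set" of rows) in the application server's memory.
interface Product { id: number; name: string; price: number }

// Hypothetical stand-in for the real data layer; imagine an expensive database round trip here.
async function fetchProducts(category: string): Promise<Product[]> {
  return [{ id: 1, name: `Sample item in ${category}`, price: 9.99 }];
}

const cache = new Map<string, { rows: Product[]; expires: number }>();
const TTL_MS = 60_000; // keep cached result sets for one minute

async function getProducts(category: string): Promise<Product[]> {
  const hit = cache.get(category);
  if (hit && hit.expires > Date.now()) {
    return hit.rows; // served from memory, no database round trip
  }
  const rows = await fetchProducts(category); // cache miss: query once, then reuse
  cache.set(category, { rows, expires: Date.now() + TTL_MS });
  return rows;
}
```

The in-memory database approach pushes the same idea further: an engine such as SQLite can run entirely in RAM (its ":memory:" mode), so the cached data can still be queried and sorted with SQL rather than with application code.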
In the first installment of this series, we examined how Web server caching is implemented. Caching stores critical pieces of information in memory or on a local hard drive for subsequent rapid retrieval. In this installment, we’ll look at how Web application environments can use application caching to reduce load and increase performance.
What is Application Caching?
Application caching stores calculated components in memory for future use, either by the same user or multiple users. Because of the complex architecture of modern Web applications, application caching can produce huge gains in Web site response times.
A Web application is usually composed of multiple architectural layers. At the simplest level, Web applications consist of the client, which runs the application (usually a series of HTML pages and associated technology); and the server, which processes data and generates output. When we zoom in on the server side, however, we find that most non-trivial applications consist of several additional layers. Typically, these layers include the UI layer, which generates the user interface (usually HTML); the business logic layer, which implements business rules; and the data layer, which stores and retrieves data from one or more data sources.
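As a minimal sketch of what application caching looks like in that layered picture, the TypeScript below caches a rendered HTML fragment produced by the UI layer so that repeat requests skip the business logic and data layers. renderProductList() is a hypothetical stand-in for whatever those layers actually do.

```typescript
// Hypothetical, expensive component: imagine calls into the business logic and data layers here.
function renderProductList(category: string): string {
  return `<ul class="products"><li>Items for ${category}</li></ul>`;
}

// Application cache: computed components kept in the server's memory for reuse across requests.
const fragmentCache = new Map<string, string>();

function getProductListHtml(category: string): string {
  let html = fragmentCache.get(category);
  if (html === undefined) {
    html = renderProductList(category); // first request pays the full cost
    fragmentCache.set(category, html);  // later requests reuse the cached fragment
  }
  return html;
}
```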
Have you ever had a Krispy Kreme Burger? It’s definitely over the top. Too much of a good thing.
I read a good article this morning that presents a case study of scaling a web site. 6 Ways to Kill Your Servers – Learning How to Scale the Hard Way by Steffen Konerow makes some excellent points about how to avoid system crashes in your web application. Surprisingly, there is no direct mention of load testing the site re-launch before going live; load testing is implied throughout, yet never specifically addressed.