If you could drop the page rendering time to 1/5th of its current time, would you do it?

Wouldn’t that be a 5x improvement? As a web performance geek, wouldn’t you jump at the chance to make your user experience five times faster?

Sounds good, but how? Twitter engineers have offered us a tremendous case study to consider. It centers on the cost of running JavaScript on the browser.

Distributed Processing – Put More on the Client Side

Web applications keep including more and more JavaScript in order to improve the user experience. The terms “Web 2.0” and “Rich Internet Application” are now old news, but the underlying technologies continue to grow quickly, both in deployment and in available tooling. Browsers are becoming much more complex as they add the capabilities needed to run sophisticated client-side code.

It sure seems to me that the functionality of browsers and operating systems is converging. So what? That all sounds good…warm and fuzzy for web developers. New toys! Better interactivity! Cool features on my app!

Yeah, but aren’t you a bit skeptical? Have you been coding long enough to recognize the downside of complexity? There is always a cost associated with deploying new technologies, but most of us web geeks have overlooked the performance hit of RIA/JavaScript because it fits under the big architectural umbrella best described as “distributed processing”.

Back in 1999, I was teaching XML courses where it was common to hear me say something like:

“If we push as much of the processing overhead as possible to the individual PC or handheld device, then we can improve overall system performance. Rather than make our server farm 100x more powerful, let’s shift the running code to 10,000 machines on the client side.”

That was a good theory. It worked for a decade or so. But now we are seeing the limitations of this architectural philosophy. The client side can’t keep up. We have overloaded its capabilities. We are assuming too much about the browsers’ ability to process efficiently. Smartphones and tablets aren’t as fast as we think they are.

If we break down what determines rendering speed, today’s Rich Internet Applications often take a very significant performance hit from JavaScript execution. In Twitter’s fully client-side JavaScript design, users wouldn’t see anything until the JavaScript was downloaded and processed. Result? Slower user experience. Lower user satisfaction.
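To make the problem concrete, here is a minimal sketch of the fully client-rendered pattern, written in TypeScript with a hypothetical /api/timeline endpoint and markup (not Twitter’s actual code): the server ships an empty shell, and nothing appears until the script has been downloaded, parsed, and its API call has returned.

```typescript
// Minimal sketch of a fully client-rendered page (hypothetical endpoint and markup).
// The server ships an empty <div id="timeline"></div>; nothing is visible until
// this script has been downloaded, parsed, and the API round trip has completed.

interface Tweet {
  author: string;
  text: string;
}

async function renderTimeline(): Promise<void> {
  const container = document.getElementById("timeline");
  if (!container) return;

  // The user stares at a blank region while this request completes.
  const response = await fetch("/api/timeline");
  const tweets: Tweet[] = await response.json();

  container.innerHTML = tweets
    .map((t) => `<article><strong>${t.author}</strong><p>${t.text}</p></article>`)
    .join("");
}

// Rendering is gated on script download, parse, and execution.
document.addEventListener("DOMContentLoaded", renderTimeline);
```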

The Twitter engineers concluded: “The bottom line is that a client-side architecture leads to slower performance because most of the code is being executed on our users’ machines rather than our own.”

Let me say that another way: Too much JavaScript in web pages is killing performance!

Heresy? Is Scott out of his mind? Can empirical evidence be shown to prove this conclusion? Probably a little of each.

Twitter Innovates by Returning to Server Side

Consider what Twitter is doing now. They have proven that web performance can be greatly enhanced by moving more of the processing to the server-side.

According to Dan Webb, Engineering Manager of Twitter’s Web Core team:

“We took the execution of JavaScript completely out of our render path. By rendering our page content on the server and deferring all JavaScript execution until well after that content has been rendered, we’ve dropped the time to first Tweet to one-fifth of what it was.”

The result was a 5x improvement in speed! And the metric used – time to first Tweet – is a direct reflection of what users actually see in their browser, which makes it Twitter’s key measurement of perceived performance.
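For contrast with the client-rendered sketch above, here is a rough sketch of the server-rendered pattern Webb describes, using Node’s built-in http module rather than Twitter’s actual stack; the data source, markup, and the deferred enhance.js script are all hypothetical. The HTML for the first view is assembled on the server, and client-side JavaScript is deferred so it never blocks that first content.

```typescript
// Sketch of the server-rendered pattern: HTML for the first view is built on the
// server, and client-side JavaScript is deferred so it does not block that content.
// Hypothetical data source, markup, and script; not Twitter's actual stack.
import { createServer } from "http";

interface Tweet {
  author: string;
  text: string;
}

function renderPage(tweets: Tweet[]): string {
  const items = tweets
    .map((t) => `<article><strong>${t.author}</strong><p>${t.text}</p></article>`)
    .join("");

  // The browser can paint the timeline as soon as this HTML arrives;
  // enhance.js (hypothetical) loads with `defer`, after the content is parsed.
  return `<!doctype html>
<html>
  <head><script src="/js/enhance.js" defer></script></head>
  <body><div id="timeline">${items}</div></body>
</html>`;
}

createServer((_req, res) => {
  // In a real app this would come from a datastore or an internal API.
  const tweets: Tweet[] = [{ author: "@example", text: "Server-rendered hello." }];
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(renderPage(tweets));
}).listen(3000);
```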

So, speed can be greatly improved by going “old school” with more of the processing on the server. In Twitter’s case, the old school was really only a couple of years ago. Webb shares some insight into the history of their decisions prior to this innovative performance optimization:

“When we shipped #NewTwitter in September 2010, we built it around a web application architecture that pushed all of the UI rendering and logic to JavaScript running on our users’ browsers and consumed the Twitter REST API directly, in a similar way to our mobile clients. That architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server.”

OK, they bought into the heavier-client concept two years ago. What Webb is saying fits perfectly with what I was evangelizing back in 1999. It fits with what my computer science professors preached back in the 1980s about distributed processing.

Brilliant! Shift the processing to the consumer! Performance problems solved!

Perhaps not.

Take Control of your Front-end Performance

When I click a link, my expectation is to see content as soon as possible. Recent studies say that we want to see something in less than 2 seconds or we start to have an emotional letdown that affects our “state of satisfaction” with the site.

Twitter engineers were focused on that speed. They went searching for ways to tune their architecture to produce performance gains. They experimented and found the answer in what would seem a step backward in technology. Move rendering to the server?! Huh? You’ve got to be kidding!

Webb says:

“To improve the twitter.com experience for everyone, we’ve been working to take back control of our front-end performance by moving the rendering to the server. This has allowed us to drop our initial page load times to 1/5th of what they were previously and reduce differences in performance across browsers.”

Not only does this approach get content to the user in one-fifth of the original time, it also evens out performance differences across browsers. Very cool side effect, don’t you think?

It makes sense when I think about it because older versions of IE are notoriously poor performers, and they make us web developers look bad. Users don’t really know or care that the reason the page takes 15 seconds to load isn’t my fault. Mostly, they assume I’m a bad coder. They subconsciously conclude that the owner of this site is stupid because it moves so slowly. Slow is a synonym for “dumb” or “poor quality” or “low value” in many contexts. That’s why I get mad at users who hang on to IE 6 or 7 – it hurts my reputation.

What a wonderful way to overcome the “IE penalty”! Take away much of the thinking that is difficult (slow) for the browser.

Conclusion: Re-think Your Architecture, Tune, Test, Iterate

All web applications are different in the way they perform, and their requirements aren’t the same. Will your users accept a 5-second render time? Does your app need sub-second appearance of some content?

I recommend you explore the impact of reducing JavaScript’s control over your UI. Experiment with moving more of the page’s pre-processing to the server side. Will Twitter’s model get you significant performance gains?

There is only one way to properly attack web application performance optimization – scientifically. Analyze your requirements and plan the project well. Iterate! Test, tune, test, tune, test…

Limit your tuning to one change, then run a performance test.

Why should you give energy to load and performance testing? Why is an iterative process of testing and tuning a better approach? Because you can’t control what you don’t measure. If you don’t test, you won’t know the speed. If you don’t tune one item at a time between tests, you won’t know what tweak produced what result.
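Even a crude timing harness beats guessing. The sketch below (Node 18+ with the global fetch; the URL and run count are placeholders) measures only the server response and body download, not full browser rendering, but it is enough to compare one tuning change against the previous run.

```typescript
// Rough before/after timing harness: run it, make exactly one tuning change,
// then run it again and compare the medians. URL and run count are placeholders.
import { performance } from "perf_hooks";

async function timeRequest(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url); // Node 18+ global fetch
  await res.text();             // include time to read the full body
  return performance.now() - start;
}

async function main(): Promise<void> {
  const url = "https://example.com/"; // replace with the page under test
  const samples: number[] = [];
  for (let i = 0; i < 10; i++) {
    samples.push(await timeRequest(url));
  }
  samples.sort((a, b) => a - b);
  const median = samples[Math.floor(samples.length / 2)];
  console.log(`median page fetch: ${median.toFixed(0)} ms over ${samples.length} runs`);
}

main().catch(console.error);
```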

Take note of how Twitter engineers identified their most important goals: “Before starting any of this work we added instrumentation to find the performance pain points and identify which categories of users we could serve better.”
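Twitter’s instrumentation was custom-built, but a modern browser gives you a head start. The sketch below is only a rough stand-in for a “time to first Tweet” style measurement: it uses the standard PerformanceObserver API to capture first contentful paint and beacon it to a hypothetical /metrics endpoint for later analysis.

```typescript
// Browser-side instrumentation sketch: record how long until the first content
// is painted, then ship it to a (hypothetical) /metrics endpoint for analysis.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === "first-contentful-paint") {
      // navigator.sendBeacon fires without blocking the page.
      navigator.sendBeacon(
        "/metrics",
        JSON.stringify({ metric: "fcp", valueMs: entry.startTime })
      );
    }
  }
});

observer.observe({ type: "paint", buffered: true });
```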

Perhaps you can get a similar 5x gain in speed, and that will result in more success for your web app.

“Faster sites make more money. Period.”

Bottom line, isn’t more money from your site what you really want?!
