LoadStorm™

Every time you make a change to your website, its speed is affected. Every new plug-in, every new picture, every change to server settings, and every additional feature affects your website’s speed. There are deliberate tweaks you can make to speed a site up, but more often the changes that creep in are unintended consequences that slow it down. Sometimes the difference is negligible, like a fraction of a second. Other times, the difference can be multiple seconds.

But what is website speed really? What does it mean for your website and your business? This post will take a deeper look at what it means to you and your site’s success.

Consider these statistics:

  • 47% of web users expect a website to load in less than two seconds.
  • 40% of web users will leave a website that takes three seconds or more to load.
  • 14% of web users will look for an alternative e-commerce website if the one they are on loads slowly.
  • 88% of web users are unlikely to ever return to a website where they felt they had a bad user experience.

As we’ve noted in previous articles, Web site developers and system administrators are paying greater attention to mobile performance these days. Wading into these waters requires a new approach to Web performance measurement, as well as new tools to support it. In this article we will review the differences between mobile and desktop testing, the tools (both free and commercial) currently available for measuring mobile performance, and, finally, some tips and tricks for developing Web sites that perform well on mobile operating systems such as iOS, Android, Windows Phone, and BlackBerry.

Client-Side vs. Server-Side Performance

The first factor in improving mobile performance is figuring out exactly how much improvement your site needs. This requires an accurate measurement of the current state of performance. But what do you measure, and how? As Web Performance Today recently pointed out, over 90% of performance gains on mobile devices are to be found on the mobile client’s front end, and not on the server side. This suggests the need for tools that measure mobile performance on the device side.

Measuring Mobile Performance

Given the importance of client-side performance for mobile sites, how do developers capture client performance data? This isn’t an easy task, given the plethora of devices and operating systems (iOS, Windows, and multiple flavors of Android) in today’s market.

The pcapperf toolkit goes a long way to meeting the need for a mobile performance measurement tool. Using pcapperf, developers can upload PCAP files captured from a private wireless network and analyze them using the online tool to pinpoint performance issues.
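
For readers who want a feel for what this kind of analysis involves, here is a minimal sketch of summarizing a capture locally before uploading it. It is not part of the pcapperf toolkit itself; it assumes the third-party scapy library and a hypothetical capture file named mobile-session.pcap.

    # Minimal sketch: summarize a PCAP of a mobile browsing session.
    # Assumes scapy is installed and "mobile-session.pcap" is an illustrative file name.
    from collections import defaultdict

    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap("mobile-session.pcap")

    bytes_per_host = defaultdict(int)
    syn_count = 0

    for pkt in packets:
        if IP in pkt:
            # Attribute bytes on the wire to the sending host.
            bytes_per_host[pkt[IP].src] += len(pkt)
        if TCP in pkt and pkt[TCP].flags & 0x02:  # SYN flag marks a new connection
            syn_count += 1

    duration = float(packets[-1].time - packets[0].time)
    print(f"Capture length: {duration:.2f}s, new TCP connections: {syn_count}")
    for host, size in sorted(bytes_per_host.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{host:>15}  {size / 1024:.1f} KB")

Even this crude summary shows which hosts dominate the payload and how many connections the page opened, which is often enough to spot a misbehaving third-party resource.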

Development teams can also take inline measurements using LoadTimer, an IFRAME-based test harness that can be used from your mobile Web browser of choice. Besides capturing page load measurements, LoadTimer also supports recording and submitting data to creator Steve Souders’ live database of crowdsourced mobile performance results.

A third option for measuring mobile performance is Mobitest, an online tool from Blaze.io that loads a given URL on a variety of mobile devices. This sophisticated but simple tool gathers performance information for iOS 5.0, a number of Android versions, and BlackBerry OS 6.0. The downside is that Blaze.io has a limited number of devices available for testing, so users may have to wait a few minutes for their results.

Ok, now I just want to get silly for a couple of moments. I know you come to the LoadStorm blog to read insightful, unique articles about load testing and web performance. I get it. You are a geek like me. There is a 93.6% probability that you are a Star Trek fan too. Yep, I’ve got Worf as my ringtone.

Please indulge me with a little diversion from our normal deeply technical tips & tricks to bring you a funny picture. To set the stage for why this appeals to me, I was born and raised in Lexington, Kentucky. That makes me a University of Kentucky Wildcat basketball fan by heritage. It’s in my blood. I cannot help being a fanatic any more than Yoda could take credit for his big pointy ears. It is a given in this universe.

This special Google search image is a tribute to this year’s college basketball player of the year: Anthony Davis. He is a Wildcat. His performance on the court is extraordinary. He is already the consensus #1 pick in the next NBA draft.

If he was a web application, he would have sub-second average response time with over 1 million concurrent users. If he was a Constitution-class Federation starship, he would have a top speed of warp 23. If he was on the cast of Big Bang Theory, he would be Sheldon’s more intelligent younger brother who lives in multiple parallel universes simultaneously. If he was an Android app, Apple would shut down iPhone manufacturing immediately and leave thousands of children around the world unemployed. If he was a survivor on Battlestar Galactica, the fleet would have found Earth in a week. If he was president of a third-world country, the Justice League of America would move its headquarters to his country faster than you can say “invisible plane”.

Here is a young man with amazing humility and impeccable teamwork, so what do people use as his defining characteristic? His eyebrows. Or more accurately, his eyebrow. He has earned a nickname from announcers and bloggers – “The Brow” – used not in derision, but with admiration. It has gone so far that kids make signs for ESPN SportsCenter saying things like “Florida is gonna get Browdown”. It’s insane. Funny, but insane. Perhaps I just have a twisted sense of humor. Google jumped on that bandwagon because it was a powerful energy surge in the universe, but they are premature in calling the Cats this year’s champions. That won’t happen for another 6 weeks!

A couple weeks ago, we looked at some of the best Web performance articles written in 2011. There was so much good material out there that we couldn’t stop with just a single round-up!

Selecting Tools and Designing Tests

Jason at Performance Testing Professional published a couple of great guides last year on choosing the right testing framework for your company’s needs. In his first article, Jason discussed some guiding principles for selecting a performance testing vendor. Most companies only adopt a performance testing framework after experiencing a major issue in production, so Jason gears several of his 11 questions toward investigating this instigating incident. His other questions focus on factors such as how your company plans to integrate performance testing into its existing QA cycle, and how often new system builds are deployed into production. He also lays out seven factors for making a final decision, including initial price, overall maintenance cost, and the platform’s scripting capabilities and learning curve, among others.

Continuing his analysis in a separate article, Jason reviews the three major types of load testing platforms – freeware, commercial, and cloud-based – and discusses which tools tend to suit which types of projects. He also bemoans that no one has yet pulled together the existing open source tools into a freeware platform that provides a workable alternative to the commercial standards. (As we will see below, though, at least a few people are thinking about how to do this in the mobile performance space.)

Both webmasters and web designers need to keep a close eye on website loading times. A slow response time results in fewer visitors and lower profits. Load testing is done to ensure that a website remains responsive under heavy loads. Most webmasters don’t perform load tests, and they discover that their website cannot handle a sudden influx of visitors at the worst possible moment — when it actually occurs!

Many a website has been “slashdotted”. Getting featured on a popular site like Slashdot or going viral on social media like Twitter should be a moment of triumph. But if that traffic surge causes the website to slow down or even go down temporarily, its webmaster will have the heartbreak of watching both the extra exposure and the profits disappear down the drain.

Load testing a website before it is tested by a torrent of real visitors avoids such a scenario. Load tests let web developers simulate those visitors, showing webmasters what level of traffic begins to degrade their site’s response times. Testing therefore drives a process of web performance optimization and allows the website to deliver a superior user experience to its visitors.

Speed Beats Glamor

In the late 1990s, when Adobe Flash was first emerging as a favorite web technology, it seemed the Internet would soon be full of Flash websites. But this didn’t happen. Impressive as those websites might be visually, it was soon found that most web surfers didn’t have the patience to sit through Flash websites’ extended loading times. Users wouldn’t wait, and they are growing more impatient every day.

The need for balance between response time and new website technologies continues to this day. Just as engineers must ensure a bridge won’t fall down under the strain of extra cars and lorries, web developers must ensure their websites remain responsive under a flood of visitors.

The fact is that a snappy user experience beats a splendorous one hands-down. For the most part, users want to be able to engage with a website’s content, not admire its fancy animations and appearance. They are unwilling to wait for a great website design.

The web performance firm Strangeloop offers several astonishing facts about how a website’s loading time affects its visitors’ behavior:

  • Just a three-second wait was enough for 57% of web surfers to abandon a website, and 80% of those will never return!
  • A one second delay in page-loading time led to 11% fewer page views and a 7% loss in conversions.
  • By speeding up their average page load time from 6 seconds to 1.2 seconds, Shopzilla’s revenues increased by a whopping 12%!
  • A 100 millisecond improvement in responsiveness at Amazon.com increased their revenue by 1%.
  • Yahoo! reported that a 400 millisecond slowdown in page loading resulted in 9% less traffic.

While we’ve touched upon client side caching in our series on Web performance, we haven’t discussed how client caching has grown more rich and useful over the years. In the initial days of the Web and the HTTP/1.0 protocol, caching was mostly limited to a handful of headers, including Expires, If-Modified-Since, and Pragma: no-cache. Since then, client caching has evolved to embrace greater granularity. Some new technologies even permit the deployment of offline-aware, browser-based applications.

Browser Request Caching

The oldest and most common type of client-side caching is browser request caching. Built into the HTTP protocol standard, browser request caching allows the server to control how often the browser requests fresh copies of files. We discussed the major aspects of browser request caching in part 1 of our series. Over time, Webmasters have taken to using different headers to improve caching on their sites, including:

Pragma: no-cache. This old directive is used mostly by HTTP/1.0 servers, and instructs a client that a specific response’s contents should never be cached. It is used for highly dynamic content that is apt to change from request to request.

Expires. Supported since HTTP/1.0, this header specifies an explicit expiration date for cached content. It can be superseded by the value of the Cache-Control header. For example, if Cache-Control: no-cache is sent in a response, this will take precedence over any value of the Expires header.

If-Modified-Since. Since HTTP/1.0, clients have been able to use this header to request that the server only send data if the resource has changed since the specified date. If there have been no changes, the server returns an HTTP 304 Not Modified response.
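
As a quick illustration of the conditional request cycle these headers enable, here is a minimal sketch using Python’s third-party requests library; the URL is a placeholder.

    # Minimal sketch of a conditional GET round trip; the URL is illustrative.
    import requests

    URL = "https://www.example.com/static/logo.png"

    first = requests.get(URL)
    last_modified = first.headers.get("Last-Modified")

    if last_modified:
        # Replay the request the way a browser cache would.
        second = requests.get(URL, headers={"If-Modified-Since": last_modified})
        if second.status_code == 304:
            print("304 Not Modified: the cached copy is still valid, no body sent")
        else:
            print(f"{second.status_code}: resource changed, "
                  f"{len(second.content)} bytes re-downloaded")
    else:
        print("No Last-Modified header; a cache would fall back to ETag/If-None-Match")

A browser performs the same dance automatically: it stores the Last-Modified value with the cached copy and replays it as If-Modified-Since, so an unchanged resource costs a small 304 response instead of a full download.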

2011 was an exceptional year for articles and information about web application performance. At LoadStorm, we read a lot of great articles on the subject because it interests us deeply. Since you are reading this, you must be a perf geek too – and we are glad you are here.

We went back to pick some of our favorite sources from last year and selected 10 really good ones. They may not be the most high-profile, because those tend to be about (and for) the big corporations and are often paid for by sponsors (e.g., HP, IBM). Those sponsorships influence the people they interview, the technologies they recommend, and the author’s perspective. We prefer digging around the smaller, more technical sites to find data supported by facts.

I hope you find the following resources helpful; they cover the importance of performance, benchmarking, and the mechanics of performance optimization for mobile computing.

The Importance of Web Performance

How important is good performance? The folks at Web Performance Today ran an informal analysis of how consumers on Twitter responded to under-performing sites. Answer: not well at all!

Companies whose sites lagged were often savaged on social media. The lesson is clear: poor performance can have an instant impact on a company’s reputation. WPT’s ad hoc analysis echoes a more formal study from 2010, when Foviance and CA EMEA hooked users up to EEG skull caps and measured their stress levels in response to slow sites. Foviance found that stress and agitation increase dramatically when users are dealing with poorly performing sites, and 71% of such users end up blaming the Web site owner or Web host for their pain and suffering. Additionally, if consumers encounter problems online, 40% will go to a rival website and 37% will abandon the transaction entirely. Only 18% said they would report the problem to the company.

According to a Web Performance Today survey, 74% of users said they would leave a mobile site that takes more than 5 seconds to load. That is compared to 20% of users who were surveyed two years prior.

Expectations for faster sites are increasing faster than web developers are optimizing their architecture!

Happy New Year! I mean that literally…it is a happy new year for web performance geeks like you and me.

Did you ever read The Hobbit? There was a clever exchange between Gandalf and Bilbo about the meaning of “Good Morning”: are you wishing me a good morning, or are you saying it’s a good morning whether I want it or not?

Well, 2012 is going to be a great year for the web performance and load testing industry. I wish that you have a happy new year, AND it will be a happy web performance new year whether you want it or not.

Load Testing for Holiday Season was Awesome

LoadStorm begins 2012 with an upbeat outlook because the last few months of 2011 showed over a 400% increase in load testing volume.

The actual number of load tests executed by our customers was perhaps only double the 2010 figure, but the scale of the tests was much bigger. It is also interesting to note that many more online retailers were running tests of 25,000+ concurrent users. We had many calls with traditional brick-and-mortar companies that were putting significant investment into their web store capabilities – including speed and scalability.

Anecdotally, we can share with you that it was a good investment because some of those customers told us that their online sales had risen as much as tenfold (10x) over the previous year! That’s great news for all of us web performance engineers.

Not only is it clear to us that web stores are getting high priority for e-commerce, but it is also clear that web performance has gotten much more attention from the C-suite. Several of our customers mentioned that their load testing projects were being driven by executives worried about the site crashing under heavy traffic. That’s outstanding! Finally, the stories of Web site performance failures are getting the attention they deserve. I guess it was tough to ignore all the headlines blasting companies like Target when their site crashed in 2011.

Operations and marketing leaders are starting to understand the correlation between web performance tuning and profitability. Web sites are not just online brochures, nor are they just a secondary revenue channel. The message that is coming through loud and clear is that faster sites make more money.

In more than one project-related conference call with companies running load tests of 50,000-100,000 concurrent users, there was a VP of Marketing actively driving the team toward the results they expected. It was refreshing (somewhat shocking) to hear a 60-year-old traditional advertising agency veteran telling everyone on the call that “sub-second response time is imperative to success!” I loved it. The web coders…not so much, because they had lots of optimization ahead of them.

In our past installments on Web performance optimization, we’ve seen how caching, server configuration, and the use of Content Delivery Networks (CDNs) can increase a Web site’s responsiveness and improve Web performance metrics. Most of the techniques we’ve reviewed have focused on configuring the Web server or optimizing server applications. Unfortunately, a Web page that downloads quickly but is slow to parse or execute on the client will appear just as slow to a user as if the Web server were on its last megabyte of memory. In this article, we’ll discuss some ways that Web page content can be streamlined for an optimal client-side experience.

Streamline JavaScript Includes

JavaScript abounds on the Web. From jQuery to Dojo, the Web is full of JavaScript libraries that can easily be dropped into a Web application. And any site whose developers are actively adding features is going to accrue its own storehouse of .js files. Unless a site’s JavaScript is carefully managed, its Web pages could end up making a dozen or more separate requests for scripts. As we’ve already discussed in our article on web performance optimization non-caching strategies, the more requests your site makes, the slower it will load.
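
One common way to cut the request count is a build step that concatenates script files into a single bundle. The sketch below shows the idea in Python; the directory and output paths are illustrative, and a real pipeline would also minify the result and order files by dependency.

    # Minimal build-step sketch: merge every .js file in a (hypothetical)
    # scripts/ directory into one bundle so a page makes a single script request.
    from pathlib import Path

    SRC_DIR = Path("scripts")          # illustrative source directory
    BUNDLE = Path("static/bundle.js")  # illustrative output path

    parts = []
    for js_file in sorted(SRC_DIR.glob("*.js")):
        # Keep a marker comment so the bundle stays debuggable.
        parts.append(f"/* ---- {js_file.name} ---- */\n{js_file.read_text()}")

    BUNDLE.parent.mkdir(parents=True, exist_ok=True)
    BUNDLE.write_text("\n".join(parts))
    print(f"Wrote {BUNDLE} from {len(parts)} source files")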

Tip of the hat to TechAttitude.com for the graphic showing how Web page sizes and number of objects have grown tremendously over the past 16 years. In this interesting article they state, “…the average size of a web page has increased by more than five times since 2003” and “the use of multimedia is increasing by 100% each year”.

Follow these guidelines to manage and reduce the burden of JavaScript on your Web pages.

So far in our series on Web Performance Optimization, we’ve focused on how to reduce the number of requests between client and server through caching, and how to make requests more efficient by managing server resources. Another strategy in Web optimization is intelligent distribution of resources across the Internet, which can greatly reduce request latency by locating redundant copies of Web content on multiple servers spread across the Internet. In this installment of our series, we focus on content delivery networks (CDN), a technology that increases throughput by bringing content closer to the people requesting it.
 

What is a CDN?

In the simplest Web site configuration, a single Web server services requests from multiple clients. While this is often good enough for the simplest, lowest traffic Web sites, complex Web sites that need to scale to thousands or millions of visitors require more processing power. This is why many sites have resorted to using Web server farms, which are clusters of multiple Web servers offering redundant copies of a site’s content. Web farms use load balancing software to monitor the amount of load on any one server. They can also use this information to route requests to the server with the least load at a given point in time.
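
Conceptually, the load balancer’s routing decision reduces to something like the following toy sketch; the server names and connection counts are invented stand-ins for real health-check or agent data.

    # Toy sketch of "route to the least-loaded server"; the data is invented.
    current_load = {
        "web-01.example.com": 41,  # active connections
        "web-02.example.com": 17,
        "web-03.example.com": 63,
    }

    def pick_server(load_by_server: dict[str, int]) -> str:
        """Return the server currently handling the fewest connections."""
        return min(load_by_server, key=load_by_server.get)

    next_target = pick_server(current_load)
    print(f"Routing next request to {next_target}")
    current_load[next_target] += 1  # account for the request we just assigned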

A CDN is a type of Web farm or server cluster, except that instead of using a single farm or cluster, the servers are spread out over the Internet in multiple geographical locations. These are called edge servers, because they are located at the extremes, or edges, of the Internet, instead of all being located off of a central Internet backbone link. The goal of a CDN is to decrease the time it takes to deliver content to a specific user based on that user’s location.

Let’s say, for example, that a company based in New York receives a request for Web content from a user in Seattle, WA. In a traditional setup, the Seattle computer’s request would have to find the most efficient route on the Internet to New York, usually via a busy backbone link. In a CDN configuration, the CDN could tell the Seattle client that its nearest edge server is on a subnet in Portland, Oregon. By obtaining the content from a server closer on the network, the client greatly reduces request latency.
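
One rough way to see “nearest” as a measured quantity rather than a line on a map is to time a TCP connection to each candidate edge, as in the sketch below. The hostnames are placeholders, and production CDNs typically steer clients through DNS or anycast rather than client-side probing.

    # Rough sketch: pick the edge host with the lowest measured connect time.
    # Hostnames are placeholders, not real CDN endpoints.
    import socket
    import time

    EDGE_HOSTS = ["edge-nyc.example.net", "edge-pdx.example.net", "edge-dfw.example.net"]

    def connect_time(host: str, port: int = 80, timeout: float = 2.0) -> float:
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.perf_counter() - start
        except OSError:
            return float("inf")  # treat unreachable edges as infinitely slow

    timings = {host: connect_time(host) for host in EDGE_HOSTS}
    best = min(timings, key=timings.get)
    print(f"Closest edge by connect time: {best} ({timings[best] * 1000:.1f} ms)")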
 

In recent years, the good folks at WordPress have made it easier to use their free software not just as a blog, but as the hub of a rich content management system (CMS), complete with static content and custom data types. Given that, it’s no surprise that Webmasters and businesses around the world are increasingly basing entire sites around the platform. (And did we mention the “free” thing?)

While WordPress runs decently out of the box, site operators who employ a few tweaks and follow a few rules of thumb will achieve much better performance in the long run. In this article, we look at the best practices that will keep your WordPress site humming efficiently.
 

Limit the Number of Plugins

WordPress plugins are great. With a few clicks, ordinary users can add complex functionality to WordPress that otherwise might have taken hundreds or thousands of hours of programming.

But plugins also present a performance danger. Each active plugin must be loaded every time a new request is made, so every plugin you install carries a cost. This performance “gotcha” particularly affects sites on shared virtual hosting systems (e.g., DreamHost, HostGator). Performance degradation can be dramatic on shared hosts; in some situations, user requests may never complete at all.

The number of plugins that will cause a site to slow down will vary based on a variety of factors. The author has been told by representatives of HostGator that they encourage customers to limit the number of installed plugins on their virtual hosting service to seven. Webmasters should select the WordPress plugins they employ carefully, and use a load testing service to measure the performance impact of any new plugins that they add to their system.
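
For a sense of what such a measurement involves at the smallest possible scale, here is a bare-bones sketch that times a page under a handful of concurrent users. The URL and user counts are placeholders, and a hosted service handles the ramp-up, realism, and reporting that this toy omits.

    # Bare-bones sketch: time a page under a few concurrent users.
    # URL and counts are illustrative placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://www.example.com/"
    CONCURRENT_USERS = 10
    REQUESTS_PER_USER = 5

    def one_user() -> list[float]:
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urlopen(URL) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = [t for user in pool.map(lambda _: one_user(), range(CONCURRENT_USERS))
                   for t in user]

    print(f"{len(results)} requests, average response {sum(results) / len(results):.3f}s, "
          f"worst {max(results):.3f}s")

Run it once before activating a plugin and once after, and the difference in average and worst-case response time gives a first approximation of the plugin’s cost.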

In my last article, I wrote about the paradigm shift in web application architecture and why performance testers have to rethink their strategy for testing Rich Internet Applications (RIA). Web application development processes and user expectations continue to grow by leaps and bounds. Sadly, the techniques and approaches used to test those applications have not kept pace. The good news is that newer tools are emerging and methodologies are being defined to close that gap, so it is essential that performance testers make use of them at every phase of the performance testing lifecycle.

Early in the performance testing lifecycle, testers gather requirements, and collecting application usage statistics is typically one of the primary tasks. In this article, I will explain how web analytics tools can be a great source of historical data about application usage and user behavior.

  

Traditional Web Server Log Approach


Traditionally, performance testers have relied on web server log files to collect historical application usage data. Web server logs were, and still are, a great source of information: they contain enormous amounts of data on web usage activity and server errors. Downloading log files from the web server and running report generation tools helps testers extract meaningful information from them. However, web server logs have their limitations. For example:

  • Usage data in web server logs misses most page re-visits because of browser caching. For example, if a user revisits a page that is served from the browser cache, the web server never receives a request and nothing is logged.
  • While the data in web server logs can provide insight into system behavior, it does little to explain user (human) behavior.
  • Web server logs do not record a user’s geographic location, browser, or the device/platform they accessed the application from, all of which are vital metrics for understanding user behavior on the application.
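
To make the traditional approach concrete, here is a minimal sketch of the kind of parsing a report generation tool performs on an Apache or Nginx combined-format access log; the file name is a placeholder.

    # Minimal sketch: count page hits and server errors in an access log.
    # "access.log" is an illustrative file name.
    import re
    from collections import Counter

    LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    hits = Counter()
    errors = Counter()

    with open("access.log") as log:
        for line in log:
            match = LINE.search(line)
            if not match:
                continue
            hits[match.group("path")] += 1
            if match.group("status").startswith("5"):
                errors[match.group("path")] += 1

    print("Top pages:", hits.most_common(5))
    print("Server errors by page:", errors.most_common(5))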

While Web server log files are still a useful way to measure usage statistics, new ways to measure web traffic have emerged that provide information from a user perspective rather than a system perspective. A large number of organizations are implementing what are called “Web Analytics tools” as part of their Web application infrastructure. Industry reports suggest that Google Analytics, a leading web analytics tool, is used on 57% of the top 10,000 websites.

In our last article on performance tuning, we examined how to squeeze the most performance out of an Apache server. In this installment, we’ll take a look at how to apply some of these same principles to Microsoft’s Internet Information Server (IIS), which ships as part of Windows Server.

While its share of the Web server market has declined in recent years relative to Apache, IIS still remains the second most deployed Web server on the Internet. Its deep integration with Windows and host of management utilities make it a great choice for anyone hosting Web content and applications in a Windows environment. With a little performance tuning (aided, of course, by load testing), an IIS machine can perform just as well as an equivalent Apache configuration under high load.

The Usual Suspects: Compression and Connection Management

Many of the techniques a team might use to enhance IIS performance are similar to the techniques used on Apache, as they involve configuration of the HTTP protocol.

This article will explain the risks associated with using a scaled (aka downsized) environment for performance testing. I’ve been a little off topic lately, so I thought I would jump back into the realm of performance testing and attempt to answer one of the most complicated questions I’m faced with when load testing: “If we halve the size of the performance/load testing environment, can’t we just multiply the figures up?” It is a straightforward question and the answer is simple – ‘NO’. But justifying that answer and explaining it in simple terms is more difficult, particularly to PMs and people not directly attached to the technology. So I’m going to attempt to explain in simple terms why scaled load testing environments tend not to work, and highlight the risks to consider when using them. Point people at this article if you struggle to answer this question – and let me know what they think.

First, let’s take an object: a square. If we halve the square’s sides, do we get half the square? Well, yes and no – each side is half the length, but its capacity (the area) is only a quarter of the original square.

This is a very simplistic view, but it illustrates that if the environment is ‘halved’, its capacity is not. I’m setting the scene, so please bear with me.

Have you ever wondered why your web access speed is sometimes blazing fast, while at other times you are waiting for response at a slow-as-molasses pace? Yeah, me too. I’ve been obsessed with web performance and page speed since 1996, so I’m sensitive to slowdowns.

Would you be surprised to find out that telecom/cable providers are intentionally and deliberately slowing down web access? Yeah, me too. Empirical data from test results shows that at least one provider is choking users’ access 85% of the time! U.S. providers are slowing you down twenty-three percent (23%) of the time. Globally, it’s even worse – 32% of the tests show provider slowdown.

There is software that proves it. Internet access operators don’t want to talk about it openly, but when you dig into their standard service contracts, it’s confirmed. They “throttle” your speed whenever they want based on their needs…not yours.

NY Times Sheds Light on Throttling Slowdown Sources

Data speed was critical to me in 1991 because I wrote proprietary healthcare information exchange technology back then using 2400 baud modems. The kicker? The programming languages were Clipper and FoxPro. Sounds stupid to me too…looking back. But it was fun programming, a bit cutting edge, and I got ego boosts whenever my geeky friends would ask, “You did WHAT?! With a room of 286 PCs and Clipper?!”

It was a glorious day for me and my rag-tag team of basement-dwelling coders when we finally received a 9600 baud modem in the UPS shipment. We had a party that day!

Twenty years later, and I am still ultra-sensitive to data transfer slowdowns. I’m usually very happy with my Comcast connection, but occasionally my web applications slow to a crawl. It’s obvious from my app server monitoring that the problem lies somewhere other than the back end. Thanks to this article published today by Kevin O’Brien at NYTimes.com, I have a better idea of why my Internet access speed seems to fluctuate inexplicably.

Kevin shares with us how the Networked Systems Research Group concluded:

…the blame often lies with the telecom operator, which is selectively slowing broadband speeds to keep traffic flowing on its network, using a sorting technique called throttling.

Since most of our readers are software/web developers with global reach, we want to share with you info about an upcoming international conference where you can learn cool new coding techniques and hang out with some rock stars in the developer industry. LoadStorm is a proud sponsor of this software development conference and is offering 5,000 free virtual users in our stress testing tool to attendees!

Modern browsers are turning into miniature operating systems. They can multi-task browsing processes, allocate and manage memory, collect garbage, and more. They are capable of running complex Web applications on their own with minimal server interaction. There is now a paradigm shift in Web application architecture as the majority of application processing moves from the server to the Web browser. The Web browser, once called a “thin” client, has become a big fat cat lately.

The Browser Wars

Meanwhile, leading browser makers are fiercely competing against each other for dominance of the Web browser market. These so-called “browser wars” have set off major developments in the capabilities of popular browsers like Internet Explorer, Firefox, and Chrome as more and more features are built into them. Browsers are now capable of processing data from multiple sources, such as Content Delivery Networks (CDNs), ad networks, and analytics providers, and presenting it to the user. Browser makers are also scrambling to bundle as many new features and enhancements as possible into their browsers to stay ahead in the race. Mozilla, for example, recently announced a new rapid release schedule to bring faster updates to its Firefox browser. Google has been doing this with its Chrome browser for a while now. However, as browser capabilities have improved, they have also introduced additional complexity into Web application architecture.

Mobile Computing and the Rapid Adoption of Newer Web Standards

On the other hand, the W3C, the organization that sets Web standards, has also recognized the need for newer standards in this era of mobile computing. HTML, the core technology for structuring and presenting content on the Web, is undergoing a major upgrade as part of the W3C’s new HTML5 specification. Among other things, the new HTML5 standards will make it possible for users to view multimedia and graphical content on the Web without having to install proprietary plug-ins and APIs. Related standards like CSS (which defines layout) and the DOM (which defines interaction with data objects) are also getting an overhaul. Technologies like CSS3 and XMLHttpRequest (XHR) are gaining wide adoption and popularity. These newer Web standards have put the onus on web developers and front-end engineers to build interactive web applications that are fast, highly responsive, and behave like traditional software applications.

So far in our series of Web performance articles, we’ve addressed the three major types of caching that Web server application developers can employ: server file caching, application caching, and data caching. We’ve also looked at additional performance enhancements that Web server administrators can activate, such as HTTP compression, file consolidation, and connection pipelining.

In this latest installment of our series, we’re going a little deeper and focusing on Apache. The world’s most popular Web server, Apache currently powers over 63% of sites on the World Wide Web. While Apache runs decently out of the box, development teams and system administrators should combine frequent load testing with the following tuning recommendations to ensure their applications perform well under heavy traffic.

Memory Models: Prefork vs. Worker

A Web server must be able to respond to anywhere from dozens to thousands of concurrent user requests. Each request, obviously, must be fulfilled by either a separate application process or a separate thread running within a single process.

Apache can handle concurrent requests in two different ways, depending on which Multi-Processing Module (MPM) was selected at compile time. The default MPM on Unix is prefork, which handles each separate user request in a separate instance of Apache. The other MPM module available for Unix systems is worker, which launches fewer instances of Apache, and handles multiple requests in separate threads within each process.

Worker is the preferred MPM module for Apache under Unix, as it is faster and uses less memory. So why is prefork the default? Because any team that deploys worker must ensure that any modules that run under Apache are thread-safe. The thread-safe requirement extends to any libraries used by these modules, which makes running a programming environment such as PHP under worker tricky: while PHP is itself thread-safe, there is no guarantee that its various extensions are. Some developers, such as Brian Moon, have reported success in running PHP under worker using a minimal set of extension libraries. Your mileage, however, may vary.
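
Before auditing modules for thread safety, it helps to confirm which MPM your build actually uses. The sketch below shells out to httpd -V and reports the “Server MPM” line; on Debian-based systems the binary is typically apache2ctl, so adjust the name for your environment.

    # Small sketch: report which MPM an Apache build was compiled with.
    # The binary may be httpd, apachectl, or apache2ctl depending on the distro.
    import subprocess

    def apache_mpm(binary: str = "httpd") -> str:
        """Return the 'Server MPM' line from `httpd -V`, or an explanation."""
        try:
            out = subprocess.run([binary, "-V"], capture_output=True,
                                 text=True, check=True).stdout
        except (FileNotFoundError, subprocess.CalledProcessError) as exc:
            return f"Could not query {binary}: {exc}"
        for line in out.splitlines():
            if "Server MPM" in line:
                return line.strip()
        return "MPM line not found; check the binary name"

    print(apache_mpm())  # e.g. "Server MPM:     prefork"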

I love this graphic on Altom Consulting’s home page that shows the relationship between when a bug is found and the cost of resolving the problem.

The company’s tagline, which relates to this graphic: “We believe in testing as early as possible to minimize the impact and cost of fixing defects.”
