So far in our series of Web Performance articles, we’ve addressed the three major types of caching that Web server application developers can employ: server file caching, application caching, and data caching. We’ve also looked at additional performance enhancements that Web server administrators can activate, such as HTTP compression, file consolidation, and connection pipelining.

In this latest installment of our series, we’re going a little deeper and focusing on Apache. The world’s most popular Web server, Apache currently powers over 63% of sites on the World Wide Web. While Apache runs decently out of the box, development teams and system administrators should combine frequent load testing with the following tuning recommendations to ensure their applications perform well under heavy traffic.

Memory Models: Prefork vs. Worker

A Web server must be able to respond to anywhere from dozens to thousands of concurrent user requests. Each request must be fulfilled either by a separate application process or by a separate thread running within a single process.

Apache can handle concurrent requests in two different ways, depending on which Multi-Processing Module (MPM) was selected at compile time. The default MPM on Unix is prefork, which handles each user request in a separate instance of the Apache process. The other MPM available for Unix systems is worker, which launches fewer instances of Apache and handles multiple requests in separate threads within each process.

Worker is the preferred MPM for Apache under Unix, as it is faster and uses less memory. So why is prefork the default? Because any team that deploys worker must ensure that every module running under Apache is thread-safe. That requirement extends to any libraries those modules use, which makes running a programming environment such as PHP under worker tricky: while PHP itself is thread-safe, there is no guarantee that its various extensions are. Some developers, such as Brian Moon, have reported success running PHP under worker with a minimal set of extension libraries. Your mileage, however, may vary.

For teams running Apache under Windows, the only available MPM is mpm_winnt, which serves concurrent users on multiple threads within a single Apache process.

Calculating How Many Connections Your Application Requires

When calculating the number of “clients” a Web application will serve, it helps to bear in mind that most popular Web browsers do not implement HTTP pipelining, which allows multiple HTTP requests to be sent over a single TCP connection without waiting for each response. If a Web browser is obeying the HTTP/1.1 standard (section 8.1.4), it will confine its requests for images, scripts, and style sheets referenced by a Web page to at most two concurrent server connections. Web developers should keep in mind that any single end user may occupy up to two threads (worker) or two processes (prefork) until the user’s request is complete.
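To make the arithmetic concrete (the visitor count here is purely illustrative): if load testing projects a peak of 250 simultaneous visitors, each holding open up to two connections, the server should be prepared to dedicate on the order of 500 threads (under worker) or 500 processes (under prefork) to serving them.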

Worker: Configure ThreadsPerChild and MaxClients

When configuring Apache for the worker MPM, the most important settings are ThreadsPerChild and MaxClients. ThreadsPerChild controls how many worker threads each child process creates, while MaxClients caps the total number of threads, and therefore concurrent requests, that the server will handle across all processes. The number of child processes Apache runs works out to roughly MaxClients divided by ThreadsPerChild. The threads designated by ThreadsPerChild are created at startup, and remain idle until they are required to fulfill an HTTP request.
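In httpd.conf, a worker block along the following lines is a reasonable starting point. The figures are illustrative, close to the sample worker settings distributed with Apache; real values should come out of load testing.

    # Worker MPM: 150 total threads, spread across 150 / 25 = 6 child processes
    <IfModule mpm_worker_module>
        StartServers          2
        MaxClients          150
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild   0
    </IfModule>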

Development teams should tweak the values of these two parameters to find the ideal settings for performance under heavy loads, using a testing suite such as LoadStorm to simulate a large number of concurrent users. MaxClients should be set as high as the hardware can sustain, while leaving enough memory for the operating system and other critical server processes.

Prefork: Configure MaxClients

MaxClients is an especially critical setting when running under the prefork MPM, because each concurrent request must be served by its own httpd process. When the number of running httpd processes on the server hits MaxClients, any new requests will be queued until an existing process finishes serving its current request. As with worker, MaxClients should not be set so high that Apache grabs memory required by other critical system processes. On the other hand, if this setting is too low, clients will be left waiting needlessly.
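A rough sizing heuristic (a common rule of thumb, not an official Apache formula) is to divide the memory you can afford to dedicate to Apache by the typical resident size of one httpd process. Assuming, purely for illustration, 2 GB reserved for Apache and an average footprint of 20 MB per process:

    MaxClients ≈ 2048 MB / 20 MB ≈ 100 processes

Load testing should then confirm that the machine stays healthy with that many processes serving requests at once.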

Configure StartServers, MinSpareServers, and MaxSpareServers

The StartServers parameter controls how many instances of the Apache httpd process are launched at startup. MinSpareServers and MaxSpareServers determine the minimum and maximum number of idle processes Apache keeps on hand, waiting for new application requests.
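Put together, a prefork block in httpd.conf might look like the sketch below. The numbers echo the sample prefork settings Apache ships with and are a starting point only, not a recommendation.

    # Prefork MPM: one process per concurrent request, capped at 150
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        # See the next section before leaving MaxRequestsPerChild unlimited
        MaxRequestsPerChild   0
    </IfModule>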

Configure MaxRequestsPerChild

MaxRequestsPerChild determines how many requests a child process will handle over its lifetime before it is killed and a replacement is spawned. As an Apache process runs, it will continue to consume additional memory. Restarting processes frees this memory and resets Apache to its initial (and usually far lower) memory consumption level. This parameter can usually be set safely somewhere in the thousands. Again, load testing is the best way to calculate an ideal value for such parameters.
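For example, a value in the low tens of thousands (10000 here is purely an illustrative figure) keeps memory growth bounded without paying the cost of respawning processes too often:

    MaxRequestsPerChild 10000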

Reduce KeepAliveTimeout and Timeout

The KeepAliveTimeout parameter in httpd.conf determines how long the server will hold a persistent connection open, waiting for the next request, before it is forcibly disconnected. The longer persistent connections are kept open, the higher the odds that Apache will run out of threads or processes, forcing it to queue requests. For worker and mpm_winnt, this value can be set safely at around 15 to 20 seconds. For prefork, many Apache gurus suggest either setting it to a very low value (like 2), or setting it to 0.

According to the Apache docs, the Timeout parameter dictates how long Apache will wait on one of three actions: the client transmitting a GET request over an active connection; the interval between TCP packets on a POST or PUT request; and the interval between ACKs of TCP packets sent from the server to the client. Lowering this value ensures that threads and processes aren’t left tied up by unresponsive clients.

(Incidentally, reducing these two values also renders Apache less susceptible to Denial of Service attacks.)
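As a sketch, these directives might be set as follows for a worker or mpm_winnt deployment; the values are illustrative (Timeout in particular is an assumption, well below the generous stock default) and should be verified under load:

    KeepAlive On
    MaxKeepAliveRequests 100
    # Seconds an idle persistent connection is held open before the server closes it
    KeepAliveTimeout 15
    # Seconds to wait on a slow or unresponsive client
    Timeout 45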

Other Tuning Options

These tips barely scratch the surface of Apache performance tuning. The Apache team maintains a large article chock full of potential performance enhancements. Vishnu Ram V at Linux Gazette highlights some of the more important ancillary settings, such as HostnameLookups, AllowOverride, and FollowSymLinks.
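To give a flavor of those ancillary settings, here is a minimal sketch (the /var/www/html document root is assumed purely for illustration):

    # Skip reverse DNS lookups on every request
    HostnameLookups Off
    # Hypothetical document root
    <Directory /var/www/html>
        # Lets Apache skip per-directory symlink checks
        Options FollowSymLinks
        # Stops Apache from searching for .htaccess files on each request
        AllowOverride None
    </Directory>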

At the risk of sounding like a broken record, we must emphasize: the ideal values for all of these parameters are best determined by load testing. When testing Apache settings for a Web application, developers and sysadmins should change only one or two settings at a time, and then test the results under heavy load using a tool such as LoadStorm. With careful and controlled testing, every team can find the Apache configuration that will help their application withstand an onslaught of popularity.
