Apache is the number one Web server running on Linux systems. There are a number of little things that can be done to tune Apache performance and to lessen its impact on system resources. One of these is tweaking memory usage, which can be difficult to profile. For instance, to determine the memory usage of the httpd processes, use ps:

Code:
# ps -U apache -u apache u
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
apache   13067  0.0  5.3 149704 54504 ?        S    Oct07   1:53 /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -DAPACHE2

The above indicates that a single httpd process is using about 53 MB of RSS (Resident Set Size, or non-swapped physical) memory and about 146 MB of VSZ (Virtual Size) memory. These figures depend largely on the number of modules you have loaded and running in Apache, and they are by no means definitive: because shared libraries are counted in RSS, the number overstates what each process really adds. We can assume that half the RSS number is "real" per-process memory, which is probably conservative, but close enough for our purposes. In this instance, assume that each httpd process is using about 27 MB of memory.

Next, you need to determine how much memory httpd can actually be allowed to use in total. Depending on what else is running on the machine, you may want to dedicate 50 percent of the physical memory to Apache's use. On a system with 1 GB of RAM, that would be 512 MB; divided by 27 MB per process, that leaves room for approximately 19 concurrent httpd processes. Some individuals maintain that each httpd process uses only about 5 MB of "real" memory, in which case the same 512 MB could theoretically accommodate around 102 concurrent processes (keeping in mind that unless your site receives extreme amounts of traffic, needing that many is likely to be rare). By default, however, Apache allows a maximum of 256 simultaneous client connections, or 256 processes (one to serve each request).
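The arithmetic above can be sketched as a small shell helper. This is only an illustration of the halve-the-RSS heuristic described in the text; estimate_max_clients is a hypothetical function, not an Apache tool, and the "apache" user and 512 MB budget are assumptions you should adjust for your own system.

```shell
#!/bin/sh
# Hypothetical helper: derive a MaxClients estimate from an observed
# per-process RSS, using the halve-the-RSS rule of thumb from the text.

estimate_max_clients() {
    rss_kb=$1     # average RSS of one httpd process, in KB (from ps)
    budget_mb=$2  # physical memory you are willing to dedicate to Apache, in MB

    # Halve the RSS to discount shared libraries, then convert KB to MB.
    real_mb=$(( rss_kb / 2 / 1024 ))
    echo $(( budget_mb / real_mb ))
}

# The ps output above showed an RSS of 54504 KB (roughly 27 MB "real"),
# and we budget 512 MB of a 1 GB machine for Apache:
estimate_max_clients 54504 512   # prints 19, matching the estimate above
```

To feed it live numbers, you could average the RSS column from ps, e.g. `ps -U apache -u apache -o rss= | awk '{s+=$1; n++} END {print int(s/n)}'`.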
With this setting, a heavily trafficked site could be taken down in moments: even at 5 MB per process, roughly 1.3 GB of RAM would be required to satisfy 256 simultaneous requests. If nothing else, it would cause the system to thrash the hard disk, using swap to handle what cannot fit into physical memory.

Other settings to tweak include the KeepAlive, KeepAliveTimeout, and MaxKeepAliveRequests directives. Recommended settings, which can all be set in the httpd.conf file, would be:

Code:
ServerLimit 128
MaxClients 128
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100

By decreasing KeepAliveTimeout from the default 15 seconds to 2 seconds, the MaxClients directive can be increased; 19 is pretty small, and 128 is much better. Reducing the number of seconds an idle keep-alive connection is held open frees each process sooner, so more clients can be served in the same amount of time.

Of course, numbers mean nothing without some real-world testing behind them, which is where ab (the Apache HTTP benchmarking tool) comes in. To test a stock Apache configuration (MaxClients 256, ServerLimit 256, KeepAliveTimeout 15), configure ab to make 1000 requests with a concurrency of 100 simultaneous requests. (Be sure to have a terminal open on the server to observe the system load while executing the test.)

Code:
$ ab -n 1000 -c 100 -k http://go4expert.com/index.php

Now change the server to the more conservative settings above, restart Apache, and try the benchmark again (always from a remote machine, never from the localhost). In testing here, the tweaked settings made the run take twice as long (27.8 s vs. 16.8 s), but the load average was 0.03 vs. 0.30. This may make your site a little bit slower, but it will ensure it doesn't fail under high load. Keep in mind as well that you will want to make multiple test runs and take the average of all the tests in each case.
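The "run it several times and average" advice can be scripted. This is a sketch only: run_benchmarks is a hypothetical wrapper, the URL is a placeholder for your own server, and it assumes ab prints its usual "Requests per second" summary line.

```shell
#!/bin/sh
# Sketch: run ab several times against one URL and report the mean
# requests-per-second figure. Assumes ab is installed on the client machine.

average_rps() {
    # Average the requests-per-second values passed on stdin, one per line.
    awk '{ sum += $1; n++ } END { if (n) printf "%.2f\n", sum / n }'
}

run_benchmarks() {
    url=$1
    runs=$2
    i=0
    while [ "$i" -lt "$runs" ]; do
        # -k enables keep-alive, matching the KeepAlive On setting above;
        # pull the number out of ab's "Requests per second: ..." line.
        ab -n 1000 -c 100 -k "$url" 2>/dev/null |
            awk '/Requests per second/ { print $4 }'
        i=$(( i + 1 ))
    done | average_rps
}

# Example (run from a remote machine, never localhost):
# run_benchmarks http://go4expert.com/index.php 5
```

Run it once against the stock configuration and once against the tweaked one, and compare the two averages alongside the server's load average.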
Using ab is a fantastic way to benchmark any tweaks you make to your Apache configuration and should be used every time you make any changes that could impact performance.