On Mon, 3 Apr 1995, Andrew Wilson wrote:
> 1. Actually I call this problem the "automounter bug". You see, I have
> the document root tree in an automounted directory (I want it there
> because it's backed up daily at my site). I've found that the server
> spends a lot of time trying to mount /automounteddir/.htaccess, and
> this turned out to be a really "expensive" operation: I'm talking
> about ~1.5 seconds per request! And indeed, httpd looks for .htaccess
> files in the whole tree starting at /. I do believe the real intention
> was to check .htaccess files in the document tree only. So I did the
> following:
Hmm - I've used that "misfeature" more than once on both hotwired and
hyperreal. Given that looking for .htaccess files is a performance hit
anyway, I don't see that looking for them in the one or two directories
under the document root is a large penalty. If this is put in, it should
be well documented, and access.conf should still be able to denote
restrictions based on low-level dirs.
> 2. The second workaround takes its origin in the fact that I run NIS+,
> though the problem would still be there even if I ran YP. References
> to NIS are also rather "expensive": it's not seconds, but it's not
> milliseconds either; it's tenths of a second with NIS+, and one can
> feel it if I have a number of, for example, GIF pictures in my HTML
> page. The workaround is to cache the set of group IDs for the user ID
> assigned to serve http requests:
Does the initgroups() patch solve this?
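Either way, the caching idea Andrew describes is easy to sketch. Here lookup_groups() is a stand-in for the real NIS/initgroups() round-trip (all names hypothetical): resolve the server UID's group set once, and serve every later request from the cache.

```c
/* Sketch of caching the group-ID set for the server UID, so the
 * expensive NIS/NIS+ lookup happens once at startup rather than once
 * per request.  lookup_groups() is a hypothetical stand-in for the
 * real getgrouplist()/initgroups() work. */
#include <stddef.h>

static int lookup_calls = 0;         /* counts the "expensive" lookups */

/* Pretend NIS round-trip: fills gids, returns the group count. */
static int lookup_groups(int uid, int *gids, int max)
{
    (void)uid;
    lookup_calls++;
    if (max < 2)
        return 0;
    gids[0] = 100;                   /* fake group IDs for the sketch */
    gids[1] = 101;
    return 2;
}

static int cached_gids[16];
static int cached_n = -1;            /* -1 = not yet resolved */

/* Per-request path: only the very first call pays the NIS cost. */
int groups_for_server_uid(int uid, int *gids, int max)
{
    if (cached_n < 0)
        cached_n = lookup_groups(uid, cached_gids, 16);
    int n = cached_n < max ? cached_n : max;
    for (int i = 0; i < n; i++)
        gids[i] = cached_gids[i];
    return n;
}
```

The trade-off, of course, is that group membership changes in NIS aren't seen until the server is restarted.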
> 3. This one is not actually a workaround, but rather a matter of
> taste. Actually, it was the first thing I thought of and implemented.
> The idea is to get rid of the numerous read(fd,&c,1) system calls in
> the utils.c:getline() function and replace them with function calls.
> The arithmetic is simple: instead of >1000 going-to-kernel system
> calls, I have just a few system calls and >1000 stay-in-user-mode
> function calls. Also, the cost of a system call (at least under
> Solaris 2.x) is not only the context switch; it raises the process
> priority as well, so an httpd patched the following way would leave
> more CPU time for interactive processes under heavy loads:
I think we patched this one already too....
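For reference, the shape of that getline() fix is simple enough to sketch: pull bytes from a user-space buffer that is refilled with one big read(2), instead of issuing read(fd,&c,1) per character. (Illustrative code, not the actual utils.c patch; names are made up.)

```c
/* Sketch of the buffered-getline idea: one read(2) per ~4 KB instead
 * of one per byte.  Not the actual utils.c code. */
#include <string.h>
#include <unistd.h>

typedef struct {
    int  fd;
    char buf[4096];
    int  pos, len;
} bread_t;

/* One byte from the buffer; refills with a single read(2) when empty. */
static int bgetc(bread_t *b)
{
    if (b->pos >= b->len) {
        b->len = (int)read(b->fd, b->buf, sizeof b->buf);
        b->pos = 0;
        if (b->len <= 0)
            return -1;                  /* EOF or error */
    }
    return (unsigned char)b->buf[b->pos++];
}

/* getline-style helper: copy up to n-1 chars through '\n';
 * returns the line length, or -1 at EOF. */
int bgetline(bread_t *b, char *s, int n)
{
    int i = 0, c = -1;
    while (i < n - 1 && (c = bgetc(b)) != -1) {
        if (c == '\n')
            break;
        s[i++] = (char)c;
    }
    s[i] = '\0';
    return (i == 0 && c == -1) ? -1 : i;
}
```

The >1000 kernel crossings per request become a handful, with the per-character work staying in user mode as Andrew describes.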
BTW, Apache has been running on hotwired all weekend, working mostly
without a hitch. The only problems were perl CGI scripts with buffered
i/o calling programs without buffered i/o, such that the output of the
programs without buffering would appear before the "Content-type:
text/html" line, resulting in "500 server error"s. Fixing the scripts to
do nonbuffering (use "$| = 1") fixed that problem, but I even had to do
this on a script or two that I didn't write (like Digicash's shop
software) so it's possible that others will have this problem. It should
get documented.... other than that, it's been working like a champ! My
workstation, an Indy R4400 running 5.3, in the last minute got 283 http
accesses and the load average is 1.3, whereas normally with that kind of
load it'd be well over 2. The average number of concurrent
child processes has decreased as well, from 20 to 7 (comparing 1.3R and
apache on different machines with the same load). I've noticed a lot
fewer 403 responses, though.... I'll do a more detailed report later.
Brian
--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@hotwired.com brian@hyperreal.com
http://www.hotwired.com/Staff/brian/