Mailing List Archive

NCSA non-forking model doesn't fix the problem
Using some code which Rob T's been working on, I have the following
theory about the NCSA approach to non-forking that I'd like to
air for comments...

The NCSA 1.4 model for a non-forking httpd is fundamentally flawed...

and here's why *I* think so:

1.4 requires the root-httpd to do all the "accept()" calls, and to
feed the child-httpds with file descriptors to complete each
request.

This effectively makes root-httpd a process scheduler.

All systems already have an optimized process scheduler; bolting
another one on top, running in user mode, is a bad idea.

I did some quick tests with an NCSA 1.4-like configuration, and set up
5 child-httpds. With continuous requests coming in from 11 different
sources, one or more of the child-httpds sat idle for long periods
while the others were left to do all the work.

What I think was happening was that root-httpd was itself not being
scheduled frequently enough to do a decent job of scheduling the
child-httpds. In the time it took root-httpd to be rescheduled, one of
the "busy" child-httpds had finished its latest request and become
"ready" again. During that window, requests were coming into port 80
that went unserviced, even though idle child-httpds were sitting there
eager to do some work.

A single processor system doesn't need a root-httpd to act as
a scheduler. All the work can be performed inside the child-httpd
processes. MP systems only need a locking mechanism around accept()
to make this simpler model work.

I base this on empirical data and a belief that the theory
it highlights is sound.

While forking is known to be costly, the biggest bottleneck that
a non-forking model should attack is the single process responsible
for the accept(). Remove that from the equation and things should
really fly.

comments?

robh