Mailing List Archive

HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Randy Terbush wrote:
> With the lag in browsers adopting the newer standards, can we really
> expect to take advantage of these features before a majority of the
> browsers have begun using them? I still have not found MultiViews
> as useful as it could be without the browsers doing the right thing.

Rob McC could probably answer here when he's not busy, but I'm guessing
that the benefits of HTTP/1.1 won't be nearly as subtle as content
negotiation and that once 1.1 is officially proposed they'll start
implementing it on both the server and browser side.

Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com brian@hyperreal.com http://www.[hyperreal,organic].com/
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Brian Behlendorf wrote:

> Rob McC could probably answer here when he's not busy, but I'm guessing
> that the benefits of HTTP/1.1 won't be nearly as subtle as content
> negotiation and that once 1.1 is officially proposed they'll start
> implementing it on both the server and browser side.
>

I wish I could reproduce my live-action HTTP/1.0 vs HTTP/1.1 vs HTTP-NG
performance piece from Darmstadt (Tim and Henrik were great in the roles
of client and server :-).

You can get a lot of the benefits of persistent connections from these
new protocols even if only two machines in the entire world ever run
them - just set up a proxy server at each end of the intercontinental
link, and you can avoid the latency over the longest pipe.

One suggestion I did push which had a fair reception (though there are
some good arguments against it) is using HTTP-NG in its simplest form as
the packetiser and multiplexing component of HTTP/1.1.

HTTP-NG uses a simple packetising protocol (SCP). SCP basically just
sticks a channel identifier and a length count in front of each chunk of
data, together with a few flags to do things like close or abort a
channel. HTTP-NG also has a request type, "HTTP-TOS", which is basically
designed to carry an HTTP/1.0 request and then receive an HTTP/1.0
response. This request type was originally included to solve a problem
with gatewaying S-HTTP and MDA (which I just have to support in MDMA),
but it turns out to make retrofitting existing servers really easy (a
little bit of wrapper magic to get the packets, and a little search and
replace on the send_foo files so they don't write straight to the file
descriptor).
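
To make that concrete, the framing is about this complicated (the field
widths and flag bits below are invented for illustration - the real
layout is whatever the SCP draft says):

    /* Sketch of an SCP-style frame: a channel id and a length count in
     * front of each chunk, plus a couple of flags.  Field widths and
     * flag values are illustrative guesses, not the actual wire format. */

    #include <stddef.h>
    #include <stdint.h>

    #define SCP_FLAG_FIN   0x01        /* close this channel cleanly */
    #define SCP_FLAG_ABORT 0x02        /* kill the channel mid-stream */

    struct scp_frame {
        uint16_t channel;              /* which multiplexed channel */
        uint16_t flags;                /* FIN/ABORT/etc. */
        uint32_t length;               /* bytes of data that follow */
    };

    /* Pull a header off the front of a buffer (network byte order). */
    int scp_parse_header(const unsigned char *buf, size_t len,
                         struct scp_frame *f)
    {
        if (len < 8)
            return -1;                 /* need more bytes first */
        f->channel = (buf[0] << 8) | buf[1];
        f->flags   = (buf[2] << 8) | buf[3];
        f->length  = ((uint32_t)buf[4] << 24) | ((uint32_t)buf[5] << 16) |
                     ((uint32_t)buf[6] << 8)  |  (uint32_t)buf[7];
        return 8;                      /* header bytes consumed */
    }

Wrap a reader like that around an existing server's input path and
you've got the "wrapper magic" half of the retrofit.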

HTTP-NG allows a lot of the cool stuff, like interleaving streams and
responding to requests out of order, to be disabled; it also negotiates
the types of requests each system handles (it's designed for
incremental implementation).

If anybody's interested I'll knock off a quick draft of NG-lite containing
just the bits needed to run in this profile. I could also include the
necessary parsing code (it's not rocket science - hell, it's barely
safety match science :-)

My current idle-task is trying to change apache to make the back end more
protocol independent (less writing straight to the socket) and to make
the code re-runnable (fixing all the dies). I should have the java-ng
client going soon, and I'd like to be able to test against a non-threaded
server.
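
The shape of the change is roughly this (all names here are invented
for the example - the real patch will look different):

    /* Sketch of a protocol-independent output layer: response code
     * calls conn->send() and never touches the file descriptor itself.
     * Names are made up for illustration. */

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    typedef struct conn conn;
    struct conn {
        int fd;
        int (*send)(conn *, const void *, size_t);
    };

    /* Plain HTTP/1.0: write straight through to the socket. */
    static int send_raw(conn *c, const void *buf, size_t len)
    {
        return write(c->fd, buf, len) == (ssize_t)len ? 0 : -1;
    }

    /* An HTTP-NG flavour would wrap each chunk in an SCP frame here
     * instead; the code above this layer never knows the difference. */

    static int send_status(conn *c, int code, const char *reason)
    {
        char line[128];
        int n = snprintf(line, sizeof line, "HTTP/1.0 %d %s\r\n",
                         code, reason);
        return c->send(c, line, n);
    }

    int main(void)
    {
        conn c = { 1 /* stdout, standing in for a socket */, send_raw };
        return send_status(&c, 200, "OK");
    }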


Simon
p.s.

Warning: Do not write code in java and immediately after try to write
code in C++; you don't realise how broken C++ is until you use a language
so similar to C++, but which does the right thing for so many of
those things that C++ gets wrong.....
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Simon Spero wrote:
> One suggestion I did push which had a fair reception (though there are
> some good arguments against it) is using HTTP-NG in its simplest form as
> the packetiser and multiplexing component of HTTP/1.1.

If some public-domain reference code could get out there in short order,
this wouldn't be too hard to accomplish....

> If anybody's interested I'll knock off a quick draft of NG-lite containing
> just the bits needed to run in this profile. I could also include the
> necessary parsing code (it's not rocket science - hell, it's barely
> safety match science :-)

Yes!

> My current idle-task is trying to change apache to make the back end more
> protocol independent (less writing straight to the socket) and to make
> the code re-runnable (fixing all the dies).

Make sure you coordinate with Rob H's "Fork-free" patch, as it sounds
like that covers a large part of the same ground. Most of us had just
presumed that the core code would be dumped when it's time to go
multi-threaded, but hopefully a lot of the modules could be reused.

> I should have the java-ng
> client going soon, and I'd like to be able to test against a non-threaded
> server.

Word on the street is that someone is working on a java-based HTTP server
which is already 10 times faster than netsite... shshshsh :)

Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com brian@hyperreal.com http://www.[hyperreal,organic].com/
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Brian Behlendorf wrote:
>
> Word on the street is that someone is working on a java-based HTTP server
> which is already 10 times faster than netsite... shshshsh :)

That sounds more or less right, though I'd like to know whether the code
is running on 2.4 or 2.5. The numbers are pretty similar to the old MDMA,
which was spending virtually none of its time in user-based code.


Java bleeds a bit of cpu performance, but for "gissa file" requests, all
the cpu is going on TCP in the kernel (there's a fair bit of lossage in
2.4 that is reportedly fixed in 2.5; hopefully I'll get to try 2.5 out RSN
on some meaty hardware and see how well it flies).

bash# /etc/mount /dev/peeve /hobby-horse
bash#
It's very easy to get bogus numbers on benchmarks, because there's no
agreed-on standard. The CommerceNet WebStones RFP should be out soon,
and that looks like it should end up with something pretty sane -
however, doing benchmarks right gets pretty hard. A lot depends on the
speed of the client's network connection; loads that are fine if
everyone is on a T1 can knock a server completely out of shape if you
change to V.32bis modems. It's kind of hard to distill this sort of
performance curve into a single '10x' ratio.
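
For the unconvinced, a back-of-envelope (Little's law: open connections
~ request rate x transfer time; numbers invented for illustration):

    /* Concurrent connections ~= requests/second * seconds per transfer.
     * Illustrative numbers only. */
    #include <stdio.h>

    int main(void)
    {
        double doc_bytes = 10240.0;          /* a 10K document */
        double rate      = 50.0;             /* 50 requests/second */
        double t1_Bps    = 1544000.0 / 8;    /* T1 client: ~193 KB/s */
        double v32_Bps   = 14400.0 / 8;      /* V.32bis modem: 1.8 KB/s */

        printf("T1 clients:      %5.1f open connections\n",
               rate * doc_bytes / t1_Bps);
        printf("V.32bis clients: %5.1f open connections\n",
               rate * doc_bytes / v32_Bps);
        return 0;
    }

Same fifty hits a second: about three connections open at once with T1
clients, nearly three hundred with modems. That's the curve a single
'10x' can't capture.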

bash# /etc/umount /hobby-horse
bash#


Simon
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
/*
* "Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)"
* written Thu, 4 May 1995 20:33:44 -0700 (PDT)
*
* It's very easy to get bogus numbers on benchmarks, because there's
* no agreed-on standard. The CommerceNet WebStones RFP should be out
* soon, and that looks like it should end up with something pretty
* sane - however, doing benchmarks right gets pretty hard. A lot
* depends on the speed of the client's network connection; loads that
* are fine if everyone is on a T1 can knock a server completely out
* of shape if you change to V.32bis modems. It's kind of hard to
* distill this sort of performance curve into a single '10x' ratio.
*
*/

[okay, my turn then]

And I agree. We found that problems that never showed up in the lab
(simulated load) suddenly created very real problems for
home.netscape.com. Nearly all of them turned out to be things that
needed to be fixed at the TCP level; most of them weren't at the
application level. The benchmarks I've seen thus far can't effectively
capture the parallelism and bursts inherent in real-world TCP loads.

Contrary to what people might think, Netsite was not designed to be
the fastest HTTP server ever. It was designed with more modest goals
in mind: build a high performance HTTP server that's good enough for
the loads seen by real sites both today and in the near
future. Functionality has always taken precedence over performance. We
could have done some things to make it faster; servers which use
kernel threading where available or some type of non-preemptive user
level threads will probably win over multiprocess models.

But we had reasons unrelated to performance to avoid a single-process
design. Lack of portability, the possibility of catastrophic
instability, and an abnormally short development cycle were very real
concerns for us. I actually wrote several papers on the design of
Netsite a while back; I don't know if they'll ever be published
outside the company.

And how many customers are going to benefit from that kind of a
difference? Less than 1%? Less than 5%? Netsite is thread-ready, runs
threaded under NT with very little additional code, and when
persistent connections are added (probably in the next release if
standards go well) we'll undoubtedly start using them under UNIX.

But at this point, the interesting performance work is not going to
take place at the application level, it's going to take place at the
protocol level. We've routinely done over three million hits a day on
one machine with our software, and that's more traffic than just about
anybody on the Web today is going to get in the near future.

--Rob
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Rob McCool wrote:
> But at this point, the interesting performance work is not going to
> take place at the application level, it's going to take place at the
> protocol level. We've routinely done over three million hits a day on
> one machine with our software, and that's more traffic than just about
> anybody on the Web today is going to get in the near future.

Is the problem the TCP/IP protocol or the broken implementations out
there? How much will IPNG help, you think?

3 million hits... Yahoo is at the 2 million/day mark now, and AOL has
just launched its web access now (we're getting *tons* of hits from
*.proxy.aol.com, even in the middle of the night!)

Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com brian@hyperreal.com http://www.[hyperreal,organic].com/
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
/*
* "Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)" by Brian B
* written Fri, 5 May 1995 00:18:48 -0700 (PDT)
*
* Is the problem the TCP/IP protocol or the broken implementations
* out there? How much will IPNG help, you think?

The problem lies not with TCP or IP, but with HTTP. That's why I think
the work being done on HTTP/1.1 and HTTP-NG is the most
interesting. It's work that has needed to be done for a very long
time. To answer your earlier question about how soon new protocol
developments are going to be deployed, we are working on the ones that
exist, and we will be working on the up and coming ones in our
browsers and servers.

* 3 million hits... Yahoo is at the 2 million/day mark now, and AOL
* has just launched its web access now (we're getting *tons* of hits
* from *.proxy.aol.com, even in the middle of the night!)
*/

Yup. We're also finding that under real-world conditions, most TCP
implementations just can't keep up with more than 100 new connections
per second. In a lab, they can. But in a lab, you don't have high
retransmission rates, slow clients, or anything else that makes HTTP
difficult for the kernels to keep up with.
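
For reference, this is the sort of loop those connections-per-second
figures get measured against - a bare skeleton, not Netsite code, with
all error handling stripped:

    /* Minimal accept loop.  In the lab the listen backlog rarely
     * fills; with slow, lossy real-world clients the queue backs up
     * long before the CPU does. */

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);

        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 128);   /* the backlog queue is what overflows first */

        for (;;) {
            int c = accept(s, NULL, NULL);
            if (c < 0)
                continue;
            /* hand c to a worker; just close it in this skeleton */
            close(c);
        }
    }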

In this regard, both new protocols and new cluster-management
techniques - ones that help site managers automatically handle having
more than one server machine - will help popular sites as Web
availability (and thus traffic) grows.

--Rob
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Fri, 5 May 1995, Rob McCool wrote:
> /*
> * "Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)" by Brian B
> * written Fri, 5 May 1995 00:18:48 -0700 (PDT)
> *
> * Is the problem the TCP/IP protocol or the broken implementations
> * out there? How much will IPNG help, you think?
>
> The problem lies not with TCP or IP, but with HTTP. That's why I think
> the work being done on HTTP/1.1 and HTTP-NG is the most
> interesting.

Ah, okay, by your earlier message I thought you were saying the opposite
(isn't HTTP at the "application layer"? hmmm)

> It's work that has needed to be done for a very long
> time. To answer your earlier question about how soon new protocol
> developments are going to be deployed, we are working on the ones that
> exist, and we will be working on the up and coming ones in our
> browsers and servers.

Cool.

> In this regard, both new protocols and new cluster-management
> techniques - ones that help site managers automatically handle having
> more than one server machine - will help popular sites as Web
> availability (and thus traffic) grows.

But bandwidth will still be a killer - which is why I've turned into a
proxy cop on www-talk recently :) When we move from the 5 million
browsers mark to the 500 million browsers mark (15 years?), proxies are
going to be the only thing that holds this whole thing together.

You've been listening to "Late Night Musings" with your hosts, Rob McCool
and Brian Behlendorf. We now return you to your regularly scheduled
mail queue, already in progress.

Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com brian@hyperreal.com http://www.[hyperreal,organic].com/
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
/*
* "Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)" by Brian
* written Fri, 5 May 1995 01:08:56 -0700 (PDT)
*
* Ah, okay, by your earlier message I thought you were saying the
* opposite (isn't HTTP at the "application layer"? hmmm)

Network-wise, yes... what I meant was application == HTTP server code,
as opposed to the kernel.

* But bandwidth will still be a killer - which is why I've turned
* into a proxy cop on www-talk recently :) When we move from the 5
* million browsers mark to the 500 million browsers mark (15 years?),
* proxies are going to be the only thing that holds this whole thing
* together.
*/

15 years? Nah, give it another six months.

I'd also agree about the importance of caching proxies and
bandwidth. For people who are setting up servers, most of the time
the question isn't how fast should my machine be, it's how large
should my network pipe be?
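
A back-of-envelope for the pipe question (numbers invented for
illustration):

    /* Rough pipe sizing: hits/day * average response size -> bits/sec,
     * plus a fudge factor because traffic peaks well above the daily
     * average.  Illustrative numbers only. */
    #include <stdio.h>

    int main(void)
    {
        double hits_per_day = 3000000.0;  /* the 3M/day figure above */
        double avg_bytes    = 10240.0;    /* assume 10K per response */
        double peak_factor  = 3.0;        /* peaks ~3x the average */

        double avg_bps = hits_per_day * avg_bytes * 8.0 / 86400.0;

        printf("average: %.2f Mbit/s\n", avg_bps / 1e6);
        printf("peak:    %.2f Mbit/s\n", avg_bps * peak_factor / 1e6);
        return 0;
    }

Three million hits a day at 10K apiece is almost two T1s just on
average, before you even look at the peaks. The machine is rarely the
bottleneck; the pipe is.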

--Rob
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Thu, 4 May 1995, Rob McCool wrote:

>
> And I agree. We found that problems that never showed up in the lab
> (simulated load) suddenly created very real problems for
> home.netscape.com. Nearly all of them turned out to be things that
> needed to be fixed at the TCP level; most of them weren't at the
> application level. The benchmarks I've seen thus far can't effectively
> capture the parallelism and bursts inherent in real-world TCP loads.

<double-mocha driven preaching-to-converted mode activated>

That's the point that the technical people were making at the RFP
meeting (actually, mostly me and Brendan from Netscape). People
don't want to know how many 300-byte documents they can serve per
second - they want to know how many big-I Internet users can be
browsing at any given time without their having to buy a new machine
or upgrade their connection.

As you say, the protocol is still the problem; when I was visiting
Dave Raggett at HP-Labs, I was able to get TPS rates out of a trivial
HTTP-NG mockup, running on a spare machine that no-one else wanted,
that far exceeded the best numbers I can get out of massively tweaked
servers running on far faster machines. There's a limit to how fast you
can make a steam engine go... Dammit Jim - you canna change the laws of
physics.

Simon
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
Date: Thu, 4 May 1995 23:55:21 -0700
From: Rob McCool <robm@netscape.com>

But at this point, the interesting performance work is not going to
take place at the application level, it's going to take place at the
protocol level. We've routinely done over three million hits a day on
one machine with our software, and that's more traffic than just about
anybody on the Web today is going to get in the near future.

In general, sure... for most purposes, ANY of the recent crop of
servers offers more than adequate performance when run on reasonable
hardware (Netsite, NCSA 1.4... hell, even Apache ;-). This does make
breast-beating from anyone about raw performance a bit beside the
point.

There is an exception, however, which is sites that do a whole lot of
CGI. The inherent overhead of the CGI mechanism (particularly if the
scripts themselves are written in something like Perl, or even if they
just link a whole lot of shared libraries) does drag things down. Of
course, the only cure for that is a good server-internal API... and
Rob M's done some pretty neat things there. (Anyone who hasn't looked
at the NSAPIs should... Netscape put the reference on the "standards
page", but once you know to look there, it's not hard to find.)

rst
Re: HTTP/1.1 implementation speeds (was Re: votes for 0.6.3)
On Fri, 5 May 1995, Brian Behlendorf wrote:

> You've been listening to "Late Night Musings" with your hosts, Rob McCool
> and Brian Behlendorf. We now return you to your regularly scheduled
> mail queue, already in progress.

100% Protocol - Geek 100FM

> But bandwidth will still be a killer - which is why I've turned into a
> proxy cop on www-talk recently :) When we move from the 5 million
> browsers mark to the 500 million browsers mark (15 years?), proxies are
> going to be the only thing that holds this whole thing together.

Bandwidth is ready to make a jump of way more than 100-fold; 600Mbit
switching is here now - it's just the computers that can't keep up :-)
The real problem is going to be latency. The only thing that can hold
the whole thing together will be proxies. Hmmm :-)

The other thing that can really help is media-specific protocols for
time-sensitive information. It turns out that it isn't just isochronous
media like audio and video that fit this description - icons on a page
are also time-critical; there's nothing more frustrating than just
sitting there waiting for an icon to render. If you transfer icons over
a lossy protocol and use a real-time transport protocol (er, like RTP),
you can adapt to the conditions over the path you're using: the image
can be displayed even with packet loss, and the level of detail
throttled back to suit the line conditions available.
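
Crudely, the sending side looks something like this (the packet format
is invented for the example; a real codec would send coarse passes
first and refinements later):

    /* Toy version: ship an icon as independently decodable chunks over
     * UDP, each tagged with a sequence number and a detail level, so a
     * lost packet costs detail instead of stalling the whole image. */

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    struct icon_pkt {
        uint16_t seq;            /* which chunk of the image */
        uint8_t  level;          /* 0 = coarse pass, higher = refinement */
        uint8_t  pad;
        unsigned char data[512];
    };

    static void send_icon(int sock, const struct sockaddr_in *to,
                          const unsigned char *img, size_t len)
    {
        struct icon_pkt p;
        uint16_t seq = 0;
        size_t off, n;

        for (off = 0; off < len; off += sizeof p.data, seq++) {
            n = len - off < sizeof p.data ? len - off : sizeof p.data;
            p.seq   = htons(seq);
            p.level = 0;         /* single pass in this toy version */
            p.pad   = 0;
            memcpy(p.data, img + off, n);
            sendto(sock, &p, n + 4, 0,
                   (const struct sockaddr *)to, sizeof *to);
        }
    }

The receiver renders whatever arrives; missing chunks just leave the
icon a little coarser until (or unless) a refinement shows up.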

Simon
Re: HTTP/1.1 implementation speeds
> * But bandwidth will still be a killer - which is why I've turned
> * into a proxy cop on www-talk recently :) When we move from the 5
> * million browsers mark to the 500 million browsers mark (15 years?),
> * proxies are going to be the only thing that holds this whole thing
> * together.
> */
>
> 15 years? Nah, give it another six months.

Rob McCool.


Rob, what are you basing this on ? Everyone using 100 browsers
instead of 1 in the near future ???

Maybe Ncom plan to bundle Netscape with each box
of corn flakes :-) ... send 3 tokens for Netscape, 4 if
you want SSL.

robh
Re: HTTP/1.1 implementation speeds
[warning: this message is content free]

/*
* "Re: HTTP/1.1 implementation speeds " by Rob Hartill <hartill@ooo.lanl.gov>
* written Fri, 5 May 95 11:31:19 MDT
*
** 15 years? Nah, give it another six months.
*
* Rob, what are you basing this on ? Everyone using 100 browsers
* instead of 1 in the near future ???

I was mostly kidding. But I think you're on to something. Ten browsers
for every finger?

* Maybe Ncom plan to bundle Netscape with each box of corn flakes :-)
* ... send 3 tokens for Netscape, 4 if you want SSL.
*/

Now there's an idea. Floppy disks in specially marked boxes of Cap'n
Crunch.

--Rob
Re: HTTP/1.1 implementation speeds
On Fri, 5 May 1995, Rob McCool wrote:

> *
> * Rob, what are you basing this on ? Everyone using 100 browsers
> * instead of 1 in the near future ???
>
> I was mostly kidding. But I think you're on to something. Ten browsers
> for every finger?

Hell - who needs fingers? I have one browser under voice control, and
another two under mouse control.

Simon
Re: HTTP/1.1 implementation speeds
Date: Fri, 5 May 1995 12:50:01 -0700
From: Rob McCool <robm@netscape.com>

* Rob, what are you basing this on ? Everyone using 100 browsers
* instead of 1 in the near future ???

I was mostly kidding. But I think you're on to something. Ten browsers
for every finger?

Danny Hillis has a story he tells about a conversation in the hotel
lobby at a computer conference around 1972. The topic is where the
exponential curve in computer sales is going to top off --- no one's
sure, but there has to be a limit. After all, quips one of the guys,
what are people going to do with them all --- put a computer in every
doorknob?

Twenty years later. Same lobby. Same people. Same hotel --- but
they've just installed a keycard system with programmable locks.
There is a computer in every doorknob.

* Maybe Ncom plan to bundle Netscape with each box of corn flakes :-)
* ... send 3 tokens for Netscape, 4 if you want SSL.
*/

Now there's an idea. Floppy disks in specially marked boxes of Cap'n
Crunch.

I thought AOL was already using this strategy.

rst