Mailing List Archive

Keynote/Boardwatch Results
I hate to say I told you so. Actually, I don't, but I can be such a pain
in the ass anyway, what difference does it make.

It would appear that everyone is pretty smugly satisfied by consensus that
the performance series we ran actually measures server performance and that
since all ISPs run weeny home servers, this was not "really" a test, flawed
methodology, etc. I corresponded with Doug the Hump at Digex about this.
I've liked this guy since I first met him largely because he's funny and
doesn't take himself too seriously. He's got a yen for black helicopters
that still has me in stitches.

In any event, he didn't appear to be emotionally involved, but noted that
he did think their web server was the problem in their case and that it was
a real issue with the other backbones as well. He said that if we had
measured one of their honking customer web servers, it would have all
been better.

Now I was clear with Doug, as I have been on this mailing list, that
servers DO have an impact. When you are attempting to measure end to end,
you certainly hope that the results cumulatively represent everything that
has an effect. But I have been very clear that I didn't buy the "measuring
server" theory. It will have an effect, but not nearly the effect you have
all apparently dreamed up amongst yourselves with no data at all.

Anyway, Doug coughed up the names of a couple of "honkin" customer sites.
One on a UNIX machine running Apache. One on an NT server. As it so happens,
they are the NIKE site and the FORBES site. I agreed to run them all for a
few days and publish the results rather openly. Here they are:

www.digex.com

Metro Area Abbr Population Mean (sec) Std Dev (sec) Data Pts. Rating
Omaha OMA 640000 - - - -
Norfolk ORF 1443000 0.917 0.787 477 -
Milwaukee MKE 1607000 2.641 5.545 427 -
Cleveland CLE 2860000 2.893 4.155 432 -
Washington D.C WAS 6727000 3.326 10.274 436 -
Kansas City MKC 1583000 3.334 3.881 440 -
Detroit DTT 5187000 3.424 11.593 470 -
Atlanta ATL 2960000 3.434 5.774 447 -
Denver DEN 1980000 3.497 2.725 433 -
Tampa TPA 2068000 3.541 9.753 440 -
Minneapolis-St. Paul MSP 2539000 3.697 7.263 420 -
Pittsburgh PIT 2395000 4.039 9.484 441 -
Miami MIA 3193000 4.336 11.227 409 -
Chicago CHI 8240000 4.433 7.977 432 -
Philadelphia PHL 5893000 5.687 36.621 446 -
Columbus CMH 1345000 6.582 10.275 352 -
San Diego SAN 2498000 7.124 6.563 111 -
Houston HOU 3731000 7.156 19.197 426 -
Boston BOS 5455000 8.147 68.522 342 -
New York NYC 19550000 8.911 48.549 436 -
San Francisco SFO 6253000 9.462 70.834 367 -
Phoenix PHX 2238000 12.226 26.802 447 -
Seattle SEA 2970000 14.989 74.818 201 -
Dallas-Ft. Worth DFW 4037000 16.24 36.844 421 -
Los Angeles LAX 14532000 33.879 78.169 572 -
Salt Lake City SLC 1072000 48.719 80.656 383 -
Portland PDX 1793000 65.23 88.641 446 -

Overall (mean, std dev, data pts.): 11.287 42.959 10654
www.nike.com

Metro Area Abbr Population Mean (sec) Std Dev (sec) Data Pts. Rating
Omaha OMA 640000 - - - -
Norfolk ORF 1443000 1.359 6.704 385 -
Washington D.C WAS 6727000 2.838 6.486 426 -
Cleveland CLE 2860000 2.892 5.958 424 -
Detroit DTT 5187000 3.032 8.818 380 -
Milwaukee MKE 1607000 3.262 10.087 420 -
Tampa TPA 2068000 3.263 5.876 427 -
Philadelphia PHL 5893000 3.697 7.305 433 -
Kansas City MKC 1583000 3.82 7.477 429 -
Los Angeles LAX 14532000 3.979 7.142 441 -
Denver DEN 1980000 4.22 8.725 421 -
Miami MIA 3193000 4.338 13.774 398 -
Pittsburgh PIT 2395000 4.392 10.68 431 -
Minneapolis-St. Paul MSP 2539000 4.545 8.688 409 -
Atlanta ATL 2960000 4.577 16.279 438 -
New York NYC 19550000 5.163 13.025 427 -
Boston BOS 5455000 5.725 16.02 347 -
Chicago CHI 8240000 5.95 11.926 425 -
San Francisco SFO 6253000 7.254 21.74 353 -
Houston HOU 3731000 7.557 14.675 421 -
Seattle SEA 2970000 9.359 22.033 184 -
Columbus CMH 1345000 9.991 21.586 358 -
Phoenix PHX 2238000 14.089 25.896 432 -
Dallas-Ft. Worth DFW 4037000 17.677 44.361 407 -
San Diego SAN 2498000 19.029 10.833 16 -
Salt Lake City SLC 1072000 37.156 92.765 302 -
Portland PDX 1793000 79.404 166.148 437 -

Overall (mean, std dev, data pts.): 9.868 44.374 9971
www.forbes.com

Metro Area Abbr Population Mean (sec) Std Dev (sec) Data Pts. Rating
Omaha OMA 640000 - - - -
Kansas City MKC 1583000 0.0050 0.0 14 -
Norfolk ORF 1443000 2.165 11.616 380 -
Miami MIA 3193000 2.715 10.382 396 -
Washington D.C WAS 6727000 3.151 22.964 419 -
Philadelphia PHL 5893000 3.177 14.547 423 -
Milwaukee MKE 1607000 3.204 19.791 416 -
Atlanta ATL 2960000 3.364 17.068 434 -
Minneapolis-St. Paul MSP 2539000 3.847 10.974 405 -
Cleveland CLE 2860000 3.879 20.768 415 -
Denver DEN 1980000 3.974 22.303 417 -
Tampa TPA 2068000 4.001 19.877 421 -
Pittsburgh PIT 2395000 4.333 25.179 426 -
Detroit DTT 5187000 4.428 20.891 376 -
New York NYC 19550000 4.912 21.919 423 -
Boston BOS 5455000 5.347 30.106 340 -
Seattle SEA 2970000 6.324 12.105 179 -
Chicago CHI 8240000 6.378 26.431 421 -
San Francisco SFO 6253000 6.575 35.339 351 -
Houston HOU 3731000 7.49 20.885 417 -
Columbus CMH 1345000 7.726 19.924 351 -
Dallas-Ft. Worth DFW 4037000 13.068 22.705 404 -
Phoenix PHX 2238000 15.445 50.526 427 -
Los Angeles LAX 14532000 23.117 150.429 460 -
Salt Lake City SLC 1072000 47.801 227.832 297 -
San Diego SAN 2498000 85.378 153.109 14 -
Portland PDX 1793000 89.043 241.734 425 -

Overall (mean, std dev, data pts.): 11.53 79.068 9451


The bottom line is that there is some slight variation, but as I predicted,
not much. And as it so happens, it was generally in the wrong direction.
Nike was a little better on the mean and a little worse on the standard
deviation. Forbes was a little worse than Digex on the mean (by a fraction
of a second), and more so on the deviation. Digex's original figures for
the April 20-May 20 period were 9.162 seconds on the mean and 31.752 on the
standard deviation - slightly better than average. Note that 30 days and
five days are apples and oranges if you comprendo fruit. The means for the
five-day period were roughly 9.9, 11.3, and 11.5 with the "weeny" Digex
server in the middle.

So yeah, servers do have an impact, but not nearly what you had hoped and
believed. I would say minuscule in a universe where our results ran from
1.5 to 26.8 seconds. And while I'm not shy about "I told you so's", my real
reason for putting this out is that I have heard from several backbones
that are scrambling to upgrade and move their home page servers etc. I
personally would get them up to what you think they ought to be anyway, but
if you go to extraordinary measures, you're probably going to be
disappointed in how little the numbers move - as I predicted. It just
won't move the numbers much. A little perhaps, and if you're not careful -
potentially the wrong direction. Doug was pretty emphatic that these
customer servers were the "good" ones on the good part of the net, and the
home server was the weeny one. Maybe Doug Mohney can jump in and remind me
which was which as far as NT and UNIX goes if anyone is interested. I
would guess off hand that Forbes is taking a little more load than Nike,
but I may be reading messages from God in standard deviation cloud
formations. There just isn't that much difference - certainly not in the
mean.

The bottom line is that if we are actually measuring server performance, we
should be able to measure three different servers, two avowed muscle boxes
and one avowed weeny one, all on the same network (connected differently,
I'm told) and get at least as wide a variation as we saw between networks.
We didn't, by about a mile and a fortnight. At LEAST it ought to be in the
predicted direction. This clearly was not. Good theory - but not so -
even in the lab.

Again, I think you guys should take a look at this stuff a little more
open-mindedly and professionally. It's certainly NOT to scientific laboratory
standards, but it is certainly interesting and I would claim very VALID
information. Better information than you have previously had at your
disposal. It's an attempt to look at the FOREST, not tree limb diameters,
leaf patterns and nutrient flows - all very interesting though those may be,
I do grant you. A great deal of this network has operated on theories that,
once scaled up, nobody really knows whether they work that way or not. I can
tell you from personal experience that most of what I know is wrong, and I
find that out over, and over, and over again. I might also mention that a
lot of what I'm told turns out to be wrong as well. I'm only going to
SUGGEST that I may not be alone.

We'll continue to work on it. I discussed the universal test page
suggestion with Gene Shklar this afternoon and we will make it so. Again,
I don't think it will move any numbers around much, but certainly, as
Forrest Gump says, ONE LESS THING.... And if you can make a case for a
different server ON YOUR OWN NETWORK, we will certainly entertain requests
to shoot at another machine. Nominally July 15-August 15, though I'm not
signing up to those precise dates at this time.

Regards


===================================================================
Jack Rickard Boardwatch Magazine
Editor/Publisher 8500 West Bowles Ave., Ste. 210
jack.rickard@boardwatch.com Littleton, CO 80123
www.boardwatch.com Voice: (303)973-6038
===================================================================
Re: Keynote/Boardwatch Results
jack.rickard@boardwatch.com (Jack Rickard) writes:

> Again, I think you guys should take a look at this stuff a little more open
> mindedly and professionally. It's certainly NOT to scientific laboratory
> standards,

You'll pardon me, I'm sure, but I must point out that our professional
standards _are_ scientific laboratory standards. These appear to be
distinct from magazine standards.

Tony
Re: Keynote/Boardwatch Results
Jack,

If you supplied a test page, we would be willing to make it available on
our server. Assuming that server testing is the best thing available at
this time, we should try to reduce the variables. If you selected a page
whose content loaded slightly faster than pages selected now, you
would encourage others to make your test page available.

I am still wondering how the 27 sites are connected, not so much what the
cities are but which path their connectivity takes to my network, i.e.,
whose network do they have to go through before reaching me. I do not have
my copy of the complete study yet; I guess it's on its way...:)

Best Regards,
Robert Laughlin

----------------------------------------------------------------------------
DataXchange sales: 800-863-1550 http://www.dx.net
Network Operations Center: 703-903-7412 -or- 888-903-7412
----------------------------------------------------------------------------

On Mon, 7 Jul 1997, Jack Rickard wrote:
> We'll continue to work on it. I discussed the universal test page
> suggestion with Gene Shklar this afternoon and we will make it so. Again,
> I don't think it will move any numbers around much, but certainly, as
> Forest Gump says, ONE LESS THING.... And if you can make a case for a
> different server ON YOUR OWN NETWORK, we will certainly entertain requests
> to shoot at another machine. Nominally July 15-August 15th though I'm not
> signing up to those precise dates at this time.
>
> Regards
>
>
> ===================================================================
> Jack Rickard Boardwatch Magazine
> Editor/Publisher 8500 West Bowles Ave., Ste. 210
> jack.rickard@boardwatch.com Littleton, CO 80123
> www.boardwatch.com Voice: (303)973-6038
> ===================================================================
>
>
Re: Keynote/Boardwatch Results
Below you might find more information about the server locations which
Keynote uses for their connectivity data. The first three columns you
can find for yourself at the URL: http://www.keynote.com/kn/agents.html
and the fourth column I added based on digging into who the IP
block was assigned/SWIPed to or (if that failed) by doing a traceroute.

The following list is Keynote's Agent Network locations.

<------ KEYNOTE PROVIDED INFORMATION -------> <my own digging notes>
City Backbone IP Address Connected at or via...

Atlanta, GA MCI 205.218.210.14 Tronco
Boston, MA UUNET 207.76.168.100 Quest Technologies
Charleston, SC Sprint 207.30.48.12 SCEscape, Inc.
Chicago, IL Good Net, MCI 208.133.72.144 MegsInet, Inc.
Cleveland, OH Sprint 207.40.34.200 Marinar/Harbor Comm.
Columbus, OH MCI, Sprint 206.31.38.106 eNET Inc.
Dallas, TX CRL 207.211.44.4 Crystalball Software
Denver, CO BBN, Sprint 206.168.144.12 Colorado Internet Coop
Detroit, MI MCI 198.108.102.9 Merit Network, Inc.
Houston, TX AGIS 206.42.40.25 4GL Corporation
Kansas City, KA MCI 208.128.115.198 Teranet Corp.
Los Angeles, CA UUNET 207.217.112.2 Earthlink Network
Los Angeles, CA Sprint xx.xx.xx.xx ** UNKNOWN **
Miami, FL BBN 207.243.133.253 PG&C Leasing
Milwaukee, WI BBN 156.46.146.19 Global Dialog/Alpha Net
Minneapolis, MN MCI 205.164.72.21 Orbis Internet/AGIS
Norfolk, VA Sprint 206.246.194.4 Visionary Systems
New York, NY UUNET, Sprint 204.141.86.156 New York Connect
New York, NY MCI 206.42.130.4 Matrix Online/AGIS
Omaha, NB Sprint 204.248.24.1 Novia, LLC
Philadelphia, PA AGIS, CRL 206.84.211.5 OpNet Inc./AGIS
Phoenix, AZ MCI 206.103.184.250 Innovative System Des.
Pittsburgh, PA MCI 206.31.12.122 Aswell Corp.
Portland, OR ELI 207.173.11.82 NWPowerNet, Inc.
Salt Lake City, UT ELI 198.60.58.11 MicroSystems Comnet
San Diego, CA AGIS 206.170.114.2 Infonex/PacBellInternet
San Francisco, CA MCI, Sprint 204.71.192.160 Internet Systems/MCI
Seattle, WA UUNET, MCI 206.129.112.29 Connect Northwest/IXA
St. Louis, MO MCI 207.230.62.16 Cybercon/Internet 1st
Tampa, FL Good Net 207.204.208.167 Combase Comm/Good Net
Washington D.C. AGIS 205.177.6.2 Hermes Internet/CAIS
Amsterdam Global One 195.61.98.18 Cybercomm/Frame Relay?

---

I am making the assumption that, since they have the above network in place
for their other Keynote services, and since these appear to be the same
list of cities, these are likely the same machines (or at least the same
networked facilities) being used for the Backbone Bandwidth tests, for
which they have so far only provided city names.
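
(For anyone who wants to repeat or extend the digging, a rough sketch of
the idea is below: shell out to whois and fall back to the last hops of a
traceroute. This is not Pete's actual method; the two sample entries and
the whois field names matched on are my own assumptions, and it presumes
stock whois and traceroute binaries are installed.)

# Rough sketch only: look up the apparent owner of each agent's address
# block via whois, falling back to the tail of a traceroute if that fails.
import subprocess

AGENTS = {
    "Atlanta, GA": "205.218.210.14",
    "Boston, MA": "207.76.168.100",
    # ... remaining Keynote agent addresses from the list above
}

def whois_owner(ip):
    out = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Field names vary by registry; these are guesses, not a standard.
        if ":" in line and line.lower().startswith(("orgname", "netname", "owner", "descr")):
            return line.split(":", 1)[1].strip()
    return None

def last_hops(ip, count=3):
    out = subprocess.run(["traceroute", "-n", ip], capture_output=True, text=True).stdout
    return out.strip().splitlines()[-count:]

for city, ip in AGENTS.items():
    owner = whois_owner(ip)
    print("%-20s %-16s %s" % (city, ip, owner if owner else last_hops(ip)))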

--
Pete Bowden ... pete@missing.com 805-928-9000
NIC: PB8 - All opinions are my own and not that of my employer
Re: Keynote/Boardwatch Results
Avi:

"Cheating" is of course encouraged. This isn't an academic test at your
local university. We're all out of school now. If you can figure out a way
to beat the game, you have ipso facto figured out a way to make the web
look faster to end users. As it appears to be, so it is. As Martha Stewart
says, that would be a "good thing." It would be a good thing for you and
your product line. It would be a good thing for your web site customers.
It would be a good thing for end-users viewing such web sites. If the net
effect of all this is that all the smart people on the net get mad at me
and go figure out a way to make the web look faster, it will be well and
good enough for me as well.


I had previously agreed NOT to publish IP numbers of the Keynote host
machines. Keynote does make this information available on their web site,
so I myself was a little bemused by the request, but I did agree to honor
it. In any event, someone else has already posted the locations and
networks (which we DID publish), along with the IP numbers, here on the
list. So you should have them.

If mirroring/caching moves the numbers definitively, it then establishes a
"real" value to such a technique, and it can be offered to customers at a
higher price with some actual data comparing how they will "look" to the
world with the less expensive simple hosting as compared to the more
expensive geographic mirroring technique. I personally think this would
move the numbers more than anything else that could be done, but that's
what looks LOGICAL to me, not anything I know or have tested. I am rather
convinced that moving a single web site to another location, putting it on
larger iron, or using a different OS will have very minor impact on the
numbers.

My own personal theory is a little vaguely formed. But I think the heart
of the performance problems lies in a visceral conundrum of the network.
It is and should be one network. The perceptual disjuncture I already
detect, even among NANOG members, between straight "routes" as viewed by
traceroute and ping, and the way data actually moves on the network (at
least vis-a-vis the mental model or "theory" I have of it) is a somewhat
shocking part of the problem. I was actually unaware until this exercise
that many of the network engineers viewed the world in this way. I was
even a bit flip about not dealing with ping/traceroute at any level of
comparison. Perhaps an article on this is in order.

But I think most of it has to do with interconnects between networks, and
architectural decisions accumulated over the years based on concepts of
what should be "fair" with regard to the insoluble but ever-moronic
"settlements" concept and who gets value from whom based on what. If
decisions had been more based on optimizing data flows, and less on whose
packets transit whose backbones and why, performance would have been
improved. I don't know how much, but certainly some. When the main thing
on the minds of CEOs of networks is preventing anyone from having a "free
ride" (ala SIdgemore's theory of optimizing Internet performance by it
being owned totally by UUNET), or the relatively mindless routing algorithm
of moving a packet to the destination network at the earliest opportunity
to make sure "I" am not paying for it's transit, if it goes to "your"
location, I suspect performance suffers. My sense is larger numbers of
smaller networks, interconnected at ever more granular locations, would be
a good thing. This will get me in big caca with the "hop counting" mindset,
and of course at about 254 hops a minor little problem arises, but I
think so nonetheless. Very small ISPs know this viscerally. They all want
to multihome to multiple backbones, and have done some work to interconnect
among themselves locally.

Savvis actually has a very interesting concept though it upsets everybody.
It kind of upsets me because it makes my head hurt, literally. They've
carried it almost to another level of abstraction. If you ponder it long
beyond the obvious, it has some interesting economic consequences.
Checkbook NAPs lead to an inverted power structure where the further away
you move from centralized networks such as internetMCI and Sprint, by
blending layers after the fashion of a winemaker, the better your product
becomes and the better apparent connectivity your customers have. The head
hurting part is that if you extend this infinitely we would all wind up
dancing to the tune of a single dialup AOL customer in Casper, Wyoming,
somewhere in the end. But there is a huge clue in here somewhere. In all
cases, the Savvis numbers were better than either the UUNET, Sprint, or
internetMCI numbers individually. Would it then be true that if there
were three Savvises, each aggregating three networks with a private NAP
matrix, and I developed a private NAP matrix using the three Savvis-level
meshes, my performance would be yet better again?

And what if Savvis opened the gates and allowed UUNET end users to connect
to Sprint IP Services web sites transiting via the Savvis network?

More vaguely, if you have four barrels of wine with one a bit acidic, one a
bit sweet, one a bit oaky, and one a bit tannic, and you blend them all
together, it would appear that you would have a least-common-denominator
wine that is acidic, sweet, oaky, and tannic. You don't. You get a fifth
wine that is infinitely superior to the sum of the parts or any one
component barrel. It is an entirely "new thing." This is sufficiently true
that it is in almost all cases how wines are made now.

Are networks like wine?

Jack Rickard
==============================================================
Jack Rickard
Boardwatch Magazine
Editor
8500 West Bowles Ave
jack.rickard@boardwatch.com
Littleton, CO 80123
(303)973-6038 voice (303)973-3731 fax
http://www.boardwatch.com
==============================================================

----------
> From: Avi Freedman <freedman@netaxs.com>
> To: Jack Rickard <jack.rickard@boardwatch.com>
> Cc: mohney@access.digex.net; nanog@merit.edu; GeneShklar@keynote.com
> Subject: Re: Keynote/Boardwatch Results
> Date: Tuesday, July 08, 1997 9:40 PM
>
> > It would appear that everyone is pretty smugly satisfied by concensus
> > that the performance series we ran actually measures server performance
> > and that since all ISPs run weeny home servers, this was not "really" a
> > test, flawed methodology, etc. I corresponded with Doug the Hump at
> > Digex about this. I've liked this guy since I first met him largely
> > because he's funny and doesn't take himself too seriously. He's got a
> > yen for black helicopters that still has me in stitches.
>
> Humph. Doug the Hump indeed. Well, Alex @ our shop wants an Apache
> Helicopter for our NOC as well. I'm laughing because I'm not sure
> anyone's called him that before...
>
> Anyway, I'm thinking of putting www.netaxs.net on one of our core
> routers :)
>
> Think that'd help?
>
> Actually, what we'd do is make it a loopback interface on all of our
> core routers and thus you'd hit whatever the closest router to the
> querying machine is, bypassing much of the network.
>
> Hmm. Have to try that one out.
>
> Anyway, I have to take a look at some of the test sites (saw some of
> them listed in the new Directory) and see if I can figure out some
> of the topology of the testing.
>
> Jack - could you put up IPs or whatnot of the sources (the test sites)
> for people to "tune" and test to?
>
> > jack.rickard@boardwatch.com Littleton, CO 80123
>
> Avi
Re: Keynote/Boardwatch Results
On Mon, 7 Jul 1997, Jack Rickard wrote:
> I discussed the universal test page
> suggestion with Gene Shklar this afternoon and we will make it so.

Universal test page is certainly a [good] step - but it is a step (one
among many) towards [comparative] testing the performance between the
test machine and a *SPECIFIC* web server .... but not the performance of
the backbone (even though it clearly is a major component of the
result). There are simply too many uncontrolled variables still
outstanding.

By the way, we have given considerable thought to the attributes of such a
standardised test page ... starting with what the test is about. It
would be a good idea if a subgroup of interested parties could get off
line and work on this under some framework (IETF, NANOG, CAIDA, ... or
just an ad hoc BOF group) and then report the findings back to the
mailing list.

So, if anyone is interested in working on a standardized web page, send me
mail and I will try to get a separate discussion list going, do some
serious work ... and improve the S/N ratio of this list.

Regards,
John Leong
Re: Keynote/Boardwatch Results
On Wed, 9 Jul 1997, Jack Rickard wrote:

Before you start with your claims, Jack, that I have something to lose,
you should realize that I am an independent consultant, and work for none
of the people in the study.

==>what looks LOGICAL to me, not anything I know or have tested. I am rather
==>convinced that moving a single web site to another location, putting it on
==>larger iron, or using different OS will have very minor impact on the
==>numbers.

You may be convinced because of the theory you've developed to match the
flawed methodology with which the tests were performed. However, I have
some tests that I did to measure the connect delays on sites.

Here's the average for 200 web sites that were given to me when I polled
some people for their favorite sites (duplicates excluded):

(because in a lot of cases we're talking milliseconds, percentage is not
really fine enough, but this was to satisfy personal curiosity)

SYN -> SYN/ACK time (actual connection) 22%
Web browser says "Contacting www.website.com..."

SYN/ACK -> first data (web server work-- 78%
getting material, processing material)
Web browser says "www.website.com contacted, waiting for response"

Note that this didn't include different types of content. But it *did*
truly measure one thing--that the delay caused by web servers is
considerably higher than that of "network performance" (or actual connect
time).
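
(A rough sketch of how one might reproduce that split, TCP connect time
versus request-to-first-byte with DNS excluded from both, follows. This is
not Craig's actual tool; the host list and the bare HTTP/1.0 request are
illustrative assumptions only.)

# Sketch only: time the TCP connect (roughly SYN -> SYN/ACK) separately
# from request-to-first-data-byte (server work plus one round trip).
import socket
import time

SITES = ["www.digex.com", "www.nike.com", "www.forbes.com"]  # illustrative

def measure(host, port=80, timeout=30):
    addr = socket.gethostbyname(host)      # DNS kept out of both timings
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    t0 = time.time()
    s.connect((addr, port))                # completes when the SYN/ACK comes back
    t_connect = time.time() - t0
    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    t1 = time.time()
    s.recv(1)                              # block until the first data byte
    t_first = time.time() - t1
    s.close()
    return t_connect, t_first

for site in SITES:
    try:
        c, f = measure(site)
        total = c + f
        print("%-16s connect %.3fs (%2.0f%%)  first byte %.3fs (%2.0f%%)"
              % (site, c, 100 * c / total, f, 100 * f / total))
    except OSError as exc:
        print("%-16s error: %s" % (site, exc))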

And, the biggest beef is that you claimed Boardwatch's test was BACKBONE
NETWORK performance, not end-to-end user-perception performance. You
threw in about 20 extra variables that cloud exactly what you were
measuring. Not to mention completely misrepresenting what you actually
measured.

/cah
Re: Keynote/Boardwatch Results
I guess along these lines the following question came up.

If this was supposed to be an end-to-end user performance idea, why were
backbone provider sites being hit instead of sites more typical end users
would be using? Say, a major search engine? It strikes me that the
article's results were slanted to make a comment on backbones, not
user-viewed performance, even though the test has been argued to be
measuring the latter.

DISCLOSURE: Then again, we do a god awful amount of web traffic and like
people looking at "real world" performance over any particular path
through any particular cloud.

-Deepak.

On Wed, 9 Jul 1997, Craig A. Huegen wrote:

> On Wed, 9 Jul 1997, Jack Rickard wrote:
>
> Before you start with your claims, Jack, that I have something to lose,
> you should realize that I am an independent consultant, and work for none
> of the people in the study.
>
> ==>what looks LOGICAL to me, not anything I know or have tested. I am rather
> ==>convinced that moving a single web site to another location, putting it on
> ==>larger iron, or using different OS will have very minor impact on the
> ==>numbers.
>
> You may be convinced because of the theory you've developed to match the
> flawed methodology with which the tests were performed. However, I have
> some tests that I did to measure the connect delays on sites.
>
> Here's the average for 200 web sites that were given to me when I polled
> some people for their favorite sites (duplicates excluded):
>
> (because in a lot of cases we're talking milliseconds, percentage is not
> really fine enough, but this was to satisfy personal curiosity)
>
> SYN -> SYN/ACK time (actual connection) 22%
> Web browser says "Contacting www.website.com..."
>
> SYN/ACK -> first data (web server work-- 78%
> getting material, processing material)
> Web browser says "www.website.com contacted, waiting for response"
>
> Note that this didn't include different types of content. But it *did*
> truly measure one thing--that the delay caused by web servers is
> considerably higher than that of "network performance" (or actual connect
> time).
>
> And, the biggest beef is that you claimed Boardwatch's test was BACKBONE
> NETWORK performance, not end-to-end user-perception performance. You
> threw in about 20 extra variables that cloud exactly what you were
> measureing. Not to mention completely misrepresenting what you actually
> measured.
>
> /cah
>
>
Re: Keynote/Boardwatch Results
> SYN -> SYN/ACK time (actual connection) 22%
> Web browser says "Contacting www.website.com..."
>
> SYN/ACK -> first data (web server work-- 78%
> getting material, processing material)
> Web browser says "www.website.com contacted, waiting for response"
>
> Note that this didn't include different types of content. But it *did*
> truly measure one thing--that the delay caused by web servers is
> considerably higher than that of "network performance" (or actual connect
> time).

Urm, maybe I'm missing something here, but taking an incredibly
simplistic model where you have a probability p of losing any packet,
the 3-way handshake has a (1-p)^3 chance of completing without losing a
packet and stalling, while the first data has only a
(1-p)^(2 * number of packets required for first data) chance. With
slow start etc. there are bound to be more than two packets back before
it starts processing the response, so the latter is always going to
have a higher chance of failing. Now add the fact that with technology
such as ATM, it is more likely large packets will be dropped than small
ones (with a given cell loss probability), and being careful to remember
all that good stuff at the last but one NANOG about broken client stacks,
and I think you might find the above is a "non measurement".
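
(To put rough numbers on that model: a tiny sketch below, with an invented
loss rate and packet count, purely to illustrate the (1-p)^3 versus
(1-p)^(2n) comparison under the same independence simplification.)

# Alex's simplistic model: independent per-packet loss probability p.
# The 3-way handshake gets through untouched with probability (1-p)**3;
# delivering the first data (request, ACKs, and the first n data packets,
# roughly 2*n packets on the wire) succeeds with probability (1-p)**(2*n).
p = 0.02      # assumed per-packet loss rate (illustrative only)
n = 8         # assumed packets exchanged before first usable data

handshake_ok = (1 - p) ** 3
first_data_ok = (1 - p) ** (2 * n)

print("P(handshake unaffected by loss):  %.3f" % handshake_ok)   # ~0.941
print("P(first data unaffected by loss): %.3f" % first_data_ok)  # ~0.724
# So the post-handshake phase stalls far more often, and that stall time
# lands in the "server" bucket even when the network dropped the packet.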

I *think* (and am not sure) that if you have a proxy set up, you
always get the latter once you have connected to the proxy.

Oh, and to skew the figures in the other direction, doesn't the first
prompt come up while the DNS lookup is being done?

Alex Bligh
Xara Networks
Re: Keynote/Boardwatch Results
If you really want to test raw performance, why not just enable the chargen
service on a router in the core and let him suck that down and measure
throughput? There's no more effective DoS attack than 4 simultaneous chargen
sessions on a T1. I suspect a 7513 should be able to spit a few characters
per second :)
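
(A chargen pull is a one-screen script; a sketch follows. The target
address is a placeholder, and as the next reply notes, most routers keep
the TCP small servers disabled, so expect a connection refusal.)

# Sketch only: pull from TCP chargen (port 19) for a few seconds and
# report throughput. The target is a placeholder address.
import socket
import time

HOST = "192.0.2.1"     # placeholder router address (TEST-NET)
DURATION = 5.0         # seconds to sample

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(10)
s.connect((HOST, 19))

received = 0
start = time.time()
while time.time() - start < DURATION:
    data = s.recv(65536)
    if not data:
        break
    received += len(data)
s.close()

elapsed = time.time() - start
print("%d bytes in %.1fs = %.1f kbit/s" % (received, elapsed, received * 8 / elapsed / 1000))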

Eric


At 11:24 AM 7/9/97 -0700, John Leong wrote:
>On Mon, 7 Jul 1997, Jack Rickard wrote:
>> I discussed the universal test page
>> suggestion with Gene Shklar this afternoon and we will make it so.
>
>Universal test page is certainly a [good] step - but it is a step (one
>among many) towards [comparative] testing the performance between the
>test machine and a *SPECIFIC* web server .... but not the peformance of
>the backbone (even though it clearly is a major component of the
>result). There are simply too many uncontrolled variables still
>outstanding.
>
>By the way, we have given considerable thought on the attributes of such
>standardised test page. ... starting with what is the test about. It
>would be a good idea if a sub group of interested parties can get off
>line and work on this under some frame work (IETF, NANOG, CAIDA, ... or
>just an ad hoc BOF group) and then report the finding back to the
>mailing list.
>
>So, if anyone is interested to work on a standardized web page, send me
>mail and I will try to get a seperate discussion list going, do some
>serious work ... and reduces the S/N ratio of this list.
>
>Regards,
>John Leong
>
>
Re: Keynote/Boardwatch Results
On Wed, 9 Jul 1997, Alex.Bligh wrote:

==>ones (with a given cell loss probability), and being careful to remember
==>all that good stuff at the last but one NANOG about broken client stacks,
==>and I think you might find the above is a "non measurement".

It's a rough measurement, and if you'd go so far as to assign a 20% error
margin, you'd still see that a web server owns a *significant* piece
of the click-to-data time, over 50%.

I think that a 20% error margin would be fair for this, provided neither I
nor my provider was having network problems at the time. At the time,
this was intended as a rough measurement to determine how much time was
wasted in waiting for inefficient web servers.

==>I *think* (and am not sure) that if you have a proxy set up, you
==>always get the latter once you have connected to the proxy.
==>
==>Oh, and to skew the figures in the other direction, doesn't the first
==>prompt come up while the DNS lookup is being done?

Nope. You'll see "Looking up host www.website.com..." in most browsers.
(I didn't use a browser to measure this; those "web browser says" lines
were there for reference--a lot of people ask me why it sits there a
while after saying "contacted, waiting for response".)

/cah
Re: Keynote/Boardwatch Results
On Wed, 9 Jul 1997, Eric Germann wrote:

> If you really want to test raw performance, why not just enable the chargen
> service on a router in the core and let him suck that down and measure
> throughput. No more effective DOS attach than 4 simultaneous chargen
> sessions on a T1. I suspect a 7513 should be able to spit a few characters
> per second :)

The tests that I've run in the past wouldn't support your statement. In
most cases, people leave those ports disabled, and appropriately so since
the routers have better things to do, and don't echo or generate traffic
nearly as quickly as the line rate would support.
Re: Keynote/Boardwatch Results
Grumble. This is really starting to grate.

Wasn't the point of this ~study to find the best value for your dollar
when buying leased lines? What exactly does putting web servers in your
POP have to do with backbone performance? Furthermore, exactly how would
you economically scale and support such a spaghetti operation?

It seems to me the unintended goal of the study was to find access
providers on whose networks web sites have a snappier "user experience".

Am I the only one here who sees this as measuring apples to judge oranges?

Will someone please point me to the scientific method being used here,
'cause I sure as hell can't see it.

--
JMC

On Wed, 9 Jul 1997, Jack Rickard wrote:
> "Cheating" is of course encouraged. This isn't an academic test at your
> local university. We're all out of school now. If you can figure out a way
Re: Keynote/Boardwatch Results
c-huegen@quadrunner.com (Craig A. Huegen) writes:
> It's a rough measurement, and if you'd go so far as to assign a 20% error
> margin, you'd stillsee that a web server still owns a *significant* piece
> of the click-to-data time, over 50%.

Especially if it's using the particularly sub-optimal (aka 'broken')
network stack that a very popular server operating system has.

In fact, some recent measurements of mine show a large variance even
for a connection setup on a local network depending upon what IP
stacks are involved: varying between 0.39s and 0.007s. (Unsurprisingly
the broken stack referred to above works quite well with itself, and
not too bad with an earlier OS from the same company - if I didn't
know better I might believe that they only tested with their own
systems.)

(this isn't really operational so if you want names let me know
privately. Perhaps someone out there has some clout.)

James.