Mailing List Archive

Inter-exchange media types
Folks,

I know that many of the Inter-exchange points are using FDDI and
switched FDDI for inter-exchange in lieu (or anticipation) of ATM.

I am wondering if anyone has considered using switched 100baseFx or
similar technology for an inter-exchange. Given the relative
newness of these interfaces on high-end routers, I would not be
surprised to hear that no one is using them right now. But has
anyone looked into using them? Is there a good reason to prefer
switched FDDI over 100baseFx (which can do full-duplex 100 Mb/s
switched)?

<tex@isc.upenn.edu>
Re: Inter-exchange media types [ In reply to ]
Curtis Villamizar <curtis@ans.net> wrote:

>The problem for the major exchanges may soon be what to do when the
>gigaswitch runs out of bandwidth.

a) access links will run out of gas first.

b) if shared infrastructure gets overloaded you can always
drop point-to-point FDDI to the peers you have the largest
traffic with, and so offload large chunks of traffic from
the shared infrastructure. That was one of the reasons
why colocation was chosen as The Way To Go.

--vadim
Re: Inter-exchange media types [ In reply to ]
I know of plans for at least 2 exchanges that will use 100baseTx
as a connection option. The only real problem is
the backplane limit of the boxes that switch 100baseTX
(Cisco Cat 5000, 1.2 Gb/s). This is only slightly
better than the DEC Gigaswitch limit of 800 Mb/s.
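
Taking those fabric numbers at face value (along with the 3.6 Gb/s Gigaswitch
figure cited later in the thread), here is a hypothetical back-of-the-envelope
on how many line-rate 100 Mb/s ports each could feed; the figures may well be
aggregate "marketing" bandwidth, as another poster jokes below:

```python
# Hypothetical sketch: line-rate 100 Mb/s ports each quoted fabric figure could
# sustain, taking the numbers at face value.
for name, fabric_mbps in (("Cat 5000", 1200),
                          ("Gigaswitch (800 Mb/s figure)", 800),
                          ("Gigaswitch (3.6 Gb/s figure)", 3600)):
    print(f"{name}: ~{fabric_mbps // 100} ports at 100 Mb/s line rate")
```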

Certainly the hardware to do switched ethernet is much cheaper
than FDDI, and with full duplex 100 Mb/s there is very little
advantage to FDDI. (FDDI does do full duplex now, but I don't
know who offers interfaces that support this.) FDDI
does offer larger frame sizes, assuming you have FDDI at each
end with no "MTU 1500" links in between.

To me it seems like a big cost win to drop in a C5000
with 4 slots at 24 switched ethernet/slot, or 12 switched fast
ethernet/slot. Particularly since the router interfaces are
about 1/2 to 1/4 as much.

The exchanges with the most traffic are the ones
based on the oldest technology, because new technology always has bugs.

>Folks,
>
> I know that many of the Inter-exchange points are using FDDI and
> switched FDDI for inter-exchange in lieu (or anticipation) of ATM.
>
> I am wondering if anyone has considered using switched 100baseFx or
> similar technology for an inter-exchange. Given the relative
> newness of these interfaces on high-end routers I would not be
> suprised to hear that no one is using them right now. But has
> anyone looked into using them? Is there a good reason to prefer
> switched FDDI over 100baseFx (which can do full-duplex 100 Mb/s
> switched)?
>
> <tex@isc.upenn.edu>
>


--
Jeremy Porter, Freeside Communications, Inc. jerry@fc.net
PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-339-6094
http://www.fc.net
Re: Inter-exchange media types [ In reply to ]
>
> I know of plans for at least 2 exchanges that will use 100baseTx
> as a connection option. The only real problem is
> the backplane limit of boxes that switch 100baseTX,
> (Cisco Cat 5000, 1.2gbps). This is only slightly
> better than the Dec Gigaswitch limit of 800mbps.
>
> Certainly the hardware to do switched ethernet is much cheaper
> than FDDI, and with full duplex 100mbps there is very little
> advantage to FDDI. (FDDI does do full duplex now, but I don't
> know who offers interfaces that support this.) FDDI
> does offer larger frame sizes, assuming you have FDDI each
> end with no "MTU 1500"'s in between.

Don't gloss over the MTU issue. Fragmenting packets is a serious
performance hit. Also, DS3 customers want an HSSI MTU end-to-end.
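
For a rough sense of that cost, here is a minimal sketch (my arithmetic, not
from the thread), assuming a 4470-byte HSSI/FDDI-class MTU -- a common Cisco
default -- forced through a 1500-byte Ethernet hop:

```python
# Hypothetical illustration: fragments needed when a datagram sized for a
# 4470-byte HSSI/FDDI MTU crosses a 1500-byte MTU link.
IP_HEADER = 20  # bytes, assuming no IP options

def fragments(datagram_size, link_mtu):
    """Number of IP fragments needed to carry one datagram over link_mtu."""
    payload = datagram_size - IP_HEADER
    per_frag = (link_mtu - IP_HEADER) // 8 * 8   # fragment offsets are 8-byte units
    return -(-payload // per_frag)               # ceiling division

print(fragments(4470, 1500))   # -> 4 fragments per original datagram
```

One big datagram becomes four packets to forward, with extra header bytes on
each, and the loss of any one fragment costs the whole datagram.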

> To me it seems like a big cost win to drop in a C5000
> with 4 slots at 24 switched ethernet/slot, or 12 switched fast
> ethernet/slot. Particularlly since the router interfaces are
> about 1/2 to 1/4 as much.

Compared to the other costs in running an ISP it is just noise...

> The exchanges with the most traffic are the ones
> based on the oldest technology, because there are always bugs.

??!! Exchanges with the most traffic are the ones with the new technology
because they have no choice.

Erik
Re: Inter-exchange media types [ In reply to ]
In message <199604250621.BAA09394@freeside.fc.net>, Jeremy Porter writes:
>
> I know of plans for at least 2 exchanges that will use 100baseTx
> as a connection option. The only real problem is
> the backplane limit of boxes that switch 100baseTX,
> (Cisco Cat 5000, 1.2gbps). This is only slightly
> better than the Dec Gigaswitch limit of 800mbps.
>
> Certainly the hardware to do switched ethernet is much cheaper
> than FDDI, and with full duplex 100mbps there is very little
> advantage to FDDI. (FDDI does do full duplex now, but I don't
> know who offers interfaces that support this.) FDDI
> does offer larger frame sizes, assuming you have FDDI each
> end with no "MTU 1500"'s in between.
>
> To me it seems like a big cost win to drop in a C5000
> with 4 slots at 24 switched ethernet/slot, or 12 switched fast
> ethernet/slot. Particularlly since the router interfaces are
> about 1/2 to 1/4 as much.
>
> The exchanges with the most traffic are the ones
> based on the oldest technology, because there are always bugs.


For many products, there is a big difference between how many
interfaces physically fit in the card cage and how many actually work
under a fairly heavy load. Have you tested a 12 port switched 100
Mb/s ethernet under load? The DEC gigaswitch has been tested under
load and has held up so far. The Cisco 5000 may bridge
better than it routes, since there is no route change to deal with, but
I'd be a bit worried about deploying it without stress testing.

The problem for the major exchanges may soon be what to do when the
gigaswitch runs out of bandwidth.

Curtis


> >Folks,
> >
> > I know that many of the Inter-exchange points are using FDDI and
> > switched FDDI for inter-exchange in lieu (or anticipation) of ATM.
> >
> > I am wondering if anyone has considered using switched 100baseFx or
> > similar technology for an inter-exchange. Given the relative
> > newness of these interfaces on high-end routers I would not be
> > suprised to hear that no one is using them right now. But has
> > anyone looked into using them? Is there a good reason to prefer
> > switched FDDI over 100baseFx (which can do full-duplex 100 Mb/s
> > switched)?
> >
> > <tex@isc.upenn.edu>
> >
>
>
> --
> Jeremy Porter, Freeside Communications, Inc. jerry@fc.net
> PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-339-6094
> http://www.fc.net
Re: Inter-exchange media types [ In reply to ]
> The problem for the major exchanges may soon be what to do when the
> gigaswitch runs out of bandwidth.
>
> Curtis

There are rumors/rumblings afoot that we have an operational
600 Mb/s thingie which now has PCI interfaces in addition
to the existing sbus, nubus and EISA bus cards. Now if we
can just get infrastructure vendors to provide something
that we can plug into, this would potentially make a
reasonable ExchangeNG (tm). Imagine a lan technology that
would keep up with multiple incoming OC3s! (where is that
gigabit lan technology when you need it... :)

--bill
Re: Inter-exchange media types [ In reply to ]
> The problem for the major exchanges may soon be what to do when the
> gigaswitch runs out of bandwidth.

Right. I don't believe the 800Mb/s claim of the GIGAswitch's backplane,
since I thought it was 3.5Gb/s or something. But either way, the bottleneck
is going to be the 100Mb/s port speed, and the ISP backhaul. Even if we had
a 200 port GIGAswitch with a 200*100Mb/s full cross bar backplane, we would
soon reach the point where the individual 100Mb/s ports were just too full.
And if we had 1000Mb/s Ethernet with a full cross bar switch in the middle,
we'd discover that OC12 intercarrier backhaul is very difficult to get.
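
A back-of-the-envelope way to see the port-speed wall (using just the
hypothetical numbers above, not measurements):

```python
# Illustration of the point: a huge crossbar doesn't help once any single
# peer needs more than one port's worth of bandwidth.
ports, port_mbps = 200, 100
backplane_gbps = ports * port_mbps / 1000
print(f"aggregate fabric: {backplane_gbps:.0f} Gb/s")     # 20 Gb/s in aggregate
print(f"max to/from any one peer: {port_mbps} Mb/s")      # still one port
```

The fabric can be made arbitrarily wide, but each peer still drains through a
single 100 Mb/s port, which is exactly where the backhaul pressure lands.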

As I've said before, I believe that this is going to push us in the direction
of more and more NAPs so that everyone can do the hot potato routing thing as
early and as often as possible. Many medium pipes will add up to the nec'y
aggregate bit rate, with some great cost in complexity compared to a few fat
pipes.

Ceding the "core" to the people who own their own trenches and who can there-
fore build out reasonable worldwide OC12 or OC48 nets is another approach but
I'm not entirely comfortable with the transit rates they'd probably charge if
their corporate officers knew they had a monopoly.

I've chosen to leave my usual dire predictions about the inevitability of ATM
out of this particular message; you've all heard it before, anyway.

There ought to be a business opportunity in here somewhere.
Re: Inter-exchange media types [ In reply to ]
On Thu, 25 Apr 1996 bmanning@isi.edu wrote:

> There are rumors/rumblings afoot that we have a operational
> 600M/b thingie which now has PCI interfaces in addition
> to the existing sbus, nubus and EISA bus cards. Now if we
> can just get infrastructure vendors to provide something
> that we can plug into, this would potentially make a
> reasonable ExchangeNG (tm). Imagine a lan technology that
> would keep up with multiple incoming OC3s! (where is that
> gigabit lan technology when you need it... :)

Yell at the router/switch/xxx vendor of your choice. :)

-dorian
Re: Inter-exchange media types [ In reply to ]
> The problem for the major exchanges may soon be what to do when the
> gigaswitch runs out of bandwidth.


It seems to suggest more exchange points, like for example the recent
interest in new regional packet exchanges in California and Utah.

When large 'tier 1' providers run into bottleneck problems, do we
have to rely more on the intermediate and smaller ones to connect us?


Wolfgang
Re: Inter-exchange media types [ In reply to ]
On Thu, 25 Apr 1996, Jeremy Porter wrote:

>
> I know of plans for at least 2 exchanges that will use 100baseTx
> as a connection option. The only real problem is
> the backplane limit of boxes that switch 100baseTX,
> (Cisco Cat 5000, 1.2gbps). This is only slightly
> better than the Dec Gigaswitch limit of 800mbps.


DEC Gigaswitch is actually a 3.6Gb/s switch




--Ismat
Re: Inter-exchange media types [ In reply to ]
On Thu, 25 Apr 1996, Paul A Vixie wrote:

> As I've said before, I believe that this is going to push us in the direction
> of more and more NAPs so that everyone can do the hot potato routing thing as
> early and as often as possible. Many medium pipes will add up to the nec'y
> aggregate bit rate, with some great cost in complexity compared to a few fat
> pipes.

Bingo!
And this is precisely what is happening now as regional exchanges are
being set up all over the place in Utah, Pennsylvania, Texas, the
Philippines and so on. Not to mention the number of large ISPs that are
moving towards things like national frame-relay networks of their own.

If you want to understand how this is all going to play out, study the
shape of soap bubbles in a foam, the way to connect power systems in 4
cities arranged at the corners of a square using the shortest wires and
the distribution of market towns in any of the ancient civilizations.

Michael Dillon Voice: +1-604-546-8022
Memra Software Inc. Fax: +1-604-546-3049
http://www.memra.com E-mail: michael@memra.com
Re: what to do when the gigaswitches run out of gas??? [ In reply to ]
uh, not use them??

why we insist on having a contest to see how much technology we
can melt down into slag is beyond me. if people are exchanging that
much traffic, there are better, more straightforward ways to do it.

-mo
Re: Inter-exchange media types [ In reply to ]
At 02:23 PM 4/25/96 -0400, ipasha@sprintlink.net wrote:

>
>On Thu, 25 Apr 1996, Jeremy Porter wrote:
>
>>
>> I know of plans for at least 2 exchanges that will use 100baseTx
>> as a connection option. The only real problem is
>> the backplane limit of boxes that switch 100baseTX,
>> (Cisco Cat 5000, 1.2gbps). This is only slightly
>> better than the Dec Gigaswitch limit of 800mbps.
>
>
> DEC Gigaswitch is actually a 3.6Gb/s switch
>
>

Ye olde trick aggregate bandwidth answer. I know that one. ;-)

- paul


>
>
> --Ismat
>
>
>
>
Re: Inter-exchange media types [ In reply to ]
Wolfgang brings up a good point; there are certainly regional interests
in creating exchange points within certain geographic areas. This is
a Good Thing (tm). We should encourage this in every way, shape or form.
Especially in SE Asia, and other areas where traffic traverses already
congested links back to the US just to reach its final destination in
SE Asia.

I realize this is borderline NANOG topic, but it really *does* affect
North American network Ops when traffic has to transit NA links
just to reach its final destination 25 km away from where it originated.
This is insane.

While I do not mean to point out SE Asia as the most violent offender
of this transit problem, the problem exists elsewhere in the world;
SE Asia just happens to be the most handy real-world reference.

Food for thought.

- paul


At 10:48 AM 4/25/96 -0700, Wolfgang Henke wrote:

>
> The problem for the major exchanges may soon be what to do when the
> gigaswitch runs out of bandwidth.
>
>
>It seems to suggest more exchange points; like for example the recent
>interest in new regional packet exchanges in California and Utah.
>
>When large 'tier 1' providers run into bottleneck problems, do we
>have to rely more on the intermediate and smaller ones to connect us?
>
>
>Wolfgang
>
Re: Inter-exchange media types [ In reply to ]
On Thu, 25 Apr 1996 12:34:25 -0400 Curtis Villamizar wrote:

>For many products, there is a big difference between how many
>interfaces physically fit in the card cage and how many actually work
>under a fairly heavy load. Have you tested a 12 port switched 100
>Mb/s ethernet under load? The DEC gigaswitch has been tested under
>load and has held up so far under load. The Cisco 5000 may bridge
>better than it routes since there is no route change to deal with, but
>I'd be a bit worried about deploying without stress testing.
>
>The problem for the major exchanges may soon be what to do when the
>gigaswitch runs out of bandwidth.
>
>Curtis
>

The Feb 1996 issue of Data Communications ran a benchmark on these switches
along with a few others. I will just summarize the Digital and Cisco info
that they state:

Cisco Catalyst 5000 - Full duplex Ethernet, half duplex FDDI, full duplex
  Fast Ethernet; Max: 96 Ethernet, 4 FDDI, 50 Fast Ethernet;
  Price: $22K for chassis, 24 Ethernet ports, 2 FDDI

Digital Gigaswitch - Ethernet, full duplex FDDI; Max: 12 Ethernet, 8 FDDI;
  Price: $21K for chassis, 24 Ethernet ports, 2 FDDI

Benchmarks:

a) % of frames delivered without loss (100% load on 40 ports), by burst size
   (in 64-byte frames):
      24:  Cisco  99%   DEC 76%
      62:  Cisco 100%   DEC 75%
     124:  Cisco 100%   DEC 65%
     372:  Cisco 100%   DEC 51%
     744:  Cisco 100%   DEC 49%

b) % of frames delivered without loss (64-byte frames, 24-frame bursts on
   40 ports), by offered load:
      70%: Cisco 100%   DEC 81%
      80%: Cisco  99%   DEC 79%
      90%: Cisco  99%   DEC 85%
     100%: Cisco  99%   DEC 75%
     150%: Cisco  99%   DEC 22%

c) Per-port throughput, 64-byte frames per second (24-frame bursts on 40 ports):
     Cisco: 4891
     DEC:   4391

d) Latency (microseconds), 64-byte unidirectional traffic across a 100 Mb backbone:
     Cisco:  79
     DEC:   179

Bottom line: The Gigaswitch performed almost the worst (Fibronics took that
honor) and the Cisco performed the best.

Perhaps it is time to revisit the Gigaswitch technology?

----------------------------------------------------------------------
Hank Nussbacher Manager, Internet Technology Programs
Telephone: +972 3 6978852 Vnet: HANK at TELVM1
Fax: +972 3 6978115 Internet: hank@ibm.net.il
----------------------------------------------------------------------
IBM Israel
2, Weizmann St.
Tel Aviv 61336 ====== ======= === ===
http://www.ibm.net.il/ ====== ======== ==== ====
Dialup registration: 177-022-3993 == == == ==== ====
Company services: 03-6978663 == ====== == === ==
Internet sales fax: 03-6978115 == == === == = ==
Enquiries: info@ibm.net.il ====== ======== === ===
Technical support: noc@ibm.net.il ====== ======= === ===
----------------------------------------------------------------------
Re: Inter-exchange media types [ In reply to ]
In message <Chameleon.960428080814.hank@hank.tlv.ibm.net.il>, Hank Nussbacher writes:
>
[. ... lots of stats for 40 input ports deleted ... ]
>
> c) Per port thruput: 64 byte frame per second (24 frame bursts on 40 ports)
> Cisco: 4891
> DEC: 4391

Hmm. 5 kpps per port. Not really useful for a major interconnect, is
it? For the smaller interconnects the Cisco 5000 looks very good.
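
To put that figure in perspective, here is the arithmetic (mine, using the
per-port number from the benchmark quoted above; the test's burst pattern
caps the result, so this is illustrative only):

```python
# Convert the benchmark's per-port figure into a bit rate for 64-byte frames.
fps, frame_bytes = 4891, 64
rate_mbps = fps * frame_bytes * 8 / 1e6
print(f"{fps} fps x {frame_bytes} bytes = {rate_mbps:.1f} Mb/s per port")  # ~2.5 Mb/s
```

A couple of Mb/s of minimum-size frames per port is tiny next to a 100 Mb/s
port, which is why the number looks unimpressive for a major interconnect.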

> Bottom line: The Gigaswitch performed almost the worst (Fibronics took that
> honor) and the Cisco performed the best.
>
> Perhaps it is time to revisit the Gigaswitch technology?

Or at least avoid using the DEC 24 port ethernet card which seems to
be what was tested. Data on the dual FDDI card would be more useful
for considering what to expect at the major interconnects.

Curtis
Re: Inter-exchange media types [ In reply to ]
The Data Comm tests were performed in a 10/100 setup. Obviously
the Fast ethernet switches had an advantage over the FDDI
switches, since Fast ethernet and conventional ethernet work
with the same frame types. FDDI switches, on the other hand,
have to convert ethernet frames to FDDI frames and vice versa.
Today's NAPs in most cases are not a 10/100 setup. It is more like
a DS3/100/100 setup where routers are feeding traffic into the Gigaswitch
using FDDI, and since HSSI and FDDI use the same MTU size, no
fragmentation is involved.


--Ismat


On Sun, 28 Apr 1996, Hank Nussbacher wrote:

> The Feb 1996 issue of Data Communications ran a benchmark on these switches
> along with a few others.
>
> [ ... benchmark tables deleted; quoted in full above ... ]
>
> Bottom line: The Gigaswitch performed almost the worst (Fibronics took that
> honor) and the Cisco performed the best.
>
> Perhaps it is time to revisit the Gigaswitch technology?
Re: Inter-exchange media types [ In reply to ]
Peter Lothberg <roll@stupi.se> writes
* > Dtatacom tests were performed in the 10/100 setup. Obviously
* > the Fast ethernet switches had an advantage over the FDDI
* > switches since Fast ethernet and conventional ethernet work
* > with the same frame types. FDDI switches on the other hand
* > has to convert ethernet frames to FDDI frames and vice versa.
* > Todays NAPs in most cases are not 10/100 set up. It is more like
* > DS3/100/100 setup where routers are feeding traffic into the Gigaswitch
* > using FDDI and since HSSI and FDDI is using same MTU size, no
* > fragmentation is involved.
*
* Both 10 and 100 ethernet use 1500 byte mtu.
*
* And remember that the characteristics of a loaded exchange is very
* diffrent between ethernet and token-ring.

And then there is always the possibility to run 100BT in
full-duplex rather than half duplex...

-Marten
Re: Inter-exchange media types [ In reply to ]
> ((and some fddi switches can talk full-duplex))

The GIGAswitch is full duplex -- all DEC FDDI gear is full duplex.
Most other FDDI interfaces, like for example Cisco's, are not.
Re: Inter-exchange media types [ In reply to ]
> Dtatacom tests were performed in the 10/100 setup. Obviously
> the Fast ethernet switches had an advantage over the FDDI
> switches since Fast ethernet and conventional ethernet work
> with the same frame types. FDDI switches on the other hand
> has to convert ethernet frames to FDDI frames and vice versa.
> Todays NAPs in most cases are not 10/100 set up. It is more like
> DS3/100/100 setup where routers are feeding traffic into the Gigaswitch
> using FDDI and since HSSI and FDDI is using same MTU size, no
> fragmentation is involved.

Both 10 and 100 ethernet use 1500 byte mtu.

And remember that the characteristics of a loaded exchange are very
different between ethernet and token-ring.

---Peter
Re: Inter-exchange media types [ In reply to ]
> Peter Lothberg <roll@stupi.se> writes
> * > Dtatacom tests were performed in the 10/100 setup. Obviously
> * > the Fast ethernet switches had an advantage over the FDDI
> * > switches since Fast ethernet and conventional ethernet work
> * > with the same frame types. FDDI switches on the other hand
> * > has to convert ethernet frames to FDDI frames and vice versa.
> * > Todays NAPs in most cases are not 10/100 set up. It is more like
> * > DS3/100/100 setup where routers are feeding traffic into the Gigaswitch
> * > using FDDI and since HSSI and FDDI is using same MTU size, no
> * > fragmentation is involved.
> *
> * Both 10 and 100 ethernet use 1500 byte mtu.
> *
> * And remember that the characteristics of a loaded exchange is very
> * diffrent between ethernet and token-ring.
>
> And then there is always the possibility to run 100BT in
> full-duplex rather than half duplex...
>

Assuming that you have a switch in the middle that can buffer the
bandwidth/delay product. How many of the 100 Mbit ether switches have 4 MB
per port of buffering?

(token out for lunch and the router buffers)

((and some fddi switches can talk full-duplex))


--Peter
Re: Inter-exchange media types [ In reply to ]
Since large amounts of traffic on the Net originate from
modems which are typically plugged into terminal servers, which
virtually all have ethernet interfaces, very large
amounts of Internet traffic consist of packets well under the
1500-byte Ethernet MTU.

The locations I know of that have FDDI and state the need
for large MTUs are the large sites, which often have discount
T-3 service provided by CO-REN or direct federal subsidies.

(Also, dialup traffic would seem to be the area of most rapid
growth.) I know there was some work done at the Sprint NAP
at one point doing traffic research, but I don't know if it
included any type of size histogram.

I've had several people assert that FDDI frame sizes are in fact common, or that
at least DS-3 connected customers desire this (who are these people again?)

I won't say it isn't true, because I don't have any real data, but
I don't see any evidence that anyone else has any idea either.
(With stuff plugged into the gigaswitch, I don't see a real easy
way to find out either; perhaps we could file a FOIA request with the NSA :) )

I believe the following to be true:
1. If there is little traffic over 1500 MTU, then
switched, 100 Mb/s, full duplex ethernet will be cheaper,
more scaleable, and perform better than switched full duplex FDDI.
A. Ethernet hardware is more common, thus greater economies of scale.
B. Ciscos have full duplex ethernet now.
C. The FEP card has TWO 100 Mb/s ports compared to one FDDI (and
costs less).
D. I feel certain that far more packets have been switched
in Cisco Cat 5000s than DEC Gigaswitches, because real live
networks other than the Internet also use these. Cisco could
probably provide some sales numbers to compare with DEC if anyone
is interested.
E. If you have lots of 10 Mb/s switched connections going
into the FDDI, you have the additional overhead of translational
bridging.

I'd still like to see a number that shows I am wrong, even if
it is not very meaningful.

>Dtatacom tests were performed in the 10/100 setup. Obviously
>the Fast ethernet switches had an advantage over the FDDI
>switches since Fast ethernet and conventional ethernet work
>with the same frame types. FDDI switches on the other hand
>has to convert ethernet frames to FDDI frames and vice versa.
>Todays NAPs in most cases are not 10/100 set up. It is more like
>DS3/100/100 setup where routers are feeding traffic into the Gigaswitch
>using FDDI and since HSSI and FDDI is using same MTU size, no
>fragmentation is involved.
>
>
> --Ismat


--
Jeremy Porter, Freeside Communications, Inc. jerry@fc.net
PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-339-6094
http://www.fc.net
Re: Inter-exchange media types [ In reply to ]
Hm,

I think Peter was too brief to be understood by all. Let me try
to expand on his major point (buffering requirements). First,
however, to this:

Jeremy Porter <jerry@fc.net> wrote:

> Since large amounts of traffic on the Net orginates from
> modems which are typically plugged into terminal servers, which
> virtually all have ethernet interfaces, very large
> amounts of internet traffic have MTUs smaller than the
> 1500.
>
> [Continues argument in the line of "if little traffic uses more
> than 1500 bytes MTU, ethernet will be better/cheaper/etc."]

I would claim that the average packet size doesn't really matter
much -- the average packet size is usually on the order of 200-300
bytes anyway. However, restricting the MTU of an IX to 1500
bytes *will* matter for those fortunate enough to have FDDI and
DS3 (or better) equipment all the way, forcing them to use
smaller packets than they otherwise could. Some hosts get
noticeably higher performance when they are able to use FDDI-
sized packets compared to Ethernet-sized packets, and restricting
the packet size to 1500 bytes will put a limit on the maximum
performance these people will see. In some cases it is important
to cater to these needs.
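
As a rough illustration of why packet size matters to a fast host (my numbers,
assuming a typical FDDI IP MTU of about 4352 bytes versus Ethernet's 1500):

```python
# Packets per second a host must generate and receive to sustain a given rate;
# fewer packets means fewer interrupts and less per-packet overhead.
def pps_needed(rate_mbps, packet_bytes):
    return rate_mbps * 1e6 / 8 / packet_bytes

for mtu in (1500, 4352):
    print(f"MTU {mtu}: {pps_needed(90, mtu):,.0f} packets/s to move 90 Mb/s")
# MTU 1500: ~7,500 packets/s; MTU 4352: ~2,600 packets/s
```

For mid-90s hosts and routers, roughly a factor of three in per-packet work is
a very visible difference.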

The claim that switched fast full-duplex Ethernet will perform
better than switched, full-duplex FDDI for small packets doesn't
really make sense -- not to me at least. I mean, it's not like
FDDI doesn't use variable-sized packets...


Now, over to the rather important point Peter made. In some
common cases what really matters is the behaviour of these boxes
under high load or congestion. The Digital GigaSwitch is
reportedly able to "steal" the token on one of the access ports
if that port sends too much traffic to another port where there
currently is congestion. This causes the router on the port
where the token was stolen to buffer the packets it has to send
until it sees the token again. Thus, the total buffering
capacity of the system will be the sum of the buffering internal
to the switch and the buffering in each connected router. I have
a hard time seeing how similar effects could be achieved with
ethernet-type switches. (If I'm not badly mistaken, this is a
variant of one of the architectural problems with the current ATM
based IXes as well.)

Thanks to Curtis Villamizar it should be fairly well known by now
what insufficient buffering can do to your effective utilization
under high offered load (it's not pretty), and that the
requirements for buffering at a bottleneck scale approximately
with the (end-to-end) bandwidth X delay product for the traffic
you transport through that bottleneck.
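
A sketch of that rule of thumb (the RTT values are illustrative, not
measurements), which also puts Peter's earlier "4 MB per port" question in
context:

```python
# Buffering at a bottleneck should be on the order of the end-to-end
# bandwidth x delay product for the traffic squeezed through it.
def bdp_kbytes(rate_mbps, rtt_ms):
    return rate_mbps * 1e6 / 8 * rtt_ms / 1e3 / 1024

for rtt in (20, 70, 300):   # metro, cross-country, and long-haul-ish RTTs
    print(f"100 Mb/s port, {rtt} ms RTT -> ~{bdp_kbytes(100, rtt):,.0f} KB of buffer")
# ~244 KB, ~854 KB, ~3,662 KB respectively
```

At continental and longer RTTs a single 100 Mb/s port wants megabytes of
buffering, which is exactly what the "4 MB per port" question was probing.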

So, there you have it: if you foresee that you will push the
technology to its limits, switched ethernet (fast or full
duplex) as part of a "total solution" for an IX point seems to be
at a disadvantage compared to switched FDDI as currently
implemented in the Digital GigaSwitch.

This doesn't render switched ethernet unusable in all
circumstances, of course.


Regards,

- Havard