BigIron/NetIron experiences?

Hi,

We currently run a network made up of 100% Cisco devices.
We now have to set up a new IP POP and my finance director has asked
me to decide on something more affordable, so I'm thinking along the
lines of using the BigIron 4000 (or NetIron 400) as core layer 3
switches and the NetIron stackable routers (NSR24?) at the
distribution level.

I would like to hear experiences from fellow Foundry users before we
try to implement this, as we had a whole stack of Extreme Summit
switches go terribly wrong (dropped packets and locked up at under
10Mbit/s load).

Here are our requirements...
All the switches must be able to do L2 ingress/egress rate limiting
and support BGP.
The core switches should be able to handle at least 2 full BGP tables
and about 30 peering sessions.
And of course the core switches should cost less than the Catalyst 6500
series, which I'm almost certain they will.

I've read some horror stories about Foundry's BGP implementation, does
anyone know what that is about?

All responses will be seriously digested and appreciated.

Thanks in advance!

Best Regards,

Glenn Tan
Senior Network Manager
Network Operations/Engineering
***********************************
Intermedia Network Services Pte Ltd
HP: +65 94371288
Tel: +65 65470911
Fax: +65 65470933
http://www.intermedia.com.sg
***********************************

BigIron/NetIron experiences? [ In reply to ]
Hi Glenn,

On Mon, 28 Feb 2005, Glenn Tan wrote:

> Hi,
>
> We currently run a network made up of 100% Cisco devices.
> We now have to set up a new IP POP and my finance director has asked
> me to decide on something more affordable, so I'm thinking along the
> lines of using the BigIron 4000 (or NetIron 400) as core layer 3
> switches and the NetIron stackable routers (NSR24?) at the
> distribution level.
>
> I would like to hear experiences from fellow Foundry users before we
> try to implement this, as we had a whole stack of Extreme Summit
> switches go terribly wrong (dropped packets and locked up at under
> 10Mbit/s load).
>
> Here are our requirements...
> All the switches must be able to do L2 ingress/egress rate limiting
> and support BGP.
> The core switches should be able to handle at least 2 full BGP tables
> and about 30 peering sessions.
> And of course the core switches should cost less than the Catalyst 6500
> series, which I'm almost certain they will.
>
> I've read some horror stories about Foundry's BGP implementation, does
> anyone know what that is about?

In general we've found recent builds of Ironware to provide solid BGP and
OSPF (this certainly couldn't be said of earlier versions!). The CLI is
similar enough to cisco's that, with a few exceptions, anyone trained on
cisco ought to be able to operate it.

It goes without saying that you should only pick Jetcore-based hardware for
L3 work. Ironcore-based hardware will melt under a large number of flows.
Jetcore hardware, while still technically flow-based, only examines the
first 64 bytes of each packet for flow creation, which gives it far more
deterministic properties (similar enough to prefix-based routing that some
people believe it is).

On the BigIrons we've experienced the following problems:

* No separate graphing of virtual interfaces. On a cisco 6500 you can
graph the virtual L3 interfaces (with some gotchas regarding mls, of
course). On the BigIrons you can only graph L2 ports. This struck me as
odd, considering it looks like this information can be reconstructed from
the sFlow information and might therefore be accessible to the hardware,
but I've had conversations with Foundry engineers who assured me this is
a hardware limitation and cannot be fixed in software. This limitation
does not exist on the NetIrons.
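
If you do want per-VLAN/VE numbers you could, in principle, reconstruct
them on an external sFlow collector instead. Roughly, the Ironware sFlow
export config looks like the lines below; the collector address and sample
rate are just placeholders, and the exact syntax may differ between
releases, so check it against your docs:

sflow destination 192.0.2.10 6343
sflow sample 512
sflow enable
interface ethernet 1/1
 sflow forwarding

(6343 is the standard sFlow collector port, and 'sflow forwarding' has to
be enabled on each physical port you want sampled.)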

* Enabling port monitoring disables sFlow. Bear this in mind if you plan
to use sFlow for billing and port monitoring for diagnostics.

* Slightly different next-hop lookup. cisco appears to implement a fully
recursive next-hop lookup, but the Foundrys don't seem to. This has
caused us a few problems in the past when static routes injected into
OSPF pointed at routers that weren't directly connected (yes... I know
that's bad anyway ;) ).
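
To make that concrete, the sort of thing that tripped us up looked roughly
like this (the addresses are made up, and the 'redistribution' keyword
below is the Foundry spelling from memory; it's 'redistribute' on cisco,
so check your release):

! 192.0.2.1 is not on a directly connected subnet, so resolving it
! needs a recursive lookup; cisco copes with this, the Foundry didn't for us
ip route 10.50.0.0 255.255.0.0 192.0.2.1
!
router ospf
 redistribution static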

* Changes to BGP session config only take effect when the session is
cleared. This seems harmless at first, but consider that the following
commands will cause a full table leak!

neighbor x.y.z.w remote-as xxxx
neighbor x.y.z.w route-map out RM-PEER-OUT

The session is marked as configured after the first command is entered,
and the second doesn't take effect until 'clear ip bgp nei x.y.z.w soft
out' is entered, so the session can come up and announce a full table
before the route-map is applied.

However, under Foundry's CLI you can apply a route-map (here via a
peer-group) before specifying an AS number, so the following is fine:

neighbor x.y.z.w peer-group PM-PEER
neighbor x.y.z.w remote-as xxxx
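
In other words, hang the outbound policy off the peer-group first and the
ordering becomes safe. From memory the peer-group is declared much as it
is on cisco, so the full sequence looks something like this (the
peer-group and route-map names are just the ones from above):

neighbor PM-PEER peer-group
neighbor PM-PEER route-map out RM-PEER-OUT
neighbor x.y.z.w peer-group PM-PEER
neighbor x.y.z.w remote-as xxxx

That way the route-map is already attached by the time the remote-as line
lets the session establish.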

* I haven't played with it too much, but Foundry's QoS appears to be sorely
lacking when compared to cisco's.

* No DSCP marking of incoming traffic. Despite being able to policy route
and queue based on DSCP values, none of the Foundrys can mark incoming
traffic with a DSCP value. They can mark IP Precedence via inbound
rate-limiting, but bear in mind that the exceed action "mark and transmit"
appears to just drop the traffic on some versions of Ironware (i.e. it
still actually rate limits).
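
For reference, the inbound marking I'm talking about is done with the
adaptive rate-limit form on the interface. From memory it looks roughly
like the lines below, but treat the exact keywords and numbers as
assumptions and check them against your Ironware release, since this is
exactly where the "mark and transmit" exceed action misbehaved for us:

interface ethernet 1/1
! 10 Mbit/s average rate, burst sizes in bytes; values are only illustrative
 rate-limit input 10000000 125000 250000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0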

Sam