Mailing List Archive

Re: Where have all the MLX/XMR users gone to?
I just got quotes for an Arista 7280R (48x 10Gbit and 6x 100Gbit) and an
SLX9640.  Adding a port license to the 9640 to go to 8x 100G ports, the
Arista quote for just the base box (no service, optics, etc.) was 251%
higher.

Maybe I have the wrong sales rep at Arista and the right one at Extreme?

Aaron


On 3/12/2019 1:39 PM, Richard Laager wrote:
> On 3/12/19 1:19 PM, Aaron wrote:
>> We replaced our Extreme gear with Brocade several years ago. We're
>> getting ready to swap out some MLXes with SLXs.  We've looked at
>> Arista hard but just can't seem to justify the 3x price premium.
> Unless Extreme has massively dropped the price or Arista has massively
> increased their prices, there's no reason that equivalent Arista gear
> should be 3x the price of an SLX, and that doesn't match up with my
> experience.
>
> I ended up with the Arista 7280SR (48-ports of 10G, 6 ports of
> short-range 100G in a 1RU fixed configuration) and it's been working well.
>

--
================================================================
Aaron Wendel
Chief Technical Officer
Wholesale Internet, Inc. (AS 32097)
(816)550-9030
http://www.wholesaleinternet.com
================================================================

_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
Rack space is at a premium for me. A 7280R2K-48C6 gives me 48x 10G plus
6x 40/100G ports in 1U with 2 million routes, vs. 7U for an MLX-8.


On Tue, Mar 12, 2019, 11:49 Aaron <aaron@wholesaleinternet.net> wrote:

> I just got quotes for an Arista 7280R (48x 10Gbit and 6x 100Gbit) and an
> SLX9640. Adding a port license to the 9640 to go to 8x 100G ports the
> Arista quote for just the base box (no service, optics, etc.) was 251%
> higher.
>
> Maybe I have the wrong sales rep at Arista and the right one at Extreme?
>
> Aaron
>
Re: Where have all the MLX/XMR users gone to?
The SLX9640 is a 1U box with 24x 10G ports and 12x 100G ports (4
licensed on the base config) with 4 million routes for less than a third
of the price.

I'd love to try Arista but that's what I'm dealing with.

Aaron


On 3/12/2019 5:46 PM, George B wrote:
> Rack space is the premium with me. A 7280R2K-48C6 gives me 48x10G plus
> 6x40/100G ports in 1U with 2 million routes vs 7U for an MLX-8.
>

--
================================================================
Aaron Wendel
Chief Technical Officer
Wholesale Internet, Inc. (AS 32097)
(816)550-9030
http://www.wholesaleinternet.com
================================================================

_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
That's what I get for not keeping up with that product line.

I think the price difference might also vary according to what kind of
discounts people's vendors can get them. Looking around, I don't see that
much of a price difference between the two units, though the SLX is lower
priced. About the only real technical advantage I see with the Aristas is
that they have generally been rather free of bugs and that their MLAG seems
a little better suited to our environment than the MCT of the Brocade line.
That said, I wouldn't turn down a chance to try out one of those SLX units.

On Tue, Mar 12, 2019 at 3:57 PM Aaron <aaron@wholesaleinternet.net> wrote:

> The SLX9640 is a 1U box with 24x 10G ports and 12x 100G ports (4
> licensed on the base config) with 4 million routes for less than a third
> of the price.
>
> I'd love to try Arista but that's what I'm dealing with.
>
> Aaron
>
Re: Where have all the MLX/XMR users gone to?
To other platforms?

We still have a few of these running, and we appear to be about to hit the
wall.

What's going on here?

System Parameters   Default   Maximum   Current   Actual    Bootup    Revertible
ip-cache            204800    1048576   1048576   1048576   1048576   Yes
ip-route            204800    1048576   1048576   1048576   1048576   Yes
ipv6-cache          65536     245760    245760    245760    245760    Yes
ipv6-route          65536     245760    245760    245760    245760    Yes

Router has been rebooted since those system-max limits were changed.

#show cam-partition slot 1

CAM partitioning profile: ipv4-ipv6-2

Slot 1 XPP20SP 0:
# of CAM device = 4
Total CAM Size = 917504 entries (63Mbits)

IP: Raw Size 786432, User Size 786432(0 reserved)
Subpartition 0: Raw Size 2420, User Size 2420, (0 reserved)
Subpartition 1: Raw Size 692807, User Size 692807, (0 reserved)
Subpartition 2: Raw Size 73329, User Size 73329, (0 reserved)
Subpartition 3: Raw Size 13567, User Size 13567, (0 reserved)
Subpartition 4: Raw Size 3075, User Size 3075, (0 reserved)

IPv6: Raw Size 131072, User Size 65536(0 reserved)
Subpartition 0: Raw Size 188, User Size 94, (0 reserved)
Subpartition 1: Raw Size 121688, User Size 60844, (0 reserved)
Subpartition 2: Raw Size 7940, User Size 3970, (0 reserved)
Subpartition 3: Raw Size 1058, User Size 529, (0 reserved)
Subpartition 4: Raw Size 160, User Size 80, (0 reserved)

If I'm interpreting that output correctly, the router has a total of
917504 CAM entries for routes, split between v4 and v6, and in spite of the
system-max settings, this router will be out of FIB space at 786432 v4
routes / 65536 v6 routes...which, for v6, means any moment now.  This
router is logging CAM partition warnings about free slots being low.

We're seeing the same on MLXe and NetIron chassis in XMR mode
running 5.6.0j.
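
As a rough illustration of the arithmetic above, here is a minimal headroom
check in Python.  The capacities are the User Size figures from the
cam-partition output quoted in this message, the route counts are the
approximate table sizes given later in the thread, and the function and
variable names are purely illustrative.

# Back-of-the-envelope FIB headroom check for the ipv4-ipv6-2 profile,
# using the "User Size" figures from `show cam-partition slot 1` above.
FIB_V4_CAPACITY = 786432   # IP:   User Size
FIB_V6_CAPACITY = 65536    # IPv6: User Size (raw 131072, 2 raw entries per route)

def headroom(capacity: int, in_use: int) -> tuple[int, float]:
    """Return (free entries, percent utilization)."""
    return capacity - in_use, 100.0 * in_use / capacity

# Approximate DFZ table sizes mentioned later in the thread.
for family, cap, used in (("IPv4", FIB_V4_CAPACITY, 757166),
                          ("IPv6", FIB_V6_CAPACITY, 65239)):
    free, pct = headroom(cap, used)
    print(f"{family}: {used}/{cap} used ({pct:.1f}%), {free} entries free")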


----------------------------------------------------------------------
Jon Lewis, MCP :)          |  I route
                           |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
On 13 Mar 2019, at 16:03, Jon Lewis wrote:
> If I'm interpreting that output correctly, the router has a total of
> 917504 CAM entries for routes, it's split between v4/v6, and in spite
> of the system-max settings, this router will be out of FIB space at
> 786432 v4 routes / 65536 v6 routes...or any moment now for v6. This
> router is logging CAM partition warnings about free slots being low.

Something around that; I think it was even only about 768000 v4 routes.

Can you check the output of:

show ip cache
show ipv6 cache
show ip route summary
show ipv6 route summary


_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
The XMR and MLXe are limited to 786432 IPv4 routes and 65536 IPv6 routes
(ipv4-ipv6-2 profile).  If you want more routes you need -X2 linecards.

The IPv4 and IPv6 tables are already close to the limit of the XMR/MLXe -X
cards (roughly 757166 IPv4 and 65239 IPv6 routes at the moment).  That is
why you get the CAM profile warnings.
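
The limits Dennis quotes fall straight out of the cam-partition output shown
earlier in the thread: the 917504-entry TCAM is carved into a 786432-entry
IPv4 block plus a 131072-entry raw IPv6 block, and each IPv6 route consumes
two raw entries (the 2:1 Raw/User ratio in the output), which is where 65536
comes from.  A quick sanity check of that arithmetic, as a Python sketch
(names are illustrative):

# Sanity-check the ipv4-ipv6-2 carving seen in `show cam-partition`.
TOTAL_CAM_ENTRIES = 917504   # "Total CAM Size = 917504 entries (63Mbits)"
IPV4_RAW          = 786432   # IP:   Raw Size
IPV6_RAW          = 131072   # IPv6: Raw Size
IPV6_ENTRY_WIDTH  = 2        # each IPv6 route occupies two raw entries

assert IPV4_RAW + IPV6_RAW == TOTAL_CAM_ENTRIES
print("Max IPv4 routes:", IPV4_RAW)                      # 786432
print("Max IPv6 routes:", IPV6_RAW // IPV6_ENTRY_WIDTH)  # 65536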



Kind regards / Met vriendelijke groet,

Dennis op de Weegh

Bitency
Hoge Ham 24
5104 JG Dongen

Chamber of Commerce (KvK) number: 20144338
VAT number: NL213538519B01

W: www.bitency.nl
E: info@bitency.nl
T: +31 (0)162 714066


_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
On Fri, 15 Mar 2019, Dennis op de Weegh wrote:

> The XMR and MLXe are limited to 786432 IPv4 routes and 65536 for IPv6 (ipv4-ipv6-2 profile)
> If you want more routes you need -X2 linecards.
>
> The IPv4 table and IPv6 tables are close to limit from the XMR/MLXe -x cards (+-757166 ipv4 and +-65239 ipv6)
> That is why you get the cam profile warnings.

:(

I'm going to lab test it, but I was hoping that more sensible system-max
settings (keeping in mind there are only 1M slots to carve up) would make
it possible to raise the v6 limit to around 75k while still leaving enough
room for some v4 table growth.  The idea is to buy some time so we don't
have to rush to decommission our remaining Brocade routers or fall back on
other workarounds (less than full tables).
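
Purely as what-if arithmetic for the rebalance Jon is hoping for here (the
rest of the thread indicates the v4/v6 carving is not actually adjustable on
these cards), a short hypothetical Python sketch reusing the raw TCAM size
from the earlier cam-partition output:

# Hypothetical: how much IPv4 space would remain if ~75k IPv6 routes could
# be carved out of the same 917504-entry TCAM (ignoring any entries
# reserved for other uses; the profile carving is fixed in practice).
TOTAL_RAW        = 917504
IPV6_ENTRY_WIDTH = 2

desired_v6_routes = 75_000
v6_raw_needed     = desired_v6_routes * IPV6_ENTRY_WIDTH   # 150000 raw entries
v4_routes_left    = TOTAL_RAW - v6_raw_needed              # 767504

print(f"{desired_v6_routes} v6 routes would leave room for ~{v4_routes_left} v4 routes")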

----------------------------------------------------------------------
Jon Lewis, MCP :)          |  I route
                           |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
On Fri, 15 Mar 2019, Jörg Kost wrote:

> On 13 Mar 2019, at 16:03, Jon Lewis wrote:
>> If I'm interpreting that output correctly, the router has a total of
>> 917504 CAM entries for routes, it's split between v4/v6, and in spite of
>> the system-max settings, this router will be out of FIB space at 786432 v4
>> routes / 65536 v6 routes...or any moment now for v6. This router is
>> logging CAM partition warnings about free slots being low.
>
> Something around this, I think it was even 768000 v4 routes only.
>
> Can you check the output of:
>
> show ip cache
> show ipv6 cache
> show ip route summary
> show ipv6 route summary

I'm not where I can easily get at all that info, but I do have the first
two available.

Total IP and IPVPN Cache Entry Usage on LPs:
Module   Host   Network   Free     Total
1        826    755362    292388   1048576
2        2181   755362    291033   1048576

Total IPv6 and IPv6 VPN Cache Entry Usage on LPs:
Module   Host   Network   Free     Total
1        151    66067     251738   317956
2        150    66067     251739   317956

These numbers, especially the IPv6 ones, don't really make sense, since
they imply the router has reserved more space for the IPv6 route cache than
the system-max settings requested...and regardless of what the show ip[v6]
cache output says, the router seems to be out of slots.  It has a couple of
full transit feeds and a couple dozen public peers, so roughly 755k v4
routes and roughly 66k v6 routes.
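
One way to see why the box is out of slots despite the confusing cache
counters is to compare the Network entry counts above against the User Size
figures from the cam-partition output.  A minimal Python sketch, using only
numbers already quoted in the thread (module 1):

# FIB partition utilization vs. the `show ip[v6] cache` Network counts above.
partitions = {
    # family: (User Size from cam-partition, Network entries from the cache)
    "IPv4": (786432, 755362),
    "IPv6": (65536, 66067),
}

for family, (user_size, network_entries) in partitions.items():
    pct = 100.0 * network_entries / user_size
    print(f"{family}: {network_entries} of {user_size} entries ({pct:.1f}%)")
# IPv4 comes out around 96%; IPv6 is already past 100% of its User Size,
# which matches the out-of-slots behaviour being described.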

----------------------------------------------------------------------
Jon Lewis, MCP :)          |  I route
                           |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
Re: Where have all the MLX/XMR users gone to?
On Fri, 15 Mar 2019, Dennis op de Weegh wrote:

> The XMR and MLXe are limited to 786432 IPv4 routes and 65536 for IPv6 (ipv4-ipv6-2 profile)
> If you want more routes you need -X2 linecards.
>
> The IPv4 table and IPv6 tables are close to limit from the XMR/MLXe -x cards (+-757166 ipv4 and +-65239 ipv6)
> That is why you get the cam profile warnings.

It seems Brocade was even worse than Cisco (1M routes on the 3bxl), in
that their 1M-route cards really hold 1M entries minus twice the number of
v6 routes minus about 100000 TCAM slots for other misc uses, and you can't
fine-tune the v4/v6 split.  I finally got a chance to do some messing
around with a lab XMR with full BGP feeds, and what I think I found is:

system-max settings dealing with max numbers of ip[v6]-route/cache affect
what the router is willing/able to store in RIB, but the cam-partition
profile affects what can be stuffed in FIB, and other than selecting
between the various cam-partition profiles, the TCAM carving between v4/v6
is not flexible (or if it is, that's a hidden command). So, for a
full-table router, the only cam-partition profile of any use is
ipv4-ipv6-2, and that profile may or may not hold a "full" table today
depending on the exact size of your full table.

Or in other words, the MLX/XMR is dead or very nearly there for anyone
needing full v4 & v6 tables.
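
A toy model of the RIB-versus-FIB distinction described above, assuming the
limits quoted in this thread (system-max values from the earlier output, FIB
capacities from the ipv4-ipv6-2 profile on -X cards); the function and
variable names are made up for the sketch:

# system-max bounds what the RIB will hold; the cam-partition profile
# bounds what fits in the FIB.  Only the numbers come from the thread.
SYSTEM_MAX   = {"ipv4": 1_048_576, "ipv6": 245_760}   # RIB limits (system-max)
FIB_CAPACITY = {"ipv4": 786_432,   "ipv6": 65_536}    # ipv4-ipv6-2 on -X cards

def check(table_sizes: dict) -> None:
    for family, routes in table_sizes.items():
        rib = "ok" if routes <= SYSTEM_MAX[family] else "FULL"
        fib = "ok" if routes <= FIB_CAPACITY[family] else "FULL"
        print(f"{family}: RIB {rib}, FIB {fib}")

check({"ipv4": 757_166, "ipv6": 66_067})   # roughly the tables discussed here
# -> plenty of RIB headroom, but the IPv6 FIB partition is already exceeded.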

----------------------------------------------------------------------
Jon Lewis, MCP :)          |  I route
                           |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Where have all the MLX/XMR users gone to?
The XMR has reached, or will very soon reach, the end of its useful life
for DFZ routing; the MLXe (if you have one) is still a valid platform for
the DFZ, and I am sure it is going to be supported at least until
2025/2026.

So, configurations that still work:
- MLXe chassis with MR2-X management cards

Modules:
- BR-MLX-100Gx2-CFP2-X2
  -> 2 x 100G
- BR-MLX-10GX20-X2 in its variants:
  -> BR-MLX-10Gx20-X2
     -> 20 x 1G/10G
  -> BR-MLX-10Gx10-X2
     -> 10 x 1G/10G, upgradeable to 20 x 1G/10G
  -> BR-MLX-1GX10-U10G-X2
     -> 20 x 1G, upgradeable to 20 x 1G/10G

CAM profiles on the X2 cards (a quick fit check is sketched below):
- default (1424K IPv4, 416K IPv6)
- multiservice-6 (1120K IPv4, 768K IPv6)
- ipv4-ipv6-2 (2048K IPv4, 1024K IPv6, no VPN)
  -> reference:
https://documentation.extremenetworks.com/netiron/SW/63x/6300a/netIron-6300a-managementguide.pdf

See also
https://gtacknowledge.extremenetworks.com/articles/Q_A/How-to-enable-2-Million-routes-on-MLX-platform
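
A quick fit check against those X2 profile figures (the capacities below
simply transcribe the numbers listed above, read as thousands of routes; the
example table sizes and all names are illustrative):

# Which X2 CAM profiles can hold a given v4/v6 table?
X2_PROFILES = {
    "default":        {"ipv4": 1_424_000, "ipv6": 416_000},
    "multiservice-6": {"ipv4": 1_120_000, "ipv6": 768_000},
    "ipv4-ipv6-2":    {"ipv4": 2_048_000, "ipv6": 1_024_000},  # no VPN
}

def profiles_that_fit(v4_routes: int, v6_routes: int) -> list[str]:
    return [name for name, cap in X2_PROFILES.items()
            if v4_routes <= cap["ipv4"] and v6_routes <= cap["ipv6"]]

print(profiles_that_fit(760_000, 70_000))   # a present-day full table fits all three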

If you buy new, the SLX 9540/9640/9850 are valid targets, but this leads
the thread back to the beginning ;-).


On 22 Mar 2019, at 0:37, Jon Lewis wrote:

> Or in other words, the MLX/XMR is dead or very nearly there for anyone
> needing full v4 & v6 tables.
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
