Mailing List Archive

BR-MLX-10Gx24-DM 24-port 10GbE Module
Hi

I just came across BR-MLX-10Gx24-DM and got a bit disappointed.

"BR-MLX-10Gx24-DM interface modules require the "snmp-server
max-ifindex-per-module 40|64" configured. Otherwise, the cards will not
come up."

This causes SNMP ifIndex renumbering of all ports except the ones in slot 1.

"BR-MLX-10Gx24-DM module is an oversubscribed module. The module can
support up to 200Gbps when the system fabric mode is in Turbo mode
(i.e. system has only Gen 2 and Gen 3 modules such as 8x10G, 100G or
24x10G modules). The module can support up to 12 10G wire-speed
ports when the system fabric mode is in Normal mode (i.e. system also
has any Gen 1 modules such as 1G or 4x10G modules)."
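The oversubscription figures quoted above can be sanity-checked with plain arithmetic (nothing vendor-specific is assumed here):

```python
# Back-of-envelope check of the quoted figures; plain arithmetic,
# no vendor behaviour assumed.
ports = 24
port_speed_gbps = 10
max_demand_gbps = ports * port_speed_gbps   # 240 Gbps with all ports busy

turbo_fabric_gbps = 200                     # per the release notes
# Even in Turbo mode the module is oversubscribed: 240 > 200.
assert max_demand_gbps > turbo_fabric_gbps

# Normal mode: only 12 wire-speed ports.
normal_wirespeed_gbps = 12 * port_speed_gbps
print(max_demand_gbps, turbo_fabric_gbps, normal_wirespeed_gbps)  # 240 200 120
```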

"The 10Gx24-DM module ports can only be part of LAGs exclusively
consisting of 24x10G ports. A LAG cannot have a mix of 24x10G module
ports and any other 10G module ports."
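The LAG rule above boils down to an exclusivity check: either every member is a 24x10G port or none is. A minimal sketch (the module-type strings are made up for illustration):

```python
# Hedged sketch of the LAG membership rule: a LAG is valid only if
# its members are all 24x10G ports or contain no 24x10G ports at all.
# Module-type strings are illustrative, not Brocade identifiers.

def lag_members_valid(module_types):
    """Return True if the member list satisfies the exclusivity rule."""
    types = set(module_types)
    return "24x10G" not in types or types == {"24x10G"}

print(lag_members_valid(["24x10G", "24x10G"]))  # True: all 24x10G
print(lag_members_valid(["24x10G", "8x10G"]))   # False: mixed
print(lag_members_valid(["8x10G", "4x10G"]))    # True: no 24x10G at all
```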

"The following features are not supported by the 10Gx24-DM module:
• VLAN-Tag Preservation for Telemetry
• IPv6 Subnet Rate Limiting for IPv6 DOS Attack Protection
• IPTV Enhancements
• HQoS
• MPLS over GRE
• Single-Hop LSP Accounting
• Dynamic CAM Mode
• MCT-VPLS
• Routing over VPLS
• PBB
• ETH-AIS
• MMRP, MVRP
• OpenFlow: in Layer 3 flows, the source MAC address cannot be modified to
any MAC other than the port MAC address
• OpenFlow: Layer 2 flows only support the following ether types:
- IPv4, IPv6, ARP, 802.1ag, 802.1ah"

Just as a heads up.

/Tias
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: BR-MLX-10Gx24-DM 24-port 10GbE Module
* tias@netnod.se (Mathias Wolkert) [Thu 30 Jan 2014, 11:39 CET]:
>I just came across BR-MLX-10Gx24-DM and got a bit disappointed.
>
>"BR-MLX-10Gx24-DM interface modules require the "snmp-server
>max-ifindex-per-module 40|64" configured. Otherwise, the cards will not
>come up."
>
>This causes snmp index renumbering of all ports, except the ones in slot 1.

Yes. I really like how Brocade derives ifIndex from slot/port
position. I was very disappointed when they broke that scheme on the
MLX by changing the base for no technical reason at all, and it's sad
to see that gratuitous change come back multiple times now to bite
them in their own asses too.
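A minimal sketch of why the base change renumbers everything outside slot 1. The formula below (ifIndex derived as per-module base times slot offset, plus port) is an illustration of the idea, not Brocade's documented derivation:

```python
# Illustrative only: assumes ifIndex = (slot - 1) * per_module + port.
# This is not Brocade's documented formula; it just shows why slot 1
# is immune to a change of the per-module base.

def ifindex(slot: int, port: int, per_module: int) -> int:
    """Derive a hypothetical SNMP ifIndex from chassis position."""
    return (slot - 1) * per_module + port

# Slot 1 has a zero offset, so its indices survive a base change:
assert ifindex(1, 5, 40) == ifindex(1, 5, 64) == 5

# Every other slot gets renumbered when the base moves from 40 to 64:
assert ifindex(2, 5, 40) == 45 and ifindex(2, 5, 64) == 69
```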

If you don't have it configured, the module stays down with a
convoluted error message. I bet they've had a lot of support
calls over this as well. At least those are easy to solve...


>"BR-MLX-10Gx24-DM module is an oversubscribed module. The module can
>support up to 200Gbps when the system fabric mode is in Turbo mode
>(i.e. system has only Gen 2 and Gen 3 modules such as 8x10G, 100G or
>24x10G modules). The module can support up to 12 10G wire-speed
>ports when the system fabric mode is in Normal mode (i.e. system also
>has any Gen 1 modules such as 1G or 4x10G modules)."

Actually I thought this was 18 line-rate ports in Turbo mode, not the
20 that 200 Gbps implies. Also, the ports are grouped, so you can't
run ports 1-12 at line rate in Normal mode and leave the rest empty;
you must use (e.g.) 1-4, 9-12, 17-20. Not optimal if you precable as
much as possible, as you probably do.
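The grouping Niels describes can be sketched as follows. The layout (first 4 ports of each 8-port group) is inferred purely from his 1-4, 9-12, 17-20 example, not from any documentation:

```python
# Hedged sketch of the Normal-mode wire-speed port layout. The
# 8-port group size and 4-per-group split are guesses inferred from
# the example "1-4, 9-12, 17-20"; they are not documented values.
GROUP_SIZE = 8
LINERATE_PER_GROUP = 4

linerate_ports = [g * GROUP_SIZE + p
                  for g in range(3)
                  for p in range(1, LINERATE_PER_GROUP + 1)]

# 12 wire-speed ports, spread across the module rather than contiguous:
print(linerate_ports)  # [1, 2, 3, 4, 9, 10, 11, 12, 17, 18, 19, 20]
```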

[snip]
The other limitations are mostly caused by this being (presumably) a
merchant-silicon ASIC instead of their own FPGAs. Not that anybody
should miss Dynamic CAM mode, of course..!


-- Niels.
