Mailing List Archive

Old Summit 400 (ExtremeWare) and the L3 routing concept
Hello,

Today I had to do some troubleshooting, and I realized I don't really
understand L3 IP routing in ExtremeWare (Summit 400).

The documentation specifies that it has:
Forwarding Tables
• Layer 2/MAC Addresses: 16K
• Layer 3 forwarding database in hardware: 4K
• Layer 3 routing table size: 64K


So the ipfdb holds 4K entries.

But in newer EXOS there is nothing like the ipfdb, just a routing table.

The ipfdb looks like a table of bindings: DST IP -> next-hop MAC.
An entry is created for every single destination IP that the switch
forwards traffic to. A rough sketch of how I picture it is below.
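
To make sure I describe it right, here is a minimal Python sketch of my
mental model (the names and the 4K limit are my own assumptions taken
from the spec sheet, not ExtremeWare's actual implementation):

IPFDB_SIZE = 4096  # "Layer 3 forwarding database in hardware: 4K"

ipfdb = {}  # dst_ip -> next_hop_mac, exact match on the full host IP

def learn_host(dst_ip, next_hop_mac):
    """Install a host entry; fails once the hardware table is full."""
    if dst_ip in ipfdb:
        return True
    if len(ipfdb) >= IPFDB_SIZE:
        # this is where I would expect "<Erro:IPHS> ... (-6: Table full)"
        return False
    ipfdb[dst_ip] = next_hop_mac
    return True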

So I have observed that if there is a lot of traffic towards many
different destinations, it is very easy to fill the ipfdb and get:
06/14/2014 08:40:07.20 <Erro:IPHS> Could not add L3 entry for unit 3-93.105.223.170 at 0xbff(-6: Table full)
06/14/2014 08:40:07.20 <Erro:IPHS> Could not add L3 entry for unit 2-93.105.223.170 at 0xbff(-6: Table full)
06/14/2014 08:40:07.20 <Erro:IPHS> Could not add L3 entry for unit 1-93.105.223.170 at 0xbff(-6: Table full)
06/14/2014 08:40:07.20 <Erro:IPHS> Could not add L3 entry for unit 0-93.105.223.170 at 0xbff(-6: Table full)

What happens then? Does the CPU forward the packet?
And what is the correlation between the routing table size (64K is not
that small) and the forwarding table? The way I currently picture it is
sketched below.
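
Here is how I currently picture the relationship between the two
tables, as a Python sketch (again just my assumption; the prefixes and
gateway names are made up for illustration):

import ipaddress

# Routing table: longest-prefix match, up to 64K prefixes.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "gw-A",
    ipaddress.ip_network("93.105.0.0/16"): "gw-B",
}

def lpm_lookup(dst_ip):
    """Longest-prefix match over the routing table."""
    dst = ipaddress.ip_address(dst_ip)
    best = max((net for net in routes if dst in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(lpm_lookup("93.105.223.170"))  # gw-B -- a single /16 route...

# ...yet every distinct host behind that /16 that we actually forward
# to seems to need its own exact-match ipfdb entry, so one route can
# spawn far more than 4K host entries.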

Is there any way to somehow install entries in the ipfdb that cover
whole networks instead of single hosts?

Is it this command?

configure ipfdb route-add clear-subnet

(The default is clear-all.) My guess at what the two options do is
sketched below.
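
Judging purely by the option names, this is what I guess clear-all vs.
clear-subnet do when a route is added (a sketch of my reading, not
anything from the docs; the function and its signature are made up):

import ipaddress

def on_route_add(ipfdb, new_route, mode="clear-all"):
    """Flush cached host entries when a new route is installed."""
    if mode == "clear-all":
        # default: drop every cached DST IP -> next-hop MAC binding
        ipfdb.clear()
    elif mode == "clear-subnet":
        # drop only the hosts covered by the newly added prefix
        net = ipaddress.ip_network(new_route)
        for dst_ip in list(ipfdb):
            if ipaddress.ip_address(dst_ip) in net:
                del ipfdb[dst_ip]

If that is right, neither option lets me pre-install subnet-wide
entries, which is what I was hoping for.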

Sorry for asking such silly questions, but this is weird to me...

Regards,
Marcin

_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp