Mailing List Archive

MLX expected behaviour for RIB/FIB exhaustion
Hello,

I have been searching the archives for an answer to my questions. I have also contacted our SE, but so far I have not received good/detailed responses.

We currently have four MLXe-4s with the 8-port 10GbE "D-Line" blades, which support 256k routes in hardware. That has been fine so far, as we only accept a default route from our upstream provider. We are now about to bring on a second provider, which means we need to accept a full route table from both so that we can do the best possible path selection. Unfortunately for us, the current IPv4 BGP table is ~500k routes. I understand that the system has both a RIB and a FIB.

What I am trying to understand is what behaviour/impact we should expect given that the blades only support 256k routes. What does a route lookup do when the route is not in the hardware table? In the past, with other vendors, I have seen CPU usage on the management module climb very high. I don't believe that would be the case on the MLX, but I need a second opinion, as there is no conclusive document on the Brocade web site (or elsewhere) explaining exactly how it behaves in this scenario and what the impact would be.
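To make the failure mode concrete, here is a toy Python sketch of what a fixed-capacity FIB does when it overflows. This is purely my own illustration, not MLX internals (the `HardwareFib` class, its capacity of 2, and the addresses are all made up): once the table is full, later prefixes never reach hardware, and lookups for them fall back to whatever covering route did get installed, in the worst case just the default, giving suboptimal forwarding rather than necessarily a CPU meltdown. Whether a real platform instead punts misses to the CPU is exactly the vendor-specific question being asked.

```python
import ipaddress

class HardwareFib:
    """Toy model of a fixed-capacity hardware forwarding table."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}  # ip_network -> next hop

    def install(self, prefix, nexthop):
        """Install a route; silently rejected once the table is full."""
        if len(self.routes) >= self.capacity:
            return False
        self.routes[ipaddress.ip_network(prefix)] = nexthop
        return True

    def lookup(self, addr):
        """Longest-prefix match over whatever actually fit in hardware."""
        ip = ipaddress.ip_address(addr)
        best = None
        for net, nh in self.routes.items():
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, nh)
        return best[1] if best else None

fib = HardwareFib(capacity=2)                 # stand-in for the 256k limit
fib.install("0.0.0.0/0", "provider-A")        # default route
fib.install("198.51.100.0/24", "provider-B")  # fits in hardware
fib.install("203.0.113.0/24", "provider-B")   # no room: never installed

print(fib.lookup("198.51.100.1"))  # provider-B: specific route is in hardware
print(fib.lookup("203.0.113.1"))   # provider-A: falls back to the default
```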


Any feedback or ways around the problem would also be welcome.

Regards,

Jimmy.



_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: MLX expected behaviour for RIB/FIB exhaustion
Rather than taking full routes from both providers, why not just have each of
them advertise a default plus their customer routes? That is significantly
smaller than a full table, and still lets you improve your path selection
without exceeding your hardware limits.

Just a thought..
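A rough sketch of what that policy amounts to, in Python rather than router config (the community value "64500:100" and the prefixes are invented for illustration; in practice you would match whatever customer-route community each provider documents, or use an inbound prefix filter):

```python
# Keep only the provider's default route plus routes tagged with a
# (hypothetical) customer-routes community; drop the rest of the full
# table before it can pressure the 256k hardware limit.

full_table = [
    ("0.0.0.0/0",       {"64500:999"}),  # provider default
    ("198.51.100.0/24", {"64500:100"}),  # provider's customer route
    ("203.0.113.0/24",  {"64500:200"}),  # transit-learned route
]

def accept(prefix, communities, customer_community="64500:100"):
    """Inbound policy: default route, or routes the provider marks as customer."""
    return prefix == "0.0.0.0/0" or customer_community in communities

kept = [(p, c) for p, c in full_table if accept(p, c)]
print([p for p, _ in kept])  # ['0.0.0.0/0', '198.51.100.0/24']
```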


-- Eric Cables


On Sun, Jan 5, 2014 at 4:49 PM, Jimmy Stewpot <mailers@oranged.to> wrote:
