Mailing List Archive

Packetloss on MLX-4
Greetings,

We've got the following setup:


[ other AS ]
|
[ mlx-4 ] -- [ icx6430 ] -- (cloud) -- [ customer ]
|
[ server ]


The MLX has a BGP session with the other AS and with the customer. The
server and the other AS can reach each other, the customer and the
server can reach each other, and the other AS and the customer can
reach each other. This setup is duplicated many times, with different
servers and different customers.

From time to time a server becomes unreachable from the customer and
vice versa. The other AS can still reach both the server and the customer.

I made a port-mirror of the ports on the MLX-4 and the ICX6430 that face
each other and captured the traffic going through. Although there's
nothing between the two devices except a direct patch cable, I see
packets on the ICX that I don't see on the MLX. The packet loss is not
random, so it's not a problem with the cable. When I ping the customer
from the other AS and from the server, I see the reply packets for both
on the ICX, but only the ones for the other AS on the MLX.

So the problem isn't at the customer; the routing on their side still
works. Somehow the MLX is dropping packets, but I have no idea why. It's
not always the same customer where the problem occurs, and we couldn't
find anything that triggers this event.

Now the weirdest part: to "fix" the problem, it helps to change the
MAC address of the server. In some cases it also helped to run 'show
mac-address' on the MLX.
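
For illustration only (the MAC address below is a placeholder, and I'm
assuming clear mac-address accepts a single-entry argument on this
platform), inspecting and flushing the affected L2 entry by hand would
be a less invasive variant of the same workaround:

show mac-address 0000.1234.5678
clear mac-address 0000.1234.5678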

Does anyone have an idea where to look?


Thanks in advance,
Rogier

_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Packetloss on MLX-4
Do you have any broadcast limiting (storm control) applied?

On other vendors' switches we find that excessive broadcast filtering
can cause this kind of thing.

Also, it could be an inconsistency between the MAC and ARP timeouts.
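
As a sketch of checking that the two timers line up (values are
illustrative only; if I remember right, on the MLX mac-age-time is in
seconds and ip arp-age in minutes, so 1800 seconds matches 30 minutes):

conf t
mac-age-time 1800
ip arp-age 30
exit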


*Justin Keery, Director*
Venus Business Communications Ltd
*FAST FIBRE INTERNET AND ETHERNET PRIVATE NETWORKS*
24 Denmark St, London WC2H 8NJ

VENUS
020 7240 5858
07976 153 984
www.venus.co.uk




On 12 February 2014 14:56, Rogier van Eeten <rogier@virtunix.nl> wrote:

Re: Packetloss on MLX-4
sh mem
sh ip cache
sh def val


If free memory is less than 7%, you need to change the system-max
parameters: keep LP memory above 10% and MP memory above 7%.

If the IP cache has no free entries, you also need to change the
system-max parameters.

You need to check the switch fabric too with "sh sfm-links".
On an MLX, all switch fabric modules need to be "MLX", NOT "RX",
and the image should be "NI-xx", not "BI-xx"!
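
As a sketch of what raising a limit looks like (the parameter name and
value below are illustrative, not a recommendation; check "sh def val"
for the actual limits on your box, and note that changing system-max
takes effect only after a reload):

conf t
system-max ip-cache 400000
exit
write memory
reload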

Have fun,

Mitko


On Wednesday 12 February 2014 15:56:39 Rogier van Eeten wrote:
Re: Packetloss on MLX-4
You can check whether disabling ICMP redirects globally solves your issue; it sounds a little bit like that.
 
conf t
no ip icmp redirect
exit
 
It's explained here; the article is about an old firmware version, but it is the same in the 5.x releases.
 
http://puck.nether.net/pipermail/foundry-nsp/2006-December/000784.html
 
You can also have a look at Brocade one-arm routing.
 
Good luck!
 
Frank 


Re: Packetloss on MLX-4
So I found the solution. Due to a test during another problem, I had put
the port into "no route-only" mode. When I turned it back to
"route-only", all problems were gone. There was no need for L2 switching
on that port anymore.
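
For reference, the change amounts to this (the interface number is a
placeholder; route-only can also be set globally on the MLX):

conf t
interface ethernet 1/1
 route-only
exit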

I'm still curious why it was dropping the packets in layer 2, but at
least the problem is solved.

Thanks for all the other answers. :)

