Mailing List Archive

High LP CPU usage on MLXe-16
Hello!

I've run into a strange issue on my MLXe-16: traffic on a port that was
formerly a member of an LACP LAG, and has since been removed from it, is hitting the LP CPU.

I have a LAG of 12 ports:

=== LAG "asw1-gdr-temp" ID 44 (dynamic Deployed) ===
LAG Configuration:
Ports: e 2/3 e 3/8 e 4/2 e 7/11 e 7/14 e 8/3 e 9/8 e 10/2 e 11/4 e 13/5 e 14/12 e 14/14
Port Count: 12
Primary Port: 2/3
Trunk Type: hash-based

I removed port eth 14/14 from the LAG and assigned it to another customer
(the commands I used are sketched after the CPU output below), and now the
traffic on that port is being forwarded by the LP CPU:

telnet@lsr1-gdr.ki#show cpu-utilization lp

16:51:20 GMT+02 Thu Apr 12 2018

SLOT #: LP CPU UTILIZATION in %:
        in 1 second:   in 5 seconds:   in 60 seconds:   in 300 seconds:
 1:            1              1               1                1
 2:            1              1               1                1
 3:            1              1               1                1
 4:            1              1               1                1
 5:            1              1               1                1
 6:            1              1               1                1
 7:            1              2               1                1
 8:            1              1               1                1
 9:            1              1               1                1
10:            1              1               1                1
11:            1              1               1                1
12:            2              3               2                2
13:            1              1               1                1
14:           21             22              17               16
15:            1              1               1                1
16:            1              1               1                1
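
For reference, the port was removed from the deployed LAG roughly like this
(typed from memory, so the exact prompts and syntax may be slightly off):

telnet@lsr1-gdr.ki(config)# lag asw1-gdr-temp dynamic id 44
telnet@lsr1-gdr.ki(config-lag-asw1-gdr-temp)# no ports ethernet 14/14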

The LACP LAG contains ports on a mix of BR-MLX-10Gx20 and NI-MLX-10Gx8-D
cards. Port 14/14 is on a BR-MLX-10Gx20 card. The box runs IronWare ver. 5.9.00be.

Has anyone run into the same issue? Is there a way to fix it without
un-deploying the LACP LAG?

Thanks in advance!

--
MINO-RIPE
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: High LP CPU usage on MLXe-16
Have you checked dm pstat to see what kind of traffic is hitting the CPU?
Just wondering if this is something related to the customer and not the
fact that the port was in a LAG. We usually find that high LP CPU
utilization is caused by multicast issues in our environment.

dm pstat will show various packet counts per LP since the last run; I
usually ignore the first run.
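
Something along these lines (sketched from memory, so the exact prompts and
counter names may differ by release); rconsole drops you onto the LP console
for the slot in question, e.g. slot 14:

telnet@lsr1-gdr.ki# rconsole 14
LP-14# dm pstat        (first run: counters since the last invocation, ignore)
LP-14# dm pstat        (second run: per-type packet counts hitting the LP CPU since the first run)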

--
Eldon Koyle


Re: High LP CPU usage on MLXe-16
On Thu, Apr 12, 2018 at 02:41:32PM +0000, Eldon Koyle wrote:
> Have you checked dm pstat to see what kind of traffic is hitting the CPU? Just wondering
> if this is something related to the customer and not the fact that the port was in a LAG.
> We usually find that high lp CPU utilization is caused by multicast issues in our
> environment.
> dm pstat will show various packet counts per lp since the last run, I usually ignore the
> first run.
I've checked "debug packet capture" on that card; it is ordinary unicast traffic.

It looks like it may be related to DEFECT000570731:

Symptom: On 20x10G Line card module high CPU condition could be seen when the command
"no route-only" is enabled.
Condition: "no route-only" option is enabled when there is a LAG spanning across multiple
ports on the same 20x10G Line card module.

But I cannot find a way to fix it. I've tried configuring the port as route-only
and then turning it back to 'no route-only', but that did not help.
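
What I tried was roughly the following, at the interface level (from memory;
the exact prompts may differ):

telnet@lsr1-gdr.ki# configure terminal
telnet@lsr1-gdr.ki(config)# interface ethernet 14/14
telnet@lsr1-gdr.ki(config-if-e10000-14/14)# route-only
telnet@lsr1-gdr.ki(config-if-e10000-14/14)# no route-only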


--
MINO-RIPE
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp