Mailing List Archive

Multicast is being switched by LP CPU on MLXe?
Hello!

I have some VLANs configured between certain ports on an MLXe box (MLXe-16, 5.7.0e).
All ports are in 'no route-only' mode. For example:

telnet@lsr1-gdr.ki(config)#show vlan 720

PORT-VLAN 720, Name V720, Priority Level 0, Priority Force 0, Creation Type STATIC
Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
L2 protocols : NONE
Statically tagged Ports : ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6 ethe 13/3
Associated Virtual Interface Id: NONE
----------------------------------------------------------
Port  Type      Tag-Mode  Protocol  State
7/1   TRUNK     TAGGED    NONE      FORWARDING
9/5   TRUNK     TAGGED    NONE      FORWARDING
10/6  TRUNK     TAGGED    NONE      FORWARDING
11/6  TRUNK     TAGGED    NONE      FORWARDING
13/3  PHYSICAL  TAGGED    NONE      FORWARDING
Arp Inspection: 0
DHCP Snooping: 0
IPv4 Multicast Snooping: Enabled - Passive
IPv6 Multicast Snooping: Disabled

No Virtual Interfaces configured for this vlan



As you may notice, passive multicast snooping is enabled on that VLAN.
The problem is that multicast traffic is being switched by the LP CPU,
causing high CPU utilization and packet loss.

It is clearly seen on rconsole:

LP-11#debug packet capture rx include src-port me/6 vlan-id 720 dst-address 233.191.133.96
[...]
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa06d
[ppcr_rx_packet]: Packet received
Time stamp : 56 day(s) 20h 48m 05s:,
TM Header: [ 0564 0aa3 0080 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 0 Src sys port: 2723
Dest Port: 0 Drop Prec: 2 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x000095cc
00: 05f0 0003 5c50 02d0-7941 fffe 8000 0000 FID = 0x05f0
10: 0100 5e3f 85b0 e4d3-f17d a7c5 0800 4516 Offset = 0x10
20: 0540 0000 4000 7e11-f89f b060 df26 e9bf VLAN = 720(0x02d0)
30: 85b0 04d2 04d2 052c-0000 4701 3717 8134 CAM = 0x0ffff(R)
40: 01d0 92de c56f 18f6-dc4f 8d00 1273 cdb3 SFLOW = 0
50: c3ff 3da8 2600 5a37-cfbe 993f dbfd c927 DBL TAG = 0
60: 8000 8290 ef9b 7638-9089 9a50 5000 8611
70: 2026 0079 8de2 a404-1013 dffd 04e0 1404
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
0 0 0 11/6 3 0 1 0 1 1 1 0 0 0 1 0

176.96.223.38 -> 233.191.133.176 UDP [1234 -> 1234]
**********************************************************************

As far as I understand the documentation, that should not happen.
The situation remains the same if I disable IGMP snooping.
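For reference, the destination MAC 01:00:5e:3f:85:b0 in the hex dump is just the standard RFC 1112 mapping of the group address: the 01:00:5e OUI plus the low 23 bits of 233.191.133.176. A quick sanity check of that mapping (plain Python, nothing device-specific assumed):

```python
import socket
import struct

def mcast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC (RFC 1112):
    01:00:5e OUI + low 23 bits of the group address."""
    ip = struct.unpack("!I", socket.inet_aton(group))[0]
    low23 = ip & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(mcast_mac("233.191.133.176"))  # -> 01:00:5e:3f:85:b0, as in the capture
```

Note that 32 different group addresses share each MAC, which is worth keeping in mind whenever L2 multicast forwarding behaves unexpectedly.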

Any ideas/suggestions are kindly appreciated!

Thanks in advance.

--
MINO-RIPE
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Multicast is being switched by LP CPU on MLXe?
I remember having a lot of trouble with multicast. I don't have the docs
handy, but I think there are some multicast cpu-protection commands you
could try.

--
Eldon Koyle
Re: Multicast is being switched by LP CPU on MLXe?
Multicast cpu-protection and snooping cannot be enabled at the same time.

Have you configured both multicast passive and multicast pimsm-snooping under the vlan configuration for vlan 720?

multicast passive
multicast pimsm-snooping
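For context, a sketch of how I would expect those two lines to sit under the VLAN (assuming the usual NetIron 5.x VLAN-context syntax; ports taken from your config):

```
vlan 720 name V720
 tagged ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6 ethe 13/3
 multicast passive
 multicast pimsm-snooping
```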

/Jan

Re: Multicast is being switched by LP CPU on MLXe?
Hi!

On Wed, Mar 30, 2016 at 08:34:43AM +0000, Jan Pedersen wrote:
> Multicast cpu-protection and snooping is not possible at the same time.
>
>
> Have you configured both multicast passive and multicast pimsm-snooping under the vlan
> configuration for vlan 720?
Nope, there is no PIM in VLAN 720 at all.
Customers are using IGMPv2 to access multicast streams.
Anyway, I've configured both multicast passive and multicast pimsm-snooping,
and that didn't help. Multicast packets are still forwarded by the CPU.

The only thing which helped a bit is 'multicast-flooding' in the VLAN
configuration, but it seems that IGMP packets then do not pass through that VLAN...
I'm debugging that right now.
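For this kind of debugging, a join can be generated from any Linux host with a few lines of Python (a sketch; group and port are the ones from the capture, adjust as needed). The kernel emits the IGMP membership report as a side effect of the socket joining the group, so the join should then show up both in tcpdump and in the snooping table:

```python
import socket
import struct

def join_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket and join an IPv4 multicast group; the kernel
    sends the IGMP membership report as part of the join."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # struct ip_mreq: imr_multiaddr + imr_interface (INADDR_ANY = kernel picks)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    sock = join_group("233.191.133.176", 1234)
    data, src = sock.recvfrom(2048)  # blocks until a stream packet arrives
    print("got %d bytes from %s" % (len(data), src[0]))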

--
MINO-RIPE
Re: Multicast is being switched by LP CPU on MLXe?

Well,

1. Let's consider the following configuration:

telnet@lsr1-gdr.ki#show run | b 980
vlan 980 name V980
tagged ethe 5/8 ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6
no multicast-passive


With such a config I see IGMP reports/queries from both the customer and the IPTV provider:

12:57:13.907825 e4:d3:f1:7d:a7:e8 > 01:00:5e:00:00:01, ethertype IPv4 (0x0800), length 60: 176.96.223.203 > 224.0.0.1: igmp query v2
12:57:22.228748 d8:fe:e3:a8:5c:cc > 01:00:5e:3f:85:b0, ethertype IPv4 (0x0800), length 42: 192.168.210.2 > 233.191.133.176: igmp v2 report 233.191.133.176
12:58:14.065293 e4:d3:f1:7d:a7:e8 > 01:00:5e:00:00:01, ethertype IPv4 (0x0800), length 60: 176.96.223.203 > 224.0.0.1: igmp query v2
12:58:23.078744 d8:fe:e3:a8:5c:cc > 01:00:5e:3f:85:b0, ethertype IPv4 (0x0800), length 42: 192.168.210.2 > 233.191.133.176: igmp v2 report 233.191.133.176

But multicast packets are forwarded by the CPU:
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa01a
[ppcr_rx_packet]: Packet received
Time stamp : 57 day(s) 18h 30m 28s:,
TM Header: [ 0564 0aa3 0080 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 0 Src sys port: 2723
Dest Port: 0 Drop Prec: 2 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x00006fa4
00: 05f0 0003 5c50 03d4-7941 fffe 8000 0000 FID = 0x05f0
10: 0100 5e3f 85b0 e4d3-f17d a7e8 0800 4516 Offset = 0x10
20: 0540 0000 4000 7e11-f89f b060 df26 e9bf VLAN = 980(0x03d4)
30: 85b0 04d2 04d2 052c-0000 4701 3716 c04f CAM = 0x0ffff(R)
40: c309 9f38 2400 3805-bf85 0d2b c00e 804c SFLOW = 0
50: 5a40 3521 e001 823c-5a33 fc82 244d e058 DBL TAG = 0
60: a1b8 9ef8 8170 0d64-0c00 6e43 2900 0bc0
70: 3501 8935 29c1 5971-08a2 6e00 cc04 e199
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
0 0 0 11/6 3 0 1 0 1 1 1 0 0 0 1 0

176.96.223.38 -> 233.191.133.176 UDP [1234 -> 1234]
**********************************************************************



2. Let's enable multicast passive to monitor groups:
vlan 980 name V980
tagged ethe 5/8 ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6
multicast passive

Now I see groups, for example

telnet@lsr1-gdr.ki#show ip multicast vlan 980 233.191.133.176
----------+-----+---------+---------------+-----+------+------
VLAN      State Mode      Active          Time  (*, G) (S, G)
                          Querier         Query Count  Count
----------+-----+---------+---------------+-----+------+------
980       Ena   Passive   176.96.223.203  37    9      7
----------+-----+---------+---------------+-----+------+------

Router ports: 11/6 (40s)

Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join

1 (*, 233.191.133.176) 00:02:56 NumOIF: 2 profile: none
Outgoing Interfaces:
e9/5 vlan 980 ( V2) 00:02:35/32s
e11/6 vlan 980 ( R) 00:02:40/40s

1 (176.96.223.38, 233.191.133.176) in e11/6 vlan 980 00:02:56 NumOIF: 1 profile: none
Outgoing Interfaces:
TR(e9/5,e9/5) vlan 980 ( V2) 00:02:35/0s
FID: 0xa0ac MVID: None

Also I see IGMP reports/queries as previously:
13:06:15.369744 e4:d3:f1:7d:a7:e8 > 01:00:5e:00:00:01, ethertype IPv4 (0x0800), length 60: 176.96.223.203 > 224.0.0.1: igmp query v2
13:06:22.174711 d8:fe:e3:a8:5c:cc > 01:00:5e:3f:85:b0, ethertype IPv4 (0x0800), length 42: 192.168.210.2 > 233.191.133.176: igmp v2 report 233.191.133.176
13:07:15.523997 e4:d3:f1:7d:a7:e8 > 01:00:5e:00:00:01, ethertype IPv4 (0x0800), length 60: 176.96.223.203 > 224.0.0.1: igmp query v2
13:07:25.274359 d8:fe:e3:a8:5c:cc > 01:00:5e:3f:85:b0, ethertype IPv4 (0x0800), length 42: 192.168.210.2 > 233.191.133.176: igmp v2 report 233.191.133.176


But multicast packets are still forwarded by the CPU:
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa099
[ppcr_rx_packet]: Packet received
Time stamp : 57 day(s) 18h 34m 24s:,
TM Header: [ 0564 0aa3 0080 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 0 Src sys port: 2723
Dest Port: 0 Drop Prec: 2 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x0000a7a8
00: 05f0 0003 5c50 03d4-7941 fffe 8000 0000 FID = 0x05f0
10: 0100 5e3f 85b0 e4d3-f17d a7e8 0800 4516 Offset = 0x10
20: 0540 0000 4000 7e11-f89f b060 df26 e9bf VLAN = 980(0x03d4)
30: 85b0 04d2 04d2 052c-0000 4701 371d ddd4 CAM = 0x0ffff(R)
40: 3e36 f772 8008 63de-be1d 24a0 537b b599 SFLOW = 0
50: 1cf8 584f 5c3e 8267-fed6 f8ab ad00 63a2 DBL TAG = 0
60: b4e1 914c 884b 6fb8-d5e0 02ac f4af 5243
70: 77c0 1ab7 1589 5b7c-5711 e2f8 d0e0 84b0
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
0 0 0 11/6 3 0 1 0 1 1 1 0 0 0 1 0

176.96.223.38 -> 233.191.133.176 UDP [1234 -> 1234]
**********************************************************************



3. Let's enable multicast-flooding:

vlan 980 name V980
tagged ethe 5/8 ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6
no multicast passive
multicast-flooding

Now packets are forwarded (to be more precise, flooded) in hardware,
but I no longer see IGMP queries from the IPTV provider, and the IPTV
provider does not see IGMP reports from me.

As a workaround, the provider can configure static multicast groups, but that
is not an option if the multicast VLAN has a lot of ports...

--
MINO-RIPE
Re: Multicast is being switched by LP CPU on MLXe?
High CPU on the LP means that something is preventing hardware programming on the LP's packet processor (FPGA). As unlikely as it sounds, when I see this on an LP it's usually because there's a duplicate multicast source IP or a duplicate MAC address.

Recently, I had a customer using a Linux server as an encoder. An admin copied and pasted the network ifcfg files from this server onto a new server, but didn't remove or change the MAC address in the original config. This created multicast sources with different IPs but the same MAC address. Nothing inside the MLXe knows there's a duplication (because it's not supposed to happen), so the router constantly re-programs the Packet Processor back and forth between the two MACs. This constant churn prevents the FID entry from ever being used, so the CPU on the LP tries to handle all of the switching. But this may not be a multicast issue: if you still see high CPU after you disable IGMP snooping, I would temporarily disable the multicast source (if you can) and see whether this reduces the CPU.
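To make that failure mode concrete, a sketch with RHEL-style ifcfg files (the addresses and MAC here are purely illustrative):

```
# Server A: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:11:22:33:44:55
IPADDR=192.0.2.10

# Server B: file copied from A; IPADDR changed, HWADDR forgotten
DEVICE=eth0
HWADDR=00:11:22:33:44:55   # duplicate MAC -> constant FID re-programming
IPADDR=192.0.2.11
```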

When everything is working correctly, you shouldn't see any real CPU utilization on an LP over a 1-min average. I have a customer pushing 1.2 terabits of multicast on an original MLX-32 with all LPs running at 1-2%, so the CPU level isn't related to the amount of traffic. Keep in mind that high LP CPU is usually a symptom of lots of churn in the FID tables on the Packet Processor, so work back from there and think about what could cause that. Do you see matching high CPU on the Management Modules? Do you see a spike in the L2 process when you run a Show CPU Proc? Is there a possibility of an L2 loop somewhere? Do you have another process like BGP, OSPF, or STP using a lot of CPU cycles?

Hope this helps as a starting point.

Wilbur

Re: Multicast is being switched by LP CPU on MLXe?
Does this apply to the CERs as well? We've been seeing high (40%+) CPU
from the LP on them, but we were assuming that was normal.

_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
We had a similar issue in 2013 -- TAC applied "multicast flooding" to the
VLAN and the CPU issue was resolved.
https://puck.nether.net/pipermail/foundry-nsp/2013-September/008689.html
http://www.brocade.com/content/html/en/configuration-guide/NI_05800a_SWITCHING/GUID-982D7BE8-62B7-44F7-8B0F-6C15DC1F7034.html
http://www.brocade.com/content/html/en/administration-guide/netiron-05900-adminguide/GUID-988F87A3-2464-4159-940D-43219E82A4E9.html
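
From memory, the relevant knob in the linked switching guide is the per-VLAN
"multicast flooding" command; roughly like the sketch below (verify the exact
syntax, and how it interacts with an existing snooping config, on your own
NetIron release before applying):

!
vlan 720 name V720
 multicast flooding
!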

Frank

-----Original Message-----
From: foundry-nsp [mailto:foundry-nsp-bounces@puck.nether.net] On Behalf Of
Alexander Shikoff
Sent: Tuesday, March 29, 2016 7:20 AM
To: foundry-nsp@puck.nether.net
Subject: [f-nsp] Multicast is being switched by LP CPU on MLXe?

Hello!

I have some VLANs configured between certain ports on MLXe box (MLXe-16,
5.7.0e).
All ports are in 'no route-only' mode. For example:

telnet@lsr1-gdr.ki(config)#show vlan 720

PORT-VLAN 720, Name V720, Priority Level 0, Priority Force 0, Creation Type
STATIC
Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
L2 protocols : NONE
Statically tagged Ports : ethe 7/1 ethe 9/5 ethe 10/6 ethe 11/6 ethe 13/3

Associated Virtual Interface Id: NONE
----------------------------------------------------------
Port Type Tag-Mode Protocol State
7/1 TRUNK TAGGED NONE FORWARDING
9/5 TRUNK TAGGED NONE FORWARDING
10/6 TRUNK TAGGED NONE FORWARDING
11/6 TRUNK TAGGED NONE FORWARDING
13/3 PHYSICAL TAGGED NONE FORWARDING
Arp Inspection: 0
DHCP Snooping: 0
IPv4 Multicast Snooping: Enabled - Passive
IPv6 Multicast Snooping: Disabled

No Virtual Interfaces configured for this vlan



As you may notice, passive multicast snooping is enabled on that VLAN.
The problem is that multicast traffic is being switched by LP CPU,
causing high CPU utilization and packet loss.

It is clearly seen on rconsole:

LP-11#debug packet capture rx include src-port me/6 vlan-id 720 dst-address
233.191.133.96
[...]
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa06d
[ppcr_rx_packet]: Packet received
Time stamp : 56 day(s) 20h 48m 05s:,
TM Header: [ 0564 0aa3 0080 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 0 Src sys port: 2723
Dest Port: 0 Drop Prec: 2 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys
mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x000095cc
00: 05f0 0003 5c50 02d0-7941 fffe 8000 0000 FID = 0x05f0
10: 0100 5e3f 85b0 e4d3-f17d a7c5 0800 4516 Offset = 0x10
20: 0540 0000 4000 7e11-f89f b060 df26 e9bf VLAN = 720(0x02d0)
30: 85b0 04d2 04d2 052c-0000 4701 3717 8134 CAM = 0x0ffff(R)
40: 01d0 92de c56f 18f6-dc4f 8d00 1273 cdb3 SFLOW = 0
50: c3ff 3da8 2600 5a37-cfbe 993f dbfd c927 DBL TAG = 0
60: 8000 8290 ef9b 7638-9089 9a50 5000 8611
70: 2026 0079 8de2 a404-1013 dffd 04e0 1404
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
0 0 0 11/6 3 0 1 0 1 1 1 0 0 0 1 0

176.96.223.38 -> 233.191.133.176 UDP [1234 -> 1234]
**********************************************************************

As far as I understand the documentation, that should not happen.
The situation remains the same if I disable IGMP snooping.

Any ideas/suggestions are kindly appreciated!

Thanks in advance.

--
MINO-RIPE


_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Hi!


Well, I'd like to bring this thread up again hoping to catch
someone who has also hit this issue.

Today I upgraded the software to 05.9.00be, and the situation is still
the same: with Multicast Traffic Reduction enabled, multicast
traffic is being switched by the LP CPU.

Current test VLAN configuration is:
!
vlan 450 name ITCons2DS_test
tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
multicast passive
multicast pimsm-snooping
!

telnet@lsr1-gdr.ki#show vlan 450

PORT-VLAN 450, Name ITCons2DS_test, Priority Level 0, Priority Force 0, Creation Type STATIC
Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
L2 protocols : NONE
Statically tagged Ports : ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
Associated Virtual Interface Id: NONE
----------------------------------------------------------
Port Type Tag-Mode Protocol State
7/1 TRUNK TAGGED NONE FORWARDING
7/2 TRUNK TAGGED NONE FORWARDING
9/5 TRUNK TAGGED NONE FORWARDING
11/2 TRUNK TAGGED NONE FORWARDING
12/8 TRUNK TAGGED NONE FORWARDING
13/8 TRUNK TAGGED NONE FORWARDING
Arp Inspection: 0
DHCP Snooping: 0
IPv4 Multicast Snooping: Enabled - Passive
IPv6 Multicast Snooping: Disabled

No Virtual Interfaces configured for this vlan


IGMP snooping works; I'm able to see the In/Out interfaces and the current
active querier:

telnet@lsr1-gdr.ki#show ip multicast vlan 450
----------+-----+---------+---------------+-----+-----+------
VLAN State Mode Active Time (*, G)(S, G)
Querier Query Count Count
----------+-----+---------+---------------+-----+-----+------
450 Ena Passive 192.168.210.1 119 1 1
----------+-----+---------+---------------+-----+-----+------

Router ports: 12/8 (11s)

Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join

1 (*, 239.32.4.130) 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
e9/5 vlan 450 ( V2) 00:34:48/40s

1 (91.238.195.1, 239.32.4.130) in e11/2 vlan 450 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
TR(e9/5,e7/1) vlan 450 ( V2) 00:34:48/0s
FID: 0xa0a9 MVID: None


Right after the multicast stream starts flooding from Eth11/2 out of TR(e9/5,e7/1),
the CPU load on LP 11 increases:

telnet@lsr1-gdr.ki#show cpu-utilization lp 11

17:25:10 GMT+02 Tue Dec 13 2016

SLOT #: LP CPU UTILIZATION in %:
in 1 second: in 5 seconds: in 60 seconds: in 300 seconds:
11: 6 6 6 6



And I see these packets processed by LP CPU:

LP-11#debug packet capture include vlan-id 450
[...]
91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa0a9
[xpp10ge_cpu_forward_debug]: Forward LP packet
Time stamp : 00 day(s) 11h 32m 49s:,
TM Header: [ 1022 00a9 a0a9 ]
Type: Multicast(0x00000000) Size: 34 Mcast ID: 0x9a0 Src Port: 2
Drp Pri: 2 Snp: 2 Exclude Src: 0 Cls: 0x00000001
**********************************************************************
00: a0a9 0403 5e50 41c2-7840 0abc 4400 0000 FID = 0xa0a9
10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
20: 0540 08df 0000 3d11-5cb4 5bee c301 ef20 VLAN = 450(0x01c2)
30: 0482 07d0 07d0 052c-0000 4701 e11a 8534 CAM = 0x00055e
40: 5a95 fb85 94ee 0b69-9938 967a c827 f571 SFLOW = 0
50: 73cc 8e72 98cc 82e0-436e 30f1 4414 f400 DBL TAG = 0
60: 11fd 7b2b c8be d9ca-d0fa 44d0 45b5 53e5
70: a386 ac24 cc0b 9698-c0a2 ff65 9f32 6b14
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
4 0 0 11/2 3 0 1 0 1 1 1 1 0 0 1 0

91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
**********************************************************************
[ppcr_rx_packet]: Packet received
Time stamp : 00 day(s) 11h 32m 49s:,
TM Header: [ 0564 8a23 0040 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 4 Src sys port: 2595
Dest Port: 0 Drop Prec: 1 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x00045286
00: 05f0 0403 5c50 41c2-7841 fffe 4400 0000 FID = 0x05f0
10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
20: 0540 08e0 0000 3d11-5cb3 5bee c301 ef20 VLAN = 450(0x01c2)
30: 0482 07d0 07d0 052c-0000 4701 e11f c052 CAM = 0x00ffff(R)
40: e9df e2fb 1f9d 0c1d-354a 7df5 f0df edab SFLOW = 0
50: 1145 566c 4c59 2557-f7cf c708 a75e 5a29 DBL TAG = 0
60: 1704 9f8b 151c b66b-957a 51eb ac99 772d
70: 07e7 23d7 f84a 50ac-5864 452d 7f70 0495
Pri CPU MON SRC PType US BRD D
4 0 0 11/2 3 0 1 0 1 1 1 0 0 0 1 0
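
A quick sanity check on the dump above: the destination MAC (0100 5e20 0482)
is just the standard RFC 1112 mapping of 239.32.4.130 (01:00:5e plus the
group's low 23 bits), and the earlier capture's 0100 5e3f 85b0 matches
233.191.133.176 the same way, so the frames themselves are well-formed and
this looks like a pure forwarding-path problem. The mapping, as a
plain-Python sketch (nothing NetIron-specific):

```python
import ipaddress

def mcast_mac(group: str) -> str:
    """IPv4 multicast group -> Ethernet MAC per RFC 1112:
    the fixed 01:00:5e prefix plus the low 23 bits of the group address."""
    g = int(ipaddress.IPv4Address(group))
    mac = 0x01005E000000 | (g & 0x7FFFFF)
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

print(mcast_mac("239.32.4.130"))     # 01:00:5e:20:04:82, i.e. "0100 5e20 0482" in the dump
print(mcast_mac("233.191.133.176"))  # 01:00:5e:3f:85:b0, i.e. "0100 5e3f 85b0" earlier
```

Since only 23 bits survive, 32 distinct groups share each MAC, which is why
snooping entries are keyed on the IP group rather than on the MAC.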


I have no idea why this happens. The "Multicast Guide" clearly states
that these packets should be processed in hardware.
Please advise!

Thanks!

--
MINO-RIPE
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
What does your pim configuration look like? Especially your rp config.

Making sure there is no rp-candidate for traffic you want to keep on a
single l2 domain can help a lot (in fact, we only add rp entries for
specific apps). This is especially true for groups used by SSDP or mDNS.
It's been a while, but I remember having similar issues. I'll have to go
dig through my configs and see if it reminds me of anything else.

--
Eldon Koyle

On Dec 13, 2016 08:29, "Alexander Shikoff" <minotaur@crete.org.ua> wrote:

> [...]
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
I guess I should have asked whether you are running PIM first. Is there a
ve on that VLAN with ip pim configured?

--
Eldon

On Dec 18, 2016 10:03 AM, "Eldon Koyle" <ekoyle+puck.nether.net@gmail.com>
wrote:

> What does your pim configuration look like? Especially your rp config.
>
> Making sure there is no rp-candidate for traffic you want to keep on a
> single l2 domain can help a lot (in fact, we only add rp entries for
> specific apps). This is especially true for groups used by SSDP or mDNS.
> It's been a while, but I remember having similar issues. I'll have to go
> dig through my configs and see if it reminds me of anything else.
>
> --
> Eldon Koyle
>
> On Dec 13, 2016 08:29, "Alexander Shikoff" <minotaur@crete.org.ua> wrote:
>
>> [...]
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Hello!

On Sun, Dec 18, 2016 at 10:03:59AM -0700, Eldon Koyle wrote:
> What does your pim configuration look like? Especially your rp config.
> Making sure there is no rp-candidate for traffic you want to keep on a single l2 domain can
> help a lot (in fact, we only add rp entries for specific apps). This is especially true
> for groups used by SSDP or mDNS. It's been a while, but I remember having similar issues.
> I'll have to go dig through my configs and see if it reminds me of anything else.

There is no L3 multicast routing.
The configuration is pure L2.

> --
> Eldon Koyle
>
> On Dec 13, 2016 08:29, "Alexander Shikoff" <minotaur@crete.org.ua> wrote:
>
> [...]

--
MINO-RIPE
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
It's been a while, but I wouldn't think that PIM snooping would do a
darn thing on L2; you would want IGMP snooping. That said, I don't think
that would work without at least a local L3 interface to respond to the
queries. Otherwise, you might as well just use broadcast. (Not that I
recommend that.)
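
To put numbers on the querier point: an IGMPv2 general query is only an
8-byte payload (type 0x11, max response time in tenths of a second, checksum,
group 0.0.0.0) sent to 224.0.0.1, and without something on the segment
originating these, snooped group state simply ages out. A minimal sketch of
building that payload (plain Python, illustration only; actually sending it
would also need a raw socket and an IP header carrying the Router Alert
option):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def igmpv2_general_query(max_resp_tenths: int = 100) -> bytes:
    """Build the 8-byte IGMPv2 general query payload (RFC 2236):
    type 0x11, max response time, checksum, group address 0.0.0.0."""
    body = struct.pack("!BBH4s", 0x11, max_resp_tenths, 0, b"\x00" * 4)
    return struct.pack("!BBH4s", 0x11, max_resp_tenths, inet_checksum(body), b"\x00" * 4)

q = igmpv2_general_query()
print(q.hex())                 # 1164ee9b00000000
assert inet_checksum(q) == 0   # checksum verifies over the whole payload
```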

On Mon, Dec 19, 2016 at 4:57 AM, Alexander Shikoff <minotaur@crete.org.ua>
wrote:

> Hello!
>
> On Sun, Dec 18, 2016 at 10:03:59AM -0700, Eldon Koyle wrote:
> > What does your pim configuration look like? Especially your rp
> config.
> > Making sure there is no rp-candidate for traffic you want to keep on
> a single l2 domain can
> > help a lot (in fact, we only add rp entries for specific apps). This
> is especially true
> > for groups used by SSDP or mDNS. It's been a while, but I remember
> having similar issues.
> > I'll have to go dig through my configs and see if it reminds me of
> anything else.
>
> There is no L3 multicast routing.
> The configuration is pure L2.
>
> > --
> > Eldon Koyle
> >
> > On Dec 13, 2016 08:29, "Alexander Shikoff" <[1]minotaur@crete.org.ua>
> wrote:
> >
> > Hi!
> > Well, I'd like to bring this thread up again hoping to catch
> > someone who has also hit this issue.
> > Today I upgraded software to 05.9.00be, and situation is still
> > the same: with enabled Multicast Traffic Reduction,
> > multicast traffic is being switched by LP CPU.
> > Current test VLAN configuration is:
> > !
> > vlan 450 name ITCons2DS_test
> > tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
> > multicast passive
> > multicast pimsm-snooping
> > !
> > [2]telnet@lsr1-gdr.ki#show vlan 450
> > PORT-VLAN 450, Name ITCons2DS_test, Priority Level 0, Priority
> Force 0, Creation Type
> > STATIC
> > Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
> > L2 protocols : NONE
> > Statically tagged Ports : ethe 7/1 to 7/2 ethe 9/5 ethe 11/2
> ethe 12/8 ethe 13/8
> > Associated Virtual Interface Id: NONE
> > ----------------------------------------------------------
> > Port Type Tag-Mode Protocol State
> > 7/1 TRUNK TAGGED NONE FORWARDING
> > 7/2 TRUNK TAGGED NONE FORWARDING
> > 9/5 TRUNK TAGGED NONE FORWARDING
> > 11/2 TRUNK TAGGED NONE FORWARDING
> > 12/8 TRUNK TAGGED NONE FORWARDING
> > 13/8 TRUNK TAGGED NONE FORWARDING
> > Arp Inspection: 0
> > DHCP Snooping: 0
> > IPv4 Multicast Snooping: Enabled - Passive
> > IPv6 Multicast Snooping: Disabled
> > No Virtual Interfaces configured for this vlan
> > IGMP snooping works, I'm able to see In/Out interfaces and current
> > active querier:
> > [3]telnet@lsr1-gdr.ki#show ip multicast vlan 450
> > ----------+-----+---------+---------------+-----+-----+------
> > VLAN State Mode Active Time (*, G)(S, G)
> > Querier Query Count Count
> > ----------+-----+---------+---------------+-----+-----+------
> > 450 Ena Passive 192.168.210.1 119 1 1
> > ----------+-----+---------+---------------+-----+-----+------
> > Router ports: 12/8 (11s)
> > Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join
> > 1 (*, 239.32.4.130) 00:34:48 NumOIF: 1 profile:
> none
> > Outgoing Interfaces:
> > e9/5 vlan 450 ( V2) 00:34:48/40s
> > 1 (91.238.195.1, 239.32.4.130) in e11/2 vlan 450 00:34:48
> NumOIF: 1
> > profile: none
> > Outgoing Interfaces:
> > TR(e9/5,e7/1) vlan 450 ( V2) 00:34:48/0s
> > FID: 0xa0a9 MVID: None
> > Right after multicast stream start flooding from Eth11/2 out of
> TR(e9/5,e7/1),
> > the CPU load on LP 11 increases:
> > [4]telnet@lsr1-gdr.ki#show cpu-utilization lp 11
> > 17:25:10 GMT+02 Tue Dec 13 2016
> > SLOT #: LP CPU UTILIZATION in %:
> > in 1 second: in 5 seconds: in 60 seconds: in 300
> seconds:
> > 11: 6 6 6 6
> > And I see these packets processed by LP CPU:
> > LP-11#debug packet capture include vlan-id 450
> > [...]
> > 91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
> > ************************************************************
> **********
> > [ppcr_tx_packet] ACTION: Forward packet using fid 0xa0a9
> > [xpp10ge_cpu_forward_debug]: Forward LP packet
> > Time stamp : 00 day(s) 11h 32m 49s:,
> > TM Header: [ 1022 00a9 a0a9 ]
> > Type: Multicast(0x00000000) Size: 34 Mcast ID: 0x9a0 Src Port: 2
> > Drp Pri: 2 Snp: 2 Exclude Src: 0 Cls: 0x00000001
> > ************************************************************
> **********
> > 00: a0a9 0403 5e50 41c2-7840 0abc 4400 0000 FID = 0xa0a9
> > 10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
> > 20: 0540 08df 0000 3d11-5cb4 5bee c301 ef20 VLAN = 450(0x01c2)
> > 30: 0482 07d0 07d0 052c-0000 4701 e11a 8534 CAM = 0x00055e
> > 40: 5a95 fb85 94ee 0b69-9938 967a c827 f571 SFLOW = 0
> > 50: 73cc 8e72 98cc 82e0-436e 30f1 4414 f400 DBL TAG = 0
> > 60: 11fd 7b2b c8be d9ca-d0fa 44d0 45b5 53e5
> > 70: a386 ac24 cc0b 9698-c0a2 ff65 9f32 6b14
> > Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
> > 4 0 0 11/2 3 0 1 0 1 1 1 1 0 0 1 0
> > 91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
> > ************************************************************
> **********
> > [ppcr_rx_packet]: Packet received
> > Time stamp : 00 day(s) 11h 32m 49s:,
> > TM Header: [ 0564 8a23 0040 ]
> > Type: Fabric Unicast(0x00000000) Size: 1380 Class: 4 Src sys port:
> 2595
> > Dest Port: 0 Drop Prec: 1 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src:
> 0 Sys mc: 0
> > ************************************************************
> **********
> > Packet size: 1374, XPP reason code: 0x00045286
> > 00: 05f0 0403 5c50 41c2-7841 fffe 4400 0000 FID = 0x05f0
> > 10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
> > 20: 0540 08e0 0000 3d11-5cb3 5bee c301 ef20 VLAN = 450(0x01c2)
> > 30: 0482 07d0 07d0 052c-0000 4701 e11f c052 CAM = 0x00ffff(R)
> > 40: e9df e2fb 1f9d 0c1d-354a 7df5 f0df edab SFLOW = 0
> > 50: 1145 566c 4c59 2557-f7cf c708 a75e 5a29 DBL TAG = 0
> > 60: 1704 9f8b 151c b66b-957a 51eb ac99 772d
> > 70: 07e7 23d7 f84a 50ac-5864 452d 7f70 0495
> > Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
> > 4 0 0 11/2 3 0 1 0 1 1 1 0 0 0 1 0
> > I have no idea why this happens. The "Multicast Guide" clearly states
> > that these packets should be processed in hardware.
> > Please advise!
> > Thanks!
> > --
> > MINO-RIPE
> > _______________________________________________
> > foundry-nsp mailing list
> > foundry-nsp@puck.nether.net
> > http://puck.nether.net/mailman/listinfo/foundry-nsp
> >
>
> --
> MINO-RIPE
>

--

E-Mail to and from me, in connection with the transaction
of public business, is subject to the Wyoming Public Records
Act and may be disclosed to third parties.
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
On Mon, Dec 19, 2016 at 11:20:36AM -0700, Daniel Schmidt wrote:
> It's been a while, but I wouldn't think that pim snooping would do a darn thing on l2;
> you would want igmp snooping. That said, I don't think that would work without at least a
> local l3 interface to respond to the queries. Otherwise, you might as well just use
> broadcast. (Not that I recommend that)

Hi!

I don't need this box to respond to and/or generate IGMP queries.
I just need it to snoop them in order to prevent unnecessary multicast
flooding out of all ports in a certain VLAN.

Again, my configuration is:

!
vlan 450 name ITCons2DS_test
tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
multicast passive
!

That's all. No L3 multicast routing at all, just IGMP snooping.

And with such configuration my MLXe-16 starts switching multicast
packets by LP CPU.

--
MINO-RIPE
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Doesn't work that way. You need something to generate the igmp queries - a
l3 interface. Without it, igmp snooping doesn't work and you get exactly
what you have - a broadcast being forwarded out all ports killing the CPU.

On Mon, Dec 19, 2016 at 11:37 AM, Alexander Shikoff <minotaur@crete.org.ua>
wrote:

> On Mon, Dec 19, 2016 at 11:20:36AM -0700, Daniel Schmidt wrote:
> > It's been a while, but I wouldn't think that pim snooping would not
> do a darn thing on l2,
> > you would want igmp snooping. That said, I don't think that would
> work without at least a
> > local l3 interface to respond to the queries. Otherwise, you might
> as well just use
> > broadcast. (Not that I recommend that)
>
> Hi!
>
> I don't need this box to respond and/or generate IGMP queries.
> I just need to snoop them in order to prevent unnecessary multicast
> flooding over all ports in certain VLAN.
>
> Again, my configuration is:
>
> !
> vlan 450 name ITCons2DS_test
> tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
> multicast passive
> !
>
> That's all. No any L3 multicast routing. Just IGMP snooping.
>
> And with such configuration my MLXe-16 starts switching multicast
> packets by LP CPU.
>
> --
> MINO-RIPE
>

--

Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
For IGMP snooping to work, there must be an L3 device acting as an
IGMP querier on your L2 domain (typically a router). This device is
in charge of keeping track of which IGMP clients have asked for which
multicast groups, and periodically asking if they still want it. The
MLX would not need to be the querier, but there has to be one in that
VLAN.

If there is no IGMP querier, your only real option would be to flood
all the multicast (unless you are connecting a group of routers that
are speaking PIM, then pim snooping might be able to help you).

--
Eldon

On Mon, Dec 19, 2016 at 11:45 AM, Daniel Schmidt <daniel.schmidt@wyo.gov> wrote:
> Doesn't work that way. You need something to generate the igmp queries - a
> l3 interface. Without it, igmp snooping doesn't work and you get exactly
> what you have - a broadcast being forwarded out all ports killing the CPU.
>
> On Mon, Dec 19, 2016 at 11:37 AM, Alexander Shikoff <minotaur@crete.org.ua>
> wrote:
>>
>> On Mon, Dec 19, 2016 at 11:20:36AM -0700, Daniel Schmidt wrote:
>> > It's been a while, but I wouldn't think that pim snooping would not
>> > do a darn thing on l2,
>> > you would want igmp snooping. That said, I don't think that would
>> > work without at least a
>> > local l3 interface to respond to the queries. Otherwise, you might
>> > as well just use
>> > broadcast. (Not that I recommend that)
>>
>> Hi!
>>
>> I don't need this box to respond and/or generate IGMP queries.
>> I just need to snoop them in order to prevent unnecessary multicast
>> flooding over all ports in certain VLAN.
>>
>> Again, my configuration is:
>>
>> !
>> vlan 450 name ITCons2DS_test
>> tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
>> multicast passive
>> !
>>
>> That's all. No any L3 multicast routing. Just IGMP snooping.
>>
>> And with such configuration my MLXe-16 starts switching multicast
>> packets by LP CPU.
>>
>> --
>> MINO-RIPE
>
>
>
>
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Hi!

On Mon, Dec 19, 2016 at 04:14:28PM -0700, Eldon Koyle wrote:
> For IGMP snooping to work, there must be an L3 device acting as an
> IGMP querier on your L2 domain (typically a router). This device is
> in charge of keeping track of which IGMP clients have asked for which
> multicast groups, and periodically asking if they still want it. The
> MLX would not need to be the querier, but there has to be one in that
> VLAN.
>
> If there is no IGMP querier, your only real option would be to flood
> all the multicast (unless you are connecting a group of routers that
> are speaking PIM, then pim snooping might be able to help you).

There IS one querier:

telnet@lsr1-gdr.ki#show ip multicast vlan 450
----------+-----+---------+---------------+-----+-----+------
VLAN State Mode Active Time (*, G)(S, G)
Querier Query Count Count
----------+-----+---------+---------------+-----+-----+------
450 Ena Passive 192.168.210.1 119 1 1
----------+-----+---------+---------------+-----+-----+------

Router ports: 12/8 (11s)

Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join

1 (*, 239.32.4.130) 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
e9/5 vlan 450 ( V2) 00:34:48/40s

1 (91.238.195.1, 239.32.4.130) in e11/2 vlan 450 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
TR(e9/5,e7/1) vlan 450 ( V2) 00:34:48/0s
FID: 0xa0a9 MVID: None


--
MINO-RIPE
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Multicast is always loads of fun to debug! Not really though.

So it looks like your upstream querier (router) for VLAN 450 is hanging off of e9/5, while the multicast source is on e11/2 and the subscriber to the multicast group is on e7/1. Does this sound correct?

If you're seeing high LP usage and suspect multicast traffic is being processed by the LP CPU and not the hardware, this normally tells me the hardware tables are being repeatedly reprogrammed or there is some table churn happening somewhere. It's normal to see a burst of high LP CPU at the instant the initial hardware programming takes place, but this should only last a second or so before the FPGA takes over.

I've dealt a lot with this in the past and here are some things that I've seen cause this:

1) A hardware defect on the LP. This is very rare, and when it does happen on routers running supported code you would normally also see sys-mon alerts in the logs.

2) A duplicate multicast group address or source IP used for two different multicast streams. An example would be two encoding devices using the same group, or even the same IP address on their interfaces. Even in separate VLANs this would cause issues, because it triggers the MLXe to constantly update the FID entry on the LP. I've seen this a few times in the past.

3) Duplicate IP or MAC on two subscribers. This is more common than you think with Linux host-based receivers or semi-custom devices that are manually programmed. Sometimes a sysadmin copies the ifcfg file between two hosts and forgets to delete the MAC from the file. I've also seen hardware-based devices where the IP address and/or MAC is flashed onto the device and an older config is re-used, causing a duplication.

4) Something going on in our code. If you have a LAG between the two routers, try disabling all but one of the ports if possible. Bump the last port to make sure the table entries are reset and see if the issue still happens. If so, contact TAC, because there may be something going on with multicast FID programming on the LAG.

If possible, also try moving both the source and receiver to the same MLXe in a separate VLAN and just set that VLAN to 'multicast active'; make sure there's a VE with an IP on that local VLAN though.

With multicast active, the router provides the same IGMP messaging that's needed for IGMP Snooping or Multicast Passive to work, but without needing to run PIM. It can be a good way to confirm if this is an issue with PIM or a lower level issue.
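A minimal sketch of such a test VLAN (the VLAN number, VE number, ports, and addressing here are made up for illustration, not taken from the thread):

```
vlan 999 name MCAST_TEST
 tagged ethe 11/2 ethe 7/1
 multicast active
 router-interface ve 99
!
interface ve 99
 ip address 192.0.2.1/24
!
```

With 'multicast active' the MLX itself sends the IGMP queries on VE 99, so no external querier is needed for this isolated test.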

I can tell you that this normally works very well on the MLX. I have customers pushing 3.2 terabits of multicast on an MLXe-16 with all LPs in the 1-3% CPU range. The trick is figuring out what's causing table churn at the LP level and preventing the hardware table from being used.


Wilbur


________________________________________
From: foundry-nsp <foundry-nsp-bounces@puck.nether.net> on behalf of Alexander Shikoff <minotaur@crete.org.ua>
Sent: Tuesday, December 20, 2016 03:57 AM
To: Eldon Koyle
Cc: foundry-nsp
Subject: Re: [f-nsp] Multicast is being switched by LP CPU on MLXe?

Hi!

On Mon, Dec 19, 2016 at 04:14:28PM -0700, Eldon Koyle wrote:
> For IGMP snooping to work, there must be an L3 device acting as an
> IGMP querier on your L2 domain (typically a router). This device is
> in charge of keeping track of which IGMP clients have asked for which
> multicast groups, and periodically asking if they still want it. The
> MLX would not need to be the querier, but there has to be one in that
> VLAN.
>
> If there is no IGMP querier, your only real option would be to flood
> all the multicast (unless you are connecting a group of routers that
> are speaking PIM, then pim snooping might be able to help you).

There IS one querier:

telnet@lsr1-gdr.ki#show ip multicast vlan 450
----------+-----+---------+---------------+-----+-----+------
VLAN State Mode Active Time (*, G)(S, G)
Querier Query Count Count
----------+-----+---------+---------------+-----+-----+------
450 Ena Passive 192.168.210.1 119 1 1
----------+-----+---------+---------------+-----+-----+------

Router ports: 12/8 (11s)

Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join

1 (*, 239.32.4.130) 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
e9/5 vlan 450 ( V2) 00:34:48/40s

1 (91.238.195.1, 239.32.4.130) in e11/2 vlan 450 00:34:48 NumOIF: 1 profile: none
Outgoing Interfaces:
TR(e9/5,e7/1) vlan 450 ( V2) 00:34:48/0s
FID: 0xa0a9 MVID: None


--
MINO-RIPE
Re: Multicast is being switched by LP CPU on MLXe? [ In reply to ]
Hello!



On Tue, Dec 20, 2016 at 08:31:25PM +0000, Wilbur Smith wrote:
> Multicast is always loads of fun to debug! Not really though.
>
> So it looks like your upstream querier (router) for VLAN 450 is hanging off of e9/5, while the multicast source in on e11/12 and the subscriber to the multicast group is on e 7/1. Does this sound correct?
>
> If you're seeing high LP usage and suspect cast traffic is being processed by the LP and not the hardware, this normally tells me the hardware tables are being repeatedly reprogramed or there is some table churn happening somewhere. It's normal to see a burst of high LP CPU in the instant the initial hardware programming takes place, but this should only last 1 second or so and the FPGA takes over.
>
> I've dealt a lot with this in the past and here are some things that I've seen cause this:
>
> 1) A hardware defect on the LP. This is very rare and when this does happen on routers with supported code normally you would also see sys-mon alerts in the logs
>
> 2) A duplicate multicast group address or source IP is used for two different cast streams. Example would be for encoding devices both using the same group or even the same IP address on their interface. Even if they were in separate VLANs this would cause issues because it triggers the MLXe to constantly update the FID entry on the LP. I've seen this a few times in the past.
>
> 3) Duplicate IP or MAC on two subscribers. This is more common that you think with Linux host based receivers or a semi-custom device that is manually programmed. Sometimes a sys-admin copies the ifcfg file between two hosts and forgets to delete the MAC from the file. I've also seen hardware based devices where the IP address and/or MAC is flashed on to the device and an older config is re-used causing a duplication.
>
> 4) Something going on in our code. If you have a LAG between the two routers, try disabling all but one of the ports if possibly. Bump the last port to make sure the table entries are reset and see it the issue still happens. If so, contact TAC because there may be something going on with cast FID programming on the LAG.
>
> If possible, also try moving both the source and receiver to the same MLXe in a separate VLAN and just set that VLAN to 'multicast active'; make sure there's a VE with an IP on local VLAN though.
>
> With multicast active, the router provides the same IGMP messaging that's needed for IGMP Snooping or Multicast Passive to work, but without needing to run PIM. It can be a good way to confirm if this is an issue with PIM or a lower level issue.
>
> I can tell you that this does normally work very well on the MLX. I have some customers pushing 3.2 Terrabits of multicast on an MLXe-16 all with LPs in the 1-3% CPU range. The trick is to figure out what's causing table turn at the LP level and preventing the hardware table from being used.

Dear Wilbur,

Apologies for the delay in replying.
Thank you for all the clues; I need some time to check them.

Meanwhile I've discovered the same problem in a different VLAN,
with a simpler configuration.

VLAN 779 consists of three ports. No LACP LAGs, no special
ip multicast configuration:

telnet@lsr1-gdr.ki#show vlan 779

PORT-VLAN 779, Name V779_Cosmonova_Multicast, Priority Level 0, Priority Force 0, Creation Type STATIC
Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
L2 protocols : NONE
Statically tagged Ports : ethe 1/2 ethe 10/2 ethe 12/3
Associated Virtual Interface Id: NONE
----------------------------------------------------------
Port Type Tag-Mode Protocol State
1/2 PHYSICAL TAGGED NONE FORWARDING
10/2 PHYSICAL TAGGED NONE FORWARDING
12/3 PHYSICAL TAGGED NONE FORWARDING
Arp Inspection: 0
DHCP Snooping: 0
IPv4 Multicast Snooping: Disabled
IPv6 Multicast Snooping: Disabled

No Virtual Interfaces configured for this vlan

telnet@lsr1-gdr.ki#show run | b 779
vlan 779 name V779_Cosmonova_Multicast
tagged ethe 1/2 ethe 10/2 ethe 12/3
!

Port 12/3 here is connected to a multicast source. Ports 1/2 and 10/2
are connected to multicast customers. There is neither IGMP nor PIM,
just some static multicast groups.

And I see that multicast packets in this VLAN are being switched
by LP CPU:

LP-12#debug packet capture rx max 2
Rx capture enabled, maximum capture count 2.

[ppcr_rx_packet]: Packet received
Time stamp : 13 day(s) 14h 44m 53s:,
TM Header: [ 0564 eb24 0000 ]
Type: Fabric Unicast(0x00000000) Size: 1380 Class: 7 Src sys port: 2852
Dest Port: 0 Drop Prec: 0 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
**********************************************************************
Packet size: 1374, XPP reason code: 0x000682b5
00: 05f0 c403 5c50 e30b-8481 fffe 0e00 0000 FID = 0x05f0
10: 0100 5e00 0083 f8c0-01e0 0a28 0800 4500 Offset = 0x10
20: 0540 0000 4000 1e11-fc4b 0a31 6cad e400 VLAN = 779(0x030b)
30: 0083 e1bd 04d2 052c-3c4a 4700 3113 4a9e CAM = 0x00ffff(R)
40: d024 f70f 5ee4 5e6b-389c f6a8 0bc4 f163 SFLOW = 0
50: f292 9e41 fdb0 282b-4af5 bb2f f9ab 7543 DBL TAG = 0
60: abcc 6e4f b66b 0846-f71e acc1 a676 614d
70: c1ab 018c 8e9f 5c16-0490 e345 9fb9 be74
Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
7 0 0 12/3 3 0 1 0 1 1 1 0 0 0 1 0

10.49.108.173 -> 228.0.0.131 UDP [57789 -> 1234]
**********************************************************************
[ppcr_tx_packet] ACTION: Forward packet using fid 0xa018
Packet capture reached the limit (2). Issue the command again to activate it.
Packet Capture Disabled.


In my understanding this should not happen.
The packet arrives on port 12/3 with src MAC f8c0.01e0.0a28 and dst
multicast MAC 0100.5e00.0083, and it should simply be flooded out of
ports 1/2 and 10/2. But instead it is going to the LP CPU.
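For reference, that dst MAC is exactly the standard RFC 1112 mapping of the group address: the low 23 bits of the IPv4 group are copied into the 01-00-5e block. A quick sketch to check it (function name is mine, groups taken from the captures in this thread):

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Derive the Ethernet MAC for an IPv4 multicast group (RFC 1112).

    The 01-00-5e prefix plus the low 23 bits of the group address;
    the top bit of the second octet is discarded, so 32 groups
    share each MAC.
    """
    b = ipaddress.IPv4Address(group).packed
    return "0100.5e%02x.%02x%02x" % (b[1] & 0x7F, b[2], b[3])

print(multicast_mac("228.0.0.131"))   # 0100.5e00.0083, as in the VLAN 779 capture
print(multicast_mac("239.32.4.130"))  # 0100.5e20.0482, as in the VLAN 450 capture
```

So the frame itself is an ordinary L2 multicast; nothing about the addressing explains the CPU punt.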

What's wrong with such easy configuration?

--
MINO-RIPE