Mailing List Archive

Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
Hi,

I have a legacy Extreme Networks Summit1iTx L3 switch (running EW
7.8.4.1) which delays ICMP traffic routed through the switch and ICMP
traffic addressed to the switch, while traffic switched through the
switch is not affected. For some odd reason, only ICMP traffic seems
to be affected and not, for example, TCP or UDP traffic. There is a
clear correlation between high RTT and low CPU utilization (less than
90%) of the tBGTask process. It is always the tNetTask process whose
CPU usage increases and thus forces tBGTask utilization below 90%.
According to the ExtremeWare 7.0 software manual, the tNetTask
(network stack task) process is responsible for handling all the
software-based processing of packets, including:

1) Packets that cannot be handled by the switch's ASIC because the
forwarding tables do not have entries built in.
2) Packets destined to the CPU for one of the router interfaces.
3) Packets that must be examined or snooped by the CPU (packets
detected for copying to the CPU).

Has anyone seen such behavior before? Why does low CPU utilization of
the tBGTask process affect only ICMP traffic? Any suggestions on how
to see exactly which traffic causes the CPU utilization of the
tNetTask process to increase?
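
For illustration, one way to get some visibility would be to mirror
the suspect port to a capture host and tally the traffic there. A
minimal sketch, assuming scapy is installed, root privileges, and a
placeholder interface name "eth0" facing the mirror port:

#!/usr/bin/env python3
# Tally mirrored traffic by protocol and flag typical slow-path
# candidates (IP options, expiring TTL). "eth0" is a placeholder.
from collections import Counter
from scapy.all import sniff, IP, ICMP, TCP, UDP

proto = Counter()
talkers = Counter()

def tally(pkt):
    if IP not in pkt:
        proto["non-IP"] += 1
        return
    ip = pkt[IP]
    name = ("ICMP" if ICMP in pkt else
            "TCP" if TCP in pkt else
            "UDP" if UDP in pkt else "IP proto %d" % ip.proto)
    proto[name] += 1
    if ip.ihl and ip.ihl > 5:
        proto["IP with options"] += 1
    if ip.ttl <= 1:
        proto["TTL<=1"] += 1
    talkers[(name, ip.src, ip.dst)] += 1

sniff(iface="eth0", prn=tally, store=False, timeout=60)
print(proto.most_common())
for flow, count in talkers.most_common(10):
    print(count, flow)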



thanks,
Martin
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
ICMP traffic is always software forwarded in EWare....so the RTT is higher
when compared to other traffic that is hardware forwarded....
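
For illustration, one quick way to see this difference is to compare
the RTT to a destination routed through the switch against the RTT to
one reached purely at L2. A minimal sketch, assuming a Linux ping and
placeholder addresses 192.0.2.1 (routed) and 192.0.2.2 (switched):

#!/usr/bin/env python3
# Compare average ping RTT to a routed destination vs. an L2-switched one.
# Placeholder addresses; assumes the Linux iputils ping summary format.
import subprocess

def avg_rtt_ms(host, count=20):
    out = subprocess.run(["ping", "-q", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "min/avg/max" in line:          # rtt min/avg/max/mdev = a/b/c/d ms
            return float(line.split("=")[1].split("/")[1])
    return None

print("routed (ICMP hits the switch CPU):", avg_rtt_ms("192.0.2.1"), "ms")
print("switched (hardware path only):    ", avg_rtt_ms("192.0.2.2"), "ms")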

Regards,
Shankar
On 25 Aug 2014 20:56, "Martin T" <m4rtntns@gmail.com> wrote:

> Hi,
>
> I have a legacy Extreme Networks Summit1iTx L3 switch (running EW
> 7.8.4.1) which delays ICMP traffic routed through the switch and ICMP
> traffic addressed to the switch, while traffic switched through the
> switch is not affected. For some odd reason, only ICMP traffic seems
> to be affected and not, for example, TCP or UDP traffic. There is a
> clear correlation between high RTT and low CPU utilization (less than
> 90%) of the tBGTask process. It is always the tNetTask process whose
> CPU usage increases and thus forces tBGTask utilization below 90%.
> According to the ExtremeWare 7.0 software manual, the tNetTask
> (network stack task) process is responsible for handling all the
> software-based processing of packets, including:
>
> 1) Packets that cannot be handled by the switch's ASIC because the
> forwarding tables do not have entries built in.
> 2) Packets destined to the CPU for one of the router interfaces.
> 3) Packets that must be examined or snooped by the CPU (packets
> detected for copying to the CPU).
>
> Has anyone seen such behavior before? Why does low CPU utilization of
> the tBGTask process affect only ICMP traffic? Any suggestions on how
> to see exactly which traffic causes the CPU utilization of the
> tNetTask process to increase?
>
>
>
> thanks,
> Martin
> _______________________________________________
> extreme-nsp mailing list
> extreme-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/extreme-nsp
>
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
I see, thanks! However, any suggestions on how to see exactly which
traffic causes the CPU utilization of the tNetTask process to
increase? It looks like, in my case, it's not the ICMP traffic which
actually loads the tNetTask process.



regards,
Martin

On 8/26/14, Shankar <shankarp5@gmail.com> wrote:
> ICMP traffic is always software forwarded in EWare....so the RTT is higher
> when compared to other traffic that is hardware forwarded....
>
> Regards,
> Shankar
> On 25 Aug 2014 20:56, "Martin T" <m4rtntns@gmail.com> wrote:
>
>> Hi,
>>
>> I have a legacy Extreme Networks Summit1iTx L3 switch (running EW
>> 7.8.4.1) which delays ICMP traffic routed through the switch and ICMP
>> traffic addressed to the switch, while traffic switched through the
>> switch is not affected. For some odd reason, only ICMP traffic seems
>> to be affected and not, for example, TCP or UDP traffic. There is a
>> clear correlation between high RTT and low CPU utilization (less than
>> 90%) of the tBGTask process. It is always the tNetTask process whose
>> CPU usage increases and thus forces tBGTask utilization below 90%.
>> According to the ExtremeWare 7.0 software manual, the tNetTask
>> (network stack task) process is responsible for handling all the
>> software-based processing of packets, including:
>>
>> 1) Packets that cannot be handled by the switch's ASIC because the
>> forwarding tables do not have entries built in.
>> 2) Packets destined to the CPU for one of the router interfaces.
>> 3) Packets that must be examined or snooped by the CPU (packets
>> detected for copying to the CPU).
>>
>> Has anyone seen such behavior before? Why does low CPU utilization of
>> the tBGTask process affect only ICMP traffic? Any suggestions on how
>> to see exactly which traffic causes the CPU utilization of the
>> tNetTask process to increase?
>>
>>
>>
>> thanks,
>> Martin
>> _______________________________________________
>> extreme-nsp mailing list
>> extreme-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/extreme-nsp
>>
>
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
Check for ipstats or l2stats for clues. ...
I see, thanks! However, any suggestions on how to see exactly which
traffic causes the CPU utilization of the tNetTask process to
increase? It looks like, in my case, it's not the ICMP traffic which
actually loads the tNetTask process.



regards,
Martin

On 8/26/14, Shankar <shankarp5@gmail.com> wrote:
> ICMP traffic is always software forwarded in EWare....so the RTT is higher
> when compared to other traffic that is hardware forwarded....
>
> Regards,
> Shankar
> On 25 Aug 2014 20:56, "Martin T" <m4rtntns@gmail.com> wrote:
>
>> Hi,
>>
>> I have a legacy Extreme Networks Summit1iTx L3 switch (running EW
>> 7.8.4.1) which delays ICMP traffic routed through the switch and ICMP
>> traffic addressed to the switch, while traffic switched through the
>> switch is not affected. For some odd reason, only ICMP traffic seems
>> to be affected and not, for example, TCP or UDP traffic. There is a
>> clear correlation between high RTT and low CPU utilization (less than
>> 90%) of the tBGTask process. It is always the tNetTask process whose
>> CPU usage increases and thus forces tBGTask utilization below 90%.
>> According to the ExtremeWare 7.0 software manual, the tNetTask
>> (network stack task) process is responsible for handling all the
>> software-based processing of packets, including:
>>
>> 1) Packets that cannot be handled by the switch's ASIC because the
>> forwarding tables do not have entries built in.
>> 2) Packets destined to the CPU for one of the router interfaces.
>> 3) Packets that must be examined or snooped by the CPU (packets
>> detected for copying to the CPU).
>>
>> Has anyone seen such behavior before? Why does low CPU utilization of
>> the tBGTask process affect only ICMP traffic? Any suggestions on how
>> to see exactly which traffic causes the CPU utilization of the
>> tNetTask process to increase?
>>
>>
>>
>> thanks,
>> Martin
>> _______________________________________________
>> extreme-nsp mailing list
>> extreme-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/extreme-nsp
>>
>
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
On Tue, 26 Aug 2014, Shankar wrote:

> ICMP traffic is always software forwarded in EWare....so the RTT is
> higher when compared to other traffic that is hardware forwarded....

Well, that is not strictly true. If you have newer ASICs in your hardware
(manufactured after 2005 or so depending on platform), you can do "disable
icmp access-list" (if I remember correctly) and ICMP will be hardware
forwarded, but the downside is that ICMP becomes "invisible" to access-lists.

Otherwise all traffic is punted to CPU for forwarding and will experience
increased latency depending on current CPU load, and also will max out at
approximately 10 megabit/s or so.
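
For illustration, a very crude way to get a feel for that ceiling is
to fire a burst of large ICMP echoes and see at what rate replies come
back. A rough sketch, assuming scapy, root privileges, and a
placeholder target 192.0.2.1 reached via the CPU path; the printed
number is only a coarse estimate and the probe itself adds CPU load:

#!/usr/bin/env python3
# Crude slow-path throughput probe: send a burst of large ICMP echoes
# and estimate the reply rate. Placeholder target address.
import time
from scapy.all import IP, ICMP, Raw, sr

TARGET = "192.0.2.1"           # placeholder test address
COUNT = 200
PAYLOAD = b"x" * 1400          # ~1.4 kB per echo request

pkts = [IP(dst=TARGET) / ICMP(id=0x1234, seq=i) / Raw(load=PAYLOAD)
        for i in range(COUNT)]

t0 = time.monotonic()
answered, unanswered = sr(pkts, timeout=5, verbose=0)
elapsed = time.monotonic() - t0

bits = sum(len(bytes(req)) for req, _ in answered) * 8
print("answered %d/%d echoes in %.2f s -> roughly %.1f Mbit/s"
      % (len(answered), COUNT, elapsed, bits / elapsed / 1e6))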

--
Mikael Abrahamsson email: swmike@swm.pp.se
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
Thanks for all the information! So if the utilization of the tBGTask
process is less than 90%, shouldn't all the other IP traffic suffer as
well? For some odd reason I didn't observe this. Based on my tests
(hping and a custom-made script utilizing bash UDP and TCP sockets),
only the RTT of ICMP traffic increased, while the RTT of, for example,
UDP or TCP traffic did not change even when the CPU utilization of
tBGTask was <90%.
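
For reference, a minimal sketch of that kind of comparison, assuming a
placeholder target 192.0.2.1 reached through the switch, a Linux ping
for the ICMP side, and an open TCP port (22 here) for the TCP side;
a UDP probe would look similar with a datagram socket against a
service that answers:

#!/usr/bin/env python3
# Compare ICMP RTT (system ping) with TCP connect RTT to the same target.
# Placeholder address and port; run repeatedly while watching tBGTask/tNetTask.
import socket, statistics, subprocess, time

TARGET = "192.0.2.1"      # placeholder test address behind the switch
SAMPLES = 20

def icmp_rtt_ms():
    out = subprocess.run(["ping", "-c", "1", "-W", "2", TARGET],
                         capture_output=True, text=True).stdout
    for token in out.split():
        if token.startswith("time="):      # e.g. "time=0.734"
            return float(token[5:])
    return None                            # lost or timed out

def tcp_rtt_ms(port=22):
    start = time.monotonic()
    try:
        with socket.create_connection((TARGET, port), timeout=2):
            pass
    except OSError:
        return None
    return (time.monotonic() - start) * 1000.0

def report(label, probe):
    rtts = [r for r in (probe() for _ in range(SAMPLES)) if r is not None]
    if rtts:
        print("%-4s n=%-2d median=%6.2f ms  max=%6.2f ms"
              % (label, len(rtts), statistics.median(rtts), max(rtts)))
    else:
        print("%-4s no replies" % label)

report("ICMP", icmp_rtt_ms)
report("TCP", tcp_rtt_ms)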


thanks,
Martin

On 9/1/14, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 26 Aug 2014, Shankar wrote:
>
>> ICMP traffic is always software forwarded in EWare....so the RTT is
>> higher when compared to other traffic that is hardware forwarded....
>
> Well, that is not strictly true. If you have newer ASICs in your hardware
> (manufactured after 2005 or so depending on platform), you can do "disable
> icmp access-list" (if I remember correctly) and ICMP will be hardware
> forwarded, but the downside is that ICMP becomes "invisible" to access-lists.
>
> Otherwise all traffic is punted to CPU for forwarding and will experience
> increased latency depending on current CPU load, and also will max out at
> approximately 10 megabit/s or so.
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> extreme-nsp mailing list
> extreme-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/extreme-nsp
>
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
On Mon, 1 Sep 2014, Martin T wrote:

> Thanks for all the information! So if the utilization of the tBGTask
> process is less than 90%, shouldn't all the other IP traffic suffer as
> well?

No. By default ICMP is slow-pathed, everything else is fast-pathed. ICMP
can be made fast-pathed on some incarnations of i-chipset platforms by
issuing "disable icmp access-lists". They re-spun the forwarding ASICs
back in 2004-2005 or so to enable this feature: because the Inferno
chipset in the original design punted all ICMP to the CPU, they
introduced a system-wide flag to disable this behavior. Since the ASIC
doesn't have the capability to inspect ICMP in the fast path, when you
change this flag you can no longer apply ACLs to ICMP at all.

If you're running i-chipset devices as routers and have a decent amount
of flows, I also recommend the "enable ip-subnet lookup" feature. A mask
size of /20 or so is a balanced approach for Internet use. This helps to
avoid depleting the ipfdb and also speeds up convergence after a
re-route.
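
To illustrate the arithmetic, a toy sketch of how much a /20
subnet-lookup mask can shrink the number of forwarding entries
compared to per-host ipfdb entries; the destination set below is
entirely made up and this is just the counting argument, not a model
of the real ipfdb:

#!/usr/bin/env python3
# Toy illustration: per-host forwarding entries vs. per-/20 subnet entries
# for a made-up set of active destinations.
import ipaddress
import random

random.seed(1)
nets = [ipaddress.ip_network(n) for n in
        ("198.51.100.0/24", "203.0.113.0/24", "192.0.2.0/24", "100.64.0.0/16")]

dests = set()
for _ in range(50000):                       # pretend 50k active flows
    net = random.choice(nets)
    dests.add(net[random.randrange(net.num_addresses)])

per_host = len(dests)
per_slash20 = len({ipaddress.ip_network("%s/20" % d, strict=False)
                   for d in dests})
print("per-host entries :", per_host)
print("per-/20 entries  :", per_slash20)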

> For some odd reason I didn't observe this. Based on my tests (hping
> and a custom-made script utilizing bash UDP and TCP sockets), only the
> RTT of ICMP traffic increased, while the RTT of, for example, UDP or
> TCP traffic did not change even when the CPU utilization of tBGTask
> was <90%.

That is the default behavior.

--
Mikael Abrahamsson email: swmike@swm.pp.se
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp
Re: Extreme Networks Summit1iTx L3 switch delays ICMP traffic routed through the switch or addressed to the switch
Mikael,

in that case it all makes sense. Thanks!


regards,
Martin

On 9/1/14, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Mon, 1 Sep 2014, Martin T wrote:
>
>> Thanks for all the information! So if the utilization of the tBGTask
>> process is less than 90%, shouldn't all the other IP traffic suffer as
>> well?
>
> No. By default ICMP is slow-pathed, everything else is fast-pathed. ICMP
> can be made fast-pathed on some incarnations of i-chipset platforms by
> issuing "disable icmp access-lists". They re-spun the forwarding ASICs
> back in 2004-2005 or so to enable this feature: because the Inferno
> chipset in the original design punted all ICMP to the CPU, they
> introduced a system-wide flag to disable this behavior. Since the ASIC
> doesn't have the capability to inspect ICMP in the fast path, when you
> change this flag you can no longer apply ACLs to ICMP at all.
>
> If you're running i-chipset devices as routers and have a decent amount
> of flows, I also recommend the "enable ip-subnet lookup" feature. A mask
> size of /20 or so is a balanced approach for Internet use. This helps to
> avoid depleting the ipfdb and also speeds up convergence after a
> re-route.
>
>> For some odd reason I didn't observe this. Based on my tests (hping
>> and a custom-made script utilizing bash UDP and TCP sockets), only the
>> RTT of ICMP traffic increased, while the RTT of, for example, UDP or
>> TCP traffic did not change even when the CPU utilization of tBGTask
>> was <90%.
>
> That is the default behavior.
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
_______________________________________________
extreme-nsp mailing list
extreme-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/extreme-nsp