Mailing List Archive

icmp problems tracing through m20's
Hi

Has anyone ever experienced problems where ICMP traces (using mtr) to
destinations through an M20 show no packet loss at the last hop, but
varying amounts of packet loss on one of the Juniper interfaces?

Tracing to the Juniper itself shows no packet loss.

It has been suggested that this may be down to default ICMP throttling
on the Junipers. Does anyone know anything about this?

Thanks

David
icmp problems tracing through m20's [ In reply to ]
Hello,

I have not messed with mtr, but I can confirm that ICMP rate limiting
on the fxp1 interface will result in some packet loss when performing
rapid (flood) pings destined to a PFE interface (this traffic must
transit fxp1). A recent email indicated these parameters are now in
effect; I have not confirmed them:

The default rate limit is 50 per second per logical interface, and I
think 500 per box per second.
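
For what it's worth, if you ever need an explicit, configurable limit of
the same shape, a policer sketch along the lines below would do it. The
names and numbers are illustrative only (not the built-in values), and
applied as an lo0.0 input filter it only governs host-bound ICMP; it does
not replace the internal limiter:

firewall {
    policer icmp-limit {
        if-exceeding {
            bandwidth-limit 32k;      /* about 50 standard 84-byte pings/sec */
            burst-size-limit 15k;
        }
        then discard;
    }
    filter re-icmp {
        term icmp {
            from {
                protocol icmp;
            }
            then {
                policer icmp-limit;   /* in-profile packets fall through and are accepted */
            }
        }
        term rest {
            then accept;
        }
    }
}

Apply it with "set interfaces lo0 unit 0 family inet filter input re-icmp"
so it sits in front of the Routing Engine.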



icmp problems tracing through m20's [ In reply to ]
Would this rate limiting show up in "show system statistics icmp"?
Because I've got the following on this router:

0 drops due to rate limit

Cheers

david


icmp problems tracing through m20's [ In reply to ]
Yes, they should show up as ICMP drops. I have:

.5                  .6
r3------------------r4

      10.0.2.4/30

At r3 (ping's -l flag preloads that many packets as fast as possible,
which is what trips the limiter):

root@r3% ping -l 100 10.0.2.6
PING 10.0.2.6 (10.0.2.6): 56 data bytes
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.....................................................................
.........
round-trip min/avg/max/stddev = 0.642/1.757/41.217/3.911 ms

At r4:
[edit]
lab@r4# run show system statistics | find icmp
icmp:
173 drops due to rate limit <<<
0 calls to icmp_error
0 errors not generated because old message was icmp
Output histogram:
echo reply: 25703
0 messages with bad code fields
0 messages less than the minimum length
0 messages with bad checksum


[edit]
lab@r4# run show system statistics | find icmp
icmp:
181 drops due to rate limit <<<
0 calls to icmp_error
0 errors not generated because old message was icmp
Output histogram:
echo reply: 28611
0 messages with bad code fields
0 messages less than the minimum length
0 messages with bad checksum
0 messages with bad source address
0 messages with bad length
0 echo drops with broadcast or multicast destinaton address

Are there any policed discards occurring on the interface being pinged?

[edit]
lab@r4# run show interfaces so-0/1/0 extensive | match polic
Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Giants: 0,
Bucket drops: 0, Policed discards: 0,
Policing bucket: Disabled




icmp problems tracing through m20's [ In reply to ]
On Thu, May 22, 2003 at 06:59:50PM +0100, David Brazewell wrote:

> has anyone ever experienced problems where icmp traces (using mtr) to
> destinations through an m20 show no packet loss at the last hop but
> varying amounts of packet loss on one of the juniper interfaces?
>
> tracing to the juniper itself shows no packet loss.

What version of JunOS are you running? We had this when we were running
5.2R2.3, but it went away when we recently upgraded to 5.6.
--
Niels Raijer | "For every choice that ends up wrong
niels@fusix.nl | Another one's right
http://www.fusix.nl | A change of scene would sure be great"
icmp problems tracing through m20's [ In reply to ]
On Thu, 22 May 2003, David Brazewell wrote:

> has anyone ever experienced problems where icmp traces (using mtr) to
> destinations through an m20 show no packet loss at the last hop but
> varying amounts of packet loss on one of the juniper interfaces?

Hi Dave,

Unless you've cranked up the packet rate on mtr (from the default of 1
packet per second), I wouldn't have thought that rate limiting would be
a factor here unless you've specifically set the rate limit really low;
the defaults shouldn't be an issue.
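
(For reference, if you did want to make the limiter bite from a single
source, recent mtr builds accept a fractional -i; -n, -r, -c and -i are
standard mtr flags, the hostname is a placeholder:

# ~50 probes/sec for 300 report cycles, no DNS; sub-second -i needs root
mtr -n -r -c 300 -i 0.02 target.example.net

At a 0.02 s interval that is 50 probes per second, right at the 50 pps
per-interface figure mentioned earlier in the thread.)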

> tracing to the juniper itself shows no packet loss.

Have you tried running mtr with the -n flag to make sure that the
interface you are seeing the packet loss on is the same one you are
tracing to directly? Do you have default-address-selection enabled? It
might also be worthwhile to set up a filter to log ICMP packets from the
IP you're running mtr from (a sketch follows below), and/or to tcpdump
the LAN segment(s) it's on, to make sure the packets are making it all
the way to the M20.
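
Something along these lines would do for the logging filter; the address
and names are placeholders, and the last term preserves normal
forwarding:

firewall {
    filter watch-mtr {
        term probe-traffic {
            from {
                source-address {
                    192.0.2.10/32;    /* the host running mtr -- placeholder */
                }
                protocol icmp;
            }
            then {
                log;                  /* inspect with "show firewall log" */
                count mtr-icmp;
            }
        }
        term rest {
            then accept;
        }
    }
}

Apply it as an input filter on the interface facing the probe host; log
and count are non-terminating actions, so matching packets are still
accepted, and you can compare the counter against what mtr reports.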

Do you monitor the load on the M20? Is it busy? Does "show system
statistics icmp" show any reasons for drops other than rate limiting?

HTH,

Rich
icmp problems tracing through m20's [ In reply to ]
On Thu, May 22, 2003 at 09:34:41PM +0100, variable@ednet.co.uk wrote:
> On Thu, 22 May 2003, David Brazewell wrote:
>
> > has anyone ever experienced problems where icmp traces (using mtr) to
> > destinations through an m20 show no packet loss at the last hop but
> > varying amounts of packet loss on one of the juniper interfaces?
>
> Hi Dave,
>
> Unless you've cranked up the packet rate on MTR (from the default 1 packet
> per second), I wouldn't have thought that the rate limiting would be a
> factor on this unless you specifically set the rate limit really low (the
> defaults shouldn't be an issue).

Last I knew, this was an unchangeable default that Juniper imposed
in a software release.

> > tracing to the juniper itself shows no packet loss.
>
> Have your tried running mtr with the -n flag to make sure that the packet
> interface you are seeing that packet loss on is the same one as the one
> you are tracing to directly? Do you have default-address-selection
> enabled? Might also be worthwhile either setting up a filter to log ICMP
> packets from the IP you're running MTR from and/or tcpdumping the LAN
> segment(s) it's on to make sure the packets are making all the way to the
> M20.
>
> Do you monitor the load on the M20? Is it busy? Does show system
> statistics icmp show any other reasons for drops other than rate limiting?

We've seen interesting things on some devices if you are using the
http://www.secsup.org/Tracking/ (backscatter) style traceback and
there is a large DoS that you blackhole on those routers. There are
some ways you can detect it, I believe, if you log in to the FPC. I
believe there is an ER open to make this easily CLI-visible in future
releases.

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
icmp problems tracing through m20's [ In reply to ]
Hi Harry

Still not seeing anything on the ICMP rate limiting:

dbrazewell@router> show system statistics icmp
icmp:
0 drops due to rate limit
5733 calls to icmp_error

and the icmp_error calls are not climbing at the same rate as the
packet loss I am seeing in my traces.

It's a similar story for policing:

dbrazewell@router> show interfaces ge-0/3/0 extensive | match polic
Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Policed discards: 3743, L3 incompletes: 0,

These policed discards are not increasing at the same rate either.

Do you have any comment on what Niels Raijer said about changing code
versions? Although I suspect that this was because the ICMP throttling
thresholds differ between versions.

Thanks

David


icmp problems tracing through m20's [ In reply to ]
David,


Let me add something here which might be useful. There are ICMP
tasks which are handled directly on the PFE complex and never
go to the Routing Engine; ttl-expired (used by traceroute)
and mtu-exceeded are two of them. You can look at the
statistics when you enter the following command:
show pfe statistics ip icmp

For the ICMP task on the PFE there is a rate limiter of 50
pps per interface and 500 pps per PFE complex. A T-series
box usually contains more than one PFE complex. These limits
were increased as of version 5.3R3 to make traceroute
happier.

show system statistics icmp is the view from the Routing Engine
and not from the PFE (Packet Forwarding Engine). Here ICMP is
rate limited to 1000 pps with a token bucket. So pings to local
interfaces are handled by the Routing Engine.

I'm not sure whether you are running into one of those throttled
situations, but you can now also check with the pfe command and
see if traceroute is unhappy, since that would be handled on
the PFE side and is one of the tasks mtr performs.
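
A quick check along those lines is to snapshot the counter, run a burst
of traces, and compare (the | match pipe should work here as on other
show commands; the prompt is illustrative):

user@router> show pfe statistics ip icmp | match throttled

If the "throttled icmps" counter climbs in step with the loss mtr
reports, the PFE limiter is what you are seeing.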


Hope this helps,
Josef



icmp problems tracing through m20's [ In reply to ]
Thanks for that, Josef.

It looks like this is what is causing it:

ICMP Errors:
0 unknown unreachables
0 unsupported ICMP type
0 unprocessed redirects
0 invalid ICMP type
0 invalid protocol
0 bad input interface
11068686 throttled icmps
0 runts


Thanks

David

