Mailing List Archive

rsvp-te admission control - i don't see it
I have a functional MPLS-TE test running, and it seems fine. But a question
about bandwidth reservations, please.

At the headend router, I set bandwidth on my MPLS-TE tunnel, but I can't for
the life of me find where in the network this bandwidth is actually being
admitted, or seen, or allocated, or anything!

I mean, I look at the RSVP interfaces, I look in Wireshark at the Tspec field
of the Path message, I look at the MPLS-TE tunnels along the way, etc., etc.,
and I can't find where the network sees the bandwidth I'm asking for at the
tunnel headend.

I'm using IOS-XR 6.3.1 in EVE-NG for testing.

I'll give you other details if you want them.

Aaron

aaron1@gvtc.com

_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: rsvp-te admission control - i don't see it
> aaron1@gvtc.com
> Sent: Thursday, September 3, 2020 6:48 PM
>
> I have a functional MPLS-TE test running, and it seems fine. But a
> question about bandwidth reservations, please.
>
> At the headend router, I set bandwidth on my MPLS-TE tunnel, but I can't
> for the life of me find where in the network this bandwidth is actually
> being admitted, or seen, or allocated, or anything!
>
> I mean, I look at the RSVP interfaces, I look in Wireshark at the Tspec
> field of the Path message, I look at the MPLS-TE tunnels along the way,
> etc., etc., and I can't find where the network sees the bandwidth I'm
> asking for at the tunnel headend.
>
The obvious question would be whether you are using the "Maximum Allocation"
(MAM) or "Russian Dolls" (RDM) bandwidth-constraints model for your DS-TE.

My advice is: don't do DiffServ-Aware RSVP-TE. Use your standard DiffServ
core QoS instead and use RSVP-TE just for traffic-engineering purposes (pure
RSVP-TE is complex enough).
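For reference, on XR the bandwidth-constraints model is selected under "mpls
traffic-eng"; RDM is the default (that's the "RDM:" banner you'll see in "sh
rsvp int"). A minimal sketch, if you really did want IETF-mode DS-TE with
MAM:

mpls traffic-eng
 ds-te mode ietf
 ds-te bc-model mam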



adam

Re: rsvp-te admission control - i don't see it
Thanks Adam, but I was only talking about the RSVP-TE signaled bandwidth
reservation (not the actual QoS, which I think is what you are referring to).
A guy on the NANOG mailing list answered it for me. This is what I was
missing on the headend TE tunnel interface config...

interface tunnel-te1
 signalled-bandwidth 5000


...that one simple command (the value is in kbps, so 5000 = 5 Mbps). Now all
LSRs in the RSVP-TE path allocate that bandwidth...
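For context, that line sits alongside the usual tunnel-te config; a minimal
head-end sketch, where the destination and path-option values are just
placeholders:

interface tunnel-te1
 ipv4 unnumbered Loopback0
 signalled-bandwidth 5000
 destination 10.0.0.24
 path-option 1 dynamic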

Seen with these commands...

RP/0/0/CPU0:r20#sh rsvp reservation detail | in ate

Rate: 0 bits/sec. Burst: 1K bytes. Peak: 0 bits/sec.
State expires in 0.000 sec.
Rate: 5000000 bits/sec. Burst: 1K bytes. Peak: 5M bits/sec.
State expires in 358.630 sec.

RP/0/0/CPU0:r20#sh rsvp int

*: RDM: Default I/F B/W % : 75% [default] (max resv/bc0), 0% [default] (bc1)

Interface               MaxBW (bps)  MaxFlow (bps)  Allocated (bps)  MaxSub (bps)
----------------------  -----------  -------------  ---------------  ------------
GigabitEthernet0/0/0/0  750M*        750M           0    ( 0%)       0*
GigabitEthernet0/0/0/1  750M*        750M           5M   ( 0%)       0*
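Note the 75% default reservable pool (750M on a GigE link). If you need a
different pool, it can be set per interface under rsvp; a sketch, with the
value in kbps:

rsvp
 interface GigabitEthernet0/0/0/1
  bandwidth 800000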


On a transit LSR in the core (the Tspec is carried in the Path message, the
Fspec in the Resv):

RP/0/0/CPU0:r24#sh rsvp session detail | in ate

Tspec: avg rate=0, burst=1K, peak rate=0
Fspec: avg rate=0, burst=1K, peak rate=0
Tspec: avg rate=5M, burst=1K, peak rate=5M
Fspec: avg rate=5M, burst=1K, peak rate=5M

RP/0/0/CPU0:r24#sh rsvp int

*: RDM: Default I/F B/W % : 75% [default] (max resv/bc0), 0% [default] (bc1)

Interface               MaxBW (bps)  MaxFlow (bps)  Allocated (bps)  MaxSub (bps)
----------------------  -----------  -------------  ---------------  ------------
GigabitEthernet0/0/0/0  750M*        750M           0    ( 0%)       0*
GigabitEthernet0/0/0/1  750M*        750M           5M   ( 0%)       0*
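For completeness, the head-end's view of the reservation and of the available
bandwidth along the path can also be cross-checked with the TE show commands
(output omitted here):

show mpls traffic-eng tunnels detail
show mpls traffic-eng topology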


-Aaron

