Mailing List Archive

Re: RSVP-TE was (no subject)
> emmanuel manoni
> Sent: Tuesday, April 7, 2020 5:42 PM
>
> Hi, I'm a network engineer at a carrier; recently I have been
> learning MPLS TE. I have some questions regarding MPLS TE deployment on
> IOS XR. Kindly help.
>
> 1. Using forwarding-adjacency as the method of distributing traffic into the
> TE tunnels, is it possible to do unequal-cost load-balancing? Which commands
> do I use in IOS XR?
>
I would guess that if you put a different IGP metric on each tunnel and then
enable unequal-cost multipath (UCMP) load-balancing it should work, subject
to testing.
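Something along these lines is what I have in mind on IOS XR. This is an
untested sketch: the destination and load-share values are illustrative, and
the exact commands may vary by release, so verify before deploying:

```
! Two TE tunnels to the same tail-end, weighted 2:1
interface tunnel-te1
 ipv4 unnumbered Loopback0
 destination 10.0.0.9
 load-share 2
 path-option 10 dynamic
!
interface tunnel-te2
 ipv4 unnumbered Loopback0
 destination 10.0.0.9
 load-share 1
 path-option 10 dynamic
!
mpls traffic-eng
 load-share unequal
```

I believe the per-tunnel load-share values are only honoured for unequal-cost
distribution once "load-share unequal" is enabled globally, but do test this
on your platform.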


> 2. Again, if I'm using forwarding-adjacency with two tunnels to the same
> destination and I have set equal metrics on those tunnels, how will the
> router load-balance the traffic into those tunnels? Will it load-balance
> the traffic equally between the tunnels?
>
Yes, it will use the same load-sharing algorithm as it would if you had two
physical links to the tunnel destination.
My advice:
Be careful with "forwarding-adjacency", as it makes the head-end advertise
the tunnel into the IGP to other nodes as if it were a standard link, which
can really complicate your traffic flows if you're not careful.
The safer option is "autoroute announce", which also installs the tail-end
and all downstream nodes into the local head-end's routing table, but the
head-end keeps this to itself, not advertising it to anyone else, allowing
for more deterministic routing in the MPLS network.
If the head-end is a PE (edge node), then autoroute announce makes perfect
sense. However, if the head-end is a P (core) node, then forwarding-adjacency
might be considered if you need to attract traffic to the core node hosting
the tunnel that is meant as a short-cut.
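To illustrate the difference, on IOS XR the two options are per-tunnel knobs,
roughly like this (a sketch only; check the syntax on your release):

```
! Kept local: installed only in this head-end's RIB
interface tunnel-te1
 autoroute announce
!
! Advertised into the IGP as if it were a link (use with care)
interface tunnel-te2
 forwarding-adjacency
```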


> 3. The term signalled-bandwidth in TE: is that a limit for the traffic
> carried by a tunnel, or is it just bandwidth reserved by RSVP along the
> path of that tunnel?
>
It is just reserved bandwidth for the RSVP-signalled LSP; it does not limit
or police the traffic actually carried by the tunnel.
My advice:
Use RSVP-TE for traffic engineering, i.e. steering traffic across the core,
and not for QoS (RSVP-TE "QoS", aka IntServ, is crazy complex).
Use standard, simple QoS, aka DiffServ, to enforce quality of service in the
core.
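For reference, the reservation is a control-plane figure configured per
tunnel, drawn from a per-link reservable pool. An IOS XR sketch, with the
interface names and values purely illustrative:

```
! Head-end: ask RSVP to reserve 100 Mb/s along the path
interface tunnel-te1
 signalled-bandwidth 100000
!
! Every hop: the pool RSVP may reserve from on this link
rsvp
 interface TenGigE0/0/0/0
  bandwidth 8000000
```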

adam

_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: RSVP-TE was (no subject)
On Wed, 8 Apr 2020 at 10:46, <adamv0025@netconsultings.com> wrote:

> Use RSVP-TE for traffic-engineering -i.e. steering traffic across the core
> and not for QOS (RSVP-TE "QOS" aka Int-Serv is crazy complex).
> Use standard simple QOS aka Diff-Serv to enforce quality of service in the
> core.

+1

IGP - the topology represents the goal; this is where my customers are
happiest (the IGP shouldn't be used for TE or QoS, it is the desired
topology, which is only changed if the desired/ideal topology changes)
RSVP-TE - adjusts that goal to fit realities (delay in an upgrade, a
business driver precludes an SPT upgrade now) ==> traffic off the SPT is a
capacity report: here we need capacity which we do not have now
QOS - to discriminate and protect higher-priority traffic over lower,
and to control queueing delay

--
++ytti
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: RSVP-TE was (no subject)
Thanks, man. One more question:
1. I want to improve switchover time in case of failure in my MPLS TE
network. I know that over an L2 switch BFD does the trick; in my network
only BFD for the IGP has been configured. Do I need to configure BFD for
RSVP as well? If yes, what will be its significance compared to the present
BFD for the IGP?

Again thanks in advance

Re: RSVP-TE was (no subject)
I believe that BFD can be tied to the RSVP signalling on the LSRs, which
triggers RSVP to reroute via an alternate path that you have provisioned
within the MPLS domain.

HTH

On Mon, Apr 13, 2020 at 1:06 AM emmanuel manoni <emmanuelmadoshi@gmail.com>
wrote:

> Thanks man.One more question
> 1.I want to improve switchover time in case of failure in my MPLS TE
> configured network,I know over L2 switch bfd does the trick,in my network
> only bfd for igp has been configured,do I need to configure bfd for rsvp as
> well?if yes,what will be its significance compared to the present bfd for
> igp?
>
> Again thanks in advance
Re: RSVP-TE was (no subject)
Thanks, Hari. Does this mean that it is important to have in an MPLS TE
network, even for fast convergence? Or is BFD for the IGP enough?
There is an automation tool monitoring core traffic for the provider I work
for, and usually disconnections between core equipment last up to 5 seconds,
which seems a bit too much. I'm thinking of ways of improving it further if
possible, and I have just noticed that BFD for RSVP is missing.


On Sun, Apr 12, 2020, 22:25 Hari Sapkota <sapkota.hari006@gmail.com> wrote:

> I believe that the bfd can be tied with the RSVP signaling on the LSRs,
> which triggers the RSVP signaling and reroute via alternate path that you
> have provisioned within the MPLS domain.
>
> HTH
Re: RSVP-TE was (no subject)
Hi Emmanuel,

RSVP should be informed in the case of a link failure, since BFD runs on
the line card and detects the failure more quickly. In order to tear down
the RSVP signalling without having to wait for the hold timer, BFD should be
bound to RSVP so that the head-end router can signal the use of the backup
tunnel. I hope you have already provisioned a backup tunnel for temporary
use by the PLR until the recalculation of the new RSVP path is completed.
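In IOS XR terms the pieces above look roughly like this. A sketch only: the
destination, interface and path names are made up, and the exact command for
binding BFD to RSVP varies by release and platform, so check the docs:

```
! Head-end: request protection for the LSP
interface tunnel-te1
 fast-reroute
!
! PLR: pre-provisioned backup tunnel around the protected link
interface tunnel-te100
 ipv4 unnumbered Loopback0
 destination 10.0.0.5
 path-option 10 explicit name AVOID-PROTECTED-LINK
!
mpls traffic-eng
 interface TenGigE0/0/0/0
  backup-path tunnel-te 100
```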

Thanks,

On Mon, Apr 13, 2020 at 1:17 AM emmanuel manoni <emmanuelmadoshi@gmail.com>
wrote:

> Thanks Hari, does this mean that it is important to have in MPLS TE
> network,for even fast convergence??or is bfd for igp enough?
> There is an automation tool to monitor Core traffic for the Provider I
> work for,and usually disconnection between Core equipment vary up to 5
> seconds which seems a bit too much, I'm thinking of ways of improving it
> further if possible,and I have just noticed bfd for rsvp missing
Re: RSVP-TE was (no subject)
On 12/Apr/20 21:58, Hari Sapkota wrote:
> Hi Emmanuel,
>
> The RSVP should be informed in the case of link failure since BFD runs on
> the line card and detects failure in quicker way. In order to terminate
> RSVP signaling without having to wait for the hold timer, the BFD should be
> bound to the RSVP so that the headend router can signal the use of backup
> tunnel. I hope you have already provisioned the backup tunnel for the
> temporary use by the PLR till the recalculation of new RSVP path is
> completed.

What we've generally done is make sure the lowest-level protocol reacts
as quickly as possible. That way, its (quick) reaction informs
lateral or upper-layer protocols, which then react accordingly.

So we tune IS-IS to react quickly, and enable BFD just for it.
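On IOS XR that amounts to something like this (a sketch; the interface name
is made up and the timer values are illustrative, not recommendations):

```
router isis CORE
 interface TenGigE0/0/0/0
  bfd minimum-interval 100
  bfd multiplier 3
  bfd fast-detect ipv4
```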

We haven't found the need to enable BFD for BGP, LDP, and other
upper-layer protocols that rely on the IGP to function.

But stuff like this is not a one-size-fits-all.

Mark.
Re: RSVP-TE was (no subject)
Thanks Mark

I will try enabling BFD for RSVP and monitor whether the convergence time
improves.

Regards,
Emmanuel

Re: RSVP-TE was (no subject)
On Mon, 13 Apr 2020 at 00:44, Mark Tinka <mark.tinka@seacom.mu> wrote:

> What we've generally done is make sure the lowest level protocol reacts
> as quickly as possible. This way, it's (quick) reaction would inform
> lateral or upper-layer protocols to react accordingly.
>
> So we tune IS-IS to react quickly, and enable BFD just for it.

+1; only IS-IS (and other SPT protocols) are topology-aware. If IS-IS
converges fast, and the rest of the protocol stack is correct, then
everything converges fast. With LFA, rLFA, PIC and edge-PIC you can
converge locally, ~immediately, once you are aware of the outage. That is,
convergence can happen in linecard HW, as the backup path is already
programmed there; the only thing the linecard needs to know is if-down. And
the rest of the network, not yet converged, is protected by the node which
is aware of the failure.

In JNPR this is accomplished by an ECMP mechanism: a backup path is added
to the LC HW ECMP set with an inferior weight, so it is not actually used.
Then, if the linecard can for any reason remove the superior weight
(interface down), it can also start using the inferior weight, without
understanding anything about the fault.
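On Junos, pre-computing and installing that backup path is enabled per IGP
interface, roughly like this (a sketch; the interface name is made up,
verify against your release):

```
protocols {
    isis {
        interface ge-0/0/0.0 {
            /* compute an LFA and pre-install the backup next hop in HW */
            link-protection;
        }
    }
}
```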

--
++ytti
Re: RSVP-TE was (no subject)
> Saku Ytti
> Sent: Monday, April 13, 2020 7:37 AM
>
> On Mon, 13 Apr 2020 at 00:44, Mark Tinka <mark.tinka@seacom.mu> wrote:
>
> > What we've generally done is make sure the lowest level protocol
> > reacts as quickly as possible. This way, it's (quick) reaction would
> > inform lateral or upper-layer protocols to react accordingly.
> >
> > So we tune IS-IS to react quickly, and enable BFD just for it.
>
> +1, only ISIS (and other SPT protocols) are topoogy aware. If ISIS
> converges fast, and the rest of the protocol stack is correct, then
> everything converges fast. LFA, rLFA, PIC, edgePIC and you can locally
> converge once you are aware of the outage, ~immediately. That is,
> convergence can happen in linecard HW, as the backup path is already
> programmed there, the only thing it needs to know is if-down. And the
> rest of the network not converged yet, are protected by the node which
> is aware of the failure.
>
Is this applicable to RSVP-TE, though?
In other words, will an ISIS/OSPF BFD-triggered session-down event on a
particular interface (while the interface stays up) trigger RSVP into
switching the affected LSPs over to a bypass LSP protecting the affected
link/node? Or does RSVP need to become a client of BFD (the same way
ISIS/OSPF is)?
I honestly don't remember; currently all our core links are "ae" bundles, so
we rely on LFM/micro-BFD to modify the interface state, which does trigger
RSVP into action.

Note, obviously we're talking only about failure cases where the interface
remains up. In the case of interface down, any keepalive method is
irrelevant and the failover happens in single-digit [ms] timeframes.
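For reference, the micro-BFD arrangement on the "ae" bundles looks roughly
like this on Junos (a sketch; the addresses and timer are illustrative):

```
interfaces {
    ae0 {
        aggregated-ether-options {
            bfd-liveness-detection {
                minimum-interval 100;   /* ms, runs per member link */
                local-address 10.1.1.1;
                neighbor 10.1.1.2;      /* far-end micro-BFD address */
            }
        }
    }
}
```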

adam



Re: RSVP-TE was (no subject)
On 17/Apr/20 11:26, adamv0025@netconsultings.com wrote:

> Is this applicable to RSVP-TE though?
> In other words, will ISIS/OSPF BFD-triggered session down event on a
> particular interface (if the interface stays up) trigger RSVP into switching
> over the affected LSPs to a bypass LSP protecting the affected link/node?
> Or does RSVP need to become a client of BFD (same way as ISIS/OSPF is)?
> I honestly don't remember, currently all core links are "ae" bundles, so
> relying on lfm/micro-bfd to modify interface state -which does trigger RSVP
> into action.

RSVP can be BFD-aware as well, yes, but it normally works off the IGP. So
as long as the IGP adjusts itself based on events (aided by BFD), RSVP will
just keep tracking that.

Mark.