Mailing List Archive

Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 00:54, Sabri Berisha wrote:

> That will be very advantageous in a datacenter environment, or any other
> environment dealing with a lot of ECMP paths.
>
> I can't tell you how often during my eBay time I've been troubleshooting
> end-to-end packet loss between hosts in two datacenters with 10 or more
> layers of up-to-16-way ECMP between them. Having a record of which path a
> packet takes is very helpful for finding the one with a crappy
> transceiver.
>
> That work is already underway, albeit not specifically for MPLS. For example,
> I've worked with an experimental version of In-Band Network Telemetry (INT)
> as described in this draft: https://tools.ietf.org/html/draft-kumar-ippm-ifa-02
>
> I even demonstrated a very basic implementation during SuperCompute 19 in
> Denver last year. Most people who were interested in the demo were academics,
> however, probably because it wasn't a real networking event.
>
> Note that there are several caveats that come with this draft and previous
> versions, and that it is still very much a work in progress. But the
> potential is huge, at least in the DC.

Alright, we'll wait and see, then.



> That's a different story, but not entirely impossible. A probe packet can
> be sent across AS borders, and as long as the two NOCs are cooperating, the
> entire path can be reconstructed.

Yes, for once-off troubleshooting, I suppose that would work.

My concern is if it's for normal day-to-day operations. But who knows,
maybe someone will propose that too :-).

Mark.
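
As an aside, the per-hop path recording described above is easy to picture in a few lines of code. This is only a rough Python sketch of the concept, not the IFA draft's actual metadata format; the topology, flow tuple and field names are all invented:

import hashlib

def ecmp_choice(flow_tuple, num_paths):
    # Pick an uplink from a hash of the flow 5-tuple, the way a
    # typical ECMP hash function would.
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return digest[0] % num_paths

def forward(packet, topology, src, dst):
    # Forward hop by hop, appending INT-style telemetry at each node.
    node = src
    while node != dst:
        uplinks = topology[node]          # candidate next hops toward dst
        node_next = uplinks[ecmp_choice(packet["flow"], len(uplinks))]
        packet["telemetry"].append(node)  # record the hop actually taken
        node = node_next
    packet["telemetry"].append(dst)
    return packet

# Toy leaf-spine fabric: two equal-cost spines between leaf1 and leaf2.
topo = {"leaf1": ["spine1", "spine2"], "spine1": ["leaf2"], "spine2": ["leaf2"]}
pkt = {"flow": ("10.0.0.1", "10.0.1.1", 6, 49152, 443), "telemetry": []}
print(forward(pkt, topo, "leaf1", "leaf2")["telemetry"])
# e.g. ['leaf1', 'spine2', 'leaf2']

Reading the recorded hop list at the sink is what lets you pin the loss on the one path, and transceiver, out of many that is misbehaving.
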
Re: Devil's Advocate - Segment Routing, Why?
On 20/Jun/20 22:00, Baldur Norddahl wrote:

>
> I can't speak for the year 2000 as I was not doing networking at this
> level at that time. But when I check the specs for the base mx204, it
> says something like 32 VRFs, 2 million routes in FIB and 6 million
> routes in RIB. Clearly those numbers are the total of routes across
> all VRFs; otherwise you arrive at silly numbers (64 million FIB if you
> multiply, 128k FIB if you divide by 32). My conclusion is that
> scale-wise you are OK as long as you do not try to have more than one
> VRF with a complete copy of the DFZ.

I recall a number of networks holding multiple VRFs, including at least
2x Internet VRFs, for numerous use-cases. I don't know if they still do
that today, but one can get creative real quick :-).


>
> More worrying is that 2 million routes will soon not be enough to
> install all routes with a backup route, invalidating BGP FRR.

I have a niggling feeling this will be solved before we get there.

Now, whether we can afford it is a whole other matter.

Mark.
Re: Devil's Advocate - Segment Routing, Why?
Mark Tinka wrote:

>> There are many. So, our research group tried to improve RSVP.
>
> I'm a lot younger than the Internet, but I read a fair bit about its
> history. I can't remember ever coming across an implementation of RSVP
> between a host and the network in a commercial setting.

No, of course, because, as we agreed, RSVP has a lot of problems.

> Was "S-RSVP" ever implemented, and deployed?

It was implemented, and some of the technology was used in a commercial
router from Furukawa (a Japanese vendor that now sells optical fiber,
not routers).

>> However, perhaps, most people think the show stopper for RSVP is the lack
>> of scalability of weighted fair queueing, though that is not a problem
>> specific to RSVP; MPLS shares the same problem.
>
> QoS has nothing to do with MPLS. You can do QoS with or without MPLS.

GMPLS, which you are using, is a mechanism to guarantee QoS by
reserving wavelength resources. It is impossible for GMPLS
not to offer QoS.

Moreover, as some people say they offer QoS with MPLS, they
should be using some prioritized queueing mechanism, perhaps
not poorly scaling WFQ.

> I should probably point out, also, that RSVP (or RSVP-TE) is not MPLS.

They are different, of course. But GMPLS is about reserving bandwidth
resources; MPLS, in general, is about reserving label values, at least.

> All MPLS can do is convey IPP or DSCP values as an EXP code point in the
> core. I'm not sure how that creates a scaling problem within MPLS itself.

I didn't say there is a scaling problem caused by QoS.

But, as you are avoiding extensive use of MPLS, I think you
are aware that extensive use of MPLS requires management of a
lot of labels, which does not scale.

Or, do I misunderstand something?

> If I understand this correctly, would this be the IntServ QoS model?

No. IntServ specifies the format to carry QoS specifications in RSVP
packets without assuming any specific QoS model.

>> I didn't attempt to standardize our result in IETF, partly
>> because optical packet switching was a lot more interesting.
>
> Still is, even today :-)?

No. Experimental switches were working years ago, and making them
work at >10 Tbps is not difficult (switching is easy; generating
10 Tbps of packets needs a lot of parallel equipment), so there is
little remaining for research.

https://www.osapublishing.org/abstract.cfm?URI=OFC-2010-OWM4

>> Assuming a central controller (and its collocated or distributed
>> back up controllers), we don't need complicated protocols in
>> the network to maintain integrity of the entire network.
>
> Well, that's a point of view, I suppose.
>
> I still can't walk into a shop and "buy a controller". I don't know what
> this controller thing is, 10 years on.

SDN, maybe. Though I'm not saying SDN scales, it should be no
worse than MPLS.

> I can't say I've ever come across that scenario running MPLS since 2004.

I did some retrospective research.

https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching
History
1994: Toshiba presented Cell Switch Router (CSR) ideas to IETF BOF
1996: Ipsilon, Cisco and IBM announced label switching plans
1997: Formation of the IETF MPLS working group
1999: First MPLS VPN (L3VPN) and TE deployments
2000: MPLS traffic engineering
2001: First MPLS Request for Comments (RFCs) released

as I was a co-chair of the 1994 BOF and my knowledge of MPLS is
mostly based on the 1997 ID:

https://tools.ietf.org/html/draft-ietf-mpls-arch-00

there seem to have been a lot of terminology changes.

I'm saying that, if some failure occurs and the IGP changes, a
lot of LSPs must be recomputed, which does not scale
if the number of LSPs is large, especially in a large network
where the IGP needs hierarchy (such as OSPF areas).

Masataka Ohta
Re: Devil's Advocate - Segment Routing, Why?
On Sun, Jun 21, 2020 at 9:56 AM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 20/Jun/20 22:00, Baldur Norddahl wrote:
>
>
> I can't speak for the year 2000 as I was not doing networking at this
> level at that time. But when I check the specs for the base mx204, it says
> something like 32 VRFs, 2 million routes in FIB and 6 million routes in
> RIB. Clearly those numbers are the total of routes across all VRFs;
> otherwise you arrive at silly numbers (64 million FIB if you multiply, 128k
> FIB if you divide by 32). My conclusion is that scale-wise you are OK as
> long as you do not try to have more than one VRF with a complete copy of the
> DFZ.
>
>
> I recall a number of networks holding multiple VRFs, including at least
> 2x Internet VRFs, for numerous use-cases. I don't know if they still do
> that today, but one can get creative real quick :-).
>
>
Yes, I once made a plan to have one VRF per transit provider plus a peering
VRF. That way our BGP customers could have a session with each of those
VRFs to allow them full control of the route mix. I would of course also
need an Internet VRF for our own needs.

But the reality of that would be too many copies of the DFZ in the routing
tables, although not necessarily in the FIB, as each of the transit VRFs
could just have a default route installed.

Regards,

Baldur
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 12:10, Masataka Ohta wrote:

>  
> It was implemented, and some of the technology was used in a commercial
> router from Furukawa (a Japanese vendor that now sells optical fiber,
> not routers).

I won't lie, never heard of it.


> GMPLS, which you are using, is a mechanism to guarantee QoS by
> reserving wavelength resources. It is impossible for GMPLS
> not to offer QoS.

That is/was the idea.

In practice (at least in our Transport network), deploying capacity as
an offline exercise is significantly simpler. In such a case, we
wouldn't use GMPLS for capacity reservation, just path re-computation in
failure scenarios.

Our Transport network isn't overly meshed. It's just stretchy. Perhaps
if one were trying to build a DWDM backbone into, out of and through
every city in the U.S., capacity reservation in GMPLS might be a use-case.
But unless someone is willing to pipe up and confess to implementing it
in this way, I've not heard of it.


>
> Moreover, as some people say they offer QoS with MPLS, they
> should be using some prioritized queueing mechanism, perhaps
> not poorly scaling WFQ.

It would be a combination - PQ and WFQ depending on the traffic type and
how much customers want to pay.

But carrying an MPLS EXP code point does not make MPLS unscalable. It's
no different to carrying a DSCP or IPP code point in plain IP. Or even
an 802.1p code point in Ethernet.


> They are different, of course. But GMPLS is about reserving bandwidth
> resources.

In theory. What are people doing in practice? I just told you our story.


> MPLS, in general, is about reserving label values, at least.

MPLS is the forwarding paradigm. Label reservation/allocation can be
done manually or with a label distribution protocol. MPLS doesn't care
how labels are generated and learned. It will just push, swap and pop as
it needs to.


> I didn't say there is a scaling problem caused by QoS.
>
> But, as you are avoiding extensive use of MPLS, I think you
> are aware that extensive use of MPLS requires management of a
> lot of labels, which does not scale.
>
> Or, do I misunderstand something?

I'm not avoiding extensive use of MPLS. I want extensive use of MPLS.

In IPv4, we forward in MPLS 100%. In IPv6, we forward in MPLS 80%. This
is due to vendor nonsense, which we're trying to fix.



> No. IntServ specifies the format to carry QoS specifications in RSVP
> packets without assuming any specific QoS model.

Then I'm failing to understand your point, especially since it doesn't
sound like any operator is deploying such a model, or if so, publicly
suffering from it.



> No. Experimental switches were working years ago, and making them
> work at >10 Tbps is not difficult (switching is easy; generating
> 10 Tbps of packets needs a lot of parallel equipment), so there is
> little remaining for research.

We'll get there. This doesn't worry me so much :-). Either horizontally
or vertically. I can see a few models to scale IP/MPLS carriage.


>    
> SDN, maybe. Though I'm not saying SDN scales, it should be no
> worse than MPLS.

I still can't tell you what SDN is :-). I won't suffer it in this
decade, thankfully.


> I did some retrospective research.
>
>    https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching
>    History
>    1994: Toshiba presented Cell Switch Router (CSR) ideas to IETF BOF
>    1996: Ipsilon, Cisco and IBM announced label switching plans
>    1997: Formation of the IETF MPLS working group
>    1999: First MPLS VPN (L3VPN) and TE deployments
>    2000: MPLS traffic engineering
>    2001: First MPLS Request for Comments (RFCs) released
>
> as I was a co-chair of the 1994 BOF and my knowledge of MPLS is
> mostly based on the 1997 ID:
>
>    https://tools.ietf.org/html/draft-ietf-mpls-arch-00
>
> there seem to have been a lot of terminology changes.

My comment to that was in reference to your text, below:

    "What if, an inner label becomes invalidated around the
    destination, which is hidden, for route scalability,
    from the equipments around the source?"

I've never heard of such an issue in 16 years.


>
> I'm saying that, if some failure occurs and the IGP changes, a
> lot of LSPs must be recomputed, which does not scale
> if the number of LSPs is large, especially in a large network
> where the IGP needs hierarchy (such as OSPF areas).

That happens every day, already. Links fail, the IGP re-converges, LDP keeps
humming. RSVP-TE too, albeit all that state does need some consideration,
especially if code is buggy.

Particularly, where you have LFA/IP-FRR both in the IGP and LDP, I've
not come across any issue where IGP re-convergence caused LSPs to fail.

In practice, IGP hierarchy (OSPF Areas or IS-IS Levels) doesn't help
much if you are running MPLS. FECs are forged against /32 and /128
addresses. Yes, as with everything else, it's a trade-off.

Mark.
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 12:45, Baldur Norddahl wrote:

>
> Yes, I once made a plan to have one VRF per transit provider plus a
> peering VRF. That way our BGP customers could have a session with each
> of those VRFs to allow them full control of the route mix. I would of
> course also need an Internet VRF for our own needs.
>
> But the reality of that would be too many copies of the DFZ in the
> routing tables, although not necessarily in the FIB, as each of the
> transit VRFs could just have a default route installed.

We just opted for BGP communities :-).

Mark.
Re: Devil's Advocate - Segment Routing, Why?
On Sun, Jun 21, 2020 at 1:30 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 21/Jun/20 12:45, Baldur Norddahl wrote:
>
>
> Yes, I once made a plan to have one VRF per transit provider plus a peering
> VRF. That way our BGP customers could have a session with each of those
> VRFs to allow them full control of the route mix. I would of course also
> need an Internet VRF for our own needs.
>
> But the reality of that would be too many copies of the DFZ in the routing
> tables, although not necessarily in the FIB, as each of the transit VRFs
> could just have a default route installed.
>
>
> We just opted for BGP communities :-).
>
>
Not really the same. Let's say the best path is through transit 1, but the
customer thinks transit 1 sucks balls and wants his egress traffic to go
through your transit 2. Only the VRF approach lets every BGP customer, even
single-homed ones, make his own choices about upstream traffic.

You would be more like a transit broker than a traditional ISP with a
routing mix. Your service is to buy in one place, but get the exact same
product as you would have if you bought from the top X transits in your area,
delivered as X distinct BGP sessions to give you total freedom to send
traffic via any of the transit providers.

This is also the reason you do not actually need any routes in the FIB for
each of those transit VRFs, just a default route, because all traffic will
unconditionally go to said transit provider. The customer routes would
still be there, of course.

Regards,

Baldur
Re: Devil's Advocate - Segment Routing, Why?
> I'm saying that, if some failure occurs and the IGP changes, a
> lot of LSPs must be recomputed, which does not scale
> if the number of LSPs is large, especially in a large network
> where the IGP needs hierarchy (such as OSPF areas).
>
> Masataka Ohta
>


Actually, when the IGP changes, LSPs are not recomputed with LDP or SR-MPLS
(when used without TE :).

"LSP" term is perhaps what drives your confusion --- in LDP MPLS there is
no "Path" - in spite of the acronym (Labeled Switch *Path*). Labels are
locally significant and swapped at each LSR - resulting essentially with a
bunch of one hop crossconnects.

In other words MPLS LDP strictly follows IGP SPT at each LSR hop.

Many thx,
R.
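
To make that concrete, here is a toy Python model, with made-up node names and label values: each LSR holds only a local-label to (out-label, next-hop) cross-connect, and the next hop comes straight from the IGP SPT:

igp_next_hop = {"P1": "P2", "P2": "PE2"}         # SPT toward PE2's loopback
local_label = {"P1": 100, "P2": 200, "PE2": 3}   # 3 = implicit null at egress

def lfib_entry(node):
    # The one-hop cross-connect this node installs for the PE2 FEC.
    nh = igp_next_hop[node]
    return {"in": local_label[node], "out": local_label[nh], "nh": nh}

print("P1:", lfib_entry("P1"))  # {'in': 100, 'out': 200, 'nh': 'P2'}

# A link fails and the IGP now prefers P1 -> P3: only P1's entry changes.
local_label["P3"] = 300
igp_next_hop["P3"] = "PE2"
igp_next_hop["P1"] = "P3"
print("P1:", lfib_entry("P1"))  # {'in': 100, 'out': 300, 'nh': 'P3'}
# No end-to-end path was ever signalled, so there is nothing to recompute
# per "LSP"; each hop just follows the new SPT.
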
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 14:58, Baldur Norddahl wrote:

>
> Not really the same. Let's say the best path is through transit 1, but
> the customer thinks transit 1 sucks balls and wants his egress traffic
> to go through your transit 2. Only the VRF approach lets every BGP
> customer, even single-homed ones, make his own choices about upstream
> traffic.
>
> You would be more like a transit broker than a traditional ISP with a
> routing mix. Your service is to buy in one place, but get the exact same
> product as you would have if you bought from the top X transits in your
> area, delivered as X distinct BGP sessions to give you total freedom
> to send traffic via any of the transit providers.

We received such requests years ago, and calculated the cost of
complexity vs. BGP communities. In the end, if the customer wants to use
a particular upstream on our side, we'd rather set up an EoMPLS circuit
between them, and they can have their own contract.

Practically, 90% of our traffic is peering. We don't do that much with
upstream providers.


>
> This is also the reason you do not actually need any routes in the FIB
> for each of those transit VRFs, just a default route, because all
> traffic will unconditionally go to said transit provider. The customer
> routes would still be there, of course.

Glad it works for you. We just found it too complex, not just for the
problems it would solve, but also for the parity issues between VRFs
and the global table.

Mark.
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 15:48, Robert Raszuk wrote:

>
>
> Actually, when the IGP changes, LSPs are not recomputed with LDP or SR-MPLS
> (when used without TE :).
>
> The "LSP" term is perhaps what drives your confusion --- in LDP MPLS there
> is no "Path", in spite of the acronym (Label Switched *Path*). Labels
> are locally significant and swapped at each LSR, resulting
> essentially in a bunch of one-hop cross-connects.
>
> In other words, MPLS LDP strictly follows the IGP SPT at each LSR hop.

Yep, which is what I tried to explain as well. With LDP, MPLS-enabled
routers simply push, swap and pop. There is no concept of an "end-to-end
LSP" as such. We just use the term "LSP" to define an FEC. But really,
each node in the FEC's path is making its own push, swap and pop decisions.

The LFIB in each node need only be as large as the number of LDP-enabled
routers in the network. You can get scenarios where FECs are also
created for infrastructure links, but if you employ filtering to save on
FIB slots, you really just need to allocate labels to Loopback addresses
only.

Mark.
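
For illustration, the filtering described above amounts to a simple egress policy; this is hypothetical Python, not any vendor's actual filter syntax:

import ipaddress

def advertise_fec(prefix):
    # Bind labels only to host routes, i.e. the loopbacks.
    return ipaddress.ip_network(prefix).prefixlen in (32, 128)

rib = ["10.0.0.1/32", "10.1.12.0/31", "2001:db8::1/128", "192.0.2.0/24"]
print([p for p in rib if advertise_fec(p)])
# ['10.0.0.1/32', '2001:db8::1/128']: only loopbacks get labels, so the
# LFIB grows with the number of routers, not the number of routes.
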
Re: Devil's Advocate - Segment Routing, Why?
> The LFIB in each node need only be as large as the number of LDP-enabled
routers in the network.

That is true for P routers ... not so much for PEs.

Please observe that the label space in each PE router is divided between IGP
and BGP, as well as other label-hungry services ... there are many consumers
of the local label block.

So it is always the case that the LFIB table (max 2^20 entries, ~1M) on PEs
is much larger than the LFIB on P nodes.

Thx,
R.




On Sun, Jun 21, 2020 at 6:01 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 21/Jun/20 15:48, Robert Raszuk wrote:
>
>
>
> Actually, when the IGP changes, LSPs are not recomputed with LDP or SR-MPLS
> (when used without TE :).
>
> The "LSP" term is perhaps what drives your confusion --- in LDP MPLS there is
> no "Path", in spite of the acronym (Label Switched *Path*). Labels are
> locally significant and swapped at each LSR, resulting essentially in a
> bunch of one-hop cross-connects.
>
> In other words, MPLS LDP strictly follows the IGP SPT at each LSR hop.
>
>
> Yep, which is what I tried to explain as well. With LDP, MPLS-enabled
> routers simply push, swap and pop. There is no concept of an "end-to-end
> LSP" as such. We just use the term "LSP" to define an FEC. But really, each
> node in the FEC's path is making its own push, swap and pop decisions.
>
> The LFIB in each node need only be as large as the number of LDP-enabled
> routers in the network. You can get scenarios where FECs are also created
> for infrastructure links, but if you employ filtering to save on FIB slots,
> you really just need to allocate labels to Loopback addresses only.
>
> Mark.
>
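
Robert's label-budget point reduces to back-of-envelope arithmetic; all the consumer counts in this Python sketch are invented for illustration:

LABEL_SPACE = 2**20   # 20-bit label field -> 1,048,576 values
RESERVED = 16         # labels 0-15 are special-purpose

consumers = {
    "IGP/LDP FECs (loopbacks)": 1_000,
    "per-CE L3VPN labels": 200_000,
    "L2VPN/EVPN service labels": 50_000,
    "other (e.g. SR adjacency SIDs)": 500,
}
free = LABEL_SPACE - RESERVED - sum(consumers.values())
print(f"free local labels on this PE: {free:,}")  # 797,060

A P node, by contrast, only needs entries for the IGP FECs, which is why its LFIB stays small.
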
RE: Devil's Advocate - Segment Routing, Why?
> From: NANOG <nanog-bounces@nanog.org> On Behalf Of Mark Tinka
> Sent: Friday, June 19, 2020 7:28 PM
>
>
> On 19/Jun/20 17:13, Robert Raszuk wrote:
>
> >
> > So I think Ohta-san's point is about the scalability of services, not flat
> > underlay RIB and FIB sizes. Many years ago we had requests to support
> > 5M L3VPN routes while underlay was just 500K IPv4.
>
> Ah, if the context, then, was L3VPN scaling, yes, that is a known issue.
>
I wouldn't say it's known to many, as not many folks are actually limited by only up to ~1M customer connections, or, at the next level up, only up to ~1M customer VPNs.

> Apart from the global table vs. VRF parity concerns I've always had (one of
> which was illustrated earlier this week, on this list, with RPKI in a VRF),
>
Well yeah, things work differently in VRFs, not a big surprise.
And what about the example of bad flowspec routes/filters cutting boxes off the net, where having those flowspec routes/filters contained within an Internet VRF would not have such an effect?
See, it goes either way.
It would be interesting to see a comparison of good vs. bad for Internet routes in a VRF vs. Internet routes in the global/default routing table.


> the
> other reason I don't do Internet in a VRF is because it was always a trade-off:
>
> - More routes per VRF = fewer VRFs.
> - More VRFs = fewer routes per VRF.
>
No, that's just a result of having a finite FIB/RIB size; if you want to cut these resources into virtual pieces, you'll naturally get the equations above.
But if you actually construct your testing to showcase the delta between how much FIB/RIB space is taken by x prefixes with each in its own VRF, as opposed to all in a single default VRF (global routing table), the delta is negligible.
(Yes, negligible even in the case of the per-prefix VPN label allocation method, which I assume no one is using anyway, as it inherently doesn't scale and would limit you to ~1M VPN prefixes. The per-CE/per-next-hop VPN label allocation method gives one the same functionality as the per-prefix one while pushing the limit to ~1M PE-CE links/IFLs, which in my experience is sufficient for most folks out there.)

adam
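
adam's comparison of the two allocation modes comes down to what consumes label space; a rough sketch, with hypothetical counts:

vpn_prefixes = 5_000_000  # customer routes across all VRFs
pe_ce_links = 20_000      # attachment circuits on this PE

labels_per_prefix = vpn_prefixes  # one label per route: blows the label space
labels_per_ce = pe_ce_links       # one label per PE-CE link: fits easily

print(labels_per_prefix > 2**20, labels_per_ce > 2**20)  # True False
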
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 19:34, Robert Raszuk wrote:
>
> That is true for P routers ... not so much for PEs.
>
> Please observe that the label space in each PE router is divided between IGP
> and BGP, as well as other label-hungry services ... there are many
> consumers of the local label block.
>
> So it is always the case that the LFIB table (max 2^20 entries, ~1M) on
> PEs is much larger than the LFIB on P nodes.

I should point out that all of my input here is based on simple MPLS
forwarding of IP traffic in the global table. In this scenario, labels
are only assigned to BGP next-hops, which are typically IGP Loopback
addresses.

Labels don't get assigned to BGP routes in a global table. There is no
use for that.

Of course, as this is needed in VRFs and other BGP-based VPN services,
the extra premium customers pay for that privilege may be considered
warranted :-).

Mark.
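
A sketch of the global-table case Mark describes, with invented addresses and label values: the BGP route carries no label of its own, and the router recursively resolves the BGP next-hop onto the LDP label for that loopback:

ldp_bindings = {"192.0.2.1": 100}          # label for the remote PE's loopback
bgp_rib = {"203.0.113.0/24": "192.0.2.1"}  # prefix -> BGP next-hop

def forward(prefix):
    nh = bgp_rib[prefix]  # recursive resolution step
    return {"push": ldp_bindings[nh], "toward": nh}

print(forward("203.0.113.0/24"))  # {'push': 100, 'toward': '192.0.2.1'}
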
Re: Devil's Advocate - Segment Routing, Why?
>
> I should point out that all of my input here is based on simple MPLS
> forwarding of IP traffic in the global table. In this scenario, labels
> are only assigned to BGP next-hops, which are typically IGP Loopback
> addresses.
>

Well this is true for one company :) Name starts with j ....

The other company, name starting with c, at least some time back by default
allocated labels for all routes in the RIB, whether connected, static, or
sourced from the IGP. Sure, you could always limit that with a knob if desired.

The issue with allocating labels only for BGP next hops is that your
IP/MPLS LFA breaks (or, more directly, is not possible), as you do not have a
label to the PQ node upon failure. Hint: the PQ node is not even running BGP :).

Sure, selective folks still count on "IGP convergence" to restore
connectivity. But I hope those will move to much faster connectivity
restoration techniques soon.


> Labels don't get assigned to BGP routes in a global table. There is no
> use for that.
>

Sure - True.

Cheers,
R,
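
Robert's PQ-node remark refers to remote LFA (RFC 7490): the protecting router pre-computes a node it can reach without the failed link (P-space) that can itself reach the destination without that link (Q-space), and tunnels to that node's loopback. A simplified toy computation in Python, with an invented topology, treating a node as "unaffected" if its shortest distance is unchanged when the link is removed (this ignores ECMP ties):

import heapq

def spf(graph, src):
    # Plain Dijkstra: shortest distance from src to every node.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def drop_link(graph, a, b):
    g = {u: dict(nbrs) for u, nbrs in graph.items()}
    g[a].pop(b, None)
    g[b].pop(a, None)
    return g

# S protects link S-E for traffic toward D.
topo = {
    "S": {"E": 1, "A": 1},
    "E": {"S": 1, "D": 1},
    "A": {"S": 1, "B": 1},
    "B": {"A": 1, "D": 1},
    "D": {"E": 1, "B": 1},
}
cut = drop_link(topo, "S", "E")
from_s, from_s_cut = spf(topo, "S"), spf(cut, "S")
p_space = {n for n in topo if n != "S" and from_s_cut.get(n) == from_s.get(n)}
q_space = {n for n in topo if n != "D"
           and spf(cut, n).get("D") == spf(topo, n).get("D")}
print(sorted(p_space & q_space))  # ['A', 'B']: candidate repair (PQ) nodes

The repair then only works if a label binding for the chosen node's loopback has been distributed to S, even though that node runs no BGP.
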
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 21:15, adamv0025@netconsultings.com wrote:

> I wouldn't say it's known to many, as not many folks are actually limited by only up to ~1M customer connections, or, at the next level up, only up to ~1M customer VPNs.

It's probably less of a problem now than it was 10 years ago. But, yes,
I don't have any real-world experience.



> Well yeah, things work differently in VRFs, not a big surprise.
> And what about the example of bad flowspec routes/filters cutting boxes off the net, where having those flowspec routes/filters contained within an Internet VRF would not have such an effect?
> See, it goes either way.
> It would be interesting to see a comparison of good vs. bad for Internet routes in a VRF vs. Internet routes in the global/default routing table.

Well, the global table is the basics, and VRFs are where sexy lives :-).


> No, that's just a result of having a finite FIB/RIB size; if you want to cut these resources into virtual pieces, you'll naturally get the equations above.
> But if you actually construct your testing to showcase the delta between how much FIB/RIB space is taken by x prefixes with each in its own VRF, as opposed to all in a single default VRF (global routing table), the delta is negligible.
> (Yes, negligible even in the case of the per-prefix VPN label allocation method, which I assume no one is using anyway, as it inherently doesn't scale and would limit you to ~1M VPN prefixes. The per-CE/per-next-hop VPN label allocation method gives one the same functionality as the per-prefix one while pushing the limit to ~1M PE-CE links/IFLs, which in my experience is sufficient for most folks out there.)

Like I said, with today's CPU's and memory, probably not an issue. But
it's not an area I play in, so those with more experience - like
yourself - would know better.

Mark.
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 22:21, Robert Raszuk wrote:

>
> Well this is true for one company :) Name starts with j .... 
>
> The other company, name starting with c, at least some time back by
> default allocated labels for all routes in the RIB, whether connected,
> static, or sourced from the IGP. Sure, you could always limit that with a
> knob if desired.


Juniper allocates labels to the Loopback only.

Cisco allocates labels to all IGP and interface routes.

Neither allocates labels to BGP routes for the global table.


>
> The issue with allocating labels only for BGP next hops is that your
> IP/MPLS LFA breaks (or, more directly, is not possible), as you do not
> have a label to the PQ node upon failure. Hint: the PQ node is not even
> running BGP :).

Wouldn't T-LDP fix this, since LDP LFA is a targeted session?

Need to test.


>
> Sure, selective folks still count on "IGP convergence" to restore
> connectivity. But I hope those will move to much faster connectivity
> restoration techniques soon.

We are happy :-).

Mark.
Re: Devil's Advocate - Segment Routing, Why?
> Wouldn't T-LDP fix this, since LDP LFA is a targeted session?

Nope. You need to get to the PQ node via potentially many hops. So you need
to have either ordered or independent label distribution to its loopback in
place.

Best,
R.

On Sun, Jun 21, 2020 at 10:58 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 21/Jun/20 22:21, Robert Raszuk wrote:
>
>
> Well this is true for one company :) Name starts with j ....
>
> The other company, name starting with c, at least some time back by default
> allocated labels for all routes in the RIB, whether connected, static, or
> sourced from the IGP. Sure, you could always limit that with a knob if desired.
>
>
>
> Juniper allocates labels to the Loopback only.
>
> Cisco allocates labels to all IGP and interface routes.
>
> Neither allocates labels to BGP routes for the global table.
>
>
>
> The issue with allocating labels only for BGP next hops is that your
> IP/MPLS LFA breaks (or, more directly, is not possible), as you do not have
> a label to the PQ node upon failure. Hint: the PQ node is not even running
> BGP :).
>
>
> Wouldn't T-LDP fix this, since LDP LFA is a targeted session?
>
> Need to test.
>
>
>
> Sure, selective folks still count on "IGP convergence" to restore
> connectivity. But I hope those will move to much faster connectivity
> restoration techniques soon.
>
>
> We are happy :-).
>
> Mark.
>
Re: Devil's Advocate - Segment Routing, Why?
On 21/Jun/20 23:01, Robert Raszuk wrote:

>
> Nope. You need to get to the PQ node via potentially many hops. So you
> need to have either ordered or independent label distribution to its
> loopback in place.

I have some testing I want to do with IS-IS only announcing the Loopback
from a set of routers to the rest of the backbone, and LDP allocating
labels for it accordingly, to solve a particular problem.

I'll test this out and see what happens re: LDP LFA.

Mark.
RE: Devil's Advocate - Segment Routing, Why?
> From: NANOG <nanog-bounces@nanog.org> On Behalf Of Masataka Ohta
> Sent: Friday, June 19, 2020 5:01 PM
>
> Robert Raszuk wrote:
>
> > So I think Ohta-san's point is about the scalability of services, not flat
> > underlay RIB and FIB sizes. Many years ago we had requests to support
> > 5M L3VPN routes while underlay was just 500K IPv4.
>
> That is certainly a problem. However, a worse problem is knowing the label
> values nested deeply in the MPLS label chain.
>
> Even worse, if a router near the destination that is expected to pop the
> label chain goes down, how can the source know that the router has gone
> down and choose an alternative router near the destination?
>
Via the IGP or a controller; but for sub-50ms convergence there are edge-node
protection mechanisms, so the point is the source doesn't even need to know
about it for the restoration to happen.

adam
RE: Devil's Advocate - Segment Routing, Why?
Hi Baldur,



From memory, the mx204 FIB is 10M (v4/v6) and the RIB 30M for each of v4 and v6.

And remember the FIB is hierarchical, so it's the next-hops per prefix you are referring to with BGP FRR. Also, going from memory of past scaling testing, if pfx1+NH1 == x, then pfx1+NH1+NH2 !== 2x, where x is used FIB space.



adam



From: NANOG <nanog-bounces+adamv0025=netconsultings.com@nanog.org> On Behalf Of Baldur Norddahl
Sent: Saturday, June 20, 2020 9:00 PM



I can't speak for the year 2000 as I was not doing networking at this level at that time. But when I check the specs for the base mx204, it says something like 32 VRFs, 2 million routes in FIB and 6 million routes in RIB. Clearly those numbers are the total of routes across all VRFs; otherwise you arrive at silly numbers (64 million FIB if you multiply, 128k FIB if you divide by 32). My conclusion is that scale-wise you are OK as long as you do not try to have more than one VRF with a complete copy of the DFZ.



More worrying is that 2 million routes will soon not be enough to install all routes with a backup route, invalidating BGP FRR.
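
adam's hierarchical-FIB point can be sketched as prefixes holding a pointer to a shared path-list, so pre-installing a backup next-hop (BGP FRR/PIC style) grows the path-list once rather than once per prefix; sizes here are invented:

fib = {}        # prefix -> key into the shared path-list table
pathlists = {}  # next-hop groups, shared across prefixes

def install(prefix, nexthops):
    key = tuple(nexthops)
    pathlists.setdefault(key, list(nexthops))
    fib[prefix] = key  # the per-prefix cost is just this pointer

for i in range(800_000):                # a full table's worth of prefixes
    install(f"pfx{i}", ["NH1", "NH2"])  # primary + pre-programmed backup

print(len(fib), len(pathlists))  # 800000 prefix pointers, 1 shared path-list
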
Re: Devil's Advocate - Segment Routing, Why?
On Wed, 17 Jun 2020 at 22:09, <adamv0025@netconsultings.com> wrote:
>
> > From: NANOG <nanog-bounces@nanog.org> On Behalf Of Mark Tinka
> > Sent: Wednesday, June 17, 2020 6:07 PM
> >
> >
> > I've heard a lot about "network programmability", e.t.c.,
> First of all, the "SR = network programmability" claim is BS; SR = MPLS, and any programmability we've had for MPLS since forever works the same way for SR.

It works because SR != MPLS.

SR is a protocol which describes many aspects, such as how traffic
forwarding decisions made at the ingress node to a PSN can be
guaranteed across the PSN, even though the nodes along the PSN path
use per-hop forwarding behaviour and different nodes along the path
have made different forwarding decisions.

When using SR-MPLS, segment IDs are used as an index into the label
range (SRGB), and so SIDs don't correlate 1:1 to MPLS labels; equally,
with SRv6, the segment IDs are encoded as IPv6 addresses and don't
correlate 1:1 to an IPv6 address. There is a Venn diagram with an
overlapping section in the middle which is "generic SR", with a bunch
of core features that are supported agnostic of the encoding
mechanism.

Cheers,
James.
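
The SID-versus-label distinction fits in a few lines: an SR-MPLS prefix SID is a global index, and each node derives its local label from its own (possibly different) SRGB base. Values below are examples only:

srgb_base = {"P1": 16000, "P2": 900000}  # per-node SRGB start
prefix_sid = 101                         # global index for some PE loopback

for node, base in srgb_base.items():
    print(node, "forwards on label", base + prefix_sid)  # 16101 and 900101
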
Re: Devil's Advocate - Segment Routing, Why?
On Wed, 17 Jun 2020 at 18:08, Mark Tinka <mark.tinka@seacom.mu> wrote:
>
> Hi all.
>
> When the whole SR concept was being first dreamed up, I was mildly excited about it. But then real life happened and global deployment (be it basic SR-MPLS or SRv6) is what it is, and I became less excited. This was back in 2015.
>
> All the talk about LDPv6 this and last week has had me reflecting a great deal on where we are, as an industry, in why we are having to think about SR and all its incarnations.
>
> So, let me be the one that stirs up the hornets' nest...
>
> Why do we really need SR? Be it SR-MPLS or SRv6 or SRv6+?

I am clearly very far behind on my emails, but in the emails I've read
so far in this thread you have mentioned at least twice:

On Wed, 17 Jun 2020 at 18:08, Mark Tinka <mark.tinka@seacom.mu> wrote:
> What I am less enthused about is being forced

On Wed, 17 Jun 2020 at 23:22, Mark Tinka <mark.tinka@seacom.mu> wrote:
> it tastes funny when you are forced

Mark, does someone have a gun to your head? Are you in trouble? Blink
63 times for yes, 64 times for no ;)

Cheers,
James.
Re: Devil's Advocate - Segment Routing, Why?
On Wed, 17 Jun 2020 at 23:19, Mark Tinka <mark.tinka@seacom.mu> wrote:
> Yes, we all love less state, I won't argue that. But it's the same question that is being asked less and less with each passing year - what scales better in 2020, OSPF or IS-IS. That is becoming less relevant as control planes keep getting faster and cheaper.
>
> I'm not saying that if you are dealing with 100,000 T-LDP sessions you should not consider SR, but if you're not, and SR still requires a bit more development (never mind deployment experience), what's wrong with having LDPv6? If it makes near-as-no-difference to your control plane in 2020 or 2030 as to whether your 10,000-node network is running LDP or SR, why not have the choice?

I'm going to kick the nest in the other direction now :D ... There
would be no need to massively scale an IGP or worry about running
LDPv4 + LDPv6 or SR-MPLS if we had put more development time into MPLS
over UDP. I think it's a great technology which solves a lot of
problems, and I've been itching to deploy it for ages now, but vendor
support for it is nowhere near the level of MPLS over Ethernet.

Cheers,
James.
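
A rough sketch of the MPLSoUDP framing James refers to (RFC 7510): the label stack rides in a UDP payload toward destination port 6635, and the UDP source port carries flow entropy so a plain IP underlay can ECMP it. Field values are illustrative and the outer IP header is omitted:

import struct

def mpls_header(label, tc=0, s=1, ttl=64):
    # Pack one 32-bit MPLS label stack entry.
    return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

def mpls_over_udp(label, payload, entropy_port=49321):
    body = mpls_header(label) + payload
    # UDP header: source port (entropy), dest 6635 (MPLS-in-UDP),
    # length, checksum (0 = unset here).
    udp = struct.pack("!HHHH", entropy_port, 6635, 8 + len(body), 0)
    return udp + body

pkt = mpls_over_udp(16101, b"inner ip packet")
print(len(pkt), pkt[:8].hex())
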
Re: Devil's Advocate - Segment Routing, Why?
On 30/Jun/20 20:37, James Bensley wrote:

> Mark, does someone have a gun to your head? Are you in trouble? Blink
> 63 times for yes, 64 times for no ;)

You're pretty late to this party, mate...

Mark.
Re: Devil's Advocate - Segment Routing, Why?
On Tue, 30 Jun 2020 at 22:07, Mark Tinka <mark.tinka@seacom.com> wrote:
>
>
>
> On 30/Jun/20 20:37, James Bensley wrote:
>
> > Mark, does someone have a gun to your head? Are you in trouble? Blink
> > 63 times for yes, 64 times for no ;)
>
> You're pretty late to this party, mate...

True, but what's changed in two weeks with regard to LDPv6 and SR?

What was your use case / requirement for LDPv6: to remove the full
table v6 feed from your core, to remove IPv4 from your IGP, or both?

Cheers,
James.
