Mailing List Archive

Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 09:20, Radu-Adrian Feurdean wrote:
>
> A whole ocean of "datacenter" hardware, from pretty much every vendor.

You mean the ones deliberately castrated so that we can create a
specific "DC vertical", even if they are, pretty much, the same box a
service provider will buy, just given a darker color so it can glow more
brightly in the data centre night?

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 09:50, Saku Ytti wrote:


> I'm sure such devices exist, I can't name any from top of my head. But
> this market perversion is caused by DC people who did not understand
> networks and suffer from not-invented-here. Everyone needs a tunnel
> solution, but DC people decided before looking into or understanding
> the topic that MPLS is bad and complex, let's invent something new.
> Then we re-invented solutions that already had _MORE_ efficient
> solutions in MPLS, and a lot of those technologies are now becoming
> established in DC space, creating confusion in SP space.
>
> Maybe these inferior technologies will win, due to the marketing
> strength of DC solutions. Or maybe DC will later figure out the
> fundamental aspect in tunneling cost, and invent even-better-MPLS,
> which is entirely possible now that we have a bit more understanding
> how we use MPLS.

Let me work out how to print all this on a t-shirt.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On Fri, Jun 19, 2020, at 10:11, Mark Tinka wrote:
>
> On 19/Jun/20 09:20, Radu-Adrian Feurdean wrote:
> >
> > A whole ocean of "datacenter" hardware, from pretty much every vendor.
>
> You mean the ones deliberately castrated so that we can create a
> specific "DC vertical", even if they are, pretty much, the same box a
> service provider will buy, just given a darker color so it can glow more
> brightly in the data centre night?

Yes, exactly that one.
Which also happens to spill outside the DC area, because the main "vertical" allows it to be sold at lower prices.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 10:57, Radu-Adrian Feurdean wrote:

>
> Yes, exactly that one.
> Which also happens to spill outside the DC area, because the main "vertical" allows it to be sold at lower prices.

These days, half the gig is filtering the snake oil.

Mark.
RE: Devil's Advocate - Segment Routing, Why? [ In reply to ]
> Saku Ytti
> Sent: Friday, June 19, 2020 8:50 AM
>
> On Fri, 19 Jun 2020 at 10:24, Radu-Adrian Feurdean <nanog@radu-
> adrian.feurdean.net> wrote:
>
>
> > > I don't understand the point of SRv6. What equipment can support
> > > IPv6 routing, but can't support MPLS label switching?
> >
> Maybe these inferior technologies will win, due to the marketing strength of
> DC solutions. Or maybe DC will later figure out the fundamental aspect in
> tunneling cost, and invent even-better-MPLS, which is entirely possible now
> that we have a bit more understanding how we use MPLS.
>
Looking back at history (VXLAN or Google's Espresso "architecture"), I'm not holding my breath for anything reasonable coming out of the DC camp...

adam
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 16:45, Masataka Ohta wrote:

> The problem of MPLS, or label switching in general, is that, though
> it was advertised to be topology driven to scale better than flow
> driven, it is actually flow driven with poor scalability.
>
> Thus, it is impossible to deploy any technology scalably over MPLS.
>
> MPLS was considered to scale, because it supports nested labels
> corresponding to hierarchical, thus, scalable, routing table.
>
> However, to assign nested labels at the source, the source
> must know hierarchical routing table at the destination, even
> though the source only knows hierarchical routing table at
> the source itself.
>
> So, the routing table must be flat, which does not scale, or
> the source must detect flows to somehow request hierarchical
> destination routing table on demand, which means MPLS is flow
> driven.
>
> People, including some data center people, avoiding MPLS, know
> network scalability better than those deploying MPLS.
>
> It is true that some performance improvement is possible with
> label switching by flow driven ways, if flows are manually
> detected. But, it means extra label-switching-capable equipment
> and administrative effort to detect flows, neither of which
> scales, and both of which cost a lot.
>
> It costs a lot less to have more plain IP routers than to insist
> on having slightly fewer MPLS routers.

I wouldn't agree.

MPLS is a purely forwarding paradigm, as is hop-by-hop IP. Even with
hop-by-hop IP, you need the edge to be routing-aware.

I wasn't at the table when the MPLS spec. was being dreamed up, but I'd
find it very hard to accept that someone drafting the idea advertised it
as being a replacement or alternative for end-to-end IP routing and
forwarding.

Whether you run MPLS or not, you will always have routing table scaling
concerns. So I'm not quite sure how that is MPLS's problem. If you can
tell me how NOT running MPLS affords you a "hierarchical, scalable"
routing table, I'm all ears.

Whether you forward in IP or in MPLS, scaling routing is an ever clear &
present concern. Where MPLS can directly mitigate that particular
concern is in the core, where you can remove BGP. But you still need
routing in the edge, whether you forward in IP or MPLS.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
Hi Mark,

As someone who was actually at that table you are referring to - I must say
that MPLS was never proposed as a replacement for IP.

MPLS was, since day one, proposed as an enabler for services, originally L3VPNs
and RSVP-TE. Then a bunch of others jumped on the same encapsulation train.
If, at that very time, the GSR had been able to do proper GRE encapsulation at
line rate in all of its engines, MPLS for transport would never have taken off.
As a service demux - sure, but that is completely separate.

But since at that time shipping hardware could not do the right
encapsulation, and since SPs were looking for more revenue and a new way to
move ATM and FR customers to IP backbones, L3VPN was proposed, which really
required hiding the service addresses from everyone's core. So some form of
encapsulation was a MUST. Hence tag switching, then MPLS switching, was
rolled out.

So I think Ohta-san's point is about the scalability of services, not flat
underlay RIB and FIB sizes. Many years ago we had requests to support 5M L3VPN
routes while the underlay was just 500K IPv4.

Last - when I originally discussed just plain MPLS with customers, with the
single application of hierarchical routing (no BGP in the core), frankly no
one was interested. Till L3VPN arrived, which was a game changer and a run for
new revenue streams ...

Best,
R.


On Fri, Jun 19, 2020 at 5:00 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 19/Jun/20 16:45, Masataka Ohta wrote:
>
> > The problem of MPLS, or label switching in general, is that, though
> > it was advertised to be topology driven to scale better than flow
> > driven, it is actually flow driven with poor scalability.
> >
> > Thus, it is impossible to deploy any technology scalably over MPLS.
> >
> > MPLS was considered to scale, because it supports nested labels
> > corresponding to hierarchical, thus, scalable, routing table.
> >
> > However, to assign nested labels at the source, the source
> > must know hierarchical routing table at the destination, even
> > though the source only knows hierarchical routing table at
> > the source itself.
> >
> > So, the routing table must be flat, which does not scale, or
> > the source must detect flows to somehow request hierarchical
> > destination routing table on demand, which means MPLS is flow
> > driven.
> >
> > People, including some data center people, avoiding MPLS, know
> > network scalability better than those deploying MPLS.
> >
> > It is true that some performance improvement is possible with
> > label switching by flow driven ways, if flows are manually
> > detected. But, it means extra label-switching-capable equipment
> > and administrative effort to detect flows, neither of which
> > scales, and both of which cost a lot.
> >
> > It costs a lot less to have more plain IP routers than to insist
> > on having slightly fewer MPLS routers.
>
> I wouldn't agree.
>
> MPLS is a purely forwarding paradigm, as is hop-by-hop IP. Even with
> hop-by-hop IP, you need the edge to be routing-aware.
>
> I wasn't at the table when the MPLS spec. was being dreamed up, but I'd
> find it very hard to accept that someone drafting the idea advertised it
> as being a replacement or alternative for end-to-end IP routing and
> forwarding.
>
> Whether you run MPLS or not, you will always have routing table scaling
> concerns. So I'm not quite sure how that is MPLS's problem. If you can
> tell me how NOT running MPLS affords you a "hierarchical, scalable"
> routing table, I'm all ears.
>
> Whether you forward in IP or in MPLS, scaling routing is an ever clear &
> present concern. Where MPLS can directly mitigate that particular
> concern is in the core, where you can remove BGP. But you still need
> routing in the edge, whether you forward in IP or MPLS.
>
> Mark.
>
>
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
> MPLS was since day one proposed as enabler for services originally
> L3VPNs and RSVP-TE.

MPLS day one was mike o'dell wanting to move his city/city traffic
matrix from ATM to tag switching and open cascade's hold on tags.

randy
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
Mark Tinka wrote:

> I wouldn't agree.
>
> MPLS is a purely forwarding paradigm, as is hop-by-hop IP.

As the first person to have proposed the forwarding paradigm of
label switching, I have been fully aware from the beginning that:

https://tools.ietf.org/html/draft-ohta-ip-over-atm-01

Conventional Communication over ATM in a Internetwork Layer

The conventional communication, that is communication that does not
assume connectivity, is no different from that of the existing IP, of
course.

special, prioritized forwarding should be done only by special
request by end users (via a properly designed signaling mechanism, which
RSVP failed to be), or administration does not scale.

> Even with
> hop-by-hop IP, you need the edge to be routing-aware.

Having the edge be routing-aware around itself does scale.

Having the edge be routing-aware of the destinations of all the flows
over it does not scale, which is the problem of MPLS.

Though the lack of equipment scalability went unnoticed by many,
thanks to Moore's law, unscalable administration costs a lot.

As a result, administration of MPLS has been costing a lot.

> I wasn't at the table when the MPLS spec. was being dreamed up,

I was there before poor MPLS was dreamed up.

> If you can
> tell me how NOT running MPLS affords you a "hierarchical, scalable"
> routing table, I'm all ears.

Are you saying the inter-domain routing table is not "hierarchical,
scalable", except for the reason of multihoming?

As for multihoming problem, see, for example:

https://tools.ietf.org/html/draft-ohta-e2e-multihoming-03

> Whether you forward in IP or in MPLS, scaling routing is an ever clear &
> present concern.

Not. Even without MPLS, fine tuning of BGP does not scale.

However, just as using plain IP router costs less than using
MPLS capable IP routers, BGP-only administration costs less than
BGP and MPLS administration.

For better networking infrastructure, extra cost should be spent
for L1, not MPLS or very complicated technologies around it.

Masataka Ohta
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
Robert Raszuk wrote:

> MPLS was since day one proposed as enabler for services originally L3VPNs
> and RSVP-TE.

There seems to be serious confusion between label switching
with explicit flows and MPLS, which was believed to scale
without detecting/configuring flows.

At the time I proposed label switching, there already was RSVP
but RSVP-TE was proposed long after MPLS was proposed.

But, today, people seem to be using so-called MPLS with
explicitly configured flows, administration of which does not
scale and is annoying.

Remember that the original point of MPLS was that it should work
scalably without a lot of configuration, which is not the reality
recognized by people on this thread.

> So I think Ohta-san's point is about the scalability of services, not flat underlay
> RIB and FIB sizes. Many years ago we had requests to support 5M L3VPN
> routes while underlay was just 500K IPv4.

That is certainly a problem. However, a worse problem is knowing the
label values nested deeply in the MPLS label chain.

Even worse, if the router near the destination expected to pop the label
chain goes down, how can the source know that the router has gone down
and choose an alternative router near the destination?

> Last - when I originally discussed just plain MPLS with customers with
> single application of hierarchical routing (no BGP in the core) frankly no
> one was interested.

MPLS with hierarchical routing just does not scale.

Masataka Ohta
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 17:13, Robert Raszuk wrote:

>
> So I think Ohta-san's point is about the scalability of services, not flat
> underlay RIB and FIB sizes. Many years ago we had requests to support
> 5M L3VPN routes while underlay was just 500K IPv4.

Ah, if the context, then, was l3vpn scaling, yes, that is a known issue.

Apart from the global table vs. VRF parity concerns I've always had (one
of which was illustrated earlier this week, on this list, with RPKI in a
VRF), the other reason I don't do Internet in a VRF is because it was
always a trade-off:

    - More routes per VRF = fewer VRF's.
    - More VRF's  = fewer routes per VRF.

Going forward, I believe the l3vpn pressures (for pure VPN services, not
Internet in a VRF) should begin to subside as businesses move on-prem
workloads to the cloud, bite into the SD-WAN train, and generally, do
more stuff over the public Internet than via inter-branch WAN links
formerly driven by l3vpn.

Time will tell, but in Africa, bar South Africa, l3vpn's were never a
big thing, mostly because Internet connectivity was best served from one
or two major cities, where most businesses had a branch that warranted
connectivity.

But even in South Africa (as the rest of our African market), 98% of our
business is plain IP. The other 2% is mostly l2vpn. l3vpn's don't really
feature, except for some in-house enterprise VoIP carriage + some
high-speed in-band management.

Even with the older South African operators that made a killing off
l3vpn's, these are falling away as their customers either move to the
cloud and/or accept SD-WAN thingies.


>
> Last - when I originally discussed just plain MPLS with customers with
> single application of hierarchical routing (no BGP in the core)
> frankly no one was interested. Till L3VPN arrived which was game
> changer and run for new revenue streams ...

The BGP-free core has always sounded like a dark art. More so in the
days when hardware was precious, core routers doubled as inline route
reflectors and the size of the IPv4 DFZ wasn't rapidly exploding like it
is today, and no one was even talking about the IPv6 DFZ.

Might be useful speaking with them again, in 2020 :-).

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
>
> But, today, people seem to be using so-called MPLS with
> explicitly configured flows, administration of which does not
> scale and is annoying.
>

I am actually not sure what you are talking about here.

The only per-flow action in any MPLS deployment I have seen was mapping
flow groups to specific TE-LSPs. In all other TDP or LDP cases flow == IP
destination, so it is an exact match based on destination reachability. And
such mapping is based on the LDP FEC to IGP (or BGP) match.
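
As a minimal sketch of that per-destination (not per-flow) behaviour -
prefixes and label values below are made up for illustration, and this is not
how any router implements it internally:

import ipaddress

# One label per FEC (prefix), derived from the LDP/IGP match; zero per-flow state.
fec_table = {
    ipaddress.ip_network("192.0.2.0/24"): 18001,
    ipaddress.ip_network("198.51.100.0/24"): 18002,
}

def label_for(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match over FECs; every packet towards the same prefix
    # gets the same label, whatever "flow" it belongs to.
    matches = [n for n in fec_table if dst in n]
    return fec_table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(label_for("192.0.2.55"))   # 18001 for every flow to 192.0.2.0/24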

> Even worse, if the router near the destination expected to pop the label
> chain goes down, how can the source know that the router has gone down
> and choose an alternative router near the destination?
>

In normal MPLS the source does not pick the transit paths. Transit is 100%
driven by the IGP, and if you lose a node, local connectivity restoration
techniques apply (FRR or IGP convergence). If the egress signalled
implicit NULL, it would signal it to any IGP peer.

That is also possible with SR-MPLS. No change ... no per-flow state at
all beyond per-IP-destination routing. If you want to control your
transit hops you can - but this is an option, not a requirement.

> MPLS with hierarchical routing just does not scale.


While I am not defending MPLS here, and 100% agree that IP as transit is a
much better option today and tomorrow, I would also like to make sure we
communicate true points. So when you say it does not scale, it would be
good to list what exactly does not scale, by providing a real network
operational example.

Many thx,
R.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
> On Jun 19, 2020, at 11:34 AM, Randy Bush <randy@psg.com> wrote:
>
>>
>> MPLS was since day one proposed as enabler for services originally
>> L3VPNs and RSVP-TE.
>
> MPLS day one was mike o'dell wanting to move his city/city traffic
> matrix from ATM to tag switching and open cascade's hold on tags.

And IIRC, Tag switching day one was Cisco overreacting to Ipsilon.

-dorian
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On Wed, Jun 17, 2020 at 11:40 AM Dave Bell <me@geordish.org> wrote:

>
>
> On Wed, 17 Jun 2020 at 18:42, Saku Ytti <saku@ytti.fi> wrote:
>
>> Hey,
>>
>> > Why do we really need SR? Be it SR-MPLS or SRv6 or SRv6+?
>>
>> I don't like this, SR-MPLS and SRv6 are just utterly different things
>> to me, and no answer meaningfully applies to both.
>>
>
> I don't understand the point of SRv6. What equipment can support IPv6
> routing, but can't support MPLS label switching?
>
> I'm a big fan of SR-MPLS however.
>
One of the advantages cited for SRv6 over MPLS is that the packet contains
a record of where it has been.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
> One of the advantages cited for SRv6 over MPLS is that the packet contains
> a record of where it has been.
>
Not really ... packets are not tourists in a bus.

First, there are real studies showing that, for the goal of good TE, most
large production networks only need to place 1, 2 or 3 hops to traverse
through. The rest is the shortest path between those hops.

Then, even if you place those node SIDs, you have no control over which
interfaces are chosen as outbound. There is often more than one IGP ECMP path
in between. You would need to insert adjacency SIDs, which requires a pretty
fine level of controller capability to start with.
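
To illustrate why a short segment list is enough, here is a simplified,
non-normative walk of an SRv6 segment list in Python (semantics loosely
following RFC 8754, everything else stripped away): only the listed waypoints
touch the SRH, and between them the packet just follows the IGP shortest path
on its current destination address.

# Simplified SRv6 segment-list walk -- loosely after RFC 8754, illustrative only.
class SRH:
    def __init__(self, segments):
        self.segment_list = list(reversed(segments))   # stored in reverse order
        self.segments_left = len(segments) - 1         # index of the active segment

def at_segment_endpoint(srh):
    # Only executed at a node whose SID is the packet's current destination.
    srh.segments_left -= 1
    return srh.segment_list[srh.segments_left]         # becomes the new IPv6 DA

srh = SRH(["2001:db8::a", "2001:db8::b", "2001:db8::c"])  # two waypoints + egress
dst = srh.segment_list[srh.segments_left]                 # first waypoint
while srh.segments_left > 0:
    # Plain shortest-path IPv6 forwarding carries the packet to `dst` here.
    dst = at_segment_endpoint(srh)
print(dst)   # 2001:db8::c -- the final segment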

I just hope that no one sane proposes that all packets should now get
encapsulated in a new IPv6 header when entering a transit ISP network and
carry a long list of hop-by-hop adjacencies to travel by. Besides, even
if they did, the list would be valid only within a given ASN and have no
visibility outside.

Thx,
R.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 17:40, Masataka Ohta wrote:
 
>
> As the first person to have proposed the forwarding paradigm of
> label switching, I have been fully aware from the beginning that:
>
>    https://tools.ietf.org/html/draft-ohta-ip-over-atm-01
>
>    Conventional Communication over ATM in a Internetwork Layer
>
>    The conventional communication, that is communication that does not
>    assume connectivity, is no different from that of the existing IP, of
>    course.
>
> special, prioritized forwarding should be done only by special
> request by end users (via a properly designed signaling mechanism, which
> RSVP failed to be), or administration does not scale.

I could be wrong, but I get the feeling that you are speaking about RSVP
in its original form, where hosts were meant to make calls (CAC) into
the network to reserve resources on their behalf.

As we all know, that never took off, even though I saw some ideas about
it being proposed for mobile phones as well.

I don't think there ever was another attempt to get hosts to reserve
resources within the network, since the RSVP failure.



>
> Not. Even without MPLS, fine tuning of BGP does not scale.

We all know this, and like I said, that is a current concern.


>
> However, just as using plain IP router costs less than using
> MPLS capable IP routers, BGP-only administration costs less than
> BGP and MPLS administration.
>
> For better networking infrastructure, extra cost should be spent
> for L1, not MPLS or very complicated technologies around it.

In the early 2000's, I would have agreed with that.

Nowadays, there is a very good chance that a box you require a BGP DFZ
on inherently supports MPLS, likely without extra licensing.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 19/Jun/20 18:00, Masataka Ohta wrote:
 
> There seems to be serious confusion between label switching
> with explicit flows and MPLS, which was believed to scale
> without detecting/configuring flows.
>
> At the time I proposed label switching, there already was RSVP
> but RSVP-TE was proposed long after MPLS was proposed.

RSVP failed to take off, for whatever reason (I can think of many).

I'm not sure any network operator, today, would allow an end-host to
make reservation requests in their core.

Even in the Transport world, this was the whole point of GMPLS. After
they saw how terrible that idea was, it shifted from customers to being
an internal fight between the IP teams and the Transport teams.
Ultimately, I don't think anybody really cared about routers
automatically using GMPLS to reserve and direct the DWDM network.

In our Transport network, we use GMPLS/ASON in the Transport network
only. When the IP team needs capacity, it's a telephone job :-).


>
> But, today, people seem to be using so-called MPLS with
> explicitly configured flows, administration of which does not
> scale and is annoying.
>
> Remember that the original point of MPLS was that it should work
> scalably without a lot of configuration, which is not the reality
> recognized by people on this thread.

Well, you get the choice of LDP (low-touch) or RSVP-TE (high-touch).

Pick your poison.

We don't use RSVP-TE because of the issues you describe above.

We use LDP to avoid the issues you describe above.

In the end, SR-MPLS is meant to solve this issue for TE requirements. So
the signaling state-of-the-art improves with time.


> That is certainly a problem. However, a worse problem is knowing the
> label values nested deeply in the MPLS label chain.

Why, how, is that a problem? For load balancing?


>
> Even worse, if the router near the destination expected to pop the label
> chain goes down, how can the source know that the router has gone down
> and choose an alternative router near the destination?

If by source you mean end-host, if the edge router they are connected to
only ran IP and they were single-homed, they'd still go down.

If the end-host were multi-homed to two edge routers, one of them
failing won't cause an outage for the host.

Unless I misunderstand.


> MPLS with hierarchical routing just does not scale.

With Internet in a VRF, I truly agree.

But if you run a simple global BGP table and no VRF's, I don't see an
issue. This is what we do, and our scaling concerns are exactly the same
whether we run plain IP or IP/MPLS.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On Sat, Jun 20, 2020 at 11:08 AM Mark Tinka <mark.tinka@seacom.mu> wrote:

> > MPLS with hierarchical routing just does not scale.
>
> With Internet in a VRF, I truly agree.
>
> But if you run a simple global BGP table and no VRF's, I don't see an
> issue. This is what we do, and our scaling concerns are exactly the same
> whether we run plain IP or IP/MPLS.
>
> Mark.
>
>
We run the Internet in a VRF to get watertight separation between
management and the Internet. I do also have a CGN vrf but that one has very
few routes in it (99% being subscriber management created, eg. one route
per customer). Why would this create a scaling issue? If you collapse our
three routing tables into one, you would have exactly the same number of
routes. All we did was separate the routes into namespaces, to establish a
firewall that prevents traffic to flow where it shouldn't.

Regards,

Baldur
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 20/Jun/20 11:27, Baldur Norddahl wrote:

>
>
> We run the Internet in a VRF to get watertight separation between
> management and the Internet. I do also have a CGN vrf but that one has
> very few routes in it (99% being subscriber management created, eg.
> one route per customer). Why would this create a scaling issue? If you
> collapse our three routing tables into one, you would have exactly the
> same number of routes. All we did was separate the routes into
> namespaces, to establish a firewall that prevents traffic to flow
> where it shouldn't.

It may be less of an issue in 2020 with the current control planes and
how far the code has come, but in the early days of l3vpn's, the number
of VRF's you could have was inversely proportional to the number of
routes you had in each one. More VRF's, fewer routes for each. More
routes per VRF, fewer VRF's in total.

I don't know if that's still an issue today, as we don't run the
Internet in a VRF. I'd defer to those with that experience, who knew
about the scaling limitations of the past.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
Mark Tinka wrote:

>> At the time I proposed label switching, there already was RSVP
>> but RSVP-TE was proposed long after MPLS was proposed.
>
> RSVP failed to take off, for whatever reason (I can think of many).

There are many. So, our research group tried to improve RSVP.

Practically, the most serious problem of RSVP is, like OSPF, using
unreliable link multicast to reliably exchange signalling messages
between routers, making specification and implementations very
complicated.

So, we developed SRSVP (Simple RSVP), replacing link multicast with a
link-local TCP mesh, as in BGP (thanks to the CATENET model, unlike
BGP, there is no scalability concern). Then, it was not so difficult
to remove the other problems.

However, perhaps, most people think the show stopper for RSVP is the lack
of scalability of weighted fair queueing, though it is not a
problem specific to RSVP and MPLS shares the same problem.

Obviously, weighted fair queueing does not scale because it is
based on the deterministic traffic model of the token bucket
and, these days, people just use some ad-hoc ways of BW
guarantee, implicitly assuming a stochastic traffic model. I
even developed a little formal theory on scalable queueing
with a stochastic traffic model.
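
For readers who haven't met it, the deterministic model being criticised is
the token bucket: a flow conforming to rate r and burst b never sends more
than r*t + b bytes in any interval t, and per-flow guarantees (WFQ/IntServ
style) mean the network holds one of these per reserved flow. A minimal
sketch, just to pin down the model:

import time

class TokenBucket:
    """Deterministic (r, b) traffic envelope: at most r*t + b bytes in any interval t."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                      # inside the (r, b) envelope
        return False                         # out of profile

# One such object (plus a queue) per reserved flow is exactly the per-flow
# state whose scalability is being questioned above.
print(TokenBucket(rate_bps=1_000_000, burst_bytes=15_000).conforms(1500))   # True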

So, we have a specification and working implementation of a
hop-by-hop, scalable, stable unicast/multicast inter-domain
QoS routing protocol supporting routing hierarchy without
crankback.

See

http://www.isoc.org/inet2000/cdproceedings/1c/1c_1.htm

for rough description of design guideline.

> I'm not sure any network operator, today, would allow an end-host to
> make reservation requests in their core.

I didn't attempt to standardize our result in IETF, partly
because optical packet switching was a lot more interesting.

> Even in the Transport world, this was the whole point of GMPLS. After
> they saw how terrible that idea was, it shifted from customers to being
> an internal fight between the IP teams and the Transport teams.
> Ultimately, I don't think anybody really cared about routers
> automatically using GMPLS to reserve and direct the DWDM network.

That should be a reasonable way of practical operation, though I'm
not very interested in the OCS (optical circuit switching) of GMPLS.

> In our Transport network, we use GMPLS/ASON in the Transport network
> only. When the IP team needs capacity, it's a telephone job :-).

For the IP layer, that should be enough. For ASON, something as
complicated as GMPLS is actually overkill.

When I was playing with ATM switches, I established a control
plane network with VPI/VCI=0/0 and assigned control plane IP
addresses to the ATM switches. To control other VCs, simple UDP
packets were sent to the switches from controlling hosts.

Similar technology should be applicable to ASON. Maintaining
integrity between wavelength switches is the responsibility
of the controllers.

>> Remember that the original point of MPLS was that it should work
>> scalably without a lot of configuration, which is not the reality
>> recognized by people on this thread.
>
> Well, you get the choice of LDP (low-touch) or RSVP-TE (high-touch).

No, I just explained what was advertised to be MPLS by people
around Cisco against Ipsilon.

According to the advertisements, you should call what you
are using LS or GLS, not MPLS or GMPLS.

> We don't use RSVP-TE because of the issues you describe above.
>
> We use LDP to avoid the issues you describe above.

Good.

> In the end, SR-MPLS is meant to solve this issue for TE requirements. So
> the signaling state-of-the-art improves with time.

Assuming a central controller (and its collocated or distributed
backup controllers), we don't need complicated protocols in
the network to maintain the integrity of the entire network.

>> That is certainly a problem. However, a worse problem is knowing the
>> label values nested deeply in the MPLS label chain.
>
> Why, how, is that a problem? For load balancing?

What if an inner label becomes invalidated around the
destination, which is hidden, for route scalability,
from the equipment around the source?

>> Even worse, if the router near the destination expected to pop the label
>> chain goes down, how can the source know that the router has gone down
>> and choose an alternative router near the destination?
>
> If by source you mean end-host, if the edge router they are connected to
> only ran IP and they were single-homed, they'd still go down.

No, as "the destination expected to pop the label" is located somewhere
around the final destination end-host.

If, at the destination site, connectivity between a router to pop nested
label and the final destination end-host is lost, we are at a loss,
unless the source side changes the inner label.

>> MPLS with hierarchical routing just does not scale.
>
> With Internet in a VRF, I truly agree.
>
> But if you run a simple global BGP table and no VRF's, I don't see an
> issue. This is what we do, and our scaling concerns are exactly the same
> whether we run plain IP or IP/MPLS.

If you are using intra-domain hierarchical routing for
scalability within the domain, you still suffer from
lack of scalability of MPLS.

And, VRF is, in a sense, a form of intra-domain hierarchical
routing with a lot of flexibility, which means a lot of
unnecessary complications.

Masataka Ohta
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 20/Jun/20 14:41, Masataka Ohta wrote:

>  
> There are many. So, our research group tried to improve RSVP.

I'm a lot younger than the Internet, but I read a fair bit about its
history. I can't remember ever coming across an implementation of RSVP
between a host and the network in a commercial setting. If I missed it,
kindly share, as I'd be keen to see how that went.


>
> Practically, the most serious problem of RSVP is, like OSPF, using
> unreliable link multicast to reliably exchange signalling messages
> between routers, making specification and implementations very
> complicated.
>
> So, we developed SRSVP (Simple RSVP) replacing link multicast by,
> like BGP, link local TCP mesh (thanks to the CATENET model, unlike
> BGP, there is no scalability concern). Then, it was not so difficult
> to remove other problems.

Was "S-RSVP" ever implemented, and deployed?


>
> However, perhaps, most people think show stopper to RSVP is lack
> of scalability of weighted fair queueing, though, it is not a
> problem specific to RSVP and MPLS shares the same problem.

QoS has nothing to do with MPLS. You can do QoS with or without MPLS.

I should probably point out, also, that RSVP (or RSVP-TE) is not MPLS.
They collaborate, yes, but we'd be doing the community a disservice by
interchanging them for one another.


>
> Obviously, weighted fair queueing does not scale because it is
> based on deterministic traffic model of token bucket model
> and, these days, people just use some ad-hoc ways for BW
> guarantee implicitly assuming stochastic traffic model. I
> even developed a little formal theory on scalable queueing
> with stochastic traffic model.

Maybe so, but I still don't see the relation to MPLS.

All MPLS can do is convey IPP or DSCP values as an EXP code point in the
core. I'm not sure how that creates a scaling problem within MPLS itself.

If you didn't have MPLS, you'd be encoding those values in IPP or DSCP.
So what's the issue?
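
For what it's worth, the EXP/TC field is only 3 bits, so the common default
(configurable per platform, so treat this as an assumption rather than a
rule) is simply to copy the class-selector part of the DSCP, i.e. its top
three bits:

# Typical default-style DSCP (6 bits) to MPLS EXP/TC (3 bits) mapping:
# keep the IP Precedence / class-selector bits, i.e. the top three bits.
def dscp_to_exp(dscp: int) -> int:
    return (dscp >> 3) & 0b111

print(dscp_to_exp(46))   # EF   (101110) -> 5
print(dscp_to_exp(26))   # AF31 (011010) -> 3
print(dscp_to_exp(0))    # BE   (000000) -> 0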


>
> So, we have specification and working implementation of
> hop-by-hop, scalable, stable unicast/multicast interdomain
> QoS routing protocol supporting routing hierarchy without
> clank back.
>
> See
>
>     http://www.isoc.org/inet2000/cdproceedings/1c/1c_1.htm
>
> for rough description of design guideline.
>

If I understand this correctly, would this be the IntServ QoS model?

 
>
> I didn't attempt to standardize our result in IETF, partly
> because optical packet switching was a lot more interesting.

Still is, even today :-)?


> That should be a reasonable way of practical operation, though I'm
> not very interested in OCS (optical circuit switching) of GMPLS

Design goals are often what they are, and then the real world hits you.



> For IP layer, that should be enough. For ASON, so complicated
> GMPLS is actually overkill.
>
> When I was playing with ATM switches, I established a control
> plane network with VPI/VCI=0/0 and assigned control plane IP
> addresses to the ATM switches. To control other VCs, simple UDP
> packets were sent to the switches from controlling hosts.
>
> Similar technology should be applicable to ASON. Maintaining
> integrity between wavelength switches is responsibility
> of controllers.

Well, GMPLS and ASON are basically skinny OSPF, IS-IS and RSVP running in
a DWDM node's control plane.


>
> No, I just explained what was advertised to be MPLS by people
> around Cisco against Ipsilon.
>
> According to the advertisements, you should call what you
> are using LS or GLS, not MPLS or GMPLS.

It takes a while for new technology to be fully understood, which is why
I'm not rushing on to the SR bandwagon :-).

I can't blame the sales droids or the customers of the day. It probably
sounded like dark magic.


> Assuming a central controller (and its collocated or distributed
> backup controllers), we don't need complicated protocols in
> the network to maintain the integrity of the entire network.

Well, that's a point of view, I suppose.

I still can't walk into a shop and "buy a controller". I don't know what
this controller thing is, 10 years on.

IGP's, BGP and label distribution protocols have proven themselves, in
the interim.


> What if an inner label becomes invalidated around the
> destination, which is hidden, for route scalability,
> from the equipment around the source?

I can't say I've ever come across that scenario running MPLS since 2004.

Do you have an example from a production network that you can share with
us? I'd really like to understand this better.


> No, as "the destination expected to pop the label" is located somewhere
> around the final destination end-host.
>
> If, at the destination site, connectivity between a router to pop nested
> label and the final destination end-host is lost, we are at a loss,
> unless the source side changes the inner label.

Maybe a diagram would help, as I still don't get this failure scenario.

If a host lost connectivity with the service provider network, getting
label switching to work is pretty low on the priority list.

Again, unless I misunderstand.


>
> If you are using intra-domain hierarchical routing for
> scalability within the domain, you still suffer from
> lack of scalability of MPLS.
>
> And, VRF is, in a sense, a form of intra-domain hierarchical
> routing with a lot of flexibility, which means a lot of
> unnecessary complications.

I don't think stuffing your VRF's full of routes is an intrinsic problem
of MPLS.

MPLS works whether you run l3vpn's or not. That MPLS provides a
forwarding paradigm for VRF's does not put it and the potentially poor
scalability of VRF's in the same WhatsApp group.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On Sat, Jun 20, 2020 at 12:38 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 20/Jun/20 11:27, Baldur Norddahl wrote:
>
>
>>
> We run the Internet in a VRF to get watertight separation between
> management and the Internet. I do also have a CGN vrf but that one has very
> few routes in it (99% being subscriber management created, eg. one route
> per customer). Why would this create a scaling issue? If you collapse our
> three routing tables into one, you would have exactly the same number of
> routes. All we did was separate the routes into namespaces, to establish a
> firewall that prevents traffic to flow where it shouldn't.
>
>
> It may be less of an issue in 2020 with the current control planes and how
> far the code has come, but in the early days of l3vpn's, the number of
> VRF's you could have was inversely proportional to the number of routes you
> had in each one. More VRF's, fewer routes for each. More routes per VRF,
> fewer VRF's in total.
>
> I don't know if that's still an issue today, as we don't run the Internet
> in a VRF. I'd defer to those with that experience, who knew about the
> scaling limitations of the past.
>
>
I can't speak for the year 2000 as I was not doing networking at this level
at that time. But when I check the specs for the base mx204 it says
something like 32 VRFs, 2 million routes in FIB and 6 million routes in
RIB. Clearly those numbers are the total of routes across all VRFs,
otherwise you arrive at silly numbers (64 million FIB if you multiply, 62.5k
FIB if you divide by 32). My conclusion is that scale-wise you are OK as
long as you do not try to have more than one VRF with a complete copy of the
DFZ.
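
Spelling the sanity check out (datasheet figures as quoted above, nothing
more):

fib_routes = 2_000_000
vrfs = 32

print(fib_routes * vrfs)    # 64000000 -- implausible if the limit were per VRF
print(fib_routes // vrfs)   # 62500 -- far too small if it had to be split evenly
# Hence the sensible reading: the FIB/RIB limits are shared across all VRFs.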

More worrying is that 2 million routes will soon not be enough to install
all routes with a backup route, invalidating BGP FRR.

Regards,

Baldur
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
On 20/Jun/20 00:41, Anoop Ghanwani wrote:

> One of the advantages cited for SRv6 over MPLS is that the packet
> contains a record of where it has been.

I can't see how advantageous that is, or how possible it would be to
implement, especially for inter-domain traffic.

Mark.
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
----- On Jun 20, 2020, at 2:27 PM, Mark Tinka <mark.tinka@seacom.mu> wrote:

Hi Mark,

> On 20/Jun/20 00:41, Anoop Ghanwani wrote:

>> One of the advantages cited for SRv6 over MPLS is that the packet contains a
>> record of where it has been.

> I can't see how advantageous that is,

That will be very advantageous in a datacenter environment, or any other
environment dealing with a lot of ECMP paths.

I can't tell you how often during my eBay time I've been troubleshooting
end-to-end packet loss between hosts in two datacenters where there were at
least 10 or more layers of up to 16-way ECMP between them. Having a record of
which path is being taken by a packet is very helpful for finding the one with
a crappy transceiver.
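
The reason the path is otherwise so hard to predict: at every layer the
switch picks one of its equal-cost next hops from a hash of the packet's
5-tuple, and the hash function and seed differ per platform. A rough sketch
of that per-hop choice (the hash here is invented purely for illustration,
not any vendor's algorithm):

import hashlib

def ecmp_next_hop(src, dst, proto, sport, dport, next_hops, seed=0):
    # Pick one of the equal-cost next hops from a 5-tuple hash (illustrative only).
    key = f"{seed}|{src}|{dst}|{proto}|{sport}|{dport}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return next_hops[bucket % len(next_hops)]

# With ~10 layers of up-to-16-way ECMP, one flow threads a single path out of
# an enormous number of candidates -- hence the value of recording it in-band.
hops = [f"leaf{i}" for i in range(16)]
print(ecmp_next_hop("2001:db8::1", "2001:db8::2", 6, 49152, 443, hops))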

> or how possible it would be to implement,

That work is already underway, albeit not specifically for MPLS. For example,
I've worked with an experimental version of In-Band Network Telemetry (INT)
as described in this draft: https://tools.ietf.org/html/draft-kumar-ippm-ifa-02

I even demonstrated a very basic implementation during SuperCompute 19 in Denver
last year. Most people who were interested in the demo were academics, however,
probably because it wasn't a real networking event.

Note that there are several caveats that come with this draft and previous
versions, and that it is still very much work in progress. But the potential is
huge, at least in the DC.

> especially for inter-domain traffic.

That's a different story, but not entirely impossible. A probe packet can
be sent across AS borders, and as long as the two NOCs are cooperating, the
entire path can be reconstructed.

Thanks,

Sabri
Re: Devil's Advocate - Segment Routing, Why? [ In reply to ]
> On Jun 20, 2020, at 2:27 PM, Mark Tinka <mark.tinka@seacom.mu> wrote:
>
>
>
> On 20/Jun/20 00:41, Anoop Ghanwani wrote:
>
>> One of the advantages cited for SRv6 over MPLS is that the packet contains a record of where it has been.
>
> I can't see how advantageous that is, or how possible it would be to implement, especially for inter-domain traffic.
>
> Mark.
>

Since the packet is essentially source-routed, and the labels aren’t popped off the way they are in MPLS, but preserved in the hop by hop headers (AIUI), the implementation isn’t particularly difficult.

Owen
