Mailing List Archive

Re: LDPv6 Census Check [ In reply to ]
On 12/Jun/20 17:19, David Sinn wrote:

> Yes. Path enumeration when you use multi-tier Clos topologies within a PoP causes you many, many problems.

Okay, got you. I thought you were running into these problems on the
"usual suspect" platforms.

Yes, commodity hardware certainly has a number of trade-offs for the
cost benefit. We've been evaluating this path since 2013, and for our
use-case, it actually makes less sense, because we are more about large
capacity in a small footprint, i.e., a typical network service provider.

If we were a cloud provider operating at scale in several data centres,
our current model of running Ciscos, Junipers, Aristas, etc., might not
necessarily be the right choice in 2020, particularly at the edge.

Mark.
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: LDPv6 Census Check [ In reply to ]
On Fri, 12 Jun 2020 at 23:25, David Sinn <dsinn@dsinn.com> wrote:

> > Should we design a rational cost-efficient solution, we should choose
> > the lowest overhead and narrowest working keys.
>
> In the abstract, sure. But if you want a practical, deployable, production network, it's multi-dimensioned.

We have probably largely converged to the same place. Your vantage
point sees practical offerings where IPIP may make more sense to you
than MPLS; my vantage point definitely only implements the rich
features I need in MPLS tunnels (RSVP-TE, L2 pseudowires, FRR, L3 MPLS
VPN, all of which could, of course, technically be done with IPIP
tunnels). And in theory we agree that less is more.

ECMP appears to be your main pain point, the rich features are not
relevant, and you mentioned commodity hardware being able to hash on
IPIP. I feel this may be a very special case where the HW can do an
IPIP hash but not an MPLSIP hash. Out of curiosity, what is this
hardware? Jericho can do MPLSIP, and I know JNPR's pipeline offering,
Paradise, can. Or perhaps it's not even that the underlying hardware
you have cannot do it; it's that the NOS you are being offered is so
focused on your use-case that it doesn't do anything else reasonably,
and then of course that use-case wins by default.

--
++ytti
Re: LDPv6 Census Check [ In reply to ]
On 13/Jun/20 08:00, Saku Ytti wrote:

>
> ECMP appears to be your main pain point, the rich features are not
> relevant, and you mentioned commodity hardware being able to hash on
> IPIP. I feel this may be a very special case where the HW can do an
> IPIP hash but not an MPLSIP hash. Out of curiosity, what is this
> hardware? Jericho can do MPLSIP, and I know JNPR's pipeline offering,
> Paradise, can. Or perhaps it's not even that the underlying hardware
> you have cannot do it; it's that the NOS you are being offered is so
> focused on your use-case that it doesn't do anything else reasonably,
> and then of course that use-case wins by default.

One of the biggest challenges we found in leveraging commodity hardware
was locating a suitable OS that is not only fit-for-purpose in our
service provider environment, but that could also leverage the hardware
at its disposal to its fullest potential.

It's hard enough for one vendor to get both their own hardware and
software right most of the time. We posited it would be doubly hard for
an operator to marry hardware and software vendors that do not
necessarily co-ordinate with one another, if your goal is to run a
profit-oriented operational network.

Sure, the idea is great on paper, but there aren't that many shops that
can throw warm bodies at this problem like some of the more established
content and cloud folk.

If it was easy, I certainly wouldn't have started this thread in the
first place :-).

Mark.

Re: LDPv6 Census Check [ In reply to ]
On Fri, Jun 12, 2020 at 10:22 PM David Sinn <dsinn@dsinn.com> wrote:

> Except that is actually the problem if you look at it in hardware. And to
> be very specific, I'm talking about commodity hardware, not flexible
> pipelines like you find in the MX and a number of the ASR's. I'm also
> talking about the more recent approach of using Clos in PoP's instead of
> "big iron" or chassis based systems.
>

TE gives you the most powerful traffic-engineering tool kit available.
Naturally it has a bit more weight than just a single screwdriver. It
lets you build nearly any kind of multipath transport, while that Clos
thing is just one architecture hunting for the cheapest implementation
of IP/LDP-style ECMP.

On those boxes, it's actually better to not do shared labels, as this
> pushes the ECMP decision to the ingress node. That does mean you have to
> enumerate every possible path (or some approximation) through the network,
> however the action on the commodity gear is greatly reduced. It's a pure
> label swap, so you don't run into any egress next-hop problems. You
> definitely do on the ingress nodes. Very, very badly actually.
>

Actually shared labels are not a swap but just a pop, similar to SR. But
indeed this would shift your ECMP issue to the headend. For your ECMP
scaling there would still be an option left: use an implementation which
offers a merge-point with a single label to all upstreams for a certain
equal-cost multipath downstream. This does exist, so it would certainly
fix your ECMP scaling problem. But advanced control-plane code is
certainly not cheap, so in the end, as was already said before, if a
simple and cheap platform can solve all your needs then it might be the
better one. Let's see what problems we need to solve in five years again.

What I'm getting at is that IP allows re-write sharing in that what needs
> to change on two IP frames taking the same paths but ultimately reaching
> different destinations are re-written (e.g. DMAC, egress-port) identically.
> And, at least with IPIP, you are able to look at the inner-frame for ECMP
> calculations. Depending on your MPLS design, that may not be the case. If
> you have too deep of a label stack (3-5 depending on ASIC), you can't look
> at the payload and you end up with polarization.
>

Not really, as you are still forced to rewrite on imposition for the
simplest form of tunneling, and for TE as often as you need to go against
your SPT as well; it's just happening on IP (and IP rewrites are more
expensive than MPLS rewrites / forwarding operations).
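A toy Python sketch of the polarization point raised earlier in the thread: if the label stack is deeper than the chip can parse, the hash never sees the inner IP 5-tuple, so every flow lands on the same path. All numbers here (parse depth, label values, path count) are hypothetical, purely for illustration; real ASIC behaviour varies.

```python
import hashlib

# Toy model of an ECMP hash on a fixed-function chip that can only
# parse a few labels deep. All numbers are made up for illustration.

MAX_PARSE_DEPTH = 3  # labels the chip can look past (varies by ASIC)

def hash_fields(label_stack, inner_5tuple):
    """Fields visible to the hash: if the stack is deeper than the
    chip can parse, the inner IP 5-tuple is invisible."""
    if len(label_stack) > MAX_PARSE_DEPTH:
        return [str(l) for l in label_stack]            # labels only
    return [str(l) for l in label_stack] + list(inner_5tuple)

def ecmp_member(fields, n_paths):
    """Deterministically map the visible fields to one of n_paths."""
    digest = hashlib.sha256("|".join(fields).encode()).digest()
    return digest[0] % n_paths

deep_stack = [100, 200, 300, 400]   # 4 labels: too deep to see past
flows = [("10.0.0.1", "10.0.0.2", "tcp", str(sport), "443")
         for sport in range(10000, 10032)]

buckets = {ecmp_member(hash_fields(deep_stack, f), 8) for f in flows}
print(len(buckets))  # 1 -> all 32 flows polarize onto a single path
```

With a shallower stack the inner 5-tuple becomes visible and the flows spread out; the point is only that the hash input, not the traffic, decides the spread.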
Re: LDPv6 Census Check [ In reply to ]
> From: David Sinn
> Sent: Friday, June 12, 2020 4:19 PM
>
> > On Jun 11, 2020, at 2:02 PM, Mark Tinka <mark.tinka@seacom.mu> wrote:
> >
> > On 11/Jun/20 17:32, David Sinn wrote:
> >
> >> Respectfully, that is deployment dependent. In a traditional SP
topology
> that focuses on large do everything boxes, where the topology is fairly
point-
> to-point and you only have a small handful of nodes at a PoP, labels can
be
> fast, cheap and easy. Given the lack of ECMP/WECMP, they remain fairly
> efficient within the hardware.
> >>
> >> However if you move away from large multi-chip systems, which hide
> internal links which can only be debugged and monitored if you know the
the
> obscure, often different ways in which they are partially exposed to the
> operator, and to a system of fixed form-factor, single chip systems,
labels fall
> apart at scale with high ECMP.
> >
> > I'm curious about this statement - have you hit practical ECMP issues
> > with label switching at scale?
>
> Yes. Path enumeration when you use mult-tier Clos topologies within a PoP
> causes you many, many problem.
>
Hi David,

Can you be more specific please? Maybe some examples with numbers.

I can see how you might run out of L2 rewrite/adjacency table space on a
particular node if you enumerate every possible path downstream of it
(especially on leaf nodes), because that number depends on the size of
the fabric in terms of the total number of links in the fabric (which
balloons quickly).
Let's focus on the alternate case then (the deep label-stack one).
At each node in the multi-tier Clos (which I assume is Russ White's
butterfly model? But any Clos or Benes fabric needs the same) you need to
program a label to uniquely identify each egress interface, so now there's
this nice one-to-one relationship between label and egress interface. Now
the depth of the label stack depends on the number of hops the packet
needs to traverse across the fabric. But how deep could the fabric
realistically be? Even in the "butterfly" model with separate pods instead
of leaf nodes I counted 9 hops (that's not ultra-deep, is it?). It's the
VM doing label imposition as programmed by the fabric controller (all in
SW, so it can go as deep as you want) and all fabric nodes are just
popping the top label, so no big deal.
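To make the counting concrete, here is a toy Python sketch (fan-out and tier counts are made up, purely illustrative): per-hop labels keep the stack only as deep as the hop count, while enumerating paths end-to-end multiplies out the fan-out at every tier.

```python
# Toy counting for a Clos-style fabric; all sizes are hypothetical.

def distinct_paths(fan_out, tiers):
    """Equal-cost paths between two edge ports: every tier of the
    fabric multiplies the path count by its fan-out."""
    paths = 1
    for _ in range(tiers):
        paths *= fan_out
    return paths

def label_stack_depth(hops):
    """One label per hop (label <-> egress interface), so the stack
    is only as deep as the hop count, e.g. 9 in the 'butterfly' case."""
    return hops

print(distinct_paths(32, 3))  # 32768 paths to enumerate at the headend
print(label_stack_depth(9))   # 9 labels, with every node just popping
```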

adam



Re: LDPv6 Census Check [ In reply to ]
> David Sinn
> Sent: Friday, June 12, 2020 4:42 PM
>
> > On Jun 12, 2020, at 8:26 AM, Saku Ytti <saku@ytti.fi> wrote:
> >
> > On Fri, 12 Jun 2020 at 18:16, David Sinn <dsinn@dsinn.com> wrote:
> >
> The label stack question is about the comparisons between the two
> extremes of SR that you can be in. You either label your packet just for
> its ultimate destination or you apply the stack of the points you want to
> pass through.
>
> In the former case you are, at the forwarding plane, equal to what you
> see with traditional MPLS today, with every node along the path needing
> to know how to reach the end-point. Yes, you have lowered label space
> from traditional MPLS, but that can be done with site-cast labels
> already. And, while the nodes don't have to actually swap labels, when
> you look at commodity implementations (across the last three generations,
> since you want to do this with what is deployed, not wholesale replace
> the network) a null swap still ends up eating a unique egress next-hop
> entry. So from a hardware perspective, you haven't improved anything.
> Your ECMP group count is high.
>
Yes, this is where each node needs to have a label uniquely identifying
every LSP passing through it.
Saku,
With an IP header you don't need this. Consider this:
PE1 to PE2 via 3 P-core nodes.
With ECMP in IP, PE1 just needs a single FEC, the DST-IP of PE2, which
will be load-shared across all 3 paths.
Using MPLS, if you need to uniquely identify each path, you need 3 FECs
(3 LSPs, one via each P-core node). Now imagine you have 100K possible
paths across the fabric - that's a lot of FECs on PE1, or on any node in
the fabric, where each has to have a unique label for every possible
unique path via the core that the particular node is part of.
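The FEC arithmetic above can be sketched in a few lines of Python (topology numbers are taken from the example, purely illustrative):

```python
# FEC counts at the headend (PE1), per the PE1 -> PE2 example above.

def fecs_ip_ecmp(destinations):
    """IP ECMP: one FEC per destination; the 3-way spread happens in
    the load-shared next-hop, not in extra FECs."""
    return destinations

def fecs_mpls_per_path(destinations, paths_each):
    """One FEC/LSP per uniquely identified path, per destination."""
    return destinations * paths_each

print(fecs_ip_ecmp(1))                  # 1 FEC for PE2
print(fecs_mpls_per_path(1, 3))         # 3 FECs, one LSP via each P node
print(fecs_mpls_per_path(1, 100_000))   # 100000 FECs across the fabric
```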

> In the extreme latter case, you have to, on ingress, place the full
> stack of every "site" you want to pass through. That has the benefit
> that "sites" only need labels for their directly connected sites, so you
> have optimized the implications on commodity hardware. However, you now
> have a label stack that can be quite tall. At least if you want to walk
> the long way around the world, say due to failure. On top, that depth of
> label stack means devices in the middle can't look at the original
> payload to make ECMP decisions. So you can turn to entropy labels, but
> that sort of makes matters worse.
>
David,
You can use a hierarchy of tunnels to help with the deep label-stack
imposition problem.

adam

Re: LDPv6 Census Check [ In reply to ]
On Mon, 15 Jun 2020 at 12:24, <adamv0025@netconsultings.com> wrote:


> Yes this is where each node needs to have a label uniquely identifying every
> LSP passing through it.
> Saku,
> With IP header you don't need this,
> Consider this:
> PE1 to PE2 via 3 P-core nodes
> With ECMP in IP, then PE1 just needs single FEC the DST-IP of PE2, which
> will be load-shared across all 3 paths.
> Using MPLS If you need to uniquely identify each path you need 3 FECs (3
> LSPs one via each P core node), now imagine you have 100K possible paths
> across the fabric
> -that's a lot of FECs on PE1 or any node in the fabric where each has to
> have a unique label for every possible unique path via the core that the
> particular node is part of.

Are we talking about specific implementations or fundamentals? It
sounds like we are talking about a specific case where the IP next-hop
is a unilist of N next-hops, and the MPLS next-hop is a single item
without indirection? This is not a fundamental difference; this is an
implementation detail.
There is no particular reason an MPLS next-hop couldn't be a unilist of
N destinations.

I think people are too focused on thinking IP and MPLS have some
inherent magical property differences; they don't. We only care about
lookup cost (again, IP can be made cheap with IPinIPinIP tunnels and
telling LSR devices all lookups are LEM host lookups) and we care
about key width. The rest is implementation detail.

Yes, in the typical case there are some biases in IP and MPLS, but these
can be rendered away, leaving the fundamental differences. Bridging the
gap from MPLS to IP is far easier than bridging the gap from IP to
MPLS; there is so much added value which depends on MPLS tunnels.

--
++ytti
Re: LDPv6 Census Check [ In reply to ]
> From: Saku Ytti <saku@ytti.fi>
> Sent: Monday, June 15, 2020 10:31 AM
>
> On Mon, 15 Jun 2020 at 12:24, <adamv0025@netconsultings.com> wrote:
>
>
> > Yes this is where each node needs to have a label uniquely identifying
> > every LSP passing through it.
> > Saku,
> > With IP header you don't need this,
> > Consider this:
> > PE1 to PE2 via 3 P-core nodes
> > With ECMP in IP, then PE1 just needs single FEC the DST-IP of PE2,
> > which will be load-shared across all 3 paths.
> > Using MPLS If you need to uniquely identify each path you need 3 FECs
> > (3 LSPs one via each P core node), now imagine you have 100K possible
> > paths across the fabric -that's a lot of FECs on PE1 or any node in
> > the fabric where each has to have a unique label for every possible
> > unique path via the core that the particular node is part of.
>
> Are we talking about specific implementations or fundamentals? It sounds
> like we are talking about a specific case where the IP next-hop is a
> unilist of N next-hops, and the MPLS next-hop is a single item without
> indirection? This is not a fundamental difference; this is an
> implementation detail.
> There is no particular reason an MPLS next-hop couldn't be a unilist of
> N destinations.
>
Yes, it can indeed, and that's moving towards the centre between the extreme cases that David laid out.
It's about how granular one wants to be in identifying an end-to-end path between a pair of edge nodes.
I agree with you that MPLS is still better than IP,
and I tried to illustrate that even enumerating every possible path using a deep label stack is not a problem (and even that can be alleviated using a hierarchy of LSPs).

adam


Re: LDPv6 Census Check [ In reply to ]
On Mon, 15 Jun 2020 at 12:46, <adamv0025@netconsultings.com> wrote:

> Yes it can indeed, and that's moving towards the centre between the extreme cases that David laid out.
> It's about how granular one wants to be in identifying an end-to-end path between a pair of edge nodes.
> I agree with you that MPLS is still better than IP,
> and I tried to illustrate that even enumerating every possible path using a deep label stack is not a problem (and even that can be alleviated using a hierarchy of LSPs).

The entirety of my point is, if we were rational, we'd move towards
increasingly efficient solutions. And technically everything we do in
MPLS tunnels we can do in IP tunnels, and conversely. Should we imagine
a future where all features and functions are supported in both, it's
clear we should want to do MPLS tunnels. Just the [IGP][BGP-LU] 8B
overhead, compared to the IP 40B overhead, should drive the point home;
and ultimately, that's the only difference, the rest is implementation.
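The overhead arithmetic is easy to spell out (4 bytes per MPLS label-stack entry, 40 bytes for a fixed IPv6 header; the payload size is an arbitrary example):

```python
MPLS_LABEL_ENTRY = 4   # bytes per MPLS label-stack entry
IPV6_HEADER = 40       # bytes, fixed IPv6 header

mpls_overhead = 2 * MPLS_LABEL_ENTRY   # [IGP][BGP-LU] two-label stack
ip_overhead = IPV6_HEADER              # one outer IPv6 header

payload = 1500
print(mpls_overhead, ip_overhead)  # 8 40
# Percentage of extra bytes carried per packet:
print(round(100 * mpls_overhead / (payload + mpls_overhead), 2))  # 0.53
print(round(100 * ip_overhead / (payload + ip_overhead), 2))      # 2.6
```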

And I'm saddened we've been marketed snake-oil like SRv6 with fake
promises of inherent advantages or simplicity 'just IP'.

We can do better than MPLS, absolutely. But IP is worse.

--
++ytti
Re: LDPv6 Census Check [ In reply to ]
> From: Saku Ytti <saku@ytti.fi>
> Sent: Monday, June 15, 2020 11:02 AM
>
> On Mon, 15 Jun 2020 at 12:46, <adamv0025@netconsultings.com> wrote:
>
> > Yes it can indeed, and that's moving towards the centre between the
> extreme cases that David laid out.
> > It's about how granular one wants to be in identifying an end-to-end path
> between a pair of edge nodes.
> > I agree with you that MPLS is still better than IP, and I tried to
> > illustrate that even enumerating every possible paths using deep label
> stack is not a problem (and even that can be alleviated using hierarchy of
> LSPs).
>
> The entirety of my point is, if we were rational, we'd move towards
> increasingly efficient solutions. And technically everything we do in
> MPLS tunnels we can do in IP tunnels, and conversely. Should we imagine
> a future where all features and functions are supported in both, it's
> clear we should want to do MPLS tunnels. Just the [IGP][BGP-LU] 8B
> overhead, compared to the IP 40B overhead, should drive the point home;
> and ultimately, that's the only difference, the rest is implementation.
>
> And I'm saddened we've been marketed snake-oil like SRv6 with fake
> promises of inherent advantages or simplicity 'just IP'.
>
> We can do better than MPLS, absolutely. But IP is worse.
>
Yes, I absolutely agree.

Not to mention this whole thread is focused solely on next-hop identification - which is just the lowest of the layers of abstraction in the vertical stack.
We haven't talked about other "entities" that need identification, like VPNs, applications, policies (yes, I'm looking at you, VXLAN!), etc. - all of which are way better identified by a simple label rather than IPinIPinIP....

adam

Re: LDPv6 Census Check [ In reply to ]
> ECMP appears to be your main pain point, the rich features are not
> relevant, and you mentioned commodity hardware being able to hash on
> IPIP. I feel this may be a very special case where the HW can do an
> IPIP hash but not an MPLSIP hash. Out of curiosity, what is this
> hardware? Jericho can do MPLSIP, and I know JNPR's pipeline offering,
> Paradise, can. Or perhaps it's not even that the underlying hardware
> you have cannot do it; it's that the NOS you are being offered is so
> focused on your use-case that it doesn't do anything else reasonably,
> and then of course that use-case wins by default.

I'm referring to the DC-class, not SP-class, chips. So XGS if you are a Broadcom fan, or any of the half a dozen other options. Because if you really are going to go commodity, why go commodity when there is only one option (DNX), versus multiple, with more coming in at a decent pace, in the DC class? You're really just trading one lock-in for another if you stick to SP-class chips in the commodity space, at least until there are more alternatives.

David
Re: LDPv6 Census Check [ In reply to ]
> What I'm getting at is that IP allows re-write sharing in that what needs to change on two IP frames taking the same paths but ultimately reaching different destinations are re-written (e.g. DMAC, egress-port) identically. And, at least with IPIP, you are able to look at the inner-frame for ECMP calculations. Depending on your MPLS design, that may not be the case. If you have too deep of a label stack (3-5 depending on ASIC), you can't look at the payload and you end up with polarization.
>
> Not really, as you are still forced to rewrite on imposition for the simplest form of tunneling, and for TE as often as you need to go against your SPT as well; it's just happening on IP (and IP rewrites are more expensive than MPLS rewrites / forwarding operations).

Sure, but not following the SPT in IP isn't rocket science. You can do it using traditional protocols if you really want to, or you can write a controller to do it for you. And it doesn't take hundreds of people to do so. It doesn't even take 10. So, yes, you need to justify the funding of those people, so mileage will vary based on the size and scope of your network.

David
Re: LDPv6 Census Check [ In reply to ]
On 15/Jun/20 12:13, adamv0025@netconsultings.com wrote:

> Not to mention this whole thread is focused solely on next-hop identification -which is just the lowest of the layers of abstraction in the vertical stack.
> We haven’t talked about other "entities" that need identification like: VPNs, applications, policies (yes I'm looking at you VXLAN!) etc... - all of which are way better identified by a simple label rather than IPinIPinIP....

The problem is if you want to have MPLS services signaled over IPv6, you
first need an IPv6 control plane that will signal the labels that will
support those services, i.e., LDPv6 and RSVPv6.

Right now, all vendors that support LDPv6 do so only for pure MPLS-IPv6
forwarding. If you want l2vpnv6, l3vpnv6, EVPNv6, 4PE, 4VPE, TE-FRRv6,
etc., you can't get that today. You'd have to signal those services
over LDPv4 or RSVPv4.

The guys did a great gap analysis back in 2015, when LDPv6 began to
appear in IOS XR and Junos, that showed the challenges toward an
IPv6-only MPLS network. I believe much of this is still relevant in 2020:

    https://tools.ietf.org/html/rfc7439

I have to commend Vishwas, together with both Rajiv's, for kicking this
off way back in 2008. The road has been long and hard.

Mark.

Re: LDPv6 Census Check [ In reply to ]
> From: Mark Tinka <mark.tinka@seacom.mu>
> Sent: Monday, June 15, 2020 4:07 PM
>
> On 15/Jun/20 12:13, adamv0025@netconsultings.com wrote:
>
> > Not to mention this whole thread is focused solely on next-hop
> identification -which is just the lowest of the layers of abstraction in the
> vertical stack.
> > We haven’t talked about other "entities" that need identification like:
> VPNs, applications, policies (yes I'm looking at you VXLAN!) etc... - all of
> which are way better identified by a simple label rather than IPinIPinIP....
>
> The problem is if you want to have MPLS services signaled over IPv6, you first
> need an IPv6 control plane that will signal the labels that will support those
> services, i.e., LDPv6 and RSVPv6.
>
> Right now, all vendors that support LDPv6 do so only for pure MPLS-IPv6
> forwarding. If you want l2vpnv6, l3vpnv6, EVPNv6, 4PE, 4VPE, TE-FRRv6,
> e.t.c., you can't get that today. You'd have to signal those services over LDPv4
> or RSVPv4.
>
Hence my earlier comment on why I think it's not commercially feasible to switch to a v6 control plane (the only thing I'd be getting is MPLS-IPv6, which I already have (or have had) via L3VPN-6VPE); time will tell if I ever need to make the switch.
But I'm thankful to you for doing the "ice breaking" for the rest of the community.

adam

Re: LDPv6 Census Check [ In reply to ]
On 16/Jun/20 12:00, adamv0025@netconsultings.com wrote:

> Hence my earlier comment on why I think it's not commercially feasible to switch to v6 control plane,

Personally, I've never been a fan of a single-stack backbone. I can,
however, understand the use-case where a new or growing network is
unable to obtain any more IPv4 space and doesn't want to use RFC 1918
space (yuck!).


> (the only thing I'd be getting is MPLS-IPv6 which I already have (or have had) via L3VPN-6VPE),

Well, not quite.

What you currently have with 6PE is IPv6 tunneled inside MPLSv4 which
runs over IPv4. While you get the benefits of MPLS forwarding for your
IPv6 traffic, you now create a condition where your IPv6 network is in
the hands of your IPv4 network. Granted, there are many folk that run
6PE, so whether the fate-sharing is of concern to you or not is an
exercise left up to the reader. Personally, I'd rather avoid
fate-sharing whenever I can.

On the other hand, MPLSv6 is native, runs over IPv6 and does not depend
on IPv4 at all.

Ultimately, plenty of energy will need to go into supporting the
additional VPN services that go beyond plain-old MPLSv6 switching. But
that can only be promoted with the vendors after we jump the first
hurdle of deploying the 1st application, basic MPLSv6 switching; get
that widely adopted, and create more awareness within the vendor
community about its overall viability.

80% of our network currently runs LDPv6 and switches IPv6 traffic in
MPLSv6. Our immediate task is to get the remaining 20% supported (IOS
XE) as well.


> But I'm thankful to you for doing the "ice breaking" for the rest of the community.

As Eriq La Salle unashamedly claimed in the 1988 Eddie Murphy picture,
Coming to America, "You know me, anything for the kids :-)".

Mark.

Re: LDPv6 Census Check [ In reply to ]
> From: Mark Tinka <mark.tinka@seacom.mu>
> Sent: Tuesday, June 16, 2020 12:09 PM
>
> On 16/Jun/20 12:00, adamv0025@netconsultings.com wrote:
>
> > Hence my earlier comment on why I think it's not commercially feasible
> > to switch to v6 control plane,
>
> Personally, I've never been a fan of a single-stack backbone. I can, however,
> understand the use-case where a new or growing network is unable to
> obtain anymore IPv4 space and don't want to use RFC 1918 space (yuck!).
>
Actually, I was exactly in that situation, and v4 RFC 1918 space worked out just fine.

>
> > (the only thing I'd be getting is MPLS-IPv6 which I already have (or
> > have had) via L3VPN-6VPE),
>
> Well, not quite.
>
> What you currently have with 6PE is IPv6 tunneled inside MPLSv4 which runs
> over IPv4. While you get the benefits of MPLS forwarding for your
> IPv6 traffic, you now create a condition where your IPv6 network is in the
> hands of your IPv4 network. Granted, there are many folk that run 6PE, so
> whether the fate-sharing is of concern to you or not is an exercise left up to
> the reader. Personally, I'd rather avoid fate-sharing whenever I can.
>
I've been dependent solely on v4 all my life, and I still am.
But I see your fate-sharing argument; it's similar to my argument around separate iBGP infrastructure (Route-Reflector planes) for Internet prefixes vs. other customer private VPN prefixes.
But in the multiplanar iBGP case one plane is statistically more likely to fail than the other, whereas in the case of v4 vs. v6 control planes I'd say it's actually the NEW v6 that's more likely hiding some unforeseen bug.
So let me ask the following "devil's advocate" type of question: under the assumption that LDPv6 is a new thing (not as proven as LDPv4), are you taking dependency away by splitting the control plane into v4 and v6, or actually adding dependency, where the NEW v6 control-plane components could negatively affect the existing v4 control-plane components? After all, they share a common RE (or even RPD in Junos).

> On the other hand, MPLSv6 is native, runs over IPv6 and does not depend on
> IPv4 at all.
>
> Ultimately, plenty of energy will need to go into supporting the additional
> VPN services that go beyond plain-old MPLSv6 switching. But that can only be
> promoted with the vendors after we jump the first hurdle of deploying the
> 1st application, basic MPLSv6 switching; get that widely adopted, and create
> more awareness within the vendor community about its overall viability.
>
> 80% of our network currently runs LDPv6 and switches IPv6 traffic in MPLSv6.
> Our immediate task is to get the remaining 20% supported (IOS
> XE) as well.
>
>
> > But I'm thankful to you for doing the "ice breaking" for the rest of the
> community.
>
> As Eriq La Salle unashamedly claimed in the 1988 Eddie Murphy picture,
> Coming to America, "You know me, anything for the kids :-)".
>
That was a good movie :D :D :D

adam


Re: LDPv6 Census Check [ In reply to ]
On 16/Jun/20 14:24, adamv0025@netconsultings.com wrote:

> Actually, I was exactly in that situation, and v4 RFC 1918 space worked out just fine.

In that way, you are braver than me. But hey, if you need IPv4 and can't
get the public stuff, I won't fault you for going with the private stuff
:-).


> I've been dependent solely on v4 all my life and I still am.
> But I see your fate-sharing argument; it's similar to my argument for separate iBGP infrastructure (Route-Reflector planes) for Internet prefixes vs. customer private VPN prefixes.
> In the multiplanar iBGP case, though, one plane is statistically more likely to fail than the other, whereas between the v4 and v6 control planes I'd say it's actually the NEW v6 one that's more likely to be hiding some unforeseen bug.
> So let me ask the following "devil's advocate" type of question, under the assumption that LDPv6 is a new thing (not as proven as LDPv4): by splitting the control plane into v4 and v6, are you taking a dependency away or actually adding one, where the NEW v6 control-plane components could negatively affect the existing v4 components? After all, they share a common RE (or even RPD in Junos).

Well, that's a bottomless rabbit hole that could go all the way to the data
centre providing A+B power feeds that are both connected to a single grid
on the outside. At some point, redundancy stops making sense and eats into
your margins as much as it does your sanity :-).

But back to the question at hand: even with 6PE, you can't avoid running
a dual-stack network entirely... you'd still need it at the edge. So if
your goal is to use 6PE to avoid running IPv6 in some native form anywhere
in your network, that won't work out, as I'm sure you know :-).
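
For the archive, here's a rough sketch of what that looks like, in IOS
XR-style syntax. The AS number, neighbor address, and interface name are
placeholders, so treat this as illustrative rather than a verified config:
the core can stay IPv4/MPLS-only, but the PE still carries native IPv6 on
the customer-facing interface, and the IPv6 routes ride the IPv4 iBGP
session with an MPLS label attached:

```
! Customer-facing edge: native IPv6 is unavoidable here.
interface GigabitEthernet0/0/0/1
 ipv6 address 2001:db8:ffff::1/64
!
! 6PE: IPv6 NLRI carried as labeled-unicast over the IPv4 iBGP session.
router bgp 65000
 neighbor 192.0.2.1
  address-family ipv6 labeled-unicast
```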

But more importantly, I, as have many others on this group, have been
running IPv6 since about 2003 (others, even longer, I'm sure). IPv6, in
and of itself, has never been an issue for me. The problems have always
been the ancillary services that need to run on top of it in order to
work. For the past 17 years, my IPv6 headaches have been about feature
parity with IPv4, mostly, in routers and switches (in general-purpose
server OS's too, but those got fixed much more quickly):

* DNS over IPv6 took a while to arrive.
* TACACS+ over IPv6 took a while to arrive.
* IPv6 ACL's took a while to get proper support.
* SNMP over IPv6 took a while to arrive.
* NTP over IPv6 took a while to arrive.
* SSH over IPv6 took a while to arrive.
* OSPFv3 Authentication was very clunky.
* Multi-Topology IS-IS support was very clunky.

You get the idea.

I've always operated a native dual-stack network, so having to go back
and upgrade routers every so often when one of the above limitations got
fixed in a later revision of code was tiresome, but worthwhile. We take
a lot of these things for granted in 2020, but it was no joke more than
a decade ago.

So for me, I've never really experienced any problems from basic IPv6
that have negatively impacted IPv4.

The one corner case I am aware of, which didn't affect IPv4 at all, was
Ethernet switches and some popular Chinese GPON ANs that silently dropped
Ethernet frames carrying IPv6 packets because they did not know how to
handle the 0x86DD EtherType. But AFAIK, these have all since been fixed.
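
For anyone who hasn't chased this class of bug: the EtherType is just the
2-byte field at offset 12 of an untagged Ethernet II frame, and 0x86DD
marks the payload as IPv6. Here's a minimal illustration (Python, with a
made-up frame) of the classification those broken boxes were effectively
getting wrong:

```python
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD

def ethertype(frame: bytes) -> int:
    """Return the EtherType of an untagged Ethernet II frame.

    Bytes 0-5: destination MAC, 6-11: source MAC, 12-13: EtherType.
    (An 802.1Q-tagged frame would carry 0x8100 here instead.)
    """
    if len(frame) < 14:
        raise ValueError("truncated Ethernet header")
    return struct.unpack("!H", frame[12:14])[0]

# Dummy IPv6 frame: two zeroed MACs, the IPv6 EtherType, one payload byte.
frame = bytes(12) + struct.pack("!H", ETHERTYPE_IPV6) + b"\x60"
print(hex(ethertype(frame)))  # 0x86dd
```

A switch that only knows 0x0800 (and maybe 0x0806 for ARP) and discards
everything else will pass this frame to the bit bucket without logging a
thing, which is exactly why those drops were so hard to spot.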

So based on pure experience, I don't expect this "32-year old new IPv6"
thing to be hiding some unforeseen bug that will break our IPv4 network :-).

LDPv6 was first implemented in IOS XR, Junos and SR-OS in 2015/2016, so
it has been around for a while. The biggest challenge was with IOS XR in
2015 (5.3.0) which didn't support dual-stack TLV's. So if the LDP
neighbor negotiated LDPv4 and LDPv6 in the same LDP session, IOS XR
didn't know what to do. It could do LDPv4-only or LDPv6-only, and not
both. That issue was fixed in IOS XR 6.0.1, when the Dual-Stack
Capability TLV feature was introduced. That was May of 2016, so also not
that new.
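
For reference, enabling both address families under LDP is roughly this in
IOS XR-style syntax (the interface name is a placeholder; illustrative, not
a verified config). On a dual-stack session, the RFC 7552 Dual-Stack
Capability TLV is what lets the two peers agree on a transport preference
instead of getting confused, which is exactly what pre-6.0.1 IOS XR
couldn't do:

```
mpls ldp
 address-family ipv4
 !
 address-family ipv6
 !
 interface GigabitEthernet0/0/0/0
 !
!
```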

Mark.
