Mailing List Archive

Hardware configuration for cRPD as RR
Hey!

cRPD documentation is quite terse about resource requirements:
https://www.juniper.net/documentation/us/en/software/crpd/crpd-deployment/topics/concept/crpd-hardware-requirements.html

When used as a route reflector with about 20 million routes, what kind
of hardware should we use? Documentation says about 64 GB of memory, but
for everything else? Notably, should we have many cores but lower boost
frequency, or not too many cores but higher boost frequency?

There is a Day One book about cRPD, but it shows a very outdated
processor (Sandy Lake, 10 years old).

Is anyone using cRPD as an RR at a similar scale who can share the
hardware configuration they use? Did you also optimize the underlying OS
in some way, or just use a stock configuration?

Thanks.
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: Hardware configuration for cRPD as RR
Also very curious in this regard.

Best Regards,
-Thomas Scott


On Wed, Dec 6, 2023 at 12:58 PM Vincent Bernat via juniper-nsp <juniper-nsp@puck.nether.net> wrote:

Re: Hardware configuration for cRPD as RR
From an RPD, not cRPD, perspective:

- 64 GB is certainly fine; you might be able to manage with 32 GB.
- Unless the RRs are physically next to the clients, you want to bump the
default 16 kB TCP window to the maximum 64 kB window, and probably ask
your account team about window-scaling support (unsure if this is true
for cRPD, or if cRPD lets the underlying kernel handle this correctly,
but you need to do the same on the client end anyhow).
- You absolutely need sharding to put work on more than one core.
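The TCP window point can be made concrete: a single TCP session's throughput is bounded by roughly window size divided by round-trip time, which is why a small window hurts RRs that sit far from their clients. A quick illustrative sketch (the 50 ms RTT is an arbitrary example, not a figure from this thread):

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on one TCP session's throughput: window / RTT."""
    return window_bytes * 8 / rtt_seconds

# At a 50 ms RTT, a 16 kB window caps a BGP session at ~2.6 Mbps;
# even the 64 kB maximum (without window scaling) only reaches ~10.5 Mbps.
for window in (16 * 1024, 64 * 1024):
    print(round(max_throughput_bps(window, 0.050) / 1e6, 1), "Mbps")
```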

Sharding goes up to 31 shards, but 31 is very likely too many; the
overhead of sharding will make it slower than running lower counts
like 4-8. Your core count likely shouldn't be higher than shards + 1.

The shard count and DRAM size are not specifically answerable, as they
depend on the contents of the RIB. Do a binary search on both and
measure convergence time to find a good-enough number; I think
64/32 GB and 4-8 cores are likely good picks.

On Wed, 6 Dec 2023 at 22:30, Thomas Scott via juniper-nsp
<juniper-nsp@puck.nether.net> wrote:



--
++ytti
Re: Hardware configuration for cRPD as RR
I recognize Saku's recommendation of RIB sharding is a practical one at 20M routes. I'm curious if anyone is willing to admit to using it in production, and on what version of Junos. I admit I have not played with this in the lab yet; we are much smaller (3.5M RIB worst case) at this point.

-Michael

> -----Original Message-----
> From: juniper-nsp <juniper-nsp-bounces@puck.nether.net> On Behalf Of
> Saku Ytti via juniper-nsp
> Sent: Thursday, December 7, 2023 12:24 AM
> To: Thomas Scott <mr.thomas.scott@gmail.com>
> Cc: Vincent Bernat <bernat@luffy.cx>; juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] Hardware configuration for cRPD as RR
Re: Hardware configuration for cRPD as RR
On Thu, 7 Dec 2023 at 16:22, Michael Hare via juniper-nsp
<juniper-nsp@puck.nether.net> wrote:

> I recognize Saku's recommendation of rib sharding is a practical one at 20M routes, I'm curious if anyone is willing to admit to using it in production and on what version of JunOS. I admit to have not played with this in the lab yet, we are much smaller [3.5M RIB] worst case at this point.

2914 uses it, not out of desire (too new, too rare) but out of
necessity at the scale 2914 needs. It is surprisingly mature and robust
for what it is, given how rare it is for routing suites to support any
type of multithreading.

Of course the design is a relatively conservative and clever
compromise between building a truly multithreaded routing suite and
delivering something practical on a legacy codebase. It wouldn't help
with every RIB, but it probably helps with every practical RIB. If you
have few duplicate RIB entries it might not be very useful, as the
final collation of unique entries is more or less single-threaded
anyhow. But I believe anyone with a truly large RIB, like 20M, will
have massive duplication and will see significant benefit.

--
++ytti
Re: Hardware configuration for cRPD as RR
On 2023-12-07 15:21, Michael Hare via juniper-nsp wrote:
> I recognize Saku's recommendation of rib sharding is a practical one at 20M routes, I'm curious if anyone is willing to admit to using it in production and on what version of JunOS. I admit to have not played with this in the lab yet, we are much smaller [3.5M RIB] worst case at this point.

About the scale: I said routes, but they are paths. We plan to use
add-path to ensure optimal routing (ORR could be another option, but it
is less common).
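In Junos, send-side add-path is configured per address family, roughly like this (a sketch: the group name and path count are placeholders, and the syntax should be checked against your release):

```
set protocols bgp group RR-CLIENTS family inet unicast add-path send path-count 6
set protocols bgp group RR-CLIENTS family inet unicast add-path receive
```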
Re: Hardware configuration for cRPD as RR
On Fri, 8 Dec 2023 at 18:42, Vincent Bernat via juniper-nsp
<juniper-nsp@puck.nether.net> wrote:

> About the scale, I said routes, but they are paths. We plan to use add
> path to ensure optimal routing (ORR could be another option, but it is
> less common).

Given a sufficient count of path options, they're not really
alternatives; you need both. You can't do add-path <max>, as the
clients won't scale. And you probably don't want only ORR, because of
the convergence cost of clients not having a backup option, or the
lack of ECMP opportunity.

--
++ytti
Re: Hardware configuration for cRPD as RR
Why not both add-path + ORR?
--

Thomas Scott
Sr. Network Engineer
+1-480-241-7422
tscott@digitalocean.com


On Fri, Dec 8, 2023 at 11:57 AM Saku Ytti via juniper-nsp <juniper-nsp@puck.nether.net> wrote:

Re: Hardware configuration for cRPD as RR
I tried to advocate for both; sorry if I was unclear.

ORR for good options, add-path for redundancy and/or ECMP-ability.
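In Junos terms, that combination might look roughly like the following (a sketch: the group name, path count, and igp-primary address are placeholders, and ORR availability and syntax should be verified for your platform and release):

```
set protocols bgp group RR-CLIENTS optimal-route-reflection igp-primary 192.0.2.1
set protocols bgp group RR-CLIENTS family inet unicast add-path send path-count 4
```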

On Fri, 8 Dec 2023 at 19:13, Thomas Scott <tscott@digitalocean.com> wrote:



--
++ytti
Re: Hardware configuration for cRPD as RR
I'll also comment that many software suites don't scale to tens or hundreds of millions of paths.

Keep in mind paths != routes, and many folks don't always catch the difference between them. If you have a global network like 2914 (for example), you may be peering with someone in 10-20 places globally, so if they send you 10k routes, times 20 locations, that's 200k paths (exits). Then move to someone with 100k or 400k prefixes, like 3356 had at one point, and those numbers go up quite a bit.
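The back-of-the-envelope math above can be sketched out; the peer mixes below are the hypothetical figures from this paragraph, not measured data:

```python
def total_paths(peers):
    """Each peer contributes (routes announced) x (interconnection points) paths."""
    return sum(routes * locations for routes, locations in peers)

peers = [
    (10_000, 20),   # a 10k-prefix peer seen in 20 locations -> 200k paths
    (400_000, 20),  # a large 400k-prefix peer in 20 locations -> 8M paths
]
print(total_paths(peers))  # 8,200,000 paths from just two peers
```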

- Jared

Re: Hardware configuration for cRPD as RR
On 12/7/23 17:05, Saku Ytti via juniper-nsp wrote:

> If you have a
> low amount of duplicate RIB entries it might not be very useful, as
> final collation of unique entries will be more or less single threaded
> anyhow. But I believe anyone having a truly large RIB, like 20M, will
> have massive duplication and will see significant benefit.

So essentially, outfits running BGP Add-Paths setups that have 6 or more
paths per route, then...

Mark.
Re: Hardware configuration for cRPD as RR
On 12/8/23 18:57, Saku Ytti via juniper-nsp wrote:

> Given a sufficient count of path options, they're not really
> alternatives, but you need both. Like you can't do add-path <max>, as
> the clients won't scale. And you probably don't want only ORR, because
> of the convergence cost of clients not having a backup option or the
> lack of ECMP opportunity.

I found that if you run 6 or more paths on an RR client, the need for
ORR is negated.

But yes, it does put a lot of pressure on the RE; I would say a 64 GB
RAM system is the recommended minimum.

Mark.
Re: Hardware configuration for cRPD as RR
On 12/8/23 19:16, Saku Ytti via juniper-nsp wrote:

> I tried to advocate for both, sorry if I was unclear.
>
> ORR for good options, add-path for redundancy and/or ECMPability.

IME, when we got all available paths, ORR was irrelevant.

But yes, at the cost of some control plane resources.

Mark.
Re: Hardware configuration for cRPD as RR
On 12/8/23 19:36, Jared Mauch via juniper-nsp wrote:

> I’ll also comment that many software suites don’t scale to 10’s or 100’s of million of paths
>
> Keep in mind paths != routes and many folks don’t always catch the difference between them. If you have a global network like 2914 (for example) you may be peering with someone in 10-20 places globally so if they send you 10k routes, * 20 locations that’s 200k paths(exits), then move to someone with 100k or 400k prefixes like 3356 had at one point, those numbers go up quite a bit.

Our outfit was not as large as 2914 or 3356 when I worked there, but our
RRs saw about 12.5 million IPv4 paths and 2.9 million IPv6 paths.

The clients saw about 6 million and 1.2 million paths, respectively.

The biggest issue to think about is how the RE handles path churn,
which can be very high in a setup such as this: while it provides
excellent path stability for downstream eBGP customers, it creates a
lot of noise inside your core.

Mark.
Re: Hardware configuration for cRPD as RR
On Tue, 6 Feb 2024 at 18:35, Mark Tinka <mark@tinka.africa> wrote:

> IME, when we got all available paths, ORR was irrelevant.
>
> But yes, at the cost of some control plane resources.

Not just opinion, fact. If you see everything, ORR does nothing but add cost.

You only need add-path and ORR when seeing everything is too expensive
but you still need good choices.

But even if you have the resources to see everything, you may not
actually want a lot of useless signalling and overhead, as it adds
convergence time and risks coaxing rare bugs to the surface. In the
case where I deployed it, having everything was not realistically
possible: it would have meant the network upgrade cycle being driven by
peer additions, with RIB scale triggering a full upgrade cycle despite
the already-paid-for ports not yet being sold.
You shouldn't need to upgrade your boxes because your RIB/FIB doesn't
scale; you should only need to upgrade them when you have no holes left
to stick paying fiber into.


--
++ytti
Re: Hardware configuration for cRPD as RR
On 2/6/24 18:53, Saku Ytti wrote:

> Not just opinion, fact. If you see everything, ORR does nothing but adds cost.
>
> [...]

I agree.

We started with 6 paths to see how far the network could go, and how
well ECMP would work across customers who connected to us in multiple
cities/countries with the same AS. That was exceedingly successful, and
customers were very happy that they could increase their capacity
through multiple, multi-site links, improving performance all around
without paying anything extra.

Same for peers.

But yes, it does cost a lot of control plane for anything with less
than 32 GB on the MX. The MX204 played well if you unleashed its
"hidden memory" hack :-).

This was not a massive issue for the RRs, which were running on
CSR1000v (since replaced with Cat8000v). But it certainly did test the
16 GB Juniper REs we had.

The next step, before I left, was to work out how many paths we could
drop to from 6 without losing the gains we had made for our customers
and peers. That would have lowered pressure on the control plane, but
I'm not sure how it would have affected the improvement in multi-site
load balancing.

Mark.
Re: Hardware configuration for cRPD as RR
Hi,

I'm curious: when moving from vRR to cRPD, how do you plan to manage
and set up the infrastructure that cRPD runs on?

A bare-metal server with basic Docker or K8s (kind of an appliance approach)?
A VM on a hypervisor with the above?
An existing K8s cluster?

I can imagine that many networking teams would like an all-in-one cRPD
appliance from Juniper, rather than giving away "control" to the
server/container team.

What are your thoughts on this?

Regards
Roger


On Tue, Feb 6, 2024 at 6:02 PM Mark Tinka via juniper-nsp <juniper-nsp@puck.nether.net> wrote:

Re: Hardware configuration for cRPD as RR
On Thu, 8 Feb 2024 at 09:51, Roger Wiklund via juniper-nsp
<juniper-nsp@puck.nether.net> wrote:


> I'm curious, when moving from vRR to cRPD, how do you plan to manage/setup
> the infrastructure that cRPD runs on?

Same concerns. I would just push it back and be a late adopter: run the
existing vRR while it's supported, and don't pre-empt into cRPD just
because the vendor says that's the future. Let someone else work with
the vendor to ensure feature parity, and perhaps even get an appliance
from the vendor.

With HPE, I feel like there is a lot more incentive to sell you
integrated appliances than before.



--
++ytti
Re: Hardware configuration for cRPD as RR
On 2/8/24 09:50, Roger Wiklund via juniper-nsp wrote:

> Hi
>
> I'm curious, when moving from vRR to cRPD, how do you plan to manage/setup
> the infrastructure that cRPD runs on?

I run cRPD on my laptop, for nothing really useful apart from testing
configuration commands, etc.

I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
thing) seems to make more sense.
Mark.
Re: Hardware configuration for cRPD as RR
On 2/8/24 09:56, Saku Ytti via juniper-nsp wrote:

> Same concerns, I would just push it back and be a late adopter. Rock
> existing vRR while supported, not pre-empt into cRPD because vendor
> says that's the future. Let someone else work with the vendor to
> ensure feature parity and indeed perhaps get some appliance from the
> vendor.

Agreed.

> With HPE, I feel like there is a lot more incentive to sell integrated
> appliances to you than before.

Is the MX150 still a current product? My understanding is that it's an
x86 platform running vMX.

Mark.
Re: Hardware configuration for cRPD as RR
On Thu, 8 Feb 2024 at 10:16, Mark Tinka <mark@tinka.africa> wrote:

> Is the MX150 still a current product? My understanding is it's an x86 platform running vMX.

No longer orderable.

--
++ytti
Re: Hardware configuration for cRPD as RR
>
> I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
> thing) seems to make more sense.
>

For any use case where you want protocol interaction but not
substantive traffic-forwarding capabilities, cRPD is by far the better
option.

It can handle around 1M total RIB/FIB using around 2 GB of RAM, right
in Docker or k8s. The last version of vMX I played with required at
least 5 GB of RAM and 4 cores just to start the vRE and vPFEs, plus you
have to do a bunch of KVM tweaking and customization, along with
NIC-driver fun. All of that has to work right just to START the thing,
even if you have no intent to use it for forwarding. You could have
cRPD up in 20 minutes on even a crappy Linux host. vMX has a lot more
overhead.
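One note on the Linux-host case: cRPD uses the host kernel's TCP stack, so the TCP window tuning discussed earlier in this thread is applied as host sysctls. An illustrative fragment (standard Linux sysctl names, but the buffer sizes are placeholders to be sized against your bandwidth-delay product):

```
# /etc/sysctl.d/90-crpd-bgp.conf
net.ipv4.tcp_window_scaling = 1            # RFC 1323 window scaling
net.core.rmem_max = 16777216               # max socket receive buffer (bytes)
net.core.wmem_max = 16777216               # max socket send buffer (bytes)
net.ipv4.tcp_rmem = 4096 262144 16777216   # min/default/max receive buffer
net.ipv4.tcp_wmem = 4096 262144 16777216   # min/default/max send buffer
```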



On Thu, Feb 8, 2024 at 3:13 AM Mark Tinka via juniper-nsp <juniper-nsp@puck.nether.net> wrote:

Re: Hardware configuration for cRPD as RR
On 2/8/24 17:10, Tom Beecher wrote:

>
> For any use cases that you want protocol interaction, but not
> substantive traffic forwarding capabilities , cRPD is by far the
> better option.
>
> It can handle around 1M total RIB/FIB using around 2G RAM, right in
> Docker or k8. The last version of vMX I played with required at least
> 5G RAM / 4 cores to even start the vRE and vPFEs up, plus you have to
> do a bunch of KVM tweaking and customization, along with NIC driver
> fun. All of that has to work right just to START the thing, even if
> you have no intent to use it for forwarding. You could have cRPD up in
> 20 minutes on even a crappy Linux host. vMX has a lot more overhead.

Is the same true for VMware?

I had a similar experience trying to get CSR1000v on KVM going back in
2014 (and Junos vRR, as it were). Gave up and moved to CSR1000v on
VMware where it was all sweeter. Back then, vRR did not support
VMware... only KVM.

On the other hand, if you are deploying one of these as an RR, hardware
resources are going to be the least of your worries. In other words,
some splurging is in order. I'd rather do that and be able to run a
solid software-only OS than be a test-bed for cRPD in such a use-case.

Mark.
Re: Hardware configuration for cRPD as RR
> Is the same true for VMware?

Never tried it there myself.

> be able to run a solid software-only OS than be a test-bed for cRPD in
> such a use-case.

AFAIK, cRPD is part of the same build pipeline as "full" Junos, so if
there's a bug in any given version, it will catch you on Juniper's
metal, on your own metal with vMX, or on cRPD (assuming said bug is not
hardware dependent/related).

On Thu, Feb 8, 2024 at 10:21 AM Mark Tinka <mark@tinka.africa> wrote:

