Mailing List Archive

Re: Hardware configuration for cRPD as RR
On Thu, 8 Feb 2024 at 17:11, Tom Beecher via juniper-nsp
<juniper-nsp@puck.nether.net> wrote:

> For any use cases that you want protocol interaction, but not substantive
> traffic forwarding capabilities , cRPD is by far the better option.

No one is saying that cRPD isn't the future, just that there are a lot
of existing deployments with vRR, which are run with some success, and
the entire stability of the network depends on it. Whereas cRPD is a
newer entrant, and early on back when I tested it, it was very feature
incomplete in comparison.
So for those who are already running vRR and are happy with it,
changing to cRPD just for the sake of it is simply a bad risk. Many of
us don't care about DRAM or vCPU, because you only need a small number of RRs,
and DRAM/vCPU grows on trees. But we live in constant fear of the
entire RR setup blowing up, so motivation for change needs to be solid
and ideally backed by examples of success in a similar role in your
circle of people.


--
++ytti
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: Hardware configuration for cRPD as RR
>
> No one is saying that cRPD isn't the future, just that there are a lot
> of existing deployments with vRR, which are run with some success, and
> the entire stability of the network depends on it. Whereas cRPD is a
> newer entrant, and early on back when I tested it, it was very feature
> incomplete in comparison.
> So for those who are already running vRR and are happy with it,
> changing to cRPD just for the sake of it is simply a bad risk. Many of
> us don't care about DRAM or vCPU, because you only need a small number of RRs,
> and DRAM/vCPU grows on trees. But we live in constant fear of the
> entire RR setup blowing up, so motivation for change needs to be solid
> and ideally backed by examples of success in a similar role in your
> circle of people.
>

Completely fair, yes. My comments were mostly aimed at a vMX/cRPD
comparison; I probably wasn't clear about that. Completely agree that it
doesn't make much sense to move from an existing vRR to cRPD just because.
For a greenfield deployment I'd certainly lean cRPD over vRR, at least in
planning. Newer cRPD has definitely come a long way relative to older
releases. (Although I haven't had reason or cycles to really ride it hard
and see where I can break it... yet. :) )



On Fri, Feb 9, 2024 at 3:51 AM Saku Ytti <saku@ytti.fi> wrote:

> On Thu, 8 Feb 2024 at 17:11, Tom Beecher via juniper-nsp
> <juniper-nsp@puck.nether.net> wrote:
>
> > For any use cases that you want protocol interaction, but not substantive
> > traffic forwarding capabilities , cRPD is by far the better option.
>
> No one is saying that cRPD isn't the future, just that there are a lot
> of existing deployments with vRR, which are run with some success, and
> the entire stability of the network depends on it. Whereas cRPD is a
> newer entrant, and early on back when I tested it, it was very feature
> incomplete in comparison.
> So for those who are already running vRR and are happy with it,
> changing to cRPD just for the sake of it is simply a bad risk. Many of
> us don't care about DRAM or vCPU, because you only need a small number of RRs,
> and DRAM/vCPU grows on trees. But we live in constant fear of the
> entire RR setup blowing up, so motivation for change needs to be solid
> and ideally backed by examples of success in a similar role in your
> circle of people.
>
>
> --
> ++ytti
>
Re: Hardware configuration for cRPD as RR
Juniper does not have a lot of guidelines on this, which is a bit
surprising to us too. I would have expected some guidance on IRQ and
CPU pinning; it seems they think this does not matter much for an RR.

However, cRPD comes with better performance than vRR, and therefore
Juniper pushes cRPD instead of vRR.
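In the absence of official sizing guidance, pinning and resource limits can still be applied at the container runtime level. A minimal sketch for a Docker-based cRPD RR; the image tag, core numbers, memory cap, and volume path here are illustrative assumptions, not Juniper recommendations:

```shell
# Minimal cRPD-as-RR container (names and values are illustrative):
# - pin the container to two dedicated cores (2 and 3)
# - cap memory; RRs are RIB-heavy, so size to your expected table
# - host networking so BGP sessions terminate on the host's interfaces
# - persist the Junos configuration across container restarts
docker run -d --name crpd-rr1 \
  --cpuset-cpus="2,3" \
  --memory=8g \
  --net=host \
  --privileged \
  -v crpd-rr1-config:/config \
  crpd:latest
```

For stricter isolation you would also steer IRQs and other workloads off those cores on the host (e.g. with `isolcpus`/`irqbalance` policy), which is exactly the kind of guidance the vendor has not published.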

On 2024-02-08 08:50, Roger Wiklund via juniper-nsp wrote:
> Hi
>
> I'm curious, when moving from vRR to cRPD, how do you plan to manage/setup
> the infrastructure that cRPD runs on?
>
> BMS with basic Docker or K8s? (kind of an appliance approach)
> VM in hypervisor with the above?
> Existing K8s cluster?
>
> I can imagine that many networking teams would like an AIO cRPD appliance
> from Juniper, rather than giving away the "control" to the server/container
> team.
>
> What are your thoughts on this?
>
> Regards
> Roger
>
>
> On Tue, Feb 6, 2024 at 6:02 PM Mark Tinka via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
>
>>
>>
>> On 2/6/24 18:53, Saku Ytti wrote:
>>
>>> Not just opinion, fact. If you see everything, ORR does nothing but
>>> add cost.
>>>
>>> You only need AddPath and ORR, when everything is too expensive, but
>>> you still need good choices.
>>>
>>> But even if you have resources to see all, you may not actually want
>>> to have a lot of useless signalling and overhead, as it'll add
>>> convergence time and the risk of coaxing rare bugs to surface. In the
>>> case where I deployed it, having everything was not realistically
>>> possible: it would mean the network upgrade cycle is dictated by when
>>> enough peers are added, with RIB scale triggering a full upgrade
>>> cycle despite not having sold the ports already paid for.
>>> You shouldn't need to upgrade your boxes because your RIB/FIB doesn't
>>> scale; you should only need to upgrade your boxes if you don't have
>>> holes left to stick paying fiber into.
>>
>> I agree.
>>
>> We started with 6 paths to see how far the network could go, and how
>> well ECMP would work across customers who connected to us in multiple
>> cities/countries with the same AS. That was exceedingly successful and
>> customers were very happy that they could increase their capacity
>> through multiple, multi-site links, without paying anything extra and
>> improving performance all around.
>>
>> Same for peers.
>>
>> But yes, it does cost a lot of control plane for anything less than 32GB
>> on the MX. The MX204 played well if you unleashed its "hidden memory"
>> hack :-).
>>
>> This was not a massive issue for the RR's which were running on CSR1000v
>> (now replaced with Cat8000v). But certainly, it did test the 16GB
>> Juniper RE's we had.
>>
>> The next step, before I left, was to work on how many paths we can
>> reduce to from 6 without losing the gains we had made for our customers
>> and peers. That would have lowered pressure on the control plane, but
>> not sure how it would have impacted the improvement in multi-site load
>> balancing.
>>
>> Mark.
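For reference, the knobs discussed above (Add-Path with several paths, and optimal route reflection) map to Junos configuration roughly along these lines. This is a sketch only: the group name, cluster ID, and all addresses are illustrative, the path count of 6 is taken from the discussion, and clients would additionally need `add-path receive` configured on their side:

```
protocols {
    bgp {
        group RR-CLIENTS {
            type internal;
            local-address 10.0.0.1;    /* illustrative RR loopback */
            cluster 10.0.0.1;          /* illustrative cluster ID */
            family inet {
                unicast {
                    /* advertise up to 6 paths per prefix, as in the thread */
                    add-path {
                        send {
                            path-count 6;
                        }
                    }
                }
            }
            /* ORR: select exits from the clients' IGP viewpoint, not the RR's */
            optimal-route-reflection {
                igp-primary 192.0.2.1;  /* illustrative node near the clients */
            }
            neighbor 192.0.2.10;        /* illustrative client */
        }
    }
}
```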
Re: Hardware configuration for cRPD as RR
On Fri, 9 Feb 2024 at 17:50, Tom Beecher <beecher@beecher.cc> wrote:

> Completely fair, yes. My comments were mostly aimed at a vMX/cRPD comparison; I probably wasn't clear about that. Completely agree that it doesn't make much sense to move from an existing vRR to cRPD just because. For a greenfield deployment I'd certainly lean cRPD over vRR, at least in planning. Newer cRPD has definitely come a long way relative to older releases. ( Although I haven't had reason or cycles to really ride it hard and see where I can break it... yet. :) )

Agreed on greenfield straight to cRPD today, with fallback to vRR if
needed, simply because it is clear that the vendor's focus is there and
they want to see you there.

--
++ytti
