Mailing List Archive

MX960 vs MX10K
Dear Juniper community,

Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?

Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
is not supported on the MX10K, and I want to know if I will have some
limitation on the MX960.

Thanks
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: MX960 vs MX10K [ In reply to ]
What model of MX10k?


On 04/03/2020 08:56, "juniper-nsp on behalf of Ibariouen Khalid" <juniper-nsp-bounces@puck.nether.net on behalf of ibariouen@gmail.com> wrote:

Dear Juniper community,

Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?

Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
is not supported on the MX10K, and I want to know if I will have some
limitation on the MX960.

Thanks
Re: MX960 vs MX10K [ In reply to ]
MX10008

On Wed, Mar 4, 2020 at 12:59 PM Alexandre Guimaraes <
alexandre.guimaraes@ascenty.com> wrote:

>
>
> What model of MX10k?
>
>
> On 04/03/2020 08:56, "juniper-nsp on behalf of Ibariouen Khalid"
> <juniper-nsp-bounces@puck.nether.net on behalf of ibariouen@gmail.com>
> wrote:
>
> Dear Juniper community,
>
> Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
> which is not supported on the MX10K, and I want to know if I will have
> some limitation on the MX960.
>
> Thanks
Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 13:55, Ibariouen Khalid wrote:

> Dear Juniper community,
>
> Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
> which is not supported on the MX10K, and I want to know if I will have
> some limitation on the MX960.

Juniper's future lies in the MX10000.

If your needs are not too complicated, the MX960/480 are still great
options.

For us, the MX480 is our edge workhorse. But where we need to deliver
100Gbps service ports, the MX10000 makes more sense.

Mark.
Re: MX960 vs MX10K [ In reply to ]
It really depends on what you're going to be doing, but I still have quite a
few MX960s out there running pretty significant workloads without issues.

I would suspect you hit the limits of the MS-MPCs way before the limits of
the chassis.
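To put rough, illustrative numbers on that point (both figures below are ballpark assumptions, not vendor-verified specs):

```python
# Ballpark comparison of where the bottleneck lands. Assumed figures:
# an MS-MPC is commonly cited in the ~60 Gbps services-throughput class,
# while an MX960 line-card slot (SCBE2-era fabric) is in the ~480 Gbps class.
MS_MPC_GBPS = 60    # assumed services throughput of one MS-MPC
SLOT_GBPS = 480     # assumed per-slot fabric capacity

headroom = SLOT_GBPS / MS_MPC_GBPS
print(f"One slot can carry roughly {headroom:.0f}x what one MS-MPC can service")
```

So even on generous assumptions, the service card saturates long before the chassis does.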

On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid <ibariouen@gmail.com> wrote:

> Dear Juniper community,
>
> Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
> which is not supported on the MX10K, and I want to know if I will have
> some limitation on the MX960.
>
> Thanks
Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 16:36, Tom Beecher wrote:
> It really depends on what you're going to be doing, but I still have quite a
> few MX960s out there running pretty significant workloads without issues.
>
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.

The classic MX chassis are nowhere close to running out of ideas.

But Juniper always have to be pushing the tech., so the emphasis will be on
the MX10000 (although not necessarily at the expense of the MX960/480/240).

I still believe if your use-case is not overly complicated, you may find
the MX960/480 to be cheaper if you don't need 100Gbps ports.

Mark.
Re: MX960 vs MX10K [ In reply to ]
You can still get 100G ports on the 960 chassis with MPC5E/6E/7E cards, depending
on what kind of density you require.

On Wed, Mar 4, 2020 at 9:42 AM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 4/Mar/20 16:36, Tom Beecher wrote:
> > It really depends on what you're going to be doing, but I still have
> > quite a few MX960s out there running pretty significant workloads
> > without issues.
> >
> > I would suspect you hit the limits of the MS-MPCs way before the limits
> > of the chassis.
>
> The classic MX chassis are nowhere close to running out of ideas.
>
> But Juniper have to always be pushing the tech., so emphasis will be on
> the MX10000 (although not necessarily at the expense of the MX960/480/240).
>
> I still believe if your use-case is not overly complicated, you may find
> the MX960/480 to be cheaper if you don't need 100Gbps ports.
>
> Mark.
Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 16:47, Tom Beecher wrote:
> You can still get 100G ports on the 960 chassis with MPC5E/6/7s ,
> depending on what kind of density you require.

I didn't say the MX960/480 don't support 100Gbps ports; I said 100Gbps
ports would be cheaper on an MX10000 if you need more than a handful per slot.

We have some MPC7E's with 100Gbps ports on some of our MX480's. Because
we needed so few, it was cheaper than getting an MX10000. But there are
instances where an MX10000 makes more sense because we have a large need
for 100Gbps ports per slot in those areas.

Mark.
Re: MX960 vs MX10K [ In reply to ]
With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in an MX240, MX480 or MX960.

But you will need a Premium 3 chassis with SCBE3 boards to get maximum capacity.
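As a back-of-napkin sketch of what that buys you (the usable MPC slot counts below are assumptions for a redundant-SCB configuration; check the hardware guide for your setup):

```python
# Per-chassis 100G density if every usable line-card slot carries an
# MPC10E-15C-MRATE (15 x 100G). Slot counts are assumed values for
# chassis with redundant SCBs, not vendor-verified figures.
MPC10E_15C_PORTS = 15

usable_mpc_slots = {"MX240": 2, "MX480": 6, "MX960": 11}
for chassis, slots in usable_mpc_slots.items():
    print(f"{chassis}: up to {slots * MPC10E_15C_PORTS} x 100G")
```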



________________________________
From: juniper-nsp <juniper-nsp-bounces@puck.nether.net> on behalf of Tom Beecher <beecher@beecher.cc>
Sent: Wednesday, March 4, 2020 11:47:29 AM
To: Mark Tinka <mark.tinka@seacom.mu>
Cc: juniper-nsp <juniper-nsp@puck.nether.net>
Subject: Re: [j-nsp] MX960 vs MX10K

You can still get 100G ports on the 960 chassis with MPC5E/6/7s , depending
on what kind of density you require.

On Wed, Mar 4, 2020 at 9:42 AM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 4/Mar/20 16:36, Tom Beecher wrote:
> > It really depends on what you're going to be doing, but I still have
> > quite a few MX960s out there running pretty significant workloads
> > without issues.
> >
> > I would suspect you hit the limits of the MS-MPCs way before the limits
> > of the chassis.
>
> The classic MX chassis are nowhere close to running out of ideas.
>
> But Juniper have to always be pushing the tech., so emphasis will be on
> the MX10000 (although not necessarily at the expense of the MX960/480/240).
>
> I still believe if your use-case is not overly complicated, you may find
> the MX960/480 to be cheaper if you don't need 100Gbps ports.
>
> Mark.

Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 16:53, Giuliano C. Medalha wrote:
> With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in
> mx240 , mx480 or mx960
>
> But you will need premium 3 chassis with scbe3 boards to have maximum
> capacity.

An MX10008/10016 chassis can get you 24x 100Gbps per slot. That's going
to be a lot cheaper than an MPC10E-15C-MRATE (and other bits you may
need to upgrade for the performance).

Mark.

Re: MX960 vs MX10K [ In reply to ]
Likely, but if you only need like 4.... :)

On Wed, Mar 4, 2020 at 10:01 AM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
> On 4/Mar/20 16:53, Giuliano C. Medalha wrote:
>
> With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in mx240 ,
> mx480 or mx960
>
> But you will need premium 3 chassis with scbe3 boards to have maximum
> capacity.
>
>
> An MX10008/10016 chassis can get you 24x 100Gbps per slot. That's going to
> be a lot cheaper than an MPC10E-15C-MRATE (and other bits you may need to
> upgrade for the performance).
>
> Mark.
>
>
Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 17:18, Tom Beecher wrote:
> Likely, but if you only need like 4....  :)

Then try the MPC7E :-). Cheaper than the MPC10E.

Mark.
Re: MX960 vs MX10K [ In reply to ]
The MPC7E-MRATE is only good if you have to add a few 100G ports to a large
chassis (e.g. an MX960) that has lots of 10G interfaces and/or service cards.
It's about 2/3 of the price of a new MX10003 with 12x100G.

On Wed, Mar 4, 2020 at 12:45 PM Mark Tinka <mark.tinka@seacom.mu> wrote:

>
>
> On 4/Mar/20 17:18, Tom Beecher wrote:
> > Likely, but if you only need like 4.... :)
>
> Then try the MPC7E :-). Cheaper than the MPC10E.
>
> Mark.
Re: MX960 vs MX10K [ In reply to ]
On 4/Mar/20 20:50, Luis Balbinot wrote:
> The MPC7E-MRATE is only good if you have to add a few 100G ports to a large
> chassis (i.e. MX960) that has lots of 10G interfaces and/or service cards.
> It's about 2/3 of the price of a new MX10003 with 12x100G.

That's my point :-).

We have several MX480's that have a ton of 10Gbps ports, but only need a
handful of 100Gbps ports. The MPC7E works out to be a little cheaper
than the MX10000 in that regard.

For cases where we need more than a handful of 100Gbps ports for edge
applications, the MX10000 is cheaper than an MX480 with MPC7E's or MPC10E's.

Mark.
Re: MX960 vs MX10K [ In reply to ]
Just to chime in --- for scale-out, wouldn't you be better off offloading those MS-MPC functions to another box (i.e. a VM, dedicated appliance, etc.)?

You burn slots for the MS-MPC, plus you burn the backplane crossing twice; so it's at worst a neutral proposition to externalise it and add low-cost non-HQoS ports to feed it.

Or is it a case of limited space/power/RUs/wanting it all in one box? And yes, the MS-MPC won't scale to Nx100G of workload.
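The "backplane twice" arithmetic can be sketched as follows (illustrative only; the function name and figures are made up for the sketch, not from any Juniper documentation):

```python
# Traffic hairpinned through an in-chassis service card crosses the
# fabric twice: ingress PFE -> MS-MPC, then MS-MPC -> egress PFE.
# So N Gbps of serviced traffic consumes roughly 2N Gbps of fabric,
# on top of the slot the service card itself occupies.
def fabric_cost_gbps(serviced_gbps: float) -> float:
    """Approximate fabric bandwidth consumed by hairpinned service traffic."""
    return 2 * serviced_gbps

print(fabric_cost_gbps(40))  # 40G of CGNAT consumes ~80G of fabric
```

An external appliance pays a similar bandwidth tax on its feed ports, which is why externalising is at worst neutral while freeing the slot.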

- CK.



> On 5 Mar 2020, at 1:36 am, Tom Beecher <beecher@beecher.cc> wrote:
>
> It really depends on what you're going to be doing, but I still have quite a
> few MX960s out there running pretty significant workloads without issues.
>
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.
>
> On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid <ibariouen@gmail.com> wrote:
>
>> Dear Juniper community,
>>
>> Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?
>>
>> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
>> which is not supported on the MX10K, and I want to know if I will have
>> some limitation on the MX960.
>>
>> Thanks
Re: MX960 vs MX10K [ In reply to ]
On 5/Mar/20 05:32, Chris Kawchuk wrote:

> Just to chime in --- for scale-out, wouldn't you be better offloading those MS-MPC functions to another box? (i.e. VM/Dedicated Appliance/etc..?).
>
> You burn slots for the MSMPC plus you burn the backplane crossing twice; so it's at worst a neutral proposition to externalise it and add low-cost non-HQoS ports to feed it.
>
> or is it the case of limited space/power/RUs/want-it-all-in-one-box? and yes, MS-MPC won't scale to Nx100G of workload.

And along that line, are the services the OP needs on the MS-MPC not
available natively in the MX10000/960/480/240 line cards?

Mark.
Re: MX960 vs MX10K [ In reply to ]
The only question is whether it needs statefulness or not (IPsec, CGNAT, etc.), but only the OP can answer that.

- CK.


> On 5 Mar 2020, at 2:39 pm, Mark Tinka <mark.tinka@seacom.mu> wrote:
>
>
>
> On 5/Mar/20 05:32, Chris Kawchuk wrote:
>
>> Just to chime in --- for scale-out, wouldn't you be better offloading those MS-MPC functions to another box? (i.e. VM/Dedicated Appliance/etc..?).
>>
>> You burn slots for the MSMPC plus you burn the backplane crossing twice; so it's at worst a neutral proposition to externalise it and add low-cost non-HQoS ports to feed it.
>>
>> or is it the case of limited space/power/RUs/want-it-all-in-one-box? and yes, MS-MPC won't scale to Nx100G of workload.
>
> And along that line, are the services the OP needs on the MS-MPC not
> available natively in the MX10000/960/480/240 line cards?
>
> Mark.
Re: MX960 vs MX10K [ In reply to ]
On Thu, 5 Mar 2020 at 05:52, Chris Kawchuk <ckawchuk@gmail.com> wrote:

> Only question is if it needs stateful-ness or not (IPSEC, CGNAT etc...), but only the OP can answer that.

IPsec isn't stateful in any meaningful way. If you can implement MACsec,
it shouldn't take many more transistors to do IPsec.

Indeed, current-generation Trio (post-EA, i.e. ZT and YT) does IPsec on every port.

--
++ytti
Re: MX960 vs MX10K [ In reply to ]
On Thu, 5 Mar 2020 at 18:05, Alexander Arseniev <arseniev@btinternet.com> wrote:


> I would expect the "IPSEC anchor PFE", just like it is done with BFD et
> al a.t.m.
> That anchor PFE maintains IKE exchange sequences/anti-replay etc and any
> IKE/IPSec packet arriving on a different PFE would be redirected there.
> Same thing really what currently happens on a Services card.

I'm not sure what you mean by BFD here. BFD can be done in various ways:

a) RPD
b) PPMd on RE CPU
c) PPMd on LC CPU
d) Inline on NPU

If you do d), it's done on the NPU where the neighbour is, entirely
on the NPU.
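Whichever of (a)-(d) generates the packets, the failure-detection arithmetic is the same (per RFC 5880, section 6.8.4); a minimal sketch:

```python
# BFD detection time as seen locally: the remote system's Detect Mult
# multiplied by the negotiated remote transmit interval, which is the
# greater of our Required Min RX and the remote's Desired Min TX.
# Inline/distributed implementations (c/d) matter because they keep
# aggressive intervals honest when the control-plane CPU is busy.
def detection_time_ms(remote_detect_mult: int,
                      remote_desired_tx_ms: int,
                      local_required_rx_ms: int) -> int:
    return remote_detect_mult * max(remote_desired_tx_ms, local_required_rx_ms)

print(detection_time_ms(3, 100, 300))  # 3 x 300 ms = 900 ms
```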

And sure, there is signalling in IPsec, just like there is in BGP,
which is not done in hardware. But the actual bit-pushing is done in
hardware.


--
++ytti
Re: MX960 vs MX10K [ In reply to ]
On 5/Mar/20 18:29, Saku Ytti wrote:

>
> If you do it on d) it's done the NPU where the neighbour is, entirely
> on the NPU.

Not yet available for IPv6.

Which reminds me - let me see where Juniper are with this ER.

Mark.
Re: MX960 vs MX10K [ In reply to ]
I'd be +1 for this. For a DC-GW, the main concerns should be reliability and
simplicity. If you are going to bring EVPN there, then having fancy
services mixed onto the same chassis may affect your uptime.
Also, I'd take an MX480 instead of a 960 because of the architectural
compromises of the latter. I'm also wondering: if the MX960 fits in terms
of number of ports and capacity with some slots occupied by service cards,
maybe an MX10003 + MX480 (or virtualized services) would do the job?

Kind regards,
Andrey


Chris Kawchuk wrote on 2020-03-04 22:32:
> Just to chime in --- for scale-out, wouldn't you be better offloading
> those MS-MPC functions to another box? (i.e. VM/Dedicated
> Appliance/etc..?).
>
> You burn slots for the MSMPC plus you burn the backplane crossing
> twice; so it's at worst a neutral proposition to externalise it and
> add low-cost non-HQoS ports to feed it.
>
> or is it the case of limited space/power/RUs/want-it-all-in-one-box?
> and yes, MS-MPC won't scale to Nx100G of workload.
>
> - CK.
>
Re: MX960 vs MX10K [ In reply to ]
In my case, the 960 has a lot of slots, and I use slot 0 and slot 11 for
MPC7E-MRATEs to light up a 100-gig east/west ring and 40-gig south to ACX
subrings, so I have plenty of slot space for my MS-MPC-128G NAT module. If
I placed it somewhere else, I'd have to cross the network to some extent to
get to it. Also, my dual 100-gig inet connections are on a couple of those
960's where I colo the MS-MPC-128G card, so it's all right there. That's not
the case for DSL NAT, which is across the network in a couple of MX104's, but
DSL doesn't have anywhere near the speeds that my FTTH and CM subs have.

-Aaron

-----Original Message-----
From: juniper-nsp [mailto:juniper-nsp-bounces@puck.nether.net] On Behalf Of
Chris Kawchuk
Sent: Wednesday, March 4, 2020 9:33 PM
To: Tom Beecher
Cc: juniper-nsp
Subject: Re: [j-nsp] MX960 vs MX10K

Just to chime in --- for scale-out, wouldn't you be better off offloading those
MS-MPC functions to another box (i.e. a VM, dedicated appliance, etc.)?

You burn slots for the MS-MPC plus you burn the backplane crossing twice; so
it's at worst a neutral proposition to externalise it and add low-cost
non-HQoS ports to feed it.

Or is it a case of limited space/power/RUs/wanting it all in one box? And
yes, the MS-MPC won't scale to Nx100G of workload.

- CK.



> On 5 Mar 2020, at 1:36 am, Tom Beecher <beecher@beecher.cc> wrote:
>
> It really depends on what you're going to be doing, but I still have quite a
> few MX960s out there running pretty significant workloads without issues.
>
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.
>
> On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid <ibariouen@gmail.com> wrote:
>
>> Dear Juniper community,
>>
>> Is there any limitation to using the MX960 as a DC-GW compared to the MX10K?
>>
>> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
>> which is not supported on the MX10K, and I want to know if I will have
>> some limitation on the MX960.
>>
>> Thanks
Re: MX960 vs MX10K [ In reply to ]
Just FYI, I'm running EVPN-MPLS between a couple of DCs and MS-MPC-128G for my cable modem communities, all in the same MX960 chassis... it's been good so far.

-Aaron


