Mailing List Archive

Cisco N540-ACC-SYS ipv4 routes
Dear,

Can someone please confirm how many routes are supported in the above model,
in both the RIB and the FIB?

Also, I am not able to find the table-map command for this router.

Any suggestions?

Regards,

Vikas
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: Cisco N540-ACC-SYS ipv4 routes
Hi,

I don’t know the exact RIB scale, if there is one, short of what available memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+ BGP prefixes with the BGP process consuming about 1.9GB of memory.

It won’t hold a full table in FIB. 350K max, protocol independent, depending on the prefix size.

SRD is implemented using table-policy.
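
[Editor's note: as an illustration only — the AS number and policy name below are invented, and exact behaviour should be verified against the IOS XR documentation for your release — a table-policy that limits what BGP installs for forwarding might be sketched like this:]

```
route-policy FIB-DEFAULT-ONLY
  if destination in (0.0.0.0/0) then
    pass
  else
    drop
  endif
end-policy
!
router bgp 65000
 address-family ipv4 unicast
  table-policy FIB-DEFAULT-ONLY
```

Routes dropped by the policy remain in the BGP table and are still advertised to peers; they are simply not installed for forwarding.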

> On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Dear,
>
> Can someone please confirm how many routes are supported in above model in
> both rib and fib?
>
> Also, I am not able to find table-map command for this router.
>
> Any suggestions?
>
> Regards,
>
> Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
Many thanks Jason for your quick response.

If possible, please also confirm the RIB/FIB limits of the ASR1002-HX.

I have two choices for use as an IGW, the ASR1002-HX or the NCS 540, and I
want to choose the better of the two options.

Regards,
Vikas

On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:

> Hi,
>
> I don’t know the exact RIB scale, if there is one, short of what available
> memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+
> BGP prefixes with the BGP process consuming about 1.9GB of memory.
>
> It won’t hold a full table in FIB. 350K max, protocol independent,
> depending on the prefix size.
>
> SRD is implemented using table-policy.
>
> > On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
> >
> > Dear,
> >
> > Can someone please confirm how many routes are supported in above model
> in
> > both rib and fib?
> >
> > Also, I am not able to find table-map command for this router.
> >
> > Any suggestions?
> >
> > Regards,
> >
> > Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
I’m not as familiar with the ASR1002-HX, but what I’m pretty sure of is that if you’re considering the ASR1002-HX for IGW, you may want to review Juniper’s MX204. It’s probably going to be slightly more expensive than an NCS540 but far less expensive than the ASR1002-HX, and overall it will be much better bang for your buck as an IGW.

> On Jul 8, 2020, at 8:48 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Many thanks Jason for your quick response.
>
> If possible please also confirm the rib/fib limits of ASR1002-HX.
>
> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to choose the best of the two options.
>
> Regards,
> Vikas
>
> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
> Hi,
>
> I don’t know the exact RIB scale, if there is one, short of what available memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+ BGP prefixes with the BGP process consuming about 1.9GB of memory.
>
> It won’t hold a full table in FIB. 350K max, protocol independent, depending on the prefix size.
>
> SRD is implemented using table-policy.
>
> > On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
> >
> > Dear,
> >
> > Can someone please confirm how many routes are supported in above model in
> > both rib and fib?
> >
> > Also, I am not able to find table-map command for this router.
> >
> > Any suggestions?
> >
> > Regards,
> >
> > Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
Vikas,

First of all, NCS 540 ACC-SYS has 16GB of RAM.

For NCS 540, slide 43:
https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2159.pdf

Essentially, around 380k depending on prefix distribution.
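
[Editor's note: as a hedged aside — command keywords vary by platform and release — on these Jericho-based NCS boxes, actual LEM/LPM consumption can typically be inspected with commands along these lines:]

```
show controllers npu resources lem location all
show controllers npu resources lpm location all
show cef summary
```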

For ASR 1002-HX it’s here:
https://www.cisco.com/c/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/datasheet-c78-731640.html#PerformanceandScaling

Please use your favorite search engine first in the future.


./

> On 9 Jul 2020, at 02:48, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Many thanks Jason for your quick response.
>
> If possible please also confirm the rib/fib limits of ASR1002-HX.
>
> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
> choose the best of the two options.
>
> Regards,
> Vikas
>
> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
>
>> Hi,
>>
>> I don’t know the exact RIB scale, if there is one, short of what available
>> memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+
>> BGP prefixes with the BGP process consuming about 1.9GB of memory.
>>
>> It won’t hold a full table in FIB. 350K max, protocol independent,
>> depending on the prefix size.
>>
>> SRD is implemented using table-policy.
>>
>>> On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>>>
>>> Dear,
>>>
>>> Can someone please confirm how many routes are supported in above model
>> in
>>> both rib and fib?
>>>
>>> Also, I am not able to find table-map command for this router.
>>>
>>> Any suggestions?
>>>
>>> Regards,
>>>
>>> Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
Many thanks Jason.

Regards,
Vikas

On Thu, 9 Jul, 2020, 6:39 am Jason Lixfeld, <jason@lixfeld.ca> wrote:

> I’m not as familiar with the ASR1002-HX, but what I’m pretty sure of is if
> you’re considering the ASR1002-HX for IGW, you may want to review Juniper’s
> MX204. It’s probably going to be slightly more expensive than a NCS540,
> but far less expensive than the ASR1002-HX, but overall it will be a much
> better bang for your buck as an IGW.
>
> On Jul 8, 2020, at 8:48 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Many thanks Jason for your quick response.
>
> If possible please also confirm the rib/fib limits of ASR1002-HX.
>
> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
> choose the best of the two options.
>
> Regards,
> Vikas
>
> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
>
>> Hi,
>>
>> I don’t know the exact RIB scale, if there is one, short of what
>> available memory will hold. That said, it’s got 8GB of memory, and I’ve
>> seen 1.7M+ BGP prefixes with the BGP process consuming about 1.9GB of
>> memory.
>>
>> It won’t hold a full table in FIB. 350K max, protocol independent,
>> depending on the prefix size.
>>
>> SRD is implemented using table-policy.
>>
>> > On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com>
>> wrote:
>> >
>> > Dear,
>> >
>> > Can someone please confirm how many routes are supported in above model
>> in
>> > both rib and fib?
>> >
>> > Also, I am not able to find table-map command for this router.
>> >
>> > Any suggestions?
>> >
>> > Regards,
>> >
>> > Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
> On Jul 8, 2020, at 9:14 PM, Łukasz Bromirski <lukasz@bromirski.net> wrote:
>
> Vikas,
>
> First of all, NCS 540 ACC-SYS has 16GB of RAM.

… but only 8GB available to the RP.

RP/0/RP0/CPU0:ncs540-7.lab#show memory summary
Wed Jul 8 21:19:33.842 EDT

node: node0_RP0_CPU0
------------------------------------------------------------------

Physical Memory: 8192M total (5333M available)
Application Memory : 8192M (5333M available)
Image: 4M (bootram: 0M)
Reserved: 0M, IOMem: 0M, flashfsys: 0M
Total shared window: 410M

RP/0/RP0/CPU0:ncs540-7.lab#show platform
Wed Jul 8 21:20:04.260 EDT
Node            Type                  State         Config state
--------------------------------------------------------------------------------
0/RP0/CPU0      N540-ACC-SYS(Active)  IOS XR RUN    NSHUT
0/RP0/NPU0      Slice                 UP
0/FT0           N540-FAN              OPERATIONAL   NSHUT
0/FT1           N540-FAN              OPERATIONAL   NSHUT
0/FT2           N540-FAN              OPERATIONAL   NSHUT
0/FT3           N540-FAN              OPERATIONAL   NSHUT
0/PM0           N540-PWR400-D         OPERATIONAL   NSHUT
0/PM1           N540-PWR400-D         FAILED        NSHUT
RP/0/RP0/CPU0:ncs540-7.lab#

>
> For NCS 540, slide 43:
> https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2159.pdf
>
> Essentially, around 380k depending on prefix distribution.
>
> For ASR 1002-HX it’s here:
> https://www.cisco.com/c/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/datasheet-c78-731640.html#PerformanceandScaling
>
> Please use your favorite search engine first in future.
>
> —
> ./
>
>> On 9 Jul 2020, at 02:48, Vikas Sharma <vikassharmas@gmail.com> wrote:
>>
>> Many thanks Jason for your quick response.
>>
>> If possible please also confirm the rib/fib limits of ASR1002-HX.
>>
>> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
>> choose the best of the two options.
>>
>> Regards,
>> Vikas
>>
>> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
>>
>>> Hi,
>>>
>>> I don’t know the exact RIB scale, if there is one, short of what available
>>> memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+
>>> BGP prefixes with the BGP process consuming about 1.9GB of memory.
>>>
>>> It won’t hold a full table in FIB. 350K max, protocol independent,
>>> depending on the prefix size.
>>>
>>> SRD is implemented using table-policy.
>>>
>>>> On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>>>>
>>>> Dear,
>>>>
>>>> Can someone please confirm how many routes are supported in above model
>>> in
>>>> both rib and fib?
>>>>
>>>> Also, I am not able to find table-map command for this router.
>>>>
>>>> Any suggestions?
>>>>
>>>> Regards,
>>>>
>>>> Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
Dear Łukasz,

Thanks for your reply. I checked the Cisco Live presentations before I
posted the question to the forum. I understand that LPM and LEM, along with
iTCAM support, on the NCS 540 do not exceed 400k, but I was not able to find
details on the RIB/FIB of the ASR 1002-HX; if you have found any, please
share them with me.

Also, the processing power of the ASR vs. the NCS 540 is very different, one
with a quad-core 1.2 GHz CPU and the other with 2.5 GHz, so I was also
wondering which would handle the BGP scanner process better, and whether the
540 can take care of the scanning process.

Also, the internet says nothing about table-map on the NCS 540 (at least, I
couldn't find anything), so many thanks to Jason for the details provided.

Anyway, many thanks for the kind reply.

Regards,
Vikas

On Thu, 9 Jul, 2020, 6:44 am Łukasz Bromirski, <lukasz@bromirski.net> wrote:

> Vikas,
>
> First of all, NCS 540 ACC-SYS has 16GB of RAM.
>
> For NCS 540, slide 43:
>
> https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2159.pdf
>
> Essentially, around 380k depending on prefix distribution.
>
> For ASR 1002-HX it’s here:
>
> https://www.cisco.com/c/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/datasheet-c78-731640.html#PerformanceandScaling
>
> Please use your favorite search engine first in future.
>
> —
> ./
>
> On 9 Jul 2020, at 02:48, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Many thanks Jason for your quick response.
>
> If possible please also confirm the rib/fib limits of ASR1002-HX.
>
> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
> choose the best of the two options.
>
> Regards,
> Vikas
>
> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
>
> Hi,
>
> I don’t know the exact RIB scale, if there is one, short of what available
> memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+
> BGP prefixes with the BGP process consuming about 1.9GB of memory.
>
> It won’t hold a full table in FIB. 350K max, protocol independent,
> depending on the prefix size.
>
> SRD is implemented using table-policy.
>
> On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Dear,
>
> Can someone please confirm how many routes are supported in above model
>
> in
>
> both rib and fib?
>
> Also, I am not able to find table-map command for this router.
>
> Any suggestions?
>
> Regards,
>
> Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
A slight correction: I am looking for the FIB limit of the ASR 1002-HX; the
RIB figure is available.

Regards,
Vikas

On Thu, 9 Jul, 2020, 7:04 am Vikas Sharma, <vikassharmas@gmail.com> wrote:

> Dear Luka,
>
> Thanks for your revert. I have checked all ciscolive presentation before I
> have shooted question to the forum. I understand, LPM and LEM along with
> iTCAM support on C 540 does not exceed 400k but I was not getting details
> on rib/fib on ASR 1002-HX, in case you have found, please share with me.
>
> Also, processing power of ASR vs C 540 is very different, one with quard
> core 1.2 GHz and another with 2.5 GHz, so I was also wondering if BGP
> scanner process will be good with which. If 540 can take care of scanning
> process!!
>
> Also internet does not inform about table-map (atleast, I couldn't find),
> on C 540, many thanks to Jason for details provided.
>
> Anyway, many thanks for the kind revert.
>
> Regards,
> Vikas
>
> On Thu, 9 Jul, 2020, 6:44 am Łukasz Bromirski, <lukasz@bromirski.net>
> wrote:
>
>> Vikas,
>>
>> First of all, NCS 540 ACC-SYS has 16GB of RAM.
>>
>> For NCS 540, slide 43:
>>
>> https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2159.pdf
>>
>> Essentially, around 380k depending on prefix distribution.
>>
>> For ASR 1002-HX it’s here:
>>
>> https://www.cisco.com/c/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/datasheet-c78-731640.html#PerformanceandScaling
>>
>> Please use your favorite search engine first in future.
>>
>> —
>> ./
>>
>> On 9 Jul 2020, at 02:48, Vikas Sharma <vikassharmas@gmail.com> wrote:
>>
>> Many thanks Jason for your quick response.
>>
>> If possible please also confirm the rib/fib limits of ASR1002-HX.
>>
>> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
>> choose the best of the two options.
>>
>> Regards,
>> Vikas
>>
>> On Thu, 9 Jul, 2020, 12:52 am Jason Lixfeld, <jason@lixfeld.ca> wrote:
>>
>> Hi,
>>
>> I don’t know the exact RIB scale, if there is one, short of what available
>> memory will hold. That said, it’s got 8GB of memory, and I’ve seen 1.7M+
>> BGP prefixes with the BGP process consuming about 1.9GB of memory.
>>
>> It won’t hold a full table in FIB. 350K max, protocol independent,
>> depending on the prefix size.
>>
>> SRD is implemented using table-policy.
>>
>> On Jul 8, 2020, at 3:02 PM, Vikas Sharma <vikassharmas@gmail.com> wrote:
>>
>> Dear,
>>
>> Can someone please confirm how many routes are supported in above model
>>
>> in
>>
>> both rib and fib?
>>
>> Also, I am not able to find table-map command for this router.
>>
>> Any suggestions?
>>
>> Regards,
>>
>> Vikas

Re: Cisco N540-ACC-SYS ipv4 routes
Hi,

On Thu, Jul 09, 2020 at 07:04:13AM +0530, Vikas Sharma wrote:
> Also, processing power of ASR vs C 540 is very different, one with quard
> core 1.2 GHz and another with 2.5 GHz, so I was also wondering if BGP
> scanner process will be good with which. If 540 can take care of scanning
> process!!

BGP scanner process died like 10 years ago...

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany gert@greenie.muc.de
Re: Cisco N540-ACC-SYS ipv4 routes
I am not sure if all hardware supports hierarchical FIB, as this is a
hardware-based feature. Yes, if H-FIB is supported, BGP PIC will be used.
If not, then what?

Regards,
Vikas

On Thu, 9 Jul, 2020, 11:35 am Gert Doering, <gert@greenie.muc.de> wrote:

> Hi,
>
> On Thu, Jul 09, 2020 at 07:04:13AM +0530, Vikas Sharma wrote:
> > Also, processing power of ASR vs C 540 is very different, one with quard
> > core 1.2 GHz and another with 2.5 GHz, so I was also wondering if BGP
> > scanner process will be good with which. If 540 can take care of scanning
> > process!!
>
> BGP scanner process died like 10 years ago...
>
> gert
> --
> "If was one thing all people took for granted, was conviction that if you
> feed honest figures into a computer, honest figures come out. Never
> doubted
> it myself till I met a computer with a sense of humor."
> Robert A. Heinlein, The Moon is a Harsh
> Mistress
>
> Gert Doering - Munich, Germany
> gert@greenie.muc.de
>

Re: Cisco N540-ACC-SYS ipv4 routes
On 8/Jul/20 21:22, Jason Lixfeld wrote:

> SRD is implemented using table-policy.

Gosh, it's been a while. Didn't realize the name changed. It used to be
BGP-SD (BGP Selective Download).

I suppose SRD was to make it protocol independent.

I've never used SRD on IOS XR, but judging by what I see for the
NCS5500, it seems to be meant to decide whether routes are downloaded to
high or low scale line cards. Can anyone clarify?

Classic BGP-SD simply doesn't download those routes to any FIB, regardless.

Mark.
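
[Editor's note: for comparison, classic BGP-SD on IOS/IOS XE was the `table-map ... filter` form; a minimal sketch follows — the AS number, prefix-list, and route-map names are invented for illustration:]

```
ip prefix-list FIB-DEFAULT seq 5 permit 0.0.0.0/0
!
route-map RIB-INSTALL permit 10
 match ip address prefix-list FIB-DEFAULT
!
router bgp 65000
 address-family ipv4 unicast
  table-map RIB-INSTALL filter
```

Without the filter keyword, table-map only modifies attributes of installed routes; with it, non-matching BGP routes are kept out of the RIB and FIB while remaining in the BGP table.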

Re: Cisco N540-ACC-SYS ipv4 routes
On 9/Jul/20 02:48, Vikas Sharma wrote:
> Many thanks Jason for your quick response.
>
> If possible please also confirm the rib/fib limits of ASR1002-HX.
>
> I have two choices to be used as IGW, ASR1002-HX or C 540 X and I want to
> choose the best of the two options.

If the -HX licensing and pricing is still the same as it was when the
box launched, you're better off with an MX204 :-).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
On 9/Jul/20 03:09, Jason Lixfeld wrote:
> I’m not as familiar with the ASR1002-HX, but what I’m pretty sure of is if you’re considering the ASR1002-HX for IGW, you may want to review Juniper’s MX204. It’s probably going to be slightly more expensive than a NCS540, but far less expensive than the ASR1002-HX, but overall it will be a much better bang for your buck as an IGW.

Not forgetting that the NCS540 is a Broadcom chip, while the ASR1000 and
MX204 are in-house silicon.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
On 9/Jul/20 03:34, Vikas Sharma wrote:
> Dear Luka,
>
> Thanks for your revert. I have checked all ciscolive presentation before I
> have shooted question to the forum. I understand, LPM and LEM along with
> iTCAM support on C 540 does not exceed 400k but I was not getting details
> on rib/fib on ASR 1002-HX, in case you have found, please share with me.
>
> Also, processing power of ASR vs C 540 is very different, one with quard
> core 1.2 GHz and another with 2.5 GHz, so I was also wondering if BGP
> scanner process will be good with which. If 540 can take care of scanning
> process!!
>
> Also internet does not inform about table-map (atleast, I couldn't find),
> on C 540, many thanks to Jason for details provided.

Also, the ASR1000 is IOS XE, while the NCS540 is IOS XR.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
> On Jul 9, 2020, at 10:49 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> Gosh, it's been a while. Didn't realize the name changed. It used to be
> BGP-SD (BGP Selective Download).
>
> I suppose SRD was to make it protocol independent.

I think SRD might be a made-up name, to be honest. My SE referred to it as SRD, so I just started referring to it as that too. But nowhere in the docs that I can find is there actually a reference to that name as a feature:

https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k-r6-3/routing/configuration/guide/b-routing-cg-asr9000-63x/b-routing-cg-asr9000-63x_chapter_010.html#task_1215567

It still looks to be BGP specific though.

> I've never used SRD on IOS XR, but judging by what I see for the
> NCS5500, it seems to be meant to decide whether routes are downloaded to
> high or low scale line cards. Can anyone clarify?

In the 9K days, they had SVD which did that:

https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A

I don’t know if SVD exists for the chassis-based NCS5K, or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub-configuration, and presumably LC-agnostic.

> Classic BGP-SD simply doesn't download those routes to any FIB, regardless.
>
> Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
Vikas,

> On 9 Jul 2020, at 03:34, Vikas Sharma <vikassharmas@gmail.com> wrote:
>
> Dear Luka,
>
> Thanks for your revert. I have checked all ciscolive presentation before I have shooted question to the forum. I understand, LPM and LEM along with iTCAM support on C 540 does not exceed 400k but I was not getting details on rib/fib on ASR 1002-HX, in case you have found, please share with me.

The numbers quoted in the URL are actually FIB size (3.5M for IPv4, 3M for IPv6 or mix of both).

RIB can scale to 11M routes.

--
Łukasz Bromirski
CCIE R&S/SP #15929, CCDE #2012::17, PGP Key ID: 0xFD077F6A


Re: Cisco N540-ACC-SYS ipv4 routes
Jason,

> On 9 Jul 2020, at 03:26, Jason Lixfeld <jason@lixfeld.ca> wrote:
>
>> On Jul 8, 2020, at 9:14 PM, Łukasz Bromirski <lukasz@bromirski.net> wrote:
>>
>> Vikas,
>>
>> First of all, NCS 540 ACC-SYS has 16GB of RAM.
>
> … but only 8GB available to the RP.
>
> RP/0/RP0/CPU0:ncs540-7.lab#show memory summary
> Wed Jul 8 21:19:33.842 EDT

Indeed, that may be the case. Thanks for correcting me. The memory allocated to the RP may change in the future.

--
Łukasz Bromirski
CCIE R&S/SP #15929, CCDE #2012::17, PGP Key ID: 0xFD077F6A

Re: Cisco N540-ACC-SYS ipv4 routes
On 9/Jul/20 17:42, Jason Lixfeld wrote:

> I think SRD might be a made-up name, to be honest. My SE referred to it as SRD, so I just started referring to it as that too. But, nowhere in the docs is there actually a reference to that name as a feature, that I can find:
>
> https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k-r6-3/routing/configuration/guide/b-routing-cg-asr9000-63x/b-routing-cg-asr9000-63x_chapter_010.html#task_1215567
>
> It still looks to be BGP specific though.

From what I can find, SRD seems to be the new name.

I found some unofficial remnants of BGP-SD:

    https://null.53bits.co.uk/index.php?page=bgp-selective-download

Seems like SRD started to take effect in 2018.


> In the 9K days, they had SVD which did that:
>
> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>
> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.

Never heard of SVD, but I found this for the NCS5500:

    https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
> On Jul 10, 2020, at 2:56 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>
>> In the 9K days, they had SVD which did that:
>>
>> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>>
>> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.
>
> Never heard of SVD, but I found this for the NCS5500:
>
> https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/


Nice.

There’s a caveat at the bottom of that article that says it’s not supported on J-based FFF pizza boxes. I wonder why? And I wonder if that applies to Qumran-AX, à la the NCS540, too?


Re: Cisco N540-ACC-SYS ipv4 routes
On Thu, Jul 9, 2020, at 04:36, Vikas Sharma wrote:
> Slight modification, I am looking for fib for ASR 1002-HX

Hi, wasn't the ASR1k a "software-based" series, with route lookups NOT done from TCAM, the FIB limit being RAM, and forwarding done by the homegrown QFP?

I don't really recall things having changed with -HX series.

--
R.-A. Feurdean

Re: Cisco N540-ACC-SYS ipv4 routes
On Sat, 11 Jul 2020 at 21:09, Radu-Adrian FEURDEAN
<cisco-nsp@radu-adrian.feurdean.net> wrote:

> Hi, wasn't ASR1k a "software-based" series, with route lookups NOT done from TCAM, fib limit being the RAM, and forwarding done by homegrown QFP?

I would say no, it was not software based, but HW. First-gen QFP, or
Popey, was 400MHz (1.2GHz if you count multithreading), 40 cores, 307M
transistors, 20MB SRAM, 90nm lithography. It was a Tensilica platform
(like nPower) but Cisco IP.
I think 2nd gen upped the core count to 64 and frequency to 1.5GHz,
changed to 40nm lithography and transistor count to 1.8B. Unsure what
came after that.

But what is software based and what is hardware based? To me the ASR1k
is HW based; it's an NPU box in my mind. Not having TCAM does not
exclude a box from being hardware. If not having TCAM means it's not
hardware, then you'd also have to say Juniper MX and PTX are not
hardware, Cisco 8k is not hardware, Jericho2 isn't hardware; modern
stuff tends to run off of DRAM, not TCAM.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Sat, Jul 11, 2020, at 21:34, Saku Ytti wrote:
>
> But what is software based and what is hardware based? To me ASR1k is

Put this way, everything is both software based and hardware based, since there is software running on hardware. It's just that when using an ASIC, the software part is not doing very much for the packet switching.

> HW based, it's an NPU box in my mind.
> Not having TCAM does not
> exclude box from being hardware, if not having TCAM means it's not
> hardware,

You're right. However, moving away from TCAM usually signals chipsets more complex than ASICs (i.e. a more important software component).

> then also say Juniper MX, PTX are not hardware, Cisco 8k is
> not hardware, Jericho2 isn't hardware, modern stuff tends to run off
> of DRAM, not TCAM.

Most of them can definitely do more than dumb packet forwarding, but not so much as to do LNS or CGN in the main NPU (whatever variety the NPU is). A1k OTOH is THE platform to be used if you want to do LNS and CGN with Cisco.

As for TCAM vs *RAM, lack of TCAM signals a FIB scale significantly larger than a full table at the moment the box was designed.

Back to the question, NCS540 is merchant ASIC, while A1K is custom network-oriented processor. I'd expect FIB scale to be a few (~4-5) times higher on A1K than on NCS540.

--
R.-A. Feurdean
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Sun, 12 Jul 2020 at 12:45, Radu-Adrian FEURDEAN
<cisco-nsp@radu-adrian.feurdean.net> wrote:


> > then also say Juniper MX, PTX are not hardware, Cisco 8k is
> > not hardware, Jericho2 isn't hardware, modern stuff tends to run off
> > of DRAM, not TCAM.
>
> Most of them can definitely do more than dumb packet forwarding, but not so much as to do LNS or CGN in the main NPU (whatever variety the NPU is). A1k OTOH is THE platform to be used if you want to do LNS and CGN with Cisco.

I don't know what you base this claim on, and I believe it's wrong. I
know Trio could do any of this, I think Cisco 8k could, and I'm fairly
certain PTX1k could not. Both Trio and Cisco 8k are run-to-completion:
you run ucode on the NPU and you're only limited by time. Perhaps your
instruction set isn't conducive to CGN or LNS, meaning you need
too many cycles to do the feature, but certainly it could be done.

> As for TCAM vs *RAM, lack of TCAM signals a FIB scale significantly larger than a full table at the moment the box was designed.

> Back to the question, NCS540 is merchant ASIC, while A1K is custom network-oriented processor. I'd expect FIB scale to be a few (~4-5) times higher on A1K than on NCS540.

J2 is merchant and does DRAM, so that doesn't seem to be a signal either.



--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
In XR in general we support restricting routes installed in the FIB using table-policy at various locations, but it's done across all NPUs. This specific feature mixing hi/lo FIB line cards adds another knob to tag certain routes as "external" so we can determine which prefixes to install on cards with only external TCAM.

The normal table-policy command is supported on the NCS540; some use them as dedicated RRs. There are 16 GB (newer N540-ACC-SYS) and 32 GB (24Z8Q2C-SYS) versions of the 540.
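For readers unfamiliar with the knob, a minimal sketch of what table-policy looks like in IOS XR; the policy name, AS number, and default-only match are illustrative placeholders, not taken from this thread:

```
route-policy FIB-DEFAULT-ONLY
  if destination in (0.0.0.0/0) then
    pass
  else
    drop
  endif
end-policy
!
router bgp 64512
 address-family ipv4 unicast
  table-policy FIB-DEFAULT-ONLY
 !
!
```

Routes dropped by the table policy stay in the BGP table and can still be advertised to peers; they just aren't installed for forwarding.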

Thanks,

Phil

On 7/10/20, 2:10 PM, "cisco-nsp on behalf of Jason Lixfeld" <cisco-nsp-bounces@puck.nether.net on behalf of jason@lixfeld.ca> wrote:



> On Jul 10, 2020, at 2:56 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>
>> In the 9K days, they had SVD which did that:
>>
>> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>>
>> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.
>
> Never heard of SVD, but I found this for the NCS5500:
>
> https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/


Nice.

There’s caveat at the bottom of that article that says it’s not supported on J-based FFF pizza boxes. I wonder why? I wonder if that applies to Qumran-AX a’la NCS540 too?



Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 11/Jul/20 20:00, Radu-Adrian FEURDEAN wrote:

>
> Hi, wasn't ASR1k a "software-based" series, with route lookups NOT done from TCAM, fib limit being the RAM, and forwarding done by homegrown QFP?
>
> I don't really recall things having changed with -HX series.

The ASR1000 line is all hardware-based.

The QFP clusters are hardware-based processors.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 11/Jul/20 20:34, Saku Ytti wrote:

> I would say no, it was not software based, by HW. First gen QSFP or
> Popey was 400MHz (1.2GHz if you count multithreading) 40 core, 307M
> transistors, 20MB SRAM, 90nm lithography. It was tensilica platform
> (like npower) but Cisco IP.
> I think 2nd gen upped core count to 64 and frequency to 1.5GHz,
> changed to 40nm lithography and transistor count to 1.8B. Unsure what
> came after that.
>
> But what is software based and what is hardware based? To me ASR1k is
> HW based, it's an NPU box in my mind. Not having TCAM does not
> exclude box from being hardware, if not having TCAM means it's not
> hardware, then also say Juniper MX, PTX are not hardware, Cisco 8k is
> not hardware, Jericho2 isn't hardware, modern stuff tends to run off
> of DRAM, not TCAM.

Indeed.

NPU or TCAM, both provide hardware-based forwarding.

The difference comes down to flexibility vs. performance, and how far
you can push one in sacrifice for the other.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Sun, 12 Jul 2020 at 21:30, Mark Tinka <mark.tinka@seacom.com> wrote:

> NPU or TCAM, both provide hardware-based forwarding.

This is a bit orthogonal. You can have an NPU with TCAM or NPU with
DRAM. You could also have ASIC with TCAM or ASIC with DRAM.

But if there is a clear line when a piece of hardware becomes an NPU
instead of ASIC, I don't know it. Trio and QFP are unambiguously NPUs,
is J2 still ASIC? In addition to J2 supporting NPL it has some
unassigned ALUs to add new features with 0 cost.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 12/Jul/20 21:00, Saku Ytti wrote:

> This is a bit orthogonal. You can have an NPU with TCAM or NPU with
> DRAM. You could also have ASIC with TCAM or ASIC with DRAM.

Sorry, meant to say "NPU or ASIC".

The general messaging, over the years, has been that ASIC is quick but
not flexible, while NPU is flexible but can get bogged down by added
flexibility in time.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Will 'set path-color external-reach' get support on NCS540?

> On Jul 12, 2020, at 1:25 PM, Phil Bedard <philxor@gmail.com> wrote:
>
> In XR in general we support restricting routes installed in the FIB using table-policy at various locations, but it's done across all NPUs. This specific feature mixing hi/lo FIB line cards adds another knob to tag certain routes as "external" so we can determine which prefixes to install on cards with only external TCAM.
>
> The normal table-policy command is supported on NCS540, some use them as dedicated RRs. There is a 16 (newer N540-ACC-SYS) and 32 GB (24Z8Q2C-SYS) version of the 540.
>
> Thanks,
>
> Phil
>
> On 7/10/20, 2:10 PM, "cisco-nsp on behalf of Jason Lixfeld" <cisco-nsp-bounces@puck.nether.net on behalf of jason@lixfeld.ca> wrote:
>
>
>
>> On Jul 10, 2020, at 2:56 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>>
>>> In the 9K days, they had SVD which did that:
>>>
>>> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>>>
>>> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.
>>
>> Never heard of SVD, but I found this for the NCS5500:
>>
>> https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/
>
>
> Nice.
>
> There’s caveat at the bottom of that article that says it’s not supported on J-based FFF pizza boxes. I wonder why? I wonder if that applies to Qumran-AX a’la NCS540 too?
>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
It wouldn't be implemented on a platform that doesn't support an external TCAM. It doesn't make much sense on a fixed platform with a single NPU (really ASIC but NPU is easier to type) since all the interfaces are sharing the same FIB. We would also never mix/match NPU scale types on the same fixed platform in the case where we have multiple NPUs.

The feature doesn't really have a lot of adoption, most buy the SE line cards for roles needing higher FIB scale.

Thanks,
Phil

On 7/12/20, 7:11 PM, "Jason Lixfeld" <jason@lixfeld.ca> wrote:

Will 'set path-color external-reach’ get support on NCS540?

> On Jul 12, 2020, at 1:25 PM, Phil Bedard <philxor@gmail.com> wrote:
>
> In XR in general we support restricting routes installed in the FIB using table-policy at various locations, but it's done across all NPUs. This specific feature mixing hi/lo FIB line cards adds another knob to tag certain routes as "external" so we can determine which prefixes to install on cards with only external TCAM.
>
> The normal table-policy command is supported on NCS540, some use them as dedicated RRs. There is a 16 (newer N540-ACC-SYS) and 32 GB (24Z8Q2C-SYS) version of the 540.
>
> Thanks,
>
> Phil
>
> On 7/10/20, 2:10 PM, "cisco-nsp on behalf of Jason Lixfeld" <cisco-nsp-bounces@puck.nether.net on behalf of jason@lixfeld.ca> wrote:
>
>
>
>> On Jul 10, 2020, at 2:56 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>>
>>> In the 9K days, they had SVD which did that:
>>>
>>> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>>>
>>> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.
>>
>> Never heard of SVD, but I found this for the NCS5500:
>>
>> https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/
>
>
> Nice.
>
> There’s caveat at the bottom of that article that says it’s not supported on J-based FFF pizza boxes. I wonder why? I wonder if that applies to Qumran-AX a’la NCS540 too?
>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>



Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Mon, 13 Jul 2020 at 00:54, Mark Tinka <mark.tinka@seacom.com> wrote:

> The general messaging, over the years, has been that ASIC is quick but
> not flexible, while NPU is flexible but can get bogged down by added
> flexibility in time.

The classical view is that packet through ASIC takes constant,
invariant time, and packet through NPU takes variant time, depending
on how many instructions the NPU needs to perform for this packet.

But if that is a strict definition, then we don't really have ASICs
outside really cheap switches, as there is some programmability in all
new stuff being released. So I'm not sure what the correct definition
is.

Equally when does a software router become a hardware router? Why is
XEON not NPU but Trio is? Are there some objective facts which
differentiate CPU from NPU and NPU from ASIC?

# NPU vs CPU?
- NPU tends to have more cores than CPU
- NPU has application specific instruction set
- NPU has application specific memory interface

# NPU vs ASIC?
- ASIC does parsing and lookup in silicon, not by running a set of
instructions given by a program
- ASIC is constant time, NPU is variable time
- ASIC has many types of silicon for different functions, NPU has many
identical silicons running different sets of instructions depending on
packet/config
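The contrast above can be caricatured in a few lines of Python. This is a toy timing model only, with invented costs and no relation to any real silicon:

```python
# Toy model: ASIC-like fixed pipeline cost vs NPU-like per-instruction cost.

PIPELINE_COST_NS = 50   # ASIC: every packet pays the same, feature-independent
NPU_NS_PER_INSTR = 2    # NPU: cost scales with ucode path length (invented)

def asic_latency_ns(features):
    # Parsing and lookup are burnt into silicon; the feature set is irrelevant.
    return PIPELINE_COST_NS

def npu_latency_ns(features):
    # Each enabled feature adds instructions to the run-to-completion path.
    base_instrs = 100  # parse + lookup
    per_feature = {"acl": 40, "qos": 30, "cgn": 400}
    instrs = base_instrs + sum(per_feature[f] for f in features)
    return instrs * NPU_NS_PER_INSTR

print(asic_latency_ns([]), asic_latency_ns(["acl", "qos", "cgn"]))  # constant
print(npu_latency_ns([]), npu_latency_ns(["acl", "qos", "cgn"]))    # variable
```

The ASIC path is constant time regardless of features; the NPU path stretches with every feature enabled, which is the "variable time" in the list above.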



--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 09:20, Saku Ytti wrote:

> But if that is a strict definition, then we don't really have ASICs
> outside really cheap switches, as there is some programmability in all
> new stuff being released. So I'm not sure what the correct definition
> is.

I've been thinking about this over the past 4 years, actually, and I
came to the conclusion that it was mostly championed by the 6500/7600
ASIC's, and Juniper's earlier Internet Processor I and Internet
Processor II ASIC's.

Since that time, we've asked routers to do more things beyond simple IP
packet forwarding, which has required those chips to evolve more into
NPU's than ASIC's. I'd say from around the ASR1000, MX and later, is
when we saw this shift.

So I agree with you that outside of classic Ethernet switches today, if
we have to be pedantic about what an ASIC is, we don't see them in
today's kit anymore.

But also, I have not heard any standard definition that makes NPU more
appropriate to what forwarding chips are doing today. Some vendors still
use "ASIC" and "NPU" interchangeably, depending on how old they are or
what mood they are in.

Ultimately, I'm not sure it matters that much nowadays, but I wouldn't
mind having the discussion :-).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> Mark Tinka
> Sent: Sunday, July 12, 2020 10:55 PM
>
> On 12/Jul/20 21:00, Saku Ytti wrote:
>
> > This is a bit orthogonal. You can have an NPU with TCAM or NPU with
> > DRAM. You could also have ASIC with TCAM or ASIC with DRAM.
>
> Sorry, meant to say "NPU or ASIC".
>
> The general messaging, over the years, has been that ASIC is quick but not
> flexible, while NPU is flexible but can get bogged down by added
flexibility in
> time.
>
That's basically it.
On J2 you can pretty much enable all the bells and whistles and
still get the same pps rate out of it as with no features enabled.
On run-to-completion NPUs, every feature you enable costs you pps.
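That point can be put in toy numbers (all figures invented; no relation to any real NPU): on a run-to-completion design, throughput is roughly the aggregate cycle budget divided by the cycles spent per packet, so every feature added to the ucode path directly lowers pps:

```python
# Hypothetical run-to-completion NPU: pps drops as per-packet cycles grow.

CORES = 64
CLOCK_HZ = 1_500_000_000  # 1.5 GHz; made-up figures

def pps(cycles_per_packet):
    # Aggregate cycle budget divided by the cost of one packet.
    return CORES * CLOCK_HZ // cycles_per_packet

base = pps(200)          # bare parse + lookup
loaded = pps(200 + 150)  # plus ACL/QoS ucode
print(base, loaded)      # enabling features costs throughput
```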

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Mon, 13 Jul 2020 at 12:42, <adamv0025@netconsultings.com> wrote:

> On J2 you can pretty much enable all the bells and whistles you can and
> still get the same pps rate out of it as with no features enabled.

I don't think this is strictly true, though maybe it's true in several
cases. But even without recirculation, I think you can take variable
time through J2, and if you wanted, you could make it perform
pathologically poorly; it is flexible enough for that. You can do a lot
of stuff in the lookup pipeline on the first 144B in J2. Then there
is the programmable elements matrix for future functions; can I put
every frame there with no cost? You can use system headers to skip
devices inside the pipeline.
It's hard for me to imagine that with all this flexibility we'd still
always guarantee constant time.

J2 is much more flexible than what J1 was. But of course it's still
very much a pipeline of different types of silicons, some more some
less programmable. So I don't know.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 13 Jul 2020, at 09:55, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 13/Jul/20 09:20, Saku Ytti wrote:
>
>> But if that is a strict definition, then we don't really have ASICs
>> outside really cheap switches, as there is some programmability in all
>> new stuff being released. So I'm not sure what the correct definition
>> is.
>
> I've been thinking about this over the past 4 years, actually, and I
> came to the conclusion that it was mostly championed by the 6500/7600
> ASIC's, and Juniper's earlier Internet Processor I and Internet
> Processor II ASIC's.
>
> Since that time, we've asked routers to do more things beyond simple IP
> packet forwarding, which has required those chips to evolve more into
> NPU's than ASIC's. I'd say from around the ASR1000, MX and later, is
> when we saw this shift.
>
> […]

> Ultimately, I'm not sure it matters that much nowadays, but I wouldn't
> mind having the discussion :-).

I’d risk stating the obvious: technology moves in one great sinusoidal shape. We tend to deaggregate, then to aggregate, just to deaggregate again. The only thing that really changes is the speed of those cycles.

The current trend is to simplify (Segment Routing), so there’s a chance the hardware of tomorrow can be less complex and do much less, at faster speeds. But at some point there will be new protocols, applications, overlays and whatever, so there will be a need to do more than just the basics. You also can’t bet on oversimplifying things, as Juniper with the PTX (for example) found out.

Do we have ASICs? Yes, they *still* usually drive fabrics; all else is essentially NPU, because it can be reprogrammed on the fly. As Saku pointed out, however, there’s less and less difference between modern x86 architectures and networking NPUs, and given how many different things can easily be done in software, this trend will continue to drive “cloud” applications. This should also help the “simplification” trend, as there will be less and less dependency on the fancy “hardware” capabilities of a box.

The next wave will probably be photonics, moving further and further into direct feature capabilities without the slow-down of tackling a specific feature in silicon. That will be the true test of “how many features you *really* need” and a great area to optimize further. The question is how fast we’ll realistically get there with shipping products, and how much it will level the field vendors are playing on with Customers.


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Indeed. However, one of the really handy side effects of this command is that routes coloured with external-reach are still downloaded to the RIB. Of course, this technically adds no value, but if you’re passing a full BGP table to a downstream customer, it’s handy to see those routes in the RIB* as opposed to having to switch gears and look in the BGP RIB, if folks are used to doing the former.

[*] To pass a full table to a downstream BGP customer on this box today, you need to filter the interesting prefixes so they aren’t downloaded into the RIB, which in turn prevents them from being programmed into the FIB.
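A sketch of that workaround, with an invented policy name and community value: routes tagged at ingress get dropped by the table policy, so BGP still carries and advertises them while the RIB/FIB never sees them:

```
! Assumes full-table transit routes arrive tagged with community 64512:100
route-policy NO-FIB-TRANSIT
  if community matches-any (64512:100) then
    drop
  else
    pass
  endif
end-policy
!
router bgp 64512
 address-family ipv4 unicast
  table-policy NO-FIB-TRANSIT
 !
!
```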

> On Jul 12, 2020, at 10:01 PM, Phil Bedard <philxor@gmail.com> wrote:
>
> It wouldn't be implemented on a platform that doesn't support an external TCAM. It doesn't make much sense on a fixed platform with a single NPU (really ASIC but NPU is easier to type) since all the interfaces are sharing the same FIB. We would also never mix/match NPU scale types on the same fixed platform in the case where we have multiple NPUs.
>
> The feature doesn't really have a lot of adoption, most buy the SE line cards for roles needing higher FIB scale.
>
> Thanks,
> Phil
>
> ?On 7/12/20, 7:11 PM, "Jason Lixfeld" <jason@lixfeld.ca> wrote:
>
> Will 'set path-color external-reach’ get support on NCS540?
>
>> On Jul 12, 2020, at 1:25 PM, Phil Bedard <philxor@gmail.com> wrote:
>>
>> In XR in general we support restricting routes installed in the FIB using table-policy at various locations, but it's done across all NPUs. This specific feature mixing hi/lo FIB line cards adds another knob to tag certain routes as "external" so we can determine which prefixes to install on cards with only external TCAM.
>>
>> The normal table-policy command is supported on NCS540, some use them as dedicated RRs. There is a 16 (newer N540-ACC-SYS) and 32 GB (24Z8Q2C-SYS) version of the 540.
>>
>> Thanks,
>>
>> Phil
>>
>> ?On 7/10/20, 2:10 PM, "cisco-nsp on behalf of Jason Lixfeld" <cisco-nsp-bounces@puck.nether.net on behalf of jason@lixfeld.ca> wrote:
>>
>>
>>
>>> On Jul 10, 2020, at 2:56 AM, Mark Tinka <mark.tinka@seacom.com> wrote:
>>>
>>>> In the 9K days, they had SVD which did that:
>>>>
>>>> https://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-3/routing/configuration/guide/b_routing_cg43xcrs/b_routing_cg43xcrs_chapter_010.html#concept_8E1919490E274B9E97F527166CB6EF8A
>>>>
>>>> I don’t know if SVD exists for chassis based NCS5K or if SRD^h^h^htable-policy is the new that, but SVD was per-linecard. Table-policy is a BGP sub configuration, and presumably LC agnostic.
>>>
>>> Never heard of SVD, but I found this for the NCS5500:
>>>
>>> https://xrdocs.io/ncs5500/tutorials/mixing-base-and-scale-LC-in-NCS5500/
>>
>>
>> Nice.
>>
>> There’s caveat at the bottom of that article that says it’s not supported on J-based FFF pizza boxes. I wonder why? I wonder if that applies to Qumran-AX a’la NCS540 too?
>>
>> _______________________________________________
>> cisco-nsp mailing list cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>
>>
>
>
>

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 12:27, Łukasz Bromirski wrote:
> You also can’t bet on oversimplifying things, as Juniper with PTX (for example) found out.

Can you expand on this a little more?

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 12:27, Łukasz Bromirski wrote:

> Do we have ASICs? Yes, they *still* usually drive fabrics, all else is essentially NPU - because it can be reprogrammed on the fly. As Saku pointed out, there’s less and less difference between modern x86 architectures and networking NPUs however and given how much different things can be easily done in software, this trend will continue to drive “cloud” applications. This should also help “simplifcation” trend, as there will be less and less dependency on the fancy “hardware” capabilities of a box.

I believe what is holding this back from becoming mainstream reality is
that the cloud providers are building their own swaths of software
solutions to run in x86 CPU's, and that code does not make it into the
wild to see what else can be done with it. So the general community
keeps messing about with flavour-of-the-year-SDN-thingy, until we
realize it doesn't work and we move on to yet-another concoction.

I believe if the cloud boys & girls shared more of that code so the
network operators can see what to do with it on white boxes fitted with
Broadcom (or equivalent), the rate of "simplification" could accelerate.


>
> Next wave will (are) probably photonics, moving further and further into direct feature capabilities without influencing speed-down to tackle specific feature in silicon. That will be true test to “how many features you *really* need” and great area to optimize further. Question is - how fast we’ll get there realistically with shipping products and how much it will level field vendors are playing on with Customers.

I saw a preso from a major cloud provider (their main cloud product
rhymes with the number of days in a year) around that back in San Diego,
2018, at some DWDM conference. It was scary for the Transport vendors.
That was nearly 3 years ago. It wouldn't surprise me if it's now an
in-house solution.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 13 Jul 2020, at 15:55, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 13/Jul/20 12:27, ?ukasz Bromirski wrote:
>> You also can’t bet on oversimplifying things, as Juniper with PTX (for example) found out.
> Can you expand on this a little more?

PTX was created as a platform that would do only MPLS-based forwarding, with tiny IP routing table scale.
The idea was marketed as something that would give Juniper a leading edge across SPs and win back accounts.


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Mon, 13 Jul 2020 at 08:27, Saku Ytti <saku@ytti.fi> wrote:
>
> On Mon, 13 Jul 2020 at 00:54, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> > The general messaging, over the years, has been that ASIC is quick but
> > not flexible, while NPU is flexible but can get bogged down by added
> > flexibility in time.
>
> The classical view is that packet through ASIC takes constant,
> invariant time, and packet through NPU takes variant time, depending
> on how many instructions the NPU needs to perform for this packet.

It's tough to draw a clear line. This is roughly the definition I use
too ^: fixed vs. variable time, with the exception that some ASICs
support packet recirculation, meaning overall packet processing time
increases, but technically each pass of the packet is still in fixed
time.


> But if that is a strict definition, then we don't really have ASICs
> outside really cheap switches, as there is some programmability in all
> new stuff being released. So I'm not sure what the correct definition
> is.

I would say that ASICs are fixed-function single dies, or a very small
number of tightly bound dies, with a fixed feature set. My
definition of an NPU is a collection of ASICs and/or a collection of
purpose-built components, e.g. "a complex of ASICs". NPUs tend to have
a varying feature set depending on what uCode you load on them.
Although some ASICs have limited flexibility based on the uCode you
load on them, the features are more or less restricted to the burnt-in
features, whereas an NPU could support radically different features
throughout its lifetime.


> Equally when does a software router become a hardware router? Why is
> XEON not NPU but Trio is? Are there some objective facts which
> differentiate CPU from NPU and NPU from ASIC?

Added inline..

> # NPU vs CPU?
> - NPU tends to have more cores than CPU
> - NPU has application specific instruction set
> - NPU has application specific memory interface
- NPUs (Trio from Juniper MX, NP3C from Cisco 7600,
Trident/Typhoon/Tomahawk from Cisco ASR9) are all *collections* of
components/chips working together (ASICs, S/D/RLD-RAM, e/i-TCAM, TOPs,
ALUs, FPGAs, CPLDs).
- CPU is more general purpose, that is why we have GPUs + GDDR for
example, to address a specific purpose the CPU is not geared towards,
equally we have NPUs to address another purpose a CPU is not geared
towards.
- CPUs are often multiple components too (processor(s), i/d-caches,
ALUs) but running generic code with a generic instruction set. The
components in the NPU run a limited and purpose specific instruction
set with a very different design (e.g. highly parallelised as you
said).

> # NPU vs ASIC?
> - ASIC does parsing and lookup in silicon, not by running a set of
> instruction given by a program
> - ASIC is constant time, NPU is variable time
> - ASIC has many type of silicons for different function, NPU has many
> identical siicons running different set of instruction depending on
> packet/config
- NPUs also have function-specific silicon, e.g. TOPs (Task
Optimised Processors), which exist in ASR9K NPUs but not in Trio;
they also run uCode and have a very small amount of flexibility.

Cheers,
James.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Mon, 13 Jul 2020 at 09:01, Mark Tinka <mark.tinka@seacom.com> wrote:
>
>
>
> On 13/Jul/20 09:20, Saku Ytti wrote:
>
> > But if that is a strict definition, then we don't really have ASICs
> > outside really cheap switches, as there is some programmability in all
> > new stuff being released. So I'm not sure what the correct definition
> > is.
>
...
> Since that time, we've asked routers to do more things beyond simple IP
> packet forwarding, which has required those chips to evolve more into
> NPU's than ASIC's. I'd say from around the ASR1000, MX and later, is
> when we saw this shift.
>
> So I agree with you that outside of classic Ethernet switches today, if
> we have to be pedantic about what an ASIC is, we don't see them in
> today's kit anymore.

Back in the 7600s it was NPU based, and what we call NPUs today are
sometimes a collection of ASICs that form a "complex of ASICs". That
is what powered the 7600, the NP3C NPU. 7600s used a group of ASICs
working together to perform forwarding lookups, buffering, backplane
sending/receiving etc.

That's what we have in Juniper Trio / Cisco ASR9K / Nokia FPs too, a
bunch of devices, often ASICs working together. So I don't think that
we have no ASICs like in the classic Ethernet switch you mention, but
we have groups of them now with other components too, forming
something more complex.

Cheers,
James.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 19:19, Łukasz Bromirski wrote:

> PTX was created as platform that will do only MPLS-based forwarding,
> with tiny IP routing table scale.
> The idea was marketed as something that will get Juniper leading edge across SPs and win back accounts.

Except it can do millions of routes in RIB and FIB, so...

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
The initial iteration of the PTX couldn't. It was really just meant as an LSR with relatively low FIB scale, couldn't do Netflow (remember the external server to do it?), etc. That quickly pivoted to something more capable. Juniper sort of positioned the initial version as a route reflector since that was one of the other things it could do due to selective FIB install.

Thanks,
Phil

On 7/13/20, 2:36 PM, "cisco-nsp on behalf of Mark Tinka" <cisco-nsp-bounces@puck.nether.net on behalf of mark.tinka@seacom.com> wrote:


On 13/Jul/20 19:19, Łukasz Bromirski wrote:

> PTX was created as platform that will do only MPLS-based forwarding,
> with tiny IP routing table scale.
> The idea was marketed as something that will get Juniper leading edge across SPs and win back accounts.

Except it can do millions of routes in RIB and FIB, so...

Mark.


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 21:51, Phil Bedard wrote:
> The initial iteration of the PTX couldn't. It was really just meant as an LSR with relatively low FIB scale, couldn't do Netflow (remember the external server to do it?), etc. That quickly pivoted to something more capable. Juniper sort of positioned the initial version as a route reflector since that was one of the other things it could do due to selective FIB install.

The PTX1000?

I somewhat recall the PTX5000 launching first, and then the smaller
variants following. But my memory isn't what it used to be.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 13/Jul/20 21:51, Phil Bedard wrote:
> Juniper sort of positioned the initial version as a route reflector since that was one of the other things it could do due to selective FIB install.

Between the J-series router, the JCS1200 and what some customers did
with the M120, Juniper have struggled to push an RR-only box.

Thank God all the vendors found a way to virtualize their OS's :-).

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
The PTX 5000 was the original PTX. The 1st/2nd generation FPCs didn't have much FIB capacity, like 128k routes which was well below an Internet table in 2014. The 3rd gen is where they started supporting up to 1M+ v4 prefixes. The CSE2000 was the external appliance.

Thanks,
Phil

On 7/13/20, 5:25 PM, "Mark Tinka" <mark.tinka@seacom.com> wrote:



On 13/Jul/20 21:51, Phil Bedard wrote:
> The initial iteration of the PTX couldn't. It was really just meant as an LSR with relatively low FIB scale, couldn't do Netflow (remember the external server to do it?), etc. That quickly pivoted to something more capable. Juniper sort of positioned the initial version as a route reflector since that was one of the other things it could do due to selective FIB install.

The PTX1000?

I somewhat recall the PTX5000 launching first, and then the smaller
variants following. But my memory isn't what it used to be.

Mark.



Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 14/Jul/20 04:28, Phil Bedard wrote:
> The PTX 5000 was the original PTX. The 1st/2nd generation FPCs didn't have much FIB capacity, like 128k routes which was well below an Internet table in 2014. The 3rd gen is where they started supporting up to 1M+ v4 prefixes.

Yes, this is what I remember also. But if you look at the PTX1000, it
hasn't had that problem since inception.

Granted, one could say Juniper made the assumption that most MPLS-based
networks would run a BGP-free core, and in theory, they were right.
What they didn't account for was that 6PE was one of those "temporary
becomes permanent" situations.


> The CSE2000 was the external appliance.

Personally, I've never felt running Netflow on core routers is ideal,
particularly if you are a BGP-free core network, and if your collector
isn't that great at grabbing flow data from MPLS frames.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Mon, 13 Jul 2020 at 20:39, James Bensley
<jwbensley+cisco-nsp@gmail.com> wrote:

> Back in the 7600s it was NPU based, and what we call NPUs today are
> sometimes a collection of ASICs that form a "complex of ASICs". That
> is what powered the 7600, the NP3C NPU. 7600s used a group of ASICs
> working together to perform forwarding lookups, buffering, backplane
> sending/receiving etc.

NP3C was on ES20+ (not ES20). The ASR9k Trident was the same EZchip
NP3C. But of course the vast majority of 7600 linecards were PFC3,
clearly not a NPU.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> Mark Tinka
> Sent: Monday, July 13, 2020 3:04 PM
>
> On 13/Jul/20 12:27, Łukasz Bromirski wrote:
>
> > Do we have ASICs? Yes, they *still* usually drive fabrics, all else is
> > essentially NPU - because it can be reprogrammed on the fly. As Saku
> > pointed out, there’s less and less difference between modern x86
> > architectures and networking NPUs however and given how much different
> > things can be easily done in software, this trend will continue to drive “cloud”
> > applications. This should also help the “simplification” trend, as there will be less
> > and less dependency on the fancy “hardware” capabilities of a box.
>
> I believe what is holding this back from becoming mainstream reality is that
> the cloud providers are building their own swaths of software solutions to
> run in x86 CPU's, and that code does not make it into the wild to see what
> else can be done with it. So the general community keeps messing about
> with flavour-of-the-year-SDN-thingy, until we realize it doesn't work and we
> move on to yet-another concoction.
>
Not sure what you mean; you can run XR on a white box or an x86 host (e.g. cRDP) - that's regarding disaggregation and NFV...
And regarding the "flavour-of-the-year-SDN-thingy", I guess I could see how it has a certain mystery about it for outsiders, but for someone building an intent-based networking / service orchestration system, all the concepts you read about in various RFCs, books and publications are a day-to-day reality.

> I believe if the cloud boys & girls shared more of that code so the network
> operators can see what to do with it on white boxes fitted with Broadcom (or
> equivalent), the rate of "simplification" could accelerate.
>
I'd say NO thanks very much.
Have you seen what they did with their Espresso core? It looks like a bunch of programmers' attempt at a core network without ever talking to anyone with a networking background (no wonder their network architect job advert looked more like a software architect job advert - not a single requirement around network design skills).
There are a few interesting concepts in there worth exploring, but the thing is a mess.

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> From: Saku Ytti <saku@ytti.fi>
> Sent: Monday, July 13, 2020 11:06 AM
>
> On Mon, 13 Jul 2020 at 12:42, <adamv0025@netconsultings.com> wrote:
>
> > On J2 you can pretty much enable all the bells and whistles you can
> > and still get the same pps rate out of it as with no features enabled.
>
> I don't think this is strictly true, maybe it's true for several cases. But even
> without recirculation, I think you can take variable time through J2 and if you
> wanted, you could make it perform pathologically poor, it is flexible enough
> for that. You can do a lot of stuff on the lookup pipeline for the first 144B in
> J2. Then there is the programmable elements matrix for future functions, can
> I put every frame there with no cost? You can use system headers to skip
> devices inside the pipeline.
> It's hard for me to imagine with all this flexibility we'd still always guarantee
> constant time.
>
> J2 is much more flexible than what J1 was. But of course it's still very much a
> pipeline of different types of silicons, some more some less programmable.
> So I don't know.
>
I think the idea behind it is that the whole thing is clocked to offer "line-rate" (whatever the vendor deems line-rate to be - my definition for 10Gbps@L2(64B)@L1(84B) is 29.761Mpps (both directions)).
So if you can guarantee that, no matter what features the operator enables, packet headers will leave the pipeline at a frequency no less than << line-rate >>, you have an ASIC.
Also, since the feature set is very narrow on these things (compared to NPUs), the deviation from the mean is probably very low.
And of course then there's the recirc. - but that's usually a known caveat.
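That per-10G line-rate figure follows directly from the on-wire frame size: a 64B L2 frame carries another 20B of L1 overhead (preamble, SFD and inter-frame gap), so 84B per frame on the wire. A quick sketch of the arithmetic (the function name and defaults are mine, just for illustration):

```python
def line_rate_mpps(link_bps, l2_frame_bytes=64, l1_overhead_bytes=20, directions=2):
    """Worst-case (minimum-size frame) packet rate for a link, in Mpps."""
    wire_bytes = l2_frame_bytes + l1_overhead_bytes  # 84B on the wire per 64B frame
    return directions * link_bps / (wire_bytes * 8) / 1e6

print(round(line_rate_mpps(10e9), 3))  # ~29.762 Mpps across both directions of a 10G link
```

One direction of 10G at 64B frames is the familiar ~14.88Mpps; both directions doubles it.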

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 14/Jul/20 09:56, adamv0025@netconsultings.com wrote:

> Not sure what you mean, you can run XR on a white-box, or x86-host (e.g. cRDP). -that's regarding the disaggregation and NFV...

What I mean is the "holy grail" of white box or x86 + home-grown or
new/cheap NOS.

Running a traditional vendor's OS on a white box or x86 isn't going to
necessarily be cheaper from a licensing perspective.

> And regarding the "flavour-of-the-year-SDN-thingy", I guess I could see how it has a certain mystery about it for outsiders, but for someone building an intent based networking/ service orchestration system all the concepts you read about in various RFCs, books and publications are a day to day reality.

I'm not talking about their non-existence... I'm talking about the wide
difference between getting these "intent-based" solutions standardized,
and what is actually happening that hasn't been shared, and/or is
proprietary, particularly with the cloud bags (bags = boys & girls).

The otherwise untidy standardization process for "SDN" does not mean
operators (including the cloud bags) aren't enjoying their own
"automation" deployments, whatever those may be.


> I'd say NO thanks very much,
> Have you seen what they did with their espresso core? Looks like a bunch of programmers attempt at core network while never talking to anyone with network background, (no wonder their network architect job advert looked more like software architect job advert -not a single requirement around networking design skills).
> There are a few interesting concepts in there worth exploring, but the thing is a mess.

Like I said, "if" and "could". I don't know...

What I know is the cloud bags have the resources to experiment and write
code, maybe even to a larger extent than the traditional vendors they
buy from. Sure, the problems you mention are not unlike what we see
coming from vendors where the person interpreting the RFC to write the
code has no idea what IS-IS actually does. That comes down to staff
management, so not an entirely big problem to solve once they have the
right leadership.

For my shop, it still makes sense to rely on a traditional vendor to
solve my networking problems. This isn't necessarily the case for
fast-growing cloud bags, or network operators eating up the
"intent-based" liquorice :-).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 14 Jul 2020, at 06:52, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 14/Jul/20 04:28, Phil Bedard wrote:
>> The PTX 5000 was the original PTX. The 1st/2nd generation FPCs didn't have much FIB capacity, like 128k routes which was well below an Internet table in 2014. The 3rd gen is where they started supporting up to 1M+ v4 prefixes.
>
> Yes, this is what I remember also. But if you look at the PTX1000, you
> don't have that problem from when it went into inception.
>
> Granted, one could say Juniper made the assumption that most MPLS-based
> networks would run a BGP-free core, and in theory, they were right.

That’s right. That’s also why I didn’t want to claim Juniper was stupid
with the product, more into direction that such naive approach simply
didn’t work out.

> What they didn't account for was that 6PE was one of those "temporary
> becomes permanent" situations.

OTOH, that’s still the vision we’re trying to catch up with, right? Have one,
simple and easy to provision/monitor/troubleshoot/traffic engineer/decommission
protocol, that carries all address families, has unified architecture and
universal interoperability “because of that”.

So, about OpenFlow… ;)


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 15:08, Łukasz Bromirski wrote:

> That’s right. That’s also why I didn’t want to claim Juniper was stupid
> with the product, more into direction that such naive approach simply
> didn’t work out.

Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
list of shoo-ins.


> OTOH, that’s still the vision we’re trying to catch up with, right? Have one,
> simple and easy to provision/monitor/troubleshoot/traffic engineer/decommission
> protocol, that carries all address families, has unified architecture and
> universal interoperability “because of that”.
>
> So, about OpenFlow… ;)

Yea, about that :-)...

There will come a time when new operators cannot buy IPv4 on an open or
black market, so we all need to keep cracking on bringing IPv6 up to
scratch.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 15 Jul 2020, at 15:45, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 15/Jul/20 15:08, Łukasz Bromirski wrote:
>
>> That’s right. That’s also why I didn’t want to claim Juniper was stupid
>> with the product, more into direction that such naive approach simply
>> didn’t work out.
> Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
> list of shoo-ins.

That’s interesting. How about Cisco 8000 (feature-rich services) or
NCS 55xx (cheap 10/40/100/200G ifaces)?

> There will come a time when new operators cannot buy IPv4 on an open or
> black market, so we all need to keep cracking on bringing IPv6 up to
> scratch.

I’ve been trying to preach IPv6 for two decades now. I can last for one
or two decades more maximum, so you guys *need* to deploy IPv6 in
production environments even if a) it’s still broken and b) it is
missing some parts.


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> -----Original Message-----
> From: cisco-nsp <cisco-nsp-bounces@puck.nether.net> On Behalf Of Lukasz
> Bromirski
> Sent: Wednesday, July 15, 2020 6:00 PM
> To: Mark Tinka <mark.tinka@seacom.com>
> Cc: cisco-nsp NSP <cisco-nsp@puck.nether.net>
> Subject: Re: [c-nsp] Cisco N540-ACC-SYS ipv4 routes
>
> Mark,
>
> > On 15 Jul 2020, at 15:45, Mark Tinka <mark.tinka@seacom.com> wrote:
> >
> > On 15/Jul/20 15:08, Łukasz Bromirski wrote:
> >
> >> That’s right. That’s also why I didn’t want to claim Juniper was
> >> stupid with the product, more into direction that such naive approach
> >> simply didn’t work out.
> > Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
> > list of shoo-ins.
>
> That’s interesting. How about Cisco 8000 (feature-rich services) or NCS 55xx
> (cheap 10/40/100/200G ifaces)?
>
Wanna bet which one of the two will be cheaper? (the new upcoming PTX10k1 vs 8201)
- also will be interesting to compare flex license options on both.

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 19:00, Łukasz Bromirski wrote:

> That’s interesting. How about Cisco 8000 (feature-rich services)...

I don't need rich features in the core. The only exotic feature I will
need is LDPv6. It's a BGP-free core, swapping labels at high-speed for
IPv4 and IPv6 traffic in MPLS frames. So no fancy bits needed.

Also, the 8200 doesn't support 10Gbps ports, which I still need. This is
only supported on the 8800, which is too big and too bulky.


> or
> NCS 55xx (cheap 10/40/100/200G ifaces)?

Broadcom!

But more importantly, my trust level in Cisco's long-term ambitions in
the industry is at an all-time low. I just don't feel that Cisco have
sufficient integrity within their business for me to sleep well at
night, relying on them to have my best interests at heart, over a 5- to
10-year period.


> I’m trying to preach IPv6 for two decades right now. I can last for one
> or two decades more maximum so you guys *need* to deploy IPv6 in
> production environments even if a) it’s still broken and b) it is
> missing some parts.

I have never believed in tunneled IPv6. I have run native dual-stack
infrastructure in all the networks I built in Africa, Asia-Pac, and
Europe, in the last 20 years.

This is the driving force behind my need for LDPv6, as I do not believe
in 6PE. And one of the many reasons I am moving on from Cisco to other
vendors that appreciate this philosophy, rather than finding ways to keep
milking money from customers through CGN's, vendor-managed services, and
such.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 19:45, adamv0025@netconsultings.com wrote:

> Wanna bet which one of the two will be cheaper? (the new upcoming PTX10k1 vs 8201)
> - also will be interesting to compare flex license options on both.

You can get flexible licensing on the PTX1000 today, where the 72x ports
are sold in chunks of 18x per license.

Currently, the 8201 is very closely priced to the PTX1000. The problem
is the 8201 supports only 12x 100Gbps ports, while the PTX1000 can
deliver 24x 100Gbps, in addition hundreds of 10Gbps ports (which the
8201 cannot support).

The 8202 would be closer to the PTX1000 and PTX10002, but the lack of
10Gbps support is a real problem.

Also, both the 8201 and 8202 give you only 2x power supplies, while you
get 4x power supplies in all the PTX fixed form factor routers bar the
PTX10003 (which provides 2x 3000W PSU's).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Tue, 14 Jul 2020 at 06:20, Saku Ytti <saku@ytti.fi> wrote:
>
> On Mon, 13 Jul 2020 at 20:39, James Bensley
> <jwbensley+cisco-nsp@gmail.com> wrote:
>
> > Back in the 7600s it was NPU based, and what we call NPUs today are
> > sometimes a collection of ASICs that form a "complex of ASICs". That
> > is what powered the 7600, the NP3C NPU. 7600s used a group of ASICs
> > working together to perform forwarding lookups, buffering, backplane
> > sending/receiving etc.
>
> NP3C was on ES20+ (not ES20). The ASR9k Trident was the same EZchip
> NP3C. But of course the vast majority of 7600 linecards were PFC3,
> clearly not a NPU.

You're right, and I should have been clearer that the ES20+ cards used
the NP3C NPU. But I wouldn't say that ES20 cards / PFC3 cards clearly
are not an NPU. I think they are actually in the interesting middle
ground between what I would call an ASIC powered device and an NPU
powered device.

Take the ME3600X and early ASR920 devices for example (I don't know so
much about the more recent ASR920s); these are single chip all-in-one
ASIC boxes. Technically the ME3600X/ME3800X use two ASICs linked via a
PCI link which is non-blocking, but this is just horizontal scaling to
accommodate additional ports; it's two of the exact same ASIC which
both have on chip TCAM and buffers and both carry all forwarding
information and perform all functions.

So in these two pizza boxes, we have a single ASIC that does
everything; the front panel ports connect to the ASICs, they perform
ingress queueing, forwarding look-ups, egress re-writes, egress
queueing, everything. The PFC3 cards on 7600 require a collection of
ASICs (one to connect to the front panel ports, one for queueing, one
for forwarding lookups and rewrites, one for backplane / crossbar
transmission and reception etc.), so whilst they probably don't fit a
strict definition of NPU, I think they are in the interesting in-between
stage of evolving from single ASIC -> a bunch of loosely coupled ASICs
-> a complex of tightly bound ASICs.

Cheers,
James.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, 16 Jul 2020 at 11:25, James Bensley
<jwbensley+cisco-nsp@gmail.com> wrote:

> You're right, and I should have been clearer that the ES20+ cards used
> the NP3C NPU. But I wouldn't say that ES20 cards / PFC3 cards clearly
> are not an NPU. I think they are actually in the interesting middle
> ground between what I would call an ASIC powered device and an NPU
> powered device.

ES20 is Toaster/PXF, which can be said to be NPU. But if PFC3[ABC] is
NPU, then I'd say there are no non-NPU forwarding chips.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 10:24, James Bensley wrote:

>
> Take the ME3600X and early ASR920 devices for example (I don't know so
> much about the more recent ASR920s);

I think the ASR920 hasn't changed since the beginning. It runs the Cylon
ASIC on all models I am aware of.


> Technically the ME3600X/ME3800X use two ASICs linked via a
> PCI link which is non-blocking, but this is just horizontal scaling to
> accomodate for additional ports, it's two of the exact same ASIC which
> both have on chip TCAM and buffers and both carry all forwarding
> information and perform all functions.

Yes, the Nile ASIC on the ME3600X/3800X was actually 2 of them in one
box. It was what Cisco called their "Cisco Carrier Ethernet ASIC", at
the time. Each chip was good for 24Gbps packet processing, and both
combined had a capability of 65Mpps.

The system was also equipped with what Cisco called their "Magic FPGA".
It was meant to offload certain processing requirements from Nile such
as OAM, performance monitoring, video monitoring, fast hellos, packet
inspection, etc.

Nile was attached to:

    - 2x 64/96-bit 400MHz RLDRAM packet buffers.
    - 1x 36-bit 400MHz QDR SRAM.
    - 1x 400MHz TCAM chip.
    - 1x Forwarding RAM.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> -----Original Message-----
> From: Mark Tinka <mark.tinka@seacom.com>
> Sent: Wednesday, July 15, 2020 10:11 PM
> To: adamv0025@netconsultings.com; '?ukasz Bromirski'
> <lukasz@bromirski.net>
> Cc: 'cisco-nsp NSP' <cisco-nsp@puck.nether.net>
> Subject: Re: [c-nsp] Cisco N540-ACC-SYS ipv4 routes
>
>
>
> On 15/Jul/20 19:45, adamv0025@netconsultings.com wrote:
>
> > Wanna bet which one of the two will be cheaper? (the new upcoming
> > PTX10k1 vs 8201)
> > - also will be interesting to compare flex license options on both.
>
> You can get flexible licensing on the PTX1000 today, where the 72x ports are
> sold in chunks of 18x per license.
>
> Currently, the 8201 is very closely priced to the PTX1000. The problem is the
> 8201 supports only 12x 100Gbps ports, while the PTX1000 can deliver 24x
> 100Gbps,
You should be able to break out the 24x400G ports on 8201 to 96x100G ports (plus the 12x100G native),

>in addition hundreds of 10Gbps ports (which the
> 8201 cannot support).
>
Comparing the PTX1000 to the 8201 is comparing apples and oranges; there's a 7Tbps difference between them.
The PTX1000 is not 100G optimized (only the PTX10K series is).
I meant the upcoming PTX10k, which should be directly comparable with the 8201 - we'll see.

adam




Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, Jul 16, 2020 at 5:00 AM Saku Ytti <saku@ytti.fi> wrote:

> On Thu, 16 Jul 2020 at 11:25, James Bensley
> <jwbensley+cisco-nsp@gmail.com> wrote:
>
> > You're right, and I should have been clearer that the ES20+ cards used
> > the NP3C NPU. But I wouldn't say that ES20 cards / PFC3 cards clearly
> > are not an NPU. I think they are actually in the interesting middle
> > ground between what I would call an ASIC powered device and an NPU
> > powered device.
>
> ES20 is Toaster/PXF, which can be said to be NPU. But if PFC3[ABC] is
> NPU, then I'd say there are no-non NPU forwarding chips.
>
> --
> ++ytti
>

Not trying to be smart or pedantic: modern routers are built out of lots of
"ASICs". I imagine the forwarding element design is the differentiator:

1. Fixed pipeline: EARL family
2. Programmable pipeline: UADP family
3. Run-to-completion: "Silicon One" family

Not an exhaustive list, lots of other examples etc...

--
Tim:>
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, 16 Jul 2020 at 16:31, Tim Durack <tdurack@gmail.com> wrote:

> Not trying to be smart or pedantic: modern routers are built out of lots of "ASICs". I imagine the forwarding element design is the differentiator:

I don't think there is any other option here :)

> 1. Fixed pipeline: EARL family
> 2. Progammable pipeline: UADP family
> 3. Run-to-completion: "Silicon One" family
>
> Not an exhaustive list, lots of other examples etc...

I mean, what is a pipeline? Silicon One is a pipeline. The ingress pipe is
parser + npu (terminate) + npu (lookup), the egress pipe is npu (rewrite).

Nokia FP is a pipeline, and like Silicon One it's a pipeline of identical
NPUs, just a lot more identical NPUs in the pipeline compared to Silicon
One.

Trio, OTOH, hits only one core in the LU; a given PPE handles everything
for a given packet. So it's not a pipeline.

I like the Trio approach more, as the more NPUs you have in the
pipeline, the more difficult it looks to program it right. If your NPU1
is the parser, and you have big, buggy code and your parsing of IPv6
extension headers is pathologically slow, now you're head-of-line
blocking (HOLB) the whole line and the rest of the cores are doing
nothing.
In Trio they don't need to be so careful, as you can think of it as a
single fat core instead of many slim cores in a pipe, so you get to use
the whole cycle pool, and if not every packet is pathological, you get
away with a lot worse ucode design.
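A toy model of that trade-off (the function names and all cycle counts are made up for illustration, not real Trio or Silicon One figures): a rigid pipeline drains at the rate of its slowest stage, while a run-to-completion pool only pays a packet's cost on the core that handles it.

```python
def pipeline_pps(clock_hz, stage_costs):
    # A rigid pipeline is capped by its slowest stage: one pathologically
    # slow parser head-of-line blocks every stage behind it.
    return clock_hz / max(stage_costs)

def run_to_completion_pps(clock_hz, cores, stage_costs):
    # A pool of identical "fat" cores shares one cycle budget; each packet
    # just consumes its total cost on whichever core it lands on.
    return cores * clock_hz / sum(stage_costs)

normal = [50, 100, 30]       # cycles/packet for parse, lookup, rewrite
slow_parse = [400, 100, 30]  # buggy IPv6 extension-header parsing

pipeline_pps(1e9, normal)                  # 10.0 Mpps, bounded by the lookup stage
pipeline_pps(1e9, slow_parse)              # 2.5 Mpps, the parser caps the whole line
run_to_completion_pps(1e9, 3, slow_parse)  # ~5.66 Mpps with the same 3 cores
```

Of course, if *every* packet is pathological the pool is slow too; the point is that an occasional bad packet stalls one core, not the whole line.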


--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Hi James,

> James Bensley
> Sent: Thursday, July 16, 2020 9:25 AM
>
> The PFC3 cards on 7600 require a collection of ASICs (one to connect to the
> front panel ports, one for queueing, one for forwarding lookups and rewrites,
> one for backplane / crossbar transmission and reception etc.), so whilst they
> probably don't fit a strict definition of NPU I think they are in the
> interesting in-between stage of evolving from single ASIC -> a bunch of
> loosely coupled ASICs -> a complex of tightly bound ASICs.
Which is indeed how all modern NPUs are actually built.
As lithography shrinks, more and more of these components end up under
one roof, but inside it's still a collection of distinct functional
blocks.
Out of the roughly two dozen high-level functional blocks a typical
NPU architecture has, though, in discussions comparing ASICs and NPUs I
think the only one worth focusing on is the block responsible for
processing the packet header (or, in some cases, the whole packet),
and specifically how much flexibility that block offers in terms of
possible operations on the header.
There we can spot a pattern: header-processing blocks with less
flexibility tend to deliver more consistent pps performance at a lower
cost, while those with more flexibility show more variable pps
performance and cost more. Whether we then call the former ASICs and
the latter NPUs is just nomenclature.
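That consistency difference is easy to see with back-of-the-envelope arithmetic (a minimal sketch; the clock rate and cycle counts below are invented, not taken from any datasheet):

```python
# Throughput in packets/sec is just: pps = core_clock / cycles_per_packet.
# A fixed-function block spends the same cycle count on every packet, so
# its pps is one guaranteed figure; a flexible block's cycle count
# depends on which feature path the packet takes.

CLOCK_HZ = 1.0e9  # assume a 1 GHz packet-processing clock

def pps(cycles_per_packet: float) -> float:
    """Packets per second for a given per-packet cycle cost."""
    return CLOCK_HZ / cycles_per_packet

# Fixed pipeline: every packet costs exactly 2 cycles -> a flat figure.
fixed = pps(2)

# Flexible engine: cost ranges from 4 cycles (plain IPv4 forward) to
# 40 cycles (tunnels + ACLs + extension headers, say).
best_case = pps(4)
worst_case = pps(40)

print(f"fixed-function : {fixed / 1e6:.0f} Mpps, always")
print(f"flexible engine: {best_case / 1e6:.0f} Mpps best case, "
      f"{worst_case / 1e6:.0f} Mpps worst case")
```

The fixed block quotes one number; the flexible one can only honestly be quoted as a best/worst range, which is exactly the pps variability described above.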

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 15:16, adamv0025@netconsultings.com wrote:

> You should be able to break out the 24x400G ports on 8201 to 96x100G ports (plus the 12x100G native),

Probably - not sure.

To be honest, not really interested in what Cisco do anymore. I'll keep
them around because the CSR1000v is the one thing they didn't cock up;
and even if they suddenly stop supporting it for whatever reason, it's
reasonably modern enough that I could still run it for years for BGP-4
route reflection, and not worry about them supporting it.


> Comparing PTX1000 to 8201 is comparing apples and oranges. There's 7Tbps difference between these.
> PTX1000 is not 100G optimized (only PTX10K series are).
> I meant the upcoming PTX10k which should be directly comparable with 8201, we'll see.

Yes, I knew you were talking about an upcoming new platform. Not really
heavy into that - for us, the PTX1000 meets both of our 10Gbps and
100Gbps core requirements at a fair price, for a very long time to come.
I wish I could say the same about the
very-underutilized-but-still-potent CRS-X's we have, but alas, Cisco
again showing why I can't trust them for the long-haul.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 15:31, Tim Durack wrote:

> Not trying to be smart or pedantic: modern routers are built out of lots of
> "ASICs". I imagine the forwarding element design is the differentiator:
>
> 1. Fixed pipeline: EARL family
> 2. Programmable pipeline: UADP family
> 3. Run-to-completion: "Silicon One" family
>
> Not an exhaustive list, lots of other examples etc...

I'm thinking that this is where we can draw a convergence, and conclusion.

An NPU isn't really just a single chip, but a multitude of chips
(ASICs, FPGAs, DDR, GDDR, HMC, HBM, TCAM, etc.) all combining to form
a homogeneous unit that is fit for purpose.

The successful vendors will be those who can make the near-perfect
combination of each individual part, to form the ideal NPU.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
can you please remove me from this list

On 16/07/2020 14:51, Mark Tinka wrote:
>
> On 16/Jul/20 15:31, Tim Durack wrote:
>
>> Not trying to be smart or pedantic: modern routers are built out of lots of
>> "ASICs". I imagine the forwarding element design is the differentiator:
>>
>> 1. Fixed pipeline: EARL family
>> 2. Programmable pipeline: UADP family
>> 3. Run-to-completion: "Silicon One" family
>>
>> Not an exhaustive list, lots of other examples etc...
> I'm thinking that this is where we can draw a convergence, and conclusion.
>
> An NPU isn't really just a single chip, but a multitude of chips
> (ASIC's, FPGA's, DDR, GDDR, HMC, HBM, TCAM, e.t.c.) all combining
> together to form a homogenous unit that is fit for purpose.
>
> The successful vendors will be those who can make the near-perfect
> combination of each individual part, to form the ideal NPU.
>
> Mark.
> _______________________________________________
> cisco-nsp mailing list cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
To be fair, there are many, many ASR9K systems out there today which have been in networks for many years. There is a new generation of cards coming out for those which does not require a chassis swap, and people will be using them for many years to come. The CRS-X, I would agree, doesn't have the longevity of some of the other platforms. In the end Cisco builds the hardware people ask for, and unfortunately has to retire the hardware people no longer want to purchase.

The 8000 series draws much less power and delivers higher throughput than a current-generation PTX. An 8202 is around 750W. As mentioned, you can use breakouts, but breaking out 4x100G from 400G is going to require changing optics on the other side, while 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

Thanks,
Phil

_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 20:48, Phil Bedard wrote:
> To be fair there are many many ASR9K systems out there today which have been in networks for many years. There is a new generation of cards for those coming out which do not require a chassis swap people will be using for many years to come.

If we wanted to use a purely Ethernet-focused box for our core when we
deployed back in 2014, I'd have gone with the MX960.

The CRS made a lot of sense because we had a need for plenty of
non-Ethernet links, and both the MX and ASR9000 were too expensive on a
per-slot basis.


> CRS-X I would agree doesn't have the longevity of some of the other platforms. In the end Cisco builds hardware people ask for, and unfortunately has to retire hardware people no longer want to purchase.

The CRS-X is neither EoS nor EoL. It can do 400Gbps/slot (even though I
am sure it can do more, but then where do you put the NCS 6000), and has
plenty of room for growth.

My problem with Cisco is their solution to a lot of their products is a
complete swap-out. Making us have to replace a ton of CRS-X's with
ASR9000's so I can get "cheap" 100Gbps ports when our current platform
is nowhere near dying is just silly and opportunistic.

>
>
> The 8000 series is much less power and higher throughput than a current generation PTX. An 8202 is around 750W. As mentioned you can use breakouts but to breakout 4x100G from 400G is going to require changing optics on the other side, 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

The lack of 10Gbps support on the 8200's notwithstanding, I just don't
trust Cisco anymore. Boxes come and go with them before they'd have time
to bake in, who knows what they'll come up with next.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 7/16/20, 4:37 PM, "Mark Tinka" <mark.tinka@seacom.com> wrote:



On 16/Jul/20 20:48, Phil Bedard wrote:
> > To be fair there are many many ASR9K systems out there today which have been in networks for many year. There is a new generation of cards for those coming out which do not require a chassis swap people will be using for many years to come.

> If we wanted to use a purely Ethernet-focused box for our core when we
> deployed back in 2014, I'd have gone with the MX960.

> The CRS made a lot of sense because we had a need for plenty of
> non-Ethernet links, and both the MX and ASR9000 were too expensive on a
> per-slot basis.

Fair enough. Every vendor has gone through their own pain with the older midplane systems in having to swap out chassis multiple times to get to higher speeds. Thankfully with the newer fabric designs we've eliminated most of that.

> >
> > The 8000 series is much less power and higher throughput than a current generation PTX. An 8202 is around 750W. As mentioned you can use breakouts but to breakout 4x100G from 400G is going to require changing optics on the other side, 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

> The lack of 10Gbps support on the 8200's notwithstanding, I just don't
> trust Cisco anymore. Boxes come and go with them before they'd have time
> to bake in, who knows what they'll come up with next.

Sorry, I was thinking of 400GE-to-100GE breakout. You can certainly do 4x10GE breakouts on the various 8000s boxes and line cards.

Thanks,
Phil



Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Hi,

On Thu, Jul 16, 2020 at 06:53:49PM -0400, Phil Bedard wrote:
> > The CRS made a lot of sense because we had a need for plenty of
> > non-Ethernet links, and both the MX and ASR9000 were too expensive on a
> > per-slot basis.
>
> Fair enough. Every vendor has gone through their own pain with
> the older midplane systems in having to swap out chassis multiple
> times to get to higher speeds. Thankfully with the newer fabric
> designs we've eliminated most of that.

But that's actually one of the things that alienates the "not megacarrier"
customers. There are perfectly working routers; the vendor falls out of
love with them, new features are no longer added, and you're expected to
buy 32x400G things when all you need is something like "16x 10G".

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany gert@greenie.muc.de
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 17/Jul/20 00:53, Phil Bedard wrote:

> Fair enough. Every vendor has gone through their own pain with the older midplane systems in having to swap out chassis multiple times to get to higher speeds. Thankfully with the newer fabric designs we've eliminated most of that.

Well, we started off with the MX480 back in 2014, and save for the most
recent purchases in the last year, we are still running a ton of the
actual chassis from 2014. We did buy them with the high-capacity fan
trays back then, and the 264VAC power supplies too, so those haven't
changed. What has changed, in PoP's where we've needed to add MPC7E line
cards for low-scale 100Gbps use-cases, is the SCB. We started off with
the SCBE in 2014, and have that in most of the boxes that don't need the
MPC7E's. On the units with the MPC7E's, we just upgraded the SCBE's to
the SCBE2's.

The MX480 RE-S-1800x4 control planes from 2014 are still running just
fine. In fact, we still buy that RE for all new MX480 deployments, today.

I'm not sure how much enhancement you'd need to make to an ASR9006 or
ASR9010 to keep it running 7+ years on. We only ever deployed the
ASR9001, which is still humming along as long as you don't use it as a
very busy peering router :-).

Also in 2014, we bought the CRS-B chassis, which was the one built to
support between 400Gbps - 800Gbps per slot. Cisco decided to cap it off
at 400Gbps/slot when they moved on with the NCS 6000, even though they
did tell us that it has the potential to do 800Gbps with no issue.

We started it off as a CRS-3 (140Gbps/slot), and most of our PoP's still
run it in that configuration. For the PoP's where we need 100Gbps
support, we upgraded them to the CRS-X (so a new 400Gbps fabric and
slot-specific FP-X's). The good news is that the CRS-X is backward
compatible with CRS-3 (and CRS-1) line cards, so that mix works well for
us, since the PoP's where we need 100Gbps ports also still run 10Gbps
ports in CRS-3 line cards.

We still have the same RP's in our CRS routers from 2014 (1.73GHz
Dual-Core Intel Xeon, 12GB RAM, 2x 32GB SSD drives). Solid control
planes, those.

So for me, Cisco not EoL'ing or EoS'ing the CRS-X (or its line cards),
but still "nudging" you away from it is simply bad form. We still have
anywhere from 4 - 6 slots free on each of these routers (so 8 - 12 per
PoP), so the room for growth is plenty, and there is no way I'm going to
put my refresh in Cisco's hands after this behaviour from them. We saw
what happened with a bunch of other boxes that came out, and then simply
disappeared - the NCS 6000 being the most recent.

So I have no confidence that someone at Cisco will some day get bored
and decide that the 8000 platform was not the right approach. No
confidence at all! And I told our AM's the exact same thing a few weeks
ago, when they asked why they were not being considered for our core
refresh any longer. I hope they learned something, but it's hard to
teach the 500-pound gorilla in the room new tricks, so...


> Sorry was thinking 400GE to 100GE breakout. You can certainly do 4x10GE breakouts on the various 8000s boxes and line cards.

We've decided not to continue running our core routers on chassis-based
platforms. The current state-of-the-art suggests that you can get quite
a lot of density, performance and reliability from fixed form factor
core routers, even for multi-100Gbps applications. Less space, less
power, fewer things to spare, fewer things to fail, quick and easy
installations/de-installations... what's not to love?

So as we get rid of our CRS's, only fixed form factor options are going in.

The PTX1000 is looking very good, but we are also looking at Nokia's new
SR-1. The SR-1 can be ordered either as a fixed or modular chassis, and
consumes 3U of rack space.

Exciting times.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 17/Jul/20 08:17, Gert Doering wrote:

> But that's actually one of the things that alienates the "not megacarrier"
> customers. There's perfectly working routers, they fall out of love,
> new features are not added anymore, and you're expected to buy 32x400G
> things when all you need are like "16x 10G".

I think the MX has shown how this can be done, reasonably well, and
still continue to be the pick of the field.

The MX104 was the only time I can fault Juniper, but otherwise, what
they've done with the MX is nothing short of exemplary.

I have no doubt the MX480's I bought in 2014 will still be with us for
another 6 years, at least. Which is why I have no problem investing in
the MX10003 for dedicated 100Gbps edge customers.

Fancy slides and coining catchy terms are no longer a path into our
network. Service provider networks are not that complicated, or complex.
Get back to your roots, and let's not make it a whole song & dance.
Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
The MX960 obviously came out a long time ago. There have been new chassis versions for it, as well as for the PTX5K, to support higher bandwidth speeds, but it was always called the same thing and remained backwards compatible.

Can't argue with the NCS 6K; IMHO it was really forced by some large providers who required a multi-chassis evolution beyond CRS, and that continues to be its main role. But very few really want to continue with multi-chassis at this point, as router capacity has increased rapidly from where it was even a few years ago.

Obviously Cisco has the ASR 99XX series, but there are a lot of 9006s and 9010s that have been in networks for 10+ years at this point. You can use the latest line cards with 400G QSFP-DD ports in a 9006/9010 chassis that came out in 2007? Obviously with common upgrades like switch fabrics and fans to get the most out of it.

TBH the 8k is probably not a very good fit for your network today. Not sure if it's super public, but Cisco does have the ASR 9903. It's 3RU, 600mm depth, 3.6Tb FD. It's 16x100G + 20x10G fixed, plus a single 800G or 2T expansion card.

Thanks,
Phil




Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 17/Jul/20 18:22, Phil Bedard wrote:
> The MX960 obviously came out a long time ago. There have been new chassis versions for it as well as the PTX5K to support higher bandwidth speeds but it was always called the same thing and backwards compatible.

Indeed. But we are likely 2 chassis revisions behind on the current
shipping MX platforms, and we are still happy, even with newer line
cards, fabrics, and so on.

Of course, you do get a point where you ultimately need to change a
chassis to go past a certain performance threshold, but the degree of
oscillation on the MX side, as a function of how much you need to "give
up" to get there, is not as bad.

I suspect the ASR9000 is not far off from the MX in that regard, but
like I said, if I had to choose one of them for my core, the MX would
have won that easily.

> Can't argue with the NCS 6K, IMHO it was really forced by some large providers who required a multi-chassis evolution beyond CRS, and that continues to be its main role. But very few really want to continue with multi-chassis at this point as router capacity has increased rapidly from where it was even a few years ago.

There was the ME2600X. There was the ASR14000. There was the ME4600.
There was the CRS LSR line card. I could go on... plenty of situations
we've found ourselves in where we can't bank on a promise that Cisco
have made.


> TBH the 8k is probably not a very good fit for your network today. Not sure if it's super public but Cisco does have the ASR 9903. It's 3RU, 600mm depth, 3.6Tb FD. It's 16x100G+20x10G fixed, and then a single 800G or 2T expansion card.

Yes, heard about this one. Still not as dense as we can get from the
PTX1000 or PTX10002.

But again with my broken record, we are done with this crew :-).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> From: Phil Bedard <philxor@gmail.com>
> Sent: Friday, July 17, 2020 5:22 PM
>
> The MX960 obviously came out a long time ago. There have been new
> chassis versions for it as well as the PTX5K to support higher bandwidth
> speeds but it was always called the same thing and backwards compatible.
>
> Can't argue with the NCS 6K, IMHO it was really forced by some large
> providers who required a multi-chassis evolution beyond CRS, and that
> continues to be its main role. But very few really want to continue with
> multi-chassis at this point as router capacity has increased rapidly from where
> it was even a few years ago.
>
> Obviously Cisco has the ASR 99XX series, but there are a lot of 9006s and
> 9010s that have been in networks for 10+ years at this point. You can use the
> latest line cards with 400G QSFP-DD ports in a 9006/9010 chassis that came
> out in 2007? Obviously with commons upgrades like switch fabrics and fans
> to get the most out of it.
>
> TBH the 8k is probably not a very good fit for your network today. Not sure if
> it's super public but Cisco does have the ASR 9903. It's 3RU, 600mm depth,
> 3.6Tb FD. It's 16x100G+20x10G fixed, and then a single 800G or 2T expansion
> card.
>
Hi Phil,

Personally, I don't have a problem with the ever-changing platforms; to me it's just natural to see the evolution cycles shortening with every generation (this is to the point of folks reminiscing about the good old 6500/7600 times). My reaction to the gears spinning faster is horizontal scaling.

The problem, I think, is that while Cisco is still extra attentive to the big folks (chassis developed specifically for those carriers, custom code, etc.), in recent years it feels like Cisco is not paying attention to any of the smaller accounts. I don't know, maybe it's just the account teams that have gone bad (or are too overloaded with the pre-covid cleansing?).
It wasn't always like this: yes, I enjoyed the premium lane while working for one of the giants, but even when I started on a small greenfield project I enjoyed a very good partnership from the Cisco team. These days it all seems cold and uninterested.

So to me it's all about having a good, reliable partner by your side, because there are going to be good times and, inevitably, bad times as well; so even while there are compelling products, this is what ultimately kills the deal.

adam


