Mailing List Archive

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> From: Saku Ytti <saku@ytti.fi>
> Sent: Monday, July 13, 2020 11:06 AM
>
> On Mon, 13 Jul 2020 at 12:42, <adamv0025@netconsultings.com> wrote:
>
> > On J2 you can pretty much enable all the bells and whistles you can
> > and still get the same pps rate out of it as with no features enabled.
>
> I don't think this is strictly true; maybe it's true for several cases. But even
> without recirculation, I think you can take variable time through J2, and if you
> wanted, you could make it perform pathologically poorly; it is flexible enough
> for that. You can do a lot of stuff in the lookup pipeline for the first 144B in
> J2. Then there is the programmable elements matrix for future functions; can
> I put every frame there with no cost? You can use system headers to skip
> devices inside the pipeline.
> It's hard for me to imagine with all this flexibility we'd still always guarantee
> constant time.
>
> J2 is much more flexible than what J1 was. But of course it's still very much a
> pipeline of different types of silicon, some more, some less programmable.
> So I don't know.
>
I think the idea behind it is that the whole thing is clocked to offer "line-rate" (whatever the vendor deems line-rate to be; my definition, per 10Gbps with 64B frames at L2 and 84B on the wire at L1, is ~29.76Mpps across both directions).
So if you guarantee that no matter what features the operator enables, the packet headers will leave the pipeline at a frequency no lower than << line-rate >>, you have an ASIC.
Also, since the feature set is very narrow on these things (compared to NPUs), the deviation from the mean is probably very low.
And of course then there's recirculation, but that's usually a known caveat.
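The arithmetic behind that line-rate figure can be sketched as follows (a hedged example; the function name is mine, and it assumes the usual 20B of L1 overhead per frame: 7B preamble + 1B SFD + 12B inter-frame gap):

```python
def line_rate_pps(link_bps, l2_frame_bytes, l1_overhead_bytes=20):
    # Each frame occupies its L2 size plus the L1 overhead on the wire.
    wire_bits = (l2_frame_bytes + l1_overhead_bytes) * 8
    return link_bps / wire_bits

one_direction = line_rate_pps(10e9, 64)   # ~14.88 Mpps for 10Gbps at 64B
both_directions = 2 * one_direction       # ~29.76 Mpps, the figure above
```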

adam


_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 14/Jul/20 09:56, adamv0025@netconsultings.com wrote:

> Not sure what you mean; you can run XR on a white box or an x86 host (e.g. cRDP). That's regarding the disaggregation and NFV...

What I mean is the "holy grail" of white box or x86 + home-grown or
new/cheap NOS.

Running a traditional vendor's OS on a white box or x86 isn't
necessarily going to be cheaper from a licensing perspective.

> And regarding the "flavour-of-the-year-SDN-thingy", I guess I could see how it has a certain mystery about it for outsiders, but for someone building an intent based networking/ service orchestration system all the concepts you read about in various RFCs, books and publications are a day to day reality.

I'm not talking about their nonexistence... I'm talking about the wide
difference between getting these "intent-based" solutions standardized,
and what is actually happening that hasn't been shared, and/or is
proprietary, particularly with the cloud bags (bags = boys & girls).

The otherwise untidy standardization process for "SDN" does not mean
operators (including the cloud bags) aren't enjoying their own
"automation" deployments, whatever those may be.


> I'd say NO thanks very much,
> Have you seen what they did with their espresso core? Looks like a bunch of programmers attempt at core network while never talking to anyone with network background, (no wonder their network architect job advert looked more like software architect job advert -not a single requirement around networking design skills).
> There are a few interesting concepts in there worth exploring, but the thing is a mess.

Like I said, "if" and "could". I don't know...

What I know is the cloud bags have the resources to experiment and write
code, maybe even to a larger extent than the traditional vendors they
buy from. Sure, the problems you mention are not unlike what we see
coming from vendors where the person interpreting the RFC to write the
code has no idea what IS-IS actually does. That comes down to staff
management, so it's not an insurmountable problem once they have the
right leadership.

For my shop, it still makes sense to rely on a traditional vendor to
solve my networking problems. This isn't necessarily the case for
fast-growing cloud bags, or network operators eating up the
"intent-based" liquorice :-).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 14 Jul 2020, at 06:52, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 14/Jul/20 04:28, Phil Bedard wrote:
>> The PTX 5000 was the original PTX. The 1st/2nd generation FPCs didn't have much FIB capacity, like 128k routes which was well below an Internet table in 2014. The 3rd gen is where they started supporting up to 1M+ v4 prefixes.
>
> Yes, this is what I remember also. But if you look at the PTX1000, you
> haven't had that problem since its inception.
>
> Granted, one could say Juniper made the assumption that most MPLS-based
> networks would run a BGP-free core, and in theory, they were right.

That’s right. That’s also why I didn’t want to claim Juniper was stupid
with the product; my point was more that such a naive approach simply
didn’t work out.

> What they didn't account was that 6PE was one of those "temporary
> becomes permanent" situations.

OTOH, that’s still the vision we’re trying to catch up with, right? Have one
simple protocol that is easy to provision/monitor/troubleshoot/traffic-engineer/decommission,
that carries all address families, has a unified architecture, and offers
universal interoperability “because of that”.

So, about OpenFlow… ;)


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 15:08, Łukasz Bromirski wrote:

> That’s right. That’s also why I didn’t want to claim Juniper was stupid
> with the product, more into direction that such naive approach simply
> didn’t work out.

Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
list of shoo-ins.


> OTOH, that’s still the vision we’re trying to catch up with, right? Have one,
> simple and easy to provision/monitor/troubleshoot/traffic engineer/decomission
> protocol, that carries all address families, has unified architecture and
> universal interoperability “because of that”.
>
> So, about OpenFlow… ;)

Yea, about that :-)...

There will come a time when new operators cannot buy IPv4 on an open or
black market, so we all need to keep cracking on bringing IPv6 up to
scratch.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Mark,

> On 15 Jul 2020, at 15:45, Mark Tinka <mark.tinka@seacom.com> wrote:
>
> On 15/Jul/20 15:08, Łukasz Bromirski wrote:
>
>> That’s right. That’s also why I didn’t want to claim Juniper was stupid
>> with the product, more into direction that such naive approach simply
>> didn’t work out.
> Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
> list of shoo-ins.

That’s interesting. How about Cisco 8000 (feature-rich services) or
NCS 55xx (cheap 10/40/100/200G ifaces)?

> There will come a time when new operators cannot buy IPv4 on an open or
> black market, so we all need to keep cracking on bringing IPv6 up to
> scratch.

I’ve been preaching IPv6 for two decades now. I can last for one
or two more decades at most, so you guys *need* to deploy IPv6 in
production environments even if a) it’s still broken and b) it is
missing some parts.


./
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> -----Original Message-----
> From: cisco-nsp <cisco-nsp-bounces@puck.nether.net> On Behalf Of Lukasz
> Bromirski
> Sent: Wednesday, July 15, 2020 6:00 PM
> To: Mark Tinka <mark.tinka@seacom.com>
> Cc: cisco-nsp NSP <cisco-nsp@puck.nether.net>
> Subject: Re: [c-nsp] Cisco N540-ACC-SYS ipv4 routes
>
> Mark,
>
> > On 15 Jul 2020, at 15:45, Mark Tinka <mark.tinka@seacom.com> wrote:
> >
> > On 15/Jul/20 15:08, Łukasz Bromirski wrote:
> >
> >> That’s right. That’s also why I didn’t want to claim Juniper was
> >> stupid with the product, more into direction that such naive approach
> >> simply didn’t work out.
> > Well, we are dumping our CRS-X boxes and the PTX1000 is high up on the
> > list of shoo-ins.
>
> That’s interesting. How about Cisco 8000 (feature-rich services) or NCS 55xx
> (cheap 10/40/100/200G ifaces)?
>
Wanna bet which of the two will be cheaper? (the new upcoming PTX10k1 vs the 8201)
It will also be interesting to compare the flex license options on both.

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 19:00, Łukasz Bromirski wrote:

> That’s interesting. How about Cisco 8000 (feature-rich services)...

I don't need rich features in the core. The only exotic feature I will
need is LDPv6. It's a BGP-free core, swapping labels at high speed for
IPv4 and IPv6 traffic in MPLS frames. So no fancy bits needed.

Also, the 8200 doesn't support 10Gbps ports, which I still need. This is
only supported on the 8800, which is too big and too bulky.


> or
> NCS 55xx (cheap 10/40/100/200G ifaces)?

Broadcom!

But more importantly, my trust level in Cisco's long-term ambitions in
the industry is at an all-time low. I just don't feel that Cisco have
sufficient integrity within their business for me to sleep well at
night, relying on them to have my best interests at heart, over a 5- to
10-year period.


> I’m trying to preach IPv6 for two decades right now. I can last for one
> or two decades more maximum so you guys *need* to deploy IPv6 in
> production environments even if a) it’s still broken and b) it is
> missing some parts.

I have never believed in tunneled IPv6. I have run native dual-stack
infrastructure in all the networks I have built in Africa, Asia-Pac, and
Europe over the last 20 years.

This is the driving force behind my need for LDPv6, as I do not believe
in 6PE. It is also one of the many reasons I am moving on from Cisco to
other vendors that appreciate this philosophy, rather than finding ways
to keep milking money from customers through CGNs, vendor-managed
services, and such.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 15/Jul/20 19:45, adamv0025@netconsultings.com wrote:

> Wanna bet which one of the two will be cheaper? (the new upcoming PTX10k1 vs 8201)
> - also will be interesting to compare flex license options on both.

You can get flexible licensing on the PTX1000 today, where the 72x ports
are sold in chunks of 18x per license.

Currently, the 8201 is very closely priced to the PTX1000. The problem
is the 8201 supports only 12x 100Gbps ports, while the PTX1000 can
deliver 24x 100Gbps, in addition to hundreds of 10Gbps ports (which the
8201 cannot support).

The 8202 would be closer to the PTX1000 and PTX10002, but the lack of
10Gbps support is a real problem.

Also, both the 8201 and 8202 give you only 2x power supplies, while you
get 4x power supplies in all the PTX fixed form factor routers bar the
PTX10003 (which provides 2x 3000W PSU's).

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Tue, 14 Jul 2020 at 06:20, Saku Ytti <saku@ytti.fi> wrote:
>
> On Mon, 13 Jul 2020 at 20:39, James Bensley
> <jwbensley+cisco-nsp@gmail.com> wrote:
>
> > Back in the 7600s it was NPU based, and what we call NPUs today are
> > sometimes a collection of ASICs that form a "complex of ASICs". That
> > is what powered the 7600, the NP3C NPU. 7600s used a group of ASICs
> > working together to perform forwarding lookups, buffering, backplane
> > sending/receiving etc.
>
> NP3C was on ES20+ (not ES20). The ASR9k Trident was the same EZchip
> NP3C. But of course the vast majority of 7600 linecards were PFC3,
> clearly not an NPU.

You're right, and I should have been clearer that the ES20+ cards used
the NP3C NPU. But I wouldn't say that ES20 / PFC3 cards are clearly
not NPUs. I think they are actually in the interesting middle
ground between what I would call an ASIC-powered device and an
NPU-powered device.

Take the ME3600X and early ASR920 devices for example (I don't know so
much about the more recent ASR920s); these are single-chip all-in-one
ASIC boxes. Technically the ME3600X/ME3800X use two ASICs linked via a
non-blocking PCI link, but this is just horizontal scaling to
accommodate additional ports; it's two of the exact same ASIC, which
both have on-chip TCAM and buffers and both carry all forwarding
information and perform all functions.

So in these two pizza boxes, we have a single ASIC that does
everything; the front panel ports connect to the ASICs, and they perform
ingress queueing, forwarding look-ups, egress re-writes, egress
queueing, everything. The PFC3 cards on the 7600 require a collection of
ASICs (one to connect to the front panel ports, one for queueing, one
for forwarding lookups and rewrites, one for backplane / crossbar
transmission and reception, etc.), so whilst they probably don't fit a
strict definition of NPU, I think they are in the interesting in-between
stage of evolving from a single ASIC -> a bunch of loosely coupled
ASICs -> a complex of tightly bound ASICs.

Cheers,
James.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, 16 Jul 2020 at 11:25, James Bensley
<jwbensley+cisco-nsp@gmail.com> wrote:

> You're right, and I should have been clearer that the ES20+ cards used
> the NP3C NPU. But I wouldn't say that ES20 cards / PFC3 cards clearly
> are not an NPU. I think they are actually in the interesting middle
> ground between what I would call an ASIC powered device and an NPU
> powered device.

ES20 is Toaster/PXF, which can be said to be an NPU. But if PFC3[ABC] is
an NPU, then I'd say there are no non-NPU forwarding chips.

--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 10:24, James Bensley wrote:

>
> Take the ME3600X and early ASR920 devices for example (I don't know so
> much about the more recent ASR920s);

I think the ASR920 hasn't changed since the beginning. It runs the Cylon
ASIC on all models I am aware of.


> Technically the ME3600X/ME3800X use two ASICs linked via a
> PCI link which is non-blocking, but this is just horizontal scaling to
> accomodate for additional ports, it's two of the exact same ASIC which
> both have on chip TCAM and buffers and both carry all forwarding
> information and perform all functions.

Yes, the Nile ASIC on the ME3600X/3800X was actually two of them in one
box. It was what Cisco called their "Cisco Carrier Ethernet ASIC" at
the time. Each chip was good for 24Gbps of packet processing, and both
combined had a capability of 65Mpps.

The system was also equipped with what Cisco called their "Magic FPGA",
meant to offload certain processing requirements from Nile such
as OAM, performance monitoring, video monitoring, fast hellos, packet
inspection, etc.

Nile was attached to:

    - 2x 64/96-bit 400MHz RLDRAM packet buffers.
    - 1x 36-bit 400MHz QDR SRAM.
    - 1x 400MHz TCAM chip.
    - 1x Forwarding RAM.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
> -----Original Message-----
> From: Mark Tinka <mark.tinka@seacom.com>
> Sent: Wednesday, July 15, 2020 10:11 PM
> To: adamv0025@netconsultings.com; 'Łukasz Bromirski'
> <lukasz@bromirski.net>
> Cc: 'cisco-nsp NSP' <cisco-nsp@puck.nether.net>
> Subject: Re: [c-nsp] Cisco N540-ACC-SYS ipv4 routes
>
>
>
> On 15/Jul/20 19:45, adamv0025@netconsultings.com wrote:
>
> > Wanna bet which one of the two will be cheaper? (the new upcoming
> > PTX10k1 vs 8201)
> > - also will be interesting to compare flex license options on both.
>
> You can get flexible licensing on the PTX1000 today, where the 72x ports are
> sold in chunks of 18x per license.
>
> Currently, the 8201 is very closely priced to the PTX1000. The problem is the
> 8201 supports only 12x 100Gbps ports, while the PTX1000 can deliver 24x
> 100Gbps,
You should be able to break out the 24x 400G ports on the 8201 into 96x 100G ports (plus the 12x 100G native ones).

>in addition hundreds of 10Gbps ports (which the
> 8201 cannot support).
>
Comparing the PTX1000 to the 8201 is comparing apples and oranges; there's a 7Tbps difference between them.
The PTX1000 is not 100G-optimized (only the PTX10K series is).
I meant the upcoming PTX10k, which should be directly comparable with the 8201; we'll see.

adam




Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, Jul 16, 2020 at 5:00 AM Saku Ytti <saku@ytti.fi> wrote:

> On Thu, 16 Jul 2020 at 11:25, James Bensley
> <jwbensley+cisco-nsp@gmail.com> wrote:
>
> > You're right, and I should have been clearer that the ES20+ cards used
> > the NP3C NPU. But I wouldn't say that ES20 cards / PFC3 cards clearly
> > are not an NPU. I think they are actually in the interesting middle
> > ground between what I would call an ASIC powered device and an NPU
> > powered device.
>
> ES20 is Toaster/PXF, which can be said to be NPU. But if PFC3[ABC] is
> NPU, then I'd say there are no-non NPU forwarding chips.
>
> --
> ++ytti

Not trying to be smart or pedantic: modern routers are built out of lots of
"ASICs". I imagine the forwarding element design is the differentiator:

1. Fixed pipeline: EARL family
2. Programmable pipeline: UADP family
3. Run-to-completion: "Silicon One" family

Not an exhaustive list, lots of other examples etc...

--
Tim:>
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On Thu, 16 Jul 2020 at 16:31, Tim Durack <tdurack@gmail.com> wrote:

> Not trying to be smart or pedantic: modern routers are built out of lots of "ASICs". I imagine the forwarding element design is the differentiator:

I don't think there is any other option here :)

> 1. Fixed pipeline: EARL family
> 2. Progammable pipeline: UADP family
> 3. Run-to-completion: "Silicon One" family
>
> Not an exhaustive list, lots of other examples etc...

I mean, what is a pipeline? Silicon One is a pipeline: the ingress pipe is
parser + npu (terminate) + npu (lookup), and the egress pipe is npu (rewrite).

Nokia FP is a pipeline, but like Silicon One it's a pipeline of identical
NPUs, just with a lot more identical NPUs in the pipeline compared to
Silicon One.

Trio, OTOH, hits only one core in the LU; a single given PPE handles
everything for a given packet. So not a pipeline.

I like the Trio approach more, as the more NPUs you have in the pipeline,
the more difficult it looks to program it right. Because if your NPU1
is the parser, and you have big buggy code and your parsing of IPv6
extension headers is pathologically slow, now you're head-of-line blocking
the whole line, and the rest of the cores are doing nothing.
In Trio they don't need to be so careful, as you can think of it as a
single fat core instead of many slim cores in a pipe, so you get to use
the whole cycle pool, and if not every packet is pathological, you get
away with a lot worse ucode design.
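A toy model of that trade-off (purely illustrative, not a description of any vendor's silicon; the cycle counts are invented):

```python
def pipeline_throughput(stage_cycles):
    # A fixed pipeline advances one packet per "beat"; the slowest stage
    # sets the beat, so one pathological stage head-of-line blocks everything.
    return 1.0 / max(stage_cycles)

def run_to_completion_throughput(cycles_per_packet, n_cores):
    # Run-to-completion: each core owns a packet end to end, so throughput
    # tracks the total per-packet cost spread across the shared core pool.
    return n_cores / cycles_per_packet

well_behaved = [1, 1, 1, 1]     # cycles per stage
pathological = [1, 10, 1, 1]    # e.g. a slow IPv6 extension-header parse

# The pipeline collapses to the worst stage (0.1 packets/cycle), while a
# 4-core run-to-completion design still averages 4/13 packets/cycle (~0.31).
```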


--
++ytti
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
Hi James,

> James Bensley
> Sent: Thursday, July 16, 2020 9:25 AM
>
> The PFC3 cards on 7600 require a collection of ASICs (one to connect to the
> front panel ports, one for queueing, one for forwarding lookups and rewrites,
> one for backplane / crossbar transmission and reception etc.), so whilst they
> probably don't fit a strict definition of NPU I think they are in the
> interest in-between stage of evolving from single ASIC -> a bunch of loosely
> coupled ASICs -> a complex of tightly bound ASIC.
Which is indeed how all the modern NPUs are actually built.
As lithography shrinks, more and more of these components are placed
under one roof, but inside it's still a collection of distinct functional
blocks.

But out of the roughly two dozen high-level functional blocks a typical
NPU architecture has, in discussions comparing ASICs and NPUs I think the
only part/functional block worth focusing on is the one responsible for
processing the packet header (or in some cases the whole packet), and
specifically the flexibility that this particular block provides in terms
of possible operations on the packet header.
There we can then spot a certain pattern: blocks that process packet
headers with less flexibility tend to provide more consistent pps
performance while costing less, in contrast to those that operate with
more flexibility but exhibit more variability in pps performance while
costing more. Then we can discuss whether we want to call the former
ASICs and the latter NPUs, but that's just nomenclature.

adam


Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 15:16, adamv0025@netconsultings.com wrote:

> You should be able to break out the 24x400G ports on 8201 to 96x100G ports (plus the 12x100G native),

Probably - not sure.

To be honest, I'm not really interested in what Cisco do anymore. I'll
keep them around because the CSR1000v is the one thing they didn't cock
up; and even if they suddenly stop supporting it for whatever reason,
it's reasonably modern enough that I could still run it for years for
BGP-4 route reflection, and not worry about them supporting it.


> Comparing PTX1000 to 8201 is comparing apples and oranges. There's 7Tbps difference between these.
> PTX1000 is not 100G optimized (only PTX10K series are).
> I meant the upcoming PTX10k which should be directly comparable with 8201, we'll see.

Yes, I knew you were talking about an upcoming new platform. I'm not
really heavy into that; for us, the PTX1000 meets both our 10Gbps and
100Gbps core requirements at a fair price, for a very long time to come.
I wish I could say the same about the
very-underutilized-but-still-potent CRS-X's we have, but alas, Cisco is
again showing why I can't trust them for the long haul.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 15:31, Tim Durack wrote:

> Not trying to be smart or pedantic: modern routers are built out of lots of
> "ASICs". I imagine the forwarding element design is the differentiator:
>
> 1. Fixed pipeline: EARL family
> 2. Progammable pipeline: UADP family
> 3. Run-to-completion: "Silicon One" family
>
> Not an exhaustive list, lots of other examples etc...

I'm thinking that this is where we can converge, and conclude.

An NPU isn't really just a single chip, but a multitude of chips
(ASICs, FPGAs, DDR, GDDR, HMC, HBM, TCAM, etc.) all combining
to form a homogeneous unit that is fit for purpose.

The successful vendors will be those who can make the near-perfect
combination of each individual part, to form the ideal NPU.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
can you please remove me from this list

On 16/07/2020 14:51, Mark Tinka wrote:
>
> On 16/Jul/20 15:31, Tim Durack wrote:
>
>> Not trying to be smart or pedantic: modern routers are built out of lots of
>> "ASICs". I imagine the forwarding element design is the differentiator:
>>
>> 1. Fixed pipeline: EARL family
>> 2. Progammable pipeline: UADP family
>> 3. Run-to-completion: "Silicon One" family
>>
>> Not an exhaustive list, lots of other examples etc...
> I'm thinking that this is where we can draw a convergence, and conclusion.
>
> An NPU isn't really just a single chip, but a multitude of chips
> (ASIC's, FPGA's, DDR, GDDR, HMC, HBM, TCAM, e.t.c.) all combining
> together to form a homogenous unit that is fit for purpose.
>
> The successful vendors will be those who can make the near-perfect
> combination of each individual part, to form the ideal NPU.
>
> Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
To be fair, there are many, many ASR9K systems out there today which have been in networks for many years. There is a new generation of cards coming out for those which does not require a chassis swap, and people will be using them for many years to come. The CRS-X, I would agree, doesn't have the longevity of some of the other platforms. In the end Cisco builds the hardware people ask for, and unfortunately has to retire hardware people no longer want to purchase.

The 8000 series draws much less power and offers higher throughput than a current-generation PTX; an 8202 is around 750W. As mentioned, you can use breakouts, but breaking out 4x100G from 400G is going to require changing optics on the other side, whereas 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

Thanks,
Phil

On 7/16/20, 9:53 AM, "cisco-nsp on behalf of Mark Tinka" <cisco-nsp-bounces@puck.nether.net on behalf of mark.tinka@seacom.com> wrote:

On 16/Jul/20 15:16, adamv0025@netconsultings.com wrote:

> You should be able to break out the 24x400G ports on 8201 to 96x100G ports (plus the 12x100G native),

Probably - not sure.

To be honest, not really interested in what Cisco do anymore. I'll keep
them around because the CSR1000v is the one thing they didn't cock up;
and even if they suddenly stop supporting it for whatever reason, it's
reasonably modern enough that I could still run it for years for BGP-4
route reflection, and not worry about them supporting it.


> Comparing PTX1000 to 8201 is comparing apples and oranges. There's 7Tbps difference between these.
> PTX1000 is not 100G optimized (only PTX10K series are).
> I meant the upcoming PTX10k which should be directly comparable with 8201, we'll see.

Yes, I knew you were talking about an upcoming new platform. Not really
heavy into that - for us, the PTX1000 meets both of our 10Gbps and
100Gbps core requirements at a fair price, for a very long time to come.
I wish I could say the same about the
very-underutilized-but-still-potent CRS-X's we have, but alas, Cisco
again showing why I can't trust them for the long-haul.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 16/Jul/20 20:48, Phil Bedard wrote:
> To be fair there are many many ASR9K systems out there today which have been in networks for many year. There is a new generation of cards for those coming out which do not require a chassis swap people will be using for many years to come.

If we wanted to use a purely Ethernet-focused box for our core when we
deployed back in 2014, I'd have gone with the MX960.

The CRS made a lot of sense because we had a need for plenty of
non-Ethernet links, and both the MX and ASR9000 were too expensive on a
per-slot basis.


> CRS-X I would agree doesn't have the longevity of some of the other platforms. In the end Cisco builds hardware people ask for, and unfortunately has to retire hardware people no longer want to purchase.

The CRS-X is neither EoS nor EoL. It can do 400Gbps/slot (even though I
am sure it can do more, but then where do you put the NCS 6000), and has
plenty of room for growth.

My problem with Cisco is that their solution for a lot of their products
is a complete swap-out. Making us replace a ton of CRS-X's with
ASR9000's so I can get "cheap" 100Gbps ports, when our current platform
is nowhere near dying, is just silly and opportunistic.

>
>
> The 8000 series is much less power and higher throughput than a current generation PTX. An 8202 is around 750W. As mentioned you can use breakouts but to breakout 4x100G from 400G is going to require changing optics on the other side, 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

The lack of 10Gbps support on the 8200's notwithstanding, I just don't
trust Cisco anymore. Boxes come and go with them before they'd have had
time to bake in; who knows what they'll come up with next.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes [ In reply to ]
On 7/16/20, 4:37 PM, "Mark Tinka" <mark.tinka@seacom.com> wrote:



On 16/Jul/20 20:48, Phil Bedard wrote:
> > To be fair there are many many ASR9K systems out there today which have been in networks for many year. There is a new generation of cards for those coming out which do not require a chassis swap people will be using for many years to come.

> If we wanted to use a purely Ethernet-focused box for our core when we
> deployed back in 2014, I'd have gone with the MX960.

> The CRS made a lot of sense because we had a need for plenty of
> non-Ethernet links, and both the MX and ASR9000 were too expensive on a
> per-slot basis.

Fair enough. Every vendor has gone through their own pain with the older midplane systems in having to swap out chassis multiple times to get to higher speeds. Thankfully with the newer fabric designs we've eliminated most of that.

>
> The 8000 series uses much less power and offers higher throughput than a current generation PTX. An 8202 is around 750W. As mentioned you can use breakouts, but breaking out 4x100G from 400G is going to require changing optics on the other side, while 2x100G does not. The 8000 series and its silicon are going to be around for a long time.

> The lack of 10Gbps support on the 8200's notwithstanding, I just don't
> trust Cisco anymore. Boxes come and go with them before they'd have time
> to bake in, who knows what they'll come up with next.

Sorry, I was thinking of 400GE to 100GE breakout. You can certainly do 4x10GE breakouts on the various 8000 series boxes and line cards.
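For anyone wiring this up, breakouts on the IOS XR boxes are configured under the optics controller; a minimal sketch, where the port location and resulting lane numbering are just examples (check the optics compatibility matrix for which ports on a given card support breakout):

```
! Split the QSFP port at 0/0/0/0 into four 10GE lanes (example location)
controller optics 0/0/0/0
 breakout 4x10
!
! The lanes then show up as TenGigE 0/0/0/0/0 through 0/0/0/0/3
interface TenGigE 0/0/0/0/0
 description example-breakout-lane-0
```

Reverting is just removing the breakout statement, after which the port comes back as a single high-speed interface.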

Thanks,
Phil



Re: Cisco N540-ACC-SYS ipv4 routes
Hi,

On Thu, Jul 16, 2020 at 06:53:49PM -0400, Phil Bedard wrote:
> > The CRS made a lot of sense because we had a need for plenty of
> > non-Ethernet links, and both the MX and ASR9000 were too expensive on a
> > per-slot basis.
>
> Fair enough. Every vendor has gone through their own pain with
> the older midplane systems in having to swap out chassis multiple
> times to get to higher speeds. Thankfully with the newer fabric
> designs we've eliminated most of that.

But that's actually one of the things that alienates the "not megacarrier"
customers. There are perfectly working routers that the vendor falls out
of love with, new features are no longer added, and you're expected to buy
32x400G things when all you need is something like "16x 10G".

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany gert@greenie.muc.de
Re: Cisco N540-ACC-SYS ipv4 routes
On 17/Jul/20 00:53, Phil Bedard wrote:

> Fair enough. Every vendor has gone through their own pain with the older midplane systems in having to swap out chassis multiple times to get to higher speeds. Thankfully with the newer fabric designs we've eliminated most of that.

Well, we started off with the MX480 back in 2014, and save for the most
recent purchases in the last year, we are still running a ton of the
actual chassis from 2014. We did buy them with the high capacity fan
trays back then, and the 264VAC power supplies too, so those haven't
changed. What has changed, in PoP's where we've needed to add MPC7E line
cards for low-scale 100Gbps use-cases, is the SCB. We started off with
the SCBE in 2014, and have that in most of the boxes that don't need the
MPC7E's. On the units with the MPC7E's, we just upgraded the SCBE's to
the SCBE2's.

The MX480 RE-S-1800x4 control planes from 2014 are still running just
fine. In fact, we still buy that RE for all new MX480 deployments, today.

I'm not sure how much enhancement you'd need to make to an ASR9006 or
ASR9010 to keep it running 7+ years on. We only ever deployed the
ASR9001, which is still humming along as long as you don't use it as a
very busy peering router :-).

Also in 2014, we bought the CRS-B chassis, which was the one built to
support between 400Gbps and 800Gbps per slot. Cisco decided to cap it
at 400Gbps/slot when they moved on to the NCS 6000, even though they
did tell us that it has the potential to do 800Gbps with no issue.

We started it off as a CRS-3 (140Gbps/slot), and most of our PoP's still
run it in that configuration. For the PoP's where we need 100Gbps
support, we upgraded them to the CRS-X (so a new 400Gbps fabric and
slot-specific FP-X's). The good news is that the CRS-X is backward
compatible with CRS-3 (and CRS-1) line cards, so that mix works well for
us, since the PoP's where we need 100Gbps ports also still run 10Gbps
ports in CRS-3 line cards.

We still have the same RP's in our CRS routers from 2014 (1.73GHz
Dual-Core Intel Xeon, 12GB RAM, 2x 32GB SSD drives). Solid control
planes, those.

So for me, Cisco not EoL'ing or EoS'ing the CRS-X (or its line cards),
but still "nudging" you away from it is simply bad form. We still have
anywhere from 4 - 6 slots free on each of these routers (so 8 - 12 per
PoP), so the room for growth is plenty, and there is no way I'm going to
put my refresh in Cisco's hands after this behaviour from them. We saw
what happened with a bunch of other boxes that came out, and then simply
disappeared - the NCS 6000 being the most recent.

So I have no confidence that someone at Cisco won't some day get bored
and decide that the 8000 platform was not the right approach. No
confidence at all! And I told our AM's the exact same thing a few weeks
ago, when they asked why they were not being considered for our core
refresh any longer. I hope they learned something, but it's hard to
teach the 500-pound gorilla in the room new tricks, so...


> Sorry was thinking 400GE to 100GE breakout. You can certainly do 4x10GE breakouts on the various 8000s boxes and line cards.

We've decided not to continue running our core routers on chassis-based
platforms. The current state-of-the-art suggests that you can get quite
a lot of density, performance and reliability from fixed form factor
core routers, even for multi-100Gbps applications. Less space, less
power, fewer things to spare, fewer things to fail, quick and easy
installations/de-installations... what's not to love?

So as we get rid of our CRS's, only fixed form factor options are going in.

The PTX1000 is looking very good, but we are also looking at Nokia's new
SR-1. The SR-1 can be ordered either as a fixed or modular chassis, and
consumes 3U of rack space.

Exciting times.

Mark.

Re: Cisco N540-ACC-SYS ipv4 routes
On 17/Jul/20 08:17, Gert Doering wrote:

> But that's actually one of the things that alienates the "not megacarrier"
> customers. There's perfectly working routers, they fall out of love,
> new features are not added anymore, and you're expected to buy 32x400G
> things when all you need are like "16x 10G".

I think the MX has shown how this can be done reasonably well while
still continuing to be the pick of the field.

The MX104 was the only time I can fault Juniper, but otherwise, what
they've done with the MX is nothing short of exemplary.

I have no doubt the MX480's I bought in 2014 will still be with us for
another 6 years, at least. Which is why I have no problem investing in
the MX10003 for dedicated 100Gbps edge customers.

Fancy slides and catchy coined terms are no longer a path into our
network. Service provider networks are not that complicated, or complex.
Get back to your roots, and let's not make it a whole song & dance.

Mark.
Re: Cisco N540-ACC-SYS ipv4 routes
The MX960 obviously came out a long time ago. There have been new chassis versions for it, as well as for the PTX5K, to support higher bandwidth, but they were always called the same thing and remained backwards compatible.

Can't argue with the NCS 6K; IMHO it was really forced by some large providers who required a multi-chassis evolution beyond the CRS, and that continues to be its main role. But very few really want to continue with multi-chassis at this point, as router capacity has increased rapidly from where it was even a few years ago.

Obviously Cisco has the ASR 99XX series, but there are a lot of 9006s and 9010s that have been in networks for 10+ years at this point. You can use the latest line cards with 400G QSFP-DD ports in a 9006/9010 chassis that came out in 2007 - obviously with common upgrades like switch fabrics and fans to get the most out of it.

TBH the 8k is probably not a very good fit for your network today. Not sure if it's super public, but Cisco does have the ASR 9903. It's 3RU, 600mm deep, 3.6Tbps full duplex. It has 16x100G + 20x10G fixed ports, plus a single 800G or 2T expansion card.

Thanks,
Phil

