Mailing List Archive

Re: DoD IP Space [ In reply to ]
Randy,

In one sense I agree with you, but what I was reacting to was the idea
of an ISP begging IETF to reassign 22/8 as private space because their
customers won't migrate to IPv6. That's problematic for many reasons,
and causes the folks who aren't getting with the program to inflict the
pain caused by their inaction on the rest of the network.

At the same time, I sympathize with the ISP because if they can't meet
their customers' needs (however dumb those needs are) then the customers
will leave.

I agree that we don't need a flag day for IPv6, but we have to stop
creating new accommodations, and we need to be more creative about
keeping the pain (aka cost) of not moving forward isolated to the folks
who are creating the problems.

Doug


On 1/21/21 2:22 PM, Randy Bush wrote:
>>> I’m sure we all remember Y2k (well, most of us, there could be some
>>> young-uns on the list). That day was happening whether we wanted it to
>>> or not. It was an unchangeable, unmovable deadline.
>>
>> but i thought 3gpp was going to force ipv6 adoption
>
> let me try it a different way
>
> why should i care whether you deploy ipv6, move to dual stack, cgnat,
> ...? you will do whatever makes sense to the pointy heads in your c
> suite. why should i give them or some tech religion free rent in my
> mind when i already have too much real work to do?
>
> randy
>
Re: DoD IP Space [ In reply to ]
Joe,

I haven't done that kind of work for a few years now, but I assume the
answer to your question in terms of hardware is still yes.

By and large the problem isn't hardware, it's finding the institutional
will to actually do the thing. That requires a lot of education,
creating or buying resources that can do the architecture, and
ultimately the rollout, etc. etc.

And before all of that you have to overcome the fear of things that are
new and different, and even 20 years later that's still a tough hill to
climb.

Doug


On 1/21/21 1:01 PM, j k wrote:
> Organizations I have worked with on IPv6 transition reduced CapEx and
> OpEx by leveraging the IT refresh cycle, and by ensuring their
> investment included USGv6
> (https://www.nist.gov/programs-projects/usgv6-program) or IPv6Ready
> (https://www.ipv6ready.org/) conformance to mitigate the "We sell IPv6
> products, and want you to pay for the debugging costs" problem.
>
> Can I assume other organizations don't leverage the IT refresh cycle?
>
> Joe Klein
Re: DoD IP Space [ In reply to ]
On Wed, Jan 20, 2021 at 02:47:32PM +0100, Cynthia Revström via NANOG wrote:
> certain large corporations that have run out of RFC1918, etc. space

At what level of incompetence must an organization operate to squander
roughly 70,000 /24 networks?

Or to do so and then decide, "You know what we really need to do? Let's
stomp on someone else's address space instead of deploying IPv6 a decade
late.

"And not just anyone's -- the US Military's! Because there's no
possible future in which an emergency might arise and see a need for
this global network built for resiliency to carry defense related
traffic."

--
. ___ ___ . . ___
. \ / |\ |\ \
. _\_ /__ |-\ |-\ \__
Re: DoD IP Space [ In reply to ]
Disney should hire some proper developers and QA team.

RFC 1123 instructed developers to make sure your products handled multi-homed servers properly, and dealing with one of the addresses being unreachable is part of that. It’s not like the app can’t attempt a stream from the IPv6 address and, if there is no response in 200ms, start a parallel attempt from the IPv4 address. If the IPv6 stream succeeds, drop the IPv4 stream. Happy Eyeballs is just a specific case of multi-homed servers.

QA should have test scenarios where the app has a dual-stack network and the servers are silently unreachable over one and then the other transport. It isn’t hard to do. Dealing with broken networks is something every application should do.
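
A minimal sketch of that logic in Python's asyncio (the 200ms head start
is the figure above; error handling is deliberately thin, and Python 3.8+
can also do this natively via asyncio.open_connection's
happy_eyeballs_delay argument):

    import asyncio
    import socket

    async def dual_stack_connect(host, port, head_start=0.2):
        # Start the IPv6 attempt first.
        v6 = asyncio.create_task(
            asyncio.open_connection(host, port, family=socket.AF_INET6))
        try:
            # If IPv6 answers within the head start, use it.
            return await asyncio.wait_for(asyncio.shield(v6), head_start)
        except (asyncio.TimeoutError, OSError):
            pass  # no answer yet (or no AAAA/route): race IPv4 in parallel
        v4 = asyncio.create_task(
            asyncio.open_connection(host, port, family=socket.AF_INET))
        for attempt in asyncio.as_completed([v6, v4]):
            try:
                reader, writer = await attempt
            except OSError:
                continue  # that family failed; wait for the other
            v6.cancel()  # cancelling a finished task is a no-op,
            v4.cancel()  # so this just drops the losing attempt
            return reader, writer
        raise OSError(f"{host}:{port} unreachable over both IPv6 and IPv4")

    # usage: reader, writer = asyncio.run(dual_stack_connect("example.com", 443))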
--
Mark Andrews

> On 23 Jan 2021, at 01:28, Travis Garrison <tgarrison@netviscom.com> wrote:
>
> What's everyone's opinion when companies such as Disney actively recommend disabling IPv6? They are presenting it as IPv6 blocking their app. We all know that isn’t possible. Several people have issues with their app and Amazon Fire Sticks. I use my phone and a Chromecast and I see the issues when IPv6 is enabled. We are in the testing phase of rolling out IPv6 on our network. All the scripts are ready; we're just trying to work through the few issues like this one.
>
> https://help.disneyplus.com/csp?id=csp_article_content&sys_kb_id=c91af021dbe46850b03cc58a139619ed
>
> Thank you
> Travis
>
>
>
> -----Original Message-----
> From: NANOG <nanog-bounces+tgarrison=netviscom.com@nanog.org> On Behalf Of Mark Andrews
> Sent: Thursday, January 21, 2021 7:45 PM
> To: Sabri Berisha <sabri@cluecentral.net>
> Cc: nanog <nanog@nanog.org>
> Subject: Re: DoD IP Space
>
> IPv6 doesn’t need a hard date. It is coming, slowly, but it is coming.
> Every data set says the same thing. It may not be coming as fast as a lot of us would want or actually think is reasonable, as ISPs are currently being forced to deploy CGNs (NAT44 and NAT64) because there are laggards that are not doing their part.
>
> If you offer a service over the Internet, then it should be available over
> IPv6; otherwise you are costing your customers more to reach you. CGNs are not free.
>
> Mark
>
>> On 22 Jan 2021, at 06:07, Sabri Berisha <sabri@cluecentral.net> wrote:
>>
>> ----- On Jan 21, 2021, at 6:40 AM, Andy Ringsmuth andy@andyring.com wrote:
>>
>> Hi,
>>
>>> I’m sure we all remember Y2k
>>
>> Ah, yes. As a young IT consultant wearing a suit and tie (rofl), I
>> upgraded many bioses in many office buildings in the months leading up to it...
>>
>>> I’d love to see a line in the concrete of, say, January 1, 2025,
>>> whereby IPv6 will be the default.
>>
>> The challenge with that is the market. Y2K was a problem that already
>> existed. It was a brick wall that we would hit no matter what. The
>> faulty code was released years before the date.
>>
>> We, IETF, or even the UN could come up with 1/1/25 as the date where
>> we switch off IPv4, and you will still find networks that run IPv4 for
>> the simple reason that the people who own those networks have a choice. With Y2K there was no choice.
>>
>> The best way to have IPv6 implemented worldwide is by having an
>> incentive for the executives that make the decisions. From experience,
>> as I've said on this list a few times before, I can tell you that
>> decision makers with a limited budget that have to choose between a
>> new revenue generating feature, or a company-wide implementation of
>> IPv6, will choose the one that's best for their own short-term interests.
>>
>> On that note, I did have a perhaps silly idea: One way to create the
>> demand could be to have browser makers add a warning to the URL bar,
>> similar to the HTTPS warnings we see today. If a site is IPv4 only,
>> warn that the site is using deprecated technology.
>>
>> Financial incentives also work. Perhaps we can convince Mr. Biden to
>> give a .5% tax cut to corporations that fully implement v6. That will
>> create some bonus targets.
>>
>> Thanks,
>>
>> Sabri
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
>
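
Sabri's URL-bar idea above amounts to checking whether a name publishes
an AAAA record. A minimal sketch with Python's standard socket module
(the hostname is a placeholder):

    import socket

    def ipv4_only(hostname, port=443):
        """True if the name resolves, but only to IPv4 addresses."""
        try:
            socket.getaddrinfo(hostname, port, family=socket.AF_INET6)
            return False  # at least one AAAA record: reachable over IPv6
        except socket.gaierror:
            pass
        # Raises socket.gaierror if the name doesn't resolve at all.
        socket.getaddrinfo(hostname, port, family=socket.AF_INET)
        return True

    # A browser could flag ipv4_only("example.net") with a "deprecated
    # technology" warning, much like plain-HTTP warnings today.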
Re: DoD IP Space [ In reply to ]
You mean like Rogers?

https://communityforums.rogers.com/t5/Internet/Why-is-my-first-hop-on-a-trace-route-to-the-United-States/td-p/30382



At 03:28 PM 22/01/2021, Izaac wrote:
>On Wed, Jan 20, 2021 at 02:47:32PM +0100, Cynthia Revström via NANOG wrote:
> > certain large corporations that have run out of RFC1918, etc. space
>
>At what level of incompetence must an organization operate to squander
>roughly 70,000 /24 networks?
>
>Or to do so and then decide, "You know what we really need to do? Let's
>stomp on someone else's address space instead of deploying IPv6 a decade
>late.
>
>"And not just anyone's -- the US Military's! Because there's no
>possible future in which an emergency might arise and see a need for
>this global network built for resiliency to carry defense related
>traffic."
>
>--
>. ___ ___ . . ___
>. \ / |\ |\ \
>. _\_ /__ |-\ |-\ \__

--

Clayton Zekelman
Managed Network Systems Inc. (MNSi)
3363 Tecumseh Rd. E
Windsor, Ontario
N8W 1H4

tel. 519-985-8410
fax. 519-985-8409
Re: DoD IP Space [ In reply to ]
Big networks do run out of IPv4 space. It doesn’t require incompetence, just lots of devices. That said, if the devices were purchased in the last two decades, they should support IPv6.

How many devices do you think a large car manufacturer has on the shop floor? Remember, some run their own bus services to move staff around the factory.

--
Mark Andrews

> On 23 Jan 2021, at 07:42, Mark Andrews <marka@isc.org> wrote:
>
> Disney should hire some proper developers and QA team.
>
> RFC 1123 instructed developers to make sure your products handled multi-homed servers properly, and dealing with one of the addresses being unreachable is part of that. It’s not like the app can’t attempt a stream from the IPv6 address and, if there is no response in 200ms, start a parallel attempt from the IPv4 address. If the IPv6 stream succeeds, drop the IPv4 stream. Happy Eyeballs is just a specific case of multi-homed servers.
>
> QA should have test scenarios where the app has a dual-stack network and the servers are silently unreachable over one and then the other transport. It isn’t hard to do. Dealing with broken networks is something every application should do.
> --
> Mark Andrews
>
>> On 23 Jan 2021, at 01:28, Travis Garrison <tgarrison@netviscom.com> wrote:
>>
>> What's everyone's opinion when companies such as Disney actively recommend disabling IPv6? They are presenting it as IPv6 blocking their app. We all know that isn’t possible. Several people have issues with their app and Amazon Fire Sticks. I use my phone and a Chromecast and I see the issues when IPv6 is enabled. We are in the testing phase of rolling out IPv6 on our network. All the scripts are ready; we're just trying to work through the few issues like this one.
>>
>> https://help.disneyplus.com/csp?id=csp_article_content&sys_kb_id=c91af021dbe46850b03cc58a139619ed
>>
>> Thank you
>> Travis
>>
>>
>>
>> -----Original Message-----
>> From: NANOG <nanog-bounces+tgarrison=netviscom.com@nanog.org> On Behalf Of Mark Andrews
>> Sent: Thursday, January 21, 2021 7:45 PM
>> To: Sabri Berisha <sabri@cluecentral.net>
>> Cc: nanog <nanog@nanog.org>
>> Subject: Re: DoD IP Space
>>
>> IPv6 doesn’t need a hard date. It is coming, slowly, but it is coming.
>> Every data set says the same thing. It may not be coming as fast as a lot of us would want or actually think is reasonable, as ISPs are currently being forced to deploy CGNs (NAT44 and NAT64) because there are laggards that are not doing their part.
>>
>> If you offer a service over the Internet, then it should be available over
>> IPv6; otherwise you are costing your customers more to reach you. CGNs are not free.
>>
>> Mark
>>
>>> On 22 Jan 2021, at 06:07, Sabri Berisha <sabri@cluecentral.net> wrote:
>>>
>>> ----- On Jan 21, 2021, at 6:40 AM, Andy Ringsmuth andy@andyring.com wrote:
>>>
>>> Hi,
>>>
>>>> I’m sure we all remember Y2k
>>>
>>> Ah, yes. As a young IT consultant wearing a suit and tie (rofl), I
>>> upgraded many bioses in many office buildings in the months leading up to it...
>>>
>>>> I’d love to see a line in the concrete of, say, January 1, 2025,
>>>> whereby IPv6 will be the default.
>>>
>>> The challenge with that is the market. Y2K was a problem that already
>>> existed. It was a brick wall that we would hit no matter what. The
>>> faulty code was released years before the date.
>>>
>>> We, IETF, or even the UN could come up with 1/1/25 as the date where
>>> we switch off IPv4, and you will still find networks that run IPv4 for
>>> the simple reason that the people who own those networks have a choice. With Y2K there was no choice.
>>>
>>> The best way to have IPv6 implemented worldwide is by having an
>>> incentive for the executives that make the decisions. From experience,
>>> as I've said on this list a few times before, I can tell you that
>>> decision makers with a limited budget that have to choose between a
>>> new revenue generating feature, or a company-wide implementation of
>>> IPv6, will choose the one that's best for their own short-term interests.
>>>
>>> On that note, I did have a perhaps silly idea: One way to create the
>>> demand could be to have browser makers add a warning to the URL bar,
>>> similar to the HTTPS warnings we see today. If a site is IPv4 only,
>>> warn that the site is using deprecated technology.
>>>
>>> Financial incentives also work. Perhaps we can convince Mr. Biden to
>>> give a .5% tax cut to corporations that fully implement v6. That will
>>> create some bonus targets.
>>>
>>> Thanks,
>>>
>>> Sabri
>>
>> --
>> Mark Andrews, ISC
>> 1 Seymour St., Dundas Valley, NSW 2117, Australia
>> PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
>>
Re: DoD IP Space [ In reply to ]
On Fri, Jan 22, 2021 at 03:44:34PM -0500, Clayton Zekelman wrote:
> You mean like Rogers?

Smashing example. They've got fewer than 4 million subscribers (only
about a million of them being Internet), and yet they have somehow gone
through over 17 million addresses?

"Ohh no! Quick! Let's abandon fundamental principles of Internet
architecture to get these poor souls more addresses right away!"

--
. ___ ___ . . ___
. \ / |\ |\ \
. _\_ /__ |-\ |-\ \__
Re: DoD IP Space [ In reply to ]
On 1/21/21 4:29 PM, Travis Garrison wrote:
> What's everyone's opinion when companies such as Disney actively recommend disabling IPv6? They are presenting it as IPv6 blocking their app.
>
> https://help.disneyplus.com/csp?id=csp_article_content&sys_kb_id=c91af021dbe46850b03cc58a139619ed
--------------------------------------------------


Where it asked 'was this article helpful' I clicked 'no', selected 'other'
and wrote:

"IPv6 cannot block an app.  Disney uses only legacy IPv4 and does
not want to come onto the 21st century internet by using IPv6.  This
article is disinformation."

It won't help, but I feel better!  >:-)     <= evil grin

scott
Re: DoD IP Space [ In reply to ]
----- On Jan 22, 2021, at 12:28 PM, Izaac izaac@setec.org wrote:

Hi,

> On Wed, Jan 20, 2021 at 02:47:32PM +0100, Cynthia Revström via NANOG wrote:
>> certain large corporations that have run out of RFC1918, etc. space
>
> At what level of incompetence must an organization operate to squander
> roughly 70,000 /24 networks?

Or, at what level of scale.

Or, a combination of both.

Let me give you an example. This example is not hypothetical.

Acme Inc operates a popular social media site. This requires a lot of
compute power and storage space. Acme owns multiple datacenters around
the world, and all must be connected.

Acme divides its data centers into "Availability Zones". Each AZ contains
a limited amount of equipment. A typical AZ is made up of multiple pods,
and each pod contains anywhere between 40 and 48 racks. Each rack contains
up to 72 servers. Each server can contain many VMs or containers.

In order to scale, each AZ and pod are designed according to blueprints. This
obviously means that tradeoffs must be made. For example, each rack will be
assigned a /25, since a /26 means that not all 72 servers can have an IP.

Just to accommodate a single IP per server, we already need a /19. Most
servers will have different NICs for different purposes. For example, it is
not uncommon to have a separate storage network and a management network.

Now we already need three /19s (a /19 being 32 /24s per pod), and we haven't
even started to assign IPs to VMs or containers yet.

Let's start to assign IPs to VMs and containers. At one of my previous
employers, there were different groups that worked on VMs (cloud) and
containers (k8s). Both groups had automated scripts to assign IPs, but these
(obviously) did not communicate, which means that each group had their own
VLAN, with their own IRB (or BVI, or VLAN interface, however you want to
name it). On average, each group started with a /22 per ToR (later on,
we limited them to a /24). So now we need an extra 48*2*4 = 384 /24s per pod.

So, with 384+32 = 416 /24s per pod, you are looking at a maximum of 157 pods.
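
A quick sanity check of the arithmetic in Python, taking the figures
above as given (including counting a single /19's worth of infrastructure
/24s in the total, as the 384+32 sum does):

    RACKS_PER_POD = 48       # upper end of the 40-48 range above
    GROUPS = 2               # cloud (VMs) and k8s (containers)
    SLICES_PER_TOR = 4       # a /22 per ToR is four /24s

    infra_24s = 32                                     # a /19 = 2**(24-19) = 32 /24s
    vm_24s = RACKS_PER_POD * GROUPS * SLICES_PER_TOR   # 48*2*4 = 384
    per_pod = infra_24s + vm_24s                       # 416 /24s per pod

    total_24s = 2 ** (24 - 8)                          # 65536 /24s in 10.0.0.0/8
    print(per_pod, total_24s // per_pod)               # 416 157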

Now, granted, there is a lot of waste in this, hence the change from a /22 to
a /24, and a realization that the cloud and k8s groups really needed to work
together to avoid more waste.

I will tell you that this is not at all hypothetical; I have personally
created spreadsheets of every /16 in 10/8 and how they were allocated. It's
amazing how much space was wasted in the early days at said employer, and
how much I was able to reclaim simply by checking if the allocations were
still valid. Hint: when companies split up, a lot of space gets freed up.

This is the way we avoided using DoD IP space to complement 10/8.

But, you were asking how it's possible to run out of 10/8, and here is your
answer :)

TL;DR: a combination of scale and incompetence means you can run out of 10/8
really quick.

Thanks,

Sabri
Re: DoD IP Space [ In reply to ]
On 1/22/21 6:09 AM, Tom Beecher wrote:
> V6 Adoption always is, and always will be, metered by time, money and
> resources. Everybody kicks the can on things like this until they
> can't anymore.
---------------------------------

I have always said that management chooses this.  It's a cost-only
thing and they want to chase the sales, so IPv6 rollout loses every
time.  However, I now see a new ok-you-can-roll-out-ipv6-now
motivator for them.  "We can make $BIGSALE, but it requires IPv6
to be rolled out.  Hurry up and do it!"

Moral of the story...  Have your IPv6 rollout plan in place and as
ready to go as you can because it may be similar: Hurry up!

scott
Re: DoD IP Space [ In reply to ]
On Fri, Jan 22, 2021 at 01:03:15PM -0800, Sabri Berisha wrote:
> TL;DR: a combination of scale and incompetence means you can run out of 10/8
> really quick.

Indeed. Thank you for providing a demonstration of my point.

I'd question the importance of having a console on target in Singapore
be able to directly address a BMC controller in Phoenix (wait for it),
but I'm sure that's a mission requirement.

But just in case you'd like to reconsider, can I interest you in NAT?
Like nutmeg, a little will add some spice to your recipe -- but too much
will cause nausea and hallucinations. It's entirely possible to put an
entire 192.168.0.0/16 network behind every single 172.16.0.0/12 address.

So, you've already "not at all hypothetical'd" entire racks completely
full of 1U hosts that are supporting lots of VMs in their beefy memory
on their two processors and also doing SAN into another universe. Let's
just magic a rack controller to handle the NAT. We can just cram it
into the extra-dimensional space where the switches live.

A standard port mapping configuration to match your "blueprint" ought to
be straightforward. But let's elide the details and learn by
demonstration by just using it!

If the Singapore AZ were assigned 172.18.0.0/16.
And the 7th pod were 172.18.7.0/24.
And the 12th rack were 172.18.7.12/32.
We can SSH to the 39th host at: 172.18.7.11:2239
Which NATs to 192.168.0.39:22 on the 192.168.0.0/24 standard net.

If the Phoenix AZ (payoff!) were assigned 172.22.0.0/16.
And the 9th pod were 172.22.9.0/24
And the 33rd rack were 172.22.9.33/32.
We can VNC to the BMC of the 27th host at: 172.22.9.33:5927.
Which NATs to 192.168.1.27:5900 on the 192.168.1.0/24 management net.
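
The mapping is mechanical enough to script. A sketch of the arithmetic,
using the rack numbering Izaac corrects a couple of messages below (hosts
counted from 1; SSH at port 2200+N, BMC VNC at 5900+N; the helper names
are mine):

    import ipaddress

    def rack_ip(az_net, pod, rack):
        # Each AZ gets a /16 out of 172.16.0.0/12; the pod is the third
        # octet and the rack the fourth.
        base = int(ipaddress.ip_network(az_net).network_address)
        return ipaddress.ip_address(base + pod * 256 + rack)

    def ssh_endpoint(az_net, pod, rack, host):
        # Host N's sshd (192.168.0.N:22 inside the rack) maps to 2200+N.
        return f"{rack_ip(az_net, pod, rack)}:{2200 + host}"

    def bmc_endpoint(az_net, pod, rack, host):
        # Host N's BMC VNC (192.168.1.N:5900) maps to 5900+N.
        return f"{rack_ip(az_net, pod, rack)}:{5900 + host}"

    print(ssh_endpoint("172.18.0.0/16", 7, 12, 39))  # 172.18.7.12:2239
    print(bmc_endpoint("172.22.0.0/16", 9, 33, 27))  # 172.22.9.33:5927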

Let's see. We've met all our requirements, left unused more than 50% of
the 172.16/12 space by being very generous to our AZs, left unused 98%
of the 192.168/16 space in each rack, threw every zero-network to the
wolves for our human counting from 1, and still haven't even touched
10/8. And all in less than an hour's chin-pulling.

Good for us.

--
. ___ ___ . . ___
. \ / |\ |\ \
. _\_ /__ |-\ |-\ \__
Re: DoD IP Space [ In reply to ]
The KB indicates that the problem is with the "LG TV WebOS 3.8 or above."

Doug

(not speaking for any employers, current or former)


On 1/22/21 12:42 PM, Mark Andrews wrote:
> Disney should hire some proper developers and QA team.
>
> RFC 1123 instructed developers to make sure your products handled multi-homed servers properly, and dealing with one of the addresses being unreachable is part of that. It’s not like the app can’t attempt a stream from the IPv6 address and, if there is no response in 200ms, start a parallel attempt from the IPv4 address. If the IPv6 stream succeeds, drop the IPv4 stream. Happy Eyeballs is just a specific case of multi-homed servers.
>
> QA should have test scenarios where the app has a dual-stack network and the servers are silently unreachable over one and then the other transport. It isn’t hard to do. Dealing with broken networks is something every application should do.
>
Re: DoD IP Space [ In reply to ]
----- On Jan 22, 2021, at 2:42 PM, Izaac izaac@setec.org wrote:

Hi,

> On Fri, Jan 22, 2021 at 01:03:15PM -0800, Sabri Berisha wrote:
>> TL;DR: a combination of scale and incompetence means you can run out of 10/8
>> really quick.
>
> Indeed. Thank you for providing a demonstration of my point.
>
> I'd question the importance of having a console on target in Singapore
> be able to directly address a BMC controller in Phoenix (wait for it),
> but I'm sure that's a mission requirement.

No, but the NOC that sits in between does need to access both. Sure, you can
use jumphosts, but now you're delaying troubleshooting of a potentially costly
outage.

> But just in case you'd like to reconsider, can I interest you in NAT?
> Like nutmeg, a little will add some spice to your recipe -- but too much
> will cause nausea and hallucinations.

NAT'ing RFC1918 to other RFC1918 space inside the same datacenter, or even
company, is a nightmare. If you've ever been on call for any decently sized
network, you'll know that.

> Let's just magic a rack controller to handle the NAT. We can just cram it
> into the extra-dimensional space where the switches live.

> And all in less than an hour's chin-pulling.

We both know that this is

A. An operational nightmare, and
B. Simply not the way things work in the real world.

The people who designed most of the legacy networks I've ever worked on did
not plan for the networks to grow to the size they became. Just like we would
never run out of 640K of memory, people thought they would never run out
of RFC1918 space. Until they did.

And when that James May moment arrives, people start looking at a quick fix
(i.e., let's use unannounced public space), rather than redesigning and
reimplementing networks that have been in use for a long long time.

TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't
work.

Thanks,

Sabri
Re: DoD IP Space [ In reply to ]
An embarrassing mistake. I'm not a computer and don't count from zero. It is, of course, at 172.18.7.12:2239 and not 11.

Jan 22, 2021 18:01:15 Izaac <izaac@setec.org>:

> We can SSH to the 39th host at: 172.18.7.11:2239
Re: DoD IP Space [ In reply to ]
On Fri, Jan 22, 2021 at 03:43:43PM -0800, Sabri Berisha wrote:
> No, but the NOC that sits in between does need to access both. Sure, you can

A single NOC sitting in the middle of a single address space. I believe
I'm detecting an architectural paradigm on the order of "bouncy castle."

Tell me, do you also permit customer A's secondary DNS server to reach
out and touch customer B's tertiary MongoDB replica in some other AZ for
a particular reason? Or are these networks segregated in some
meaningful way -- a way which might, say, completely vacate the entire
point of having a completely de-conflicted 1918 address space?

> use jumphosts, but now you're delaying troubleshooting of a potentially costly
> outage.

Who's using jumphosts? I very deliberately employed one of my least
favorite networking "technologies" in order to give you direct
connections. I just had to break a different fundamental networking
principle to steal the bits from another header. No biggie. You won't
even miss the lack of ICMP or the squished MTU. Honest.

It's just "your" stuff anyway. The customers have all that delicious
10/8 to use. Imagine how nice troubleshooting that would be, where
anything that's 172.16/12 is "yours" and anything 10/8 is "theirs."

> NAT'ing RFC1918 to other RFC1918 space inside the same datacenter, or even
> company, is a nightmare. If you've ever been on call for any decently sized
> network, you'll know that.

And that's different than NATing non-1918 addresses to a 1918 address
space how? Four bytes is four bytes, no? Or are 1918 addresses magic
when it comes to the mechanical process of address translation?

As far as being on call and troubleshooting, I'd think that identically
configured rack-based networks would be ideal, no? In the context of
the rack, everything is very familiar. That 192.168.0.1 is always the
gateway for the rack hosts. That 192.168.3.254 is always the iSCSI
target on the SAN. (Or is it more correctly NAS, since any random PDU
in Wallawalla WA can hit my disks in Perth via its unique address on a
machine which lives "not at all hypothetically" under the raised floor
or something. Maybe sitting in the 76-80th RU.)

Maybe I should investigate these "jumphosts" of which you speak, too.
They might have some advantages.

But I'm sure using your spreadsheets to look up everything all the time
works even better. Especially when you start having to slice your
networks thinner and thinner and renumber stuff. But I'm sure no
customer would ever say they needed more address space than was
initially allocated to them. It should be trivial to throw them
another /24 from elsewhere in the 10 space, get it all routed and
filtered and troubleshoot that on call. Much easier than handing them
their very own 10/8.

> We both know that this is
>
> A. An operational nightmare, and
> B. Simply not the way things work in the real world.

Right. What would I know about the real world? What madman would ever
deploy a system in a way other than the flat, star pattern in which you
suggest. Who even approaches that scale and scope?

> not plan for the networks to grow to the size they became. Just like we would
> never run out of the 640k of memory, people thought they would never run out
> of RFC1918 space. Until they did.

Yes. Whoever could have seen that coming. If only we had developed
mechanisms for extending the existing IPv4 address space. Maybe by
making multiple hosts share a single address by using some kind of "proxy"
or committing a horrible sin and stealing bits from a different layer.

Or perhaps we could even deploy a different protocol with an even larger
address space. It could be done in parallel, even. Well. I can dream,
can't I?

> And when that James May moment arrives, people start looking at a quick fix
> (i.e., let's use unannounced public space), rather than redesigning and
> reimplementing networks that have been in use for a long long time.

A long long time indeed. Why, I remember back in the late 1990s when
the cloud wars started. They were saying Microsoft would have to divest
Azure. Barnes and Noble had just started selling MMX-optimized instances for
machine learning. The enormous web farms at Geocities were really
pushing the envelope of the possible when it came to high availability
concurrent web connections by leveraging CDNs. Very little has changed
since then. We've hardly had the opportunity to look at these networks,
let alone consider rebuilding them. Who has the time or opportunity?
That Cisco 2600 may be dusty, but it's been holding the fort all this
time.

> TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't
> work.

Well thanks for sharing. I think we've all learned a lot.

--
. ___ ___ . . ___
. \ / |\ |\ \
. _\_ /__ |-\ |-\ \__
Re: DoD IP Space [ In reply to ]
On Thu, 21 Jan 2021 11:07:42 -0800, Sabri Berisha said:
> Financial incentives also work. Perhaps we can convince Mr. Biden to give a .5%
> tax cut to corporations that fully implement v6. That will create some bonus
> targets.

And how would you define "fully implement v6", anyhow?

Case in point: I helped deploy v6 at my employer *last century*, and the
entire network was (last I knew) totally v6 ready, and large segments were
v6-only. Yet Google *still* says that only 80% or so of traffic to them is
via v6.

The other 20% is end-user devices that aren't using v6 for one reason or
another - I'm pretty sure that a lot of those are because companies have told
the user to "turn off ipv6" to solve connection problems, and I know that a lot
of them are gaming consoles from a vendor that had a brief shining chance to
Get It Right on the last iteration(*) but failed to do so....

And when I retired, I had several clusters of file servers that weren't doing
IPv6 because a certain 3-letter vendor who *really* should have been more on
the ball didn't have v6 support in the relevant software.

Even more problematic: What do you do with a company that's fully v6-ready, but
still has several major interconnects to other companies that *aren't* ready,
and thus still using v4?

(*) The PS4 has ipv6 support in the OS - it will dhcpv6 and answer pings from
on and off subnet. However, they didn't include ipv6 support in the development
software toolkit, so nothing actually uses it. They appear to have fixed this in the PS5,
but that still hits the "other company isn't ready" issue.
Re: DoD IP Space [ In reply to ]
----- On Jan 22, 2021, at 4:50 PM, Izaac izaac@setec.org wrote:

Hi,

> On Fri, Jan 22, 2021 at 03:43:43PM -0800, Sabri Berisha wrote:

>> TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't
>> work.
>
> Well thanks for sharing. I think we've all learned a lot.

You don't need to patronize me. I'm merely explaining the real-life realities of
working in a large enterprise.

And the key takeaway here is: we can come up with the most efficient
solutions; in the end it's all about budgets and stakeholder requirements.

Thanks,

Sabri
Re: DoD IP Space [ In reply to ]
----- On Jan 22, 2021, at 10:28 PM, Valdis Klētnieks valdis.kletnieks@vt.edu wrote:

Hi,

> On Thu, 21 Jan 2021 11:07:42 -0800, Sabri Berisha said:
>> Financial incentives also work. Perhaps we can convince Mr. Biden to give a .5%
>> tax cut to corporations that fully implement v6. That will create some bonus
>> targets.
>
> And how would you define "fully implement v6", anyhow?

Fair point. I'm sure a commission appointed by the appropriate legislators
will be happy to spend a few million debating that issue. Personally, I would
argue that a full implementation of IPv6 means that v4 could be phased out without
adverse effect on the production network.

But of course, how would we define "adverse effect on the production network"? :)

> Even more problematic: What do you do with a company that's fully v6-ready, but
> still has several major interconnects to other companies that *aren't* ready,
> and thus still using v4?

I totally agree with everything you wrote. It proves the point that having
v6-ready technologies in "the network" does not mean a network, or even a
company, is fully v6-ready. Way too many stakeholders and outside dependencies.

To me, it means that "we", as in network professionals, should be ready to save
the day when company leaders finally realize they have no option and need v6 to
be implemented fast.

And secretly, I've been hoping for that moment. "Well, sir, the network has been
IPv6 ready for years, but the software groups and their leadership have so far
blatantly refused to update their code and support it".

I guess that I'll join you in retirement before that moment comes.

Thanks,

Sabri
Re: DoD IP Space [ In reply to ]
On Sat, Jan 23, 2021 at 11:20:47AM -0800, Sabri Berisha wrote:
> You don't need to patronize me. I'm merely explaining the real life realities of
> working in a large enterprise.

Patronize you? Ohh, heavens no! I fully intend to use your replies as
educational material. Why, I've passed them to colleagues of mine
already. It's not every day that an off-handed comment made in
frustration at the state of the industry is so immediately and
thoroughly expanded upon.

I think patronizing would look more like: assuming a position of great
authority and noteworthy insight on a list full of professionals by
vaguely citing a situation which they were once exposed to as some kind
of instructive lab of how the "real world" works -- perhaps going
farther to summarizing each of the lessons into a one-line takeaway for
those who were either unable or unwilling to understand their point.

> And the key takeaway here is: we can come up with the most efficient
> solutions; in the end it's all about budgets and stakeholder requirements.

Ahh, I see! Thanks. I'll put that with the rest of my notes.

--
. ___ ___ . . ___
. \ / |\ |\ \
. _\_ /__ |-\ |-\ \__
RE: DoD IP Space [ In reply to ]
I have personally seen the issue with streaming from a Samsung cell phone and the Disney+ app to a Google Chromecast and a regular non-smart TV.

Travis

-----Original Message-----
From: NANOG <nanog-bounces+tgarrison=netviscom.com@nanog.org> On Behalf Of Doug Barton
Sent: Friday, January 22, 2021 5:30 PM
To: nanog@nanog.org
Subject: Re: DoD IP Space

The KB indicates that the problem is with the "LG TV WebOS 3.8 or above."

Doug

(not speaking for any employers, current or former)


On 1/22/21 12:42 PM, Mark Andrews wrote:
> Disney should hire some proper developers and QA team.
>
> RFC 1123 instructed developers to make sure your products handled multi-homed servers properly, and dealing with one of the addresses being unreachable is part of that. It’s not like the app can’t attempt a stream from the IPv6 address and, if there is no response in 200ms, start a parallel attempt from the IPv4 address. If the IPv6 stream succeeds, drop the IPv4 stream. Happy Eyeballs is just a specific case of multi-homed servers.
>
> QA should have test scenarios where the app has a dual-stack network and the servers are silently unreachable over one and then the other transport. It isn’t hard to do. Dealing with broken networks is something every application should do.
>
