Mailing List Archive

ssh host keys on cloned virtual machines
Hi list members,

Do any of you have a best practice for renewing ssh host keys on cloned machines?
I have a customer who never thought about this while cloning all VMs from one template, so now all machines have the exact same host key.
My approach would be to store a machine's MAC address(es). Then, when starting sshd.service, check whether this MAC has changed; if so, remove all host keys and let sshd create new ones.
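
A rough sketch of what that check could look like, e.g. hooked in via an ExecStartPre= for sshd.service (the cache file location and the interface name here are just assumptions on my side):

    #!/bin/sh
    # Regenerate host keys if the primary NIC's MAC differs from the cached one.
    cache=/var/lib/misc/sshd-mac                # assumed cache location
    mac=$(cat /sys/class/net/eth0/address)      # assumes the first NIC is eth0
    if [ -f "$cache" ] && [ "$(cat "$cache")" != "$mac" ]; then
        rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
        ssh-keygen -A       # recreate the default host key types
    fi
    echo "$mac" > "$cache"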

Thanks for any thoughts and comments about that.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
One solution I used was simply scripting the deletion of the host keys after cloning.
Another solution is to delete them in the golden image you create (which may be a different scenario from cloning whatever machine you happen to need).
Both approaches worked well enough, except when they didn't.

It would be great to be able to specify a path to the host key including some sort of $hostname variable, so it would be regenerated if the hostname changes, but that is probably better solved in a startup script. Maybe modifying it to create a symlink from the host key to a filename including the hostname? I wonder how fragile that would be and whether something like that already exists. Not sure if MAC or hostname are the right distinguishing parameters, though; maybe something like the dmidecode UUID?
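
Purely as a sketch of the symlink idea (paths and key types assumed), the startup script I am thinking of would be something like:

    #!/bin/sh
    # Keep per-hostname key files and point the standard paths at them.
    host=$(hostname -s)
    for t in rsa ecdsa ed25519; do
        key=/etc/ssh/ssh_host_${t}_key.${host}
        [ -f "$key" ] || ssh-keygen -q -t "$t" -N '' -f "$key"
        ln -sf "$key" "/etc/ssh/ssh_host_${t}_key"
        ln -sf "${key}.pub" "/etc/ssh/ssh_host_${t}_key.pub"
    done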

Jan


> On 24. 2. 2023, at 12:58, Keine Eile <keine-eile@e-mail.de> wrote:
>
> Hi list members,
>
> does any one of you have a best practice on renewing ssh host keys on cloned machines?
> I have a customer who never thought about that, while cloning all VMs from one template. Now all machines have the exact same host key.
> My approach would be to store a machines MAC address(es). Then when starting the sshd.service, check if this MAC has changed. If so, remove all host keys, let sshd create new ones.
>
> Thanks for any thoughts and comments about that.

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Am 24.02.23 um 13:11 schrieb Jan Schermer:
> One solution I used was simply scripting the deletion of the host key after cloning it.
> Another solution is to delete them in the golden image you create (which could be a different scenario from cloning whatever machine you need)

The golden image cannot have hard-wired magic that generates new host keys, because the image itself is maintained from time to time over ssh.

> Both approaches worked well enough except when they didn’t.

I think I have seen this, too.

> It would be great to be able to specify path to hostkey including some sort of $hostname variable, so it would be regenerated if hostname changes, but that is probably better solved in a startup script. Maybe modifying it to create a symlink from the hostkey to a filename including hostname? I wonder how fragile that would be and if something like that already exists. Not sure if MAC or hostname are the right distinguishing parameters, though, maybe something like dmidecode UUID?

The MAC is my weapon of choice, because no matter what virtualization you use, it will change (in a sense, it has to). Changing the hostname comes with the Ansible stuff, but by then it is already too late.

> Jan

Thanks Jan.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
> On 24. 2. 2023, at 13:25, Keine Eile <keine-eile@e-mail.de> wrote:
>
> Am 24.02.23 um 13:11 schrieb Jan Schermer:
>> One solution I used was simply scripting the deletion of the host key after cloning it.
>> Another solution is to delete them in the golden image you create (which could be a different scenario from cloning whatever machine you need)
>
> The golden image can not have a hard wired magic which generates new host keys, as it is maintained from time to time using ssh.

Right, that is what I did: stuff like apt update/upgrade or yum upgrade, pushing new versions of other things, and then, right before shutting down and turning it back into the golden image, I deleted the host keys, DHCP leases, logs and other state files.
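
Roughly along these lines, run as the very last step before shutdown (only a sketch; the lease and log paths vary by distro):

    #!/bin/sh
    # "Seal" the template right before turning it back into the golden image.
    rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
    rm -f /var/lib/dhcp/*.leases /var/lib/dhclient/*.leases
    find /var/log -type f -exec truncate -s 0 {} +
    rm -f /root/.bash_history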

>
>> Both approaches worked well enough except when they didn’t.
>
> I think, I have seen this, too.
>
>> It would be great to be able to specify path to hostkey including some sort of $hostname variable, so it would be regenerated if hostname changes, but that is probably better solved in a startup script. Maybe modifying it to create a symlink from the hostkey to a filename including hostname? I wonder how fragile that would be and if something like that already exists. Not sure if MAC or hostname are the right distinguishing parameters, though, maybe something like dmidecode UUID?
>
> The MAC is my weapon of choice, because no matter what virtualization you have, this will (in a sense, it hast to) change. Changing the hostname comes with the Ansible stuff, but this is already too late.
>

Hmm, I usually get hostnames from DHCP/cloud-init etc. This is where this magic should happen in theory. I guess looking for cloud-init hooks could turn up something that already exists?


>> Jan
>
> Thanks Jan.

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Are you doing any other first-boot initialization on the cloned VMs? Are
you (or could you) use cloud-init for this?

If so, you can run:

    cloud-init clean [--seed] [--logs] [--machine-id]

before cloning - or inside the cloned image using guestfish etc. I'm not
sure if this actually removes the existing host keys, but if it doesn't,
you could manually rm them as well.
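
In concrete terms that would be something like (untested):

    cloud-init clean --seed --logs --machine-id
    rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub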

Then optionally you can provide cloud-init metadata when the clones boot
if you want to set different network parameters, or perform other
initialization like creating additional user accounts etc.

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Am 24.02.23 um 13:39 schrieb Brian Candler:
> Are you doing any other first-boot initialization on the cloned VMs? Are you (or could you) use cloud-init for this?
>
> If so, you can run:
>
>     cloud-init clean [--seed] [--logs] [--machine-id]

Well, sure, cloud-init could do this, but I think it is way too complex for what needs to be done. And the customer has defined processes which I would not like to touch. Some 'we always did it that way' and 'we never did it the other way' mentality.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 24/02/23, Brian Candler (b.candler@pobox.com) wrote:
> Are you doing any other first-boot initialization on the cloned VMs? Are you
> (or could you) use cloud-init for this?
>
> If so, you can run:
>
>     cloud-init clean [--seed] [--logs] [--machine-id]
>
> before cloning - or inside the cloned image using guestfish etc. I'm not
> sure if this actually removes the existing host keys, but if it doesn't, you
> could manually rm them as well.

This situation is beyond my experience, but I guess another way around would be
to try and block the golden image host key for users and use a host certificate
on the golden image host.

The golden image host could have its host certificate rotated every month,
perhaps, although that might mean you'd have to rotate the certificates on all
your other hosts too, depending on the expiry parameters on your certificates.

This would require setting up a ssh certificate signing process which might not
be something you'd like to do. Also, all users would have to add a
"@cert-authority" line to their ~/.ssh/known_hosts.

Rory

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Host key certificates are great, but it’s an even trickier thing to do than simply deleting the host key by a script… :-)


> On 24. 2. 2023, at 14:05, Rory Campbell-Lange <rory@campbell-lange.net> wrote:
>
> On 24/02/23, Brian Candler (b.candler@pobox.com) wrote:
>> Are you doing any other first-boot initialization on the cloned VMs? Are you
>> (or could you) use cloud-init for this?
>>
>> If so, you can run:
>>
>> cloud-init clean [--seed] [--logs] [--machine-id]
>>
>> before cloning - or inside the cloned image using guestfish etc. I'm not
>> sure if this actually removes the existing host keys, but if it doesn't, you
>> could manually rm them as well.
>
> This situation is beyond my experience, but I guess another way around would be
> to try and block the golden image host key for users and use a host certificate
> on the golden image host.
>
> The golden image host could have its host certificate rotated every month,
> perhaps, although that might mean you'd have to rotate the certificates on all
> your other hosts too, depending on the expiry parameters on your certificates.
>
> This would require setting up a ssh certificate signing process which might not
> be something you'd like to do. Also, all users would have to add a
> "@cert-authority" line to their ~/.ssh/known_hosts.
>
> Rory
>

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Hey.

Keep in mind that when you clone the template image and then
replace/delete the template image's SSH host keys in the clone (the
same applies to other such key material as well), chances are good
that the old data is nevertheless still recoverable from within the
clone (depending on the filesystem used, whether DISCARD is used,
I/O patterns and so on).

If the subsequent owner of the clone is not fully trustworthy,
extraction of the template image's keys might be possible and could be
used in subsequent attacks.


Cheers,
Chris.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 2023/02/24 13:25, Keine Eile wrote:
> The MAC is my weapon of choice, because no matter what virtualization
> you have, this will (in a sense, it hast to) change. Changing the
> hostname comes with the Ansible stuff, but this is already too late.

Regenerating host keys if the MAC changes is no good in the general
case. Firstly, *which* MAC, there can be more than one. Secondly,
if you legitimately replace a NIC/motherboard due to hardware failure
(or move disks between motherboards etc) you'll generate new keys
when you shouldn't.

This isn't unique to SSH; depending on the software involved there are
other files, which might include /etc/machine-id, saved RNG seeds and
IPv6 SOII keys, that need removing when preparing to clone.
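
For the non-SSH bits, the usual suspects look roughly like this (paths vary
by OS, so it's only a sketch):

    : > /etc/machine-id                    # emptied; regenerated on next boot
    rm -f /var/lib/dbus/machine-id         # often just a symlink to the above
    rm -f /var/lib/systemd/random-seed     # saved RNG seed
    # (OpenBSD keeps its IPv6 SOII key in /etc/soii.key, if memory serves)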

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 24.02.23 12:58, Keine Eile wrote:
> does any one of you have a best practice on renewing ssh host keys on
> cloned machines?
> I have a customer who never thought about that, while cloning all VMs
> from one template. Now all machines have the exact same host key.
> My approach would be to store a machines MAC address(es). Then when
> starting the sshd.service, check if this MAC has changed. If so, remove
> all host keys, let sshd create new ones.

Strictly speaking, *if* you have an interest to make sure that *every*
VM gets unique host keypairs, then you should implement a cleanup
routine that takes care of "everything"¹ that matters to you.

Once you have *that*, the decision whether to trigger it as the last
step on the template before making a new image, or as the first step on
a VM created "to stay" from the template, and by what means and
mechanisms, becomes somewhat secondary.²

I'd be wary of having it triggered automatically whenever the MAC or
some other "hardware ID" changes, though, as that can happen when you
move VMs between hypervisors, add or remove virtual devices, etc..

¹ Erase host keypairs, erase keypairs of local users (so that access to
elsewhere doesn't get copied along), generate individual moduli, erase
shell histories, remove local non-system users' crontabs, empty the mail
spools of same, empty system and application logfiles, hunt for
passwords set in whatever config files, rename LVM VGs so as to have
names as unique as the VM itself, .......

² Personally, I like to add color coding to shell prompts to signal
platform, test vs. prod etc.. Blinking black-on-yellow works quite well
as a reminder that a VM freshly created from a template might still need
some finalizing touches. ;-)

Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH
Re: ssh host keys on cloned virtual machines
On Fri, Feb 24, 2023 at 10:01 AM Jochen Bern <Jochen.Bern@binect.de> wrote:
>
> On 24.02.23 12:58, Keine Eile wrote:
> > does any one of you have a best practice on renewing ssh host keys on
> > cloned machines?
> > I have a customer who never thought about that, while cloning all VMs
> > from one template. Now all machines have the exact same host key.
> > My approach would be to store a machines MAC address(es). Then when
> > starting the sshd.service, check if this MAC has changed. If so, remove
> > all host keys, let sshd create new ones.
>
> Strictly speaking, *if* you have an interest to make sure that *every*
> VM gets unique host keypairs, then you should implement a cleanup
> routine that takes care of "everything"¹ that matters to you.

These vagaries are why many environments simply disable the validation
of hostkeys in their .ssh/config settings and move on to work that is
of some more effective use to their workplace. I've encountered,
several times, sites that relied on extensive use of SSH-key-managed
git access and shattered their deployment systems when the git server
got moved and host keys were either incorrectly migrated or the IP was
a re-used IP of a previously accessed SSH target. Hilarity ensued.
This kind of hand-tuning of every deployment rapidly becomes a waste
of admin time and serves little purpose without very tight control of
the "known_hosts", which can be overridden by local users anyway.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 2/25/23 07:50, Nico Kadel-Garcia wrote:
> On Fri, Feb 24, 2023 at 10:01 AM Jochen Bern <Jochen.Bern@binect.de> wrote:
>>
>> On 24.02.23 12:58, Keine Eile wrote:
>>> does any one of you have a best practice on renewing ssh host keys on
>>> cloned machines?
>>> I have a customer who never thought about that, while cloning all VMs
>>> from one template. Now all machines have the exact same host key.
>>> My approach would be to store a machines MAC address(es). Then when
>>> starting the sshd.service, check if this MAC has changed. If so, remove
>>> all host keys, let sshd create new ones.
>>
>> Strictly speaking, *if* you have an interest to make sure that *every*
>> VM gets unique host keypairs, then you should implement a cleanup
>> routine that takes care of "everything"¹ that matters to you.
>
> These vagaries are why many environments simply disable the validation
> of hostkeys in their .ssh/config settings and move on to work that is
> of some more effective use to their workplace. I've encountered,
> several times, when sites relied on extensive use of SSH key managed
> git access and shattered their deployment systems when the git server
> got moved and hostkeys were either incorrectly migrated or the IP was
> a re-used IP of a previously accessed SSH target. Hilarity ensued.
> This kind of hand-tuning of every deployment rapidly becomes a waste
> of admin time and serves little purpose without very tight control of
> the "known_hosts", which can be overridden by local users anyway.

Are SSH host certificates the solution?
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Re: ssh host keys on cloned virtual machines
On Sat, Feb 25, 2023 at 12:14 PM Demi Marie Obenour
<demiobenour@gmail.com> wrote:
>
> On 2/25/23 07:50, Nico Kadel-Garcia wrote:
> > On Fri, Feb 24, 2023 at 10:01 AM Jochen Bern <Jochen.Bern@binect.de> wrote:
> >>
> >> On 24.02.23 12:58, Keine Eile wrote:
> >>> does any one of you have a best practice on renewing ssh host keys on
> >>> cloned machines?
> >>> I have a customer who never thought about that, while cloning all VMs
> >>> from one template. Now all machines have the exact same host key.
> >>> My approach would be to store a machines MAC address(es). Then when
> >>> starting the sshd.service, check if this MAC has changed. If so, remove
> >>> all host keys, let sshd create new ones.
> >>
> >> Strictly speaking, *if* you have an interest to make sure that *every*
> >> VM gets unique host keypairs, then you should implement a cleanup
> >> routine that takes care of "everything"¹ that matters to you.
> >
> > These vagaries are why many environments simply disable the validation
> > of hostkeys in their .ssh/config settings and move on to work that is
> > of some more effective use to their workplace. I've encountered,
> > several times, when sites relied on extensive use of SSH key managed
> > git access and shattered their deployment systems when the git server
> > got moved and hostkeys were either incorrectly migrated or the IP was
> > a re-used IP of a previously accessed SSH target. Hilarity ensued.
> > This kind of hand-tuning of every deployment rapidly becomes a waste
> > of admin time and serves little purpose without very tight control of
> > the "known_hosts", which can be overridden by local users anyway.
>
> Are SSH host certificates the solution?
> --
> Sincerely,
> Demi Marie Obenour (she/her/hers)

Host key certificates are a solution to how to spend billable hours
making someone feel good about their technological superiority without
actually resolving the problem. You'd have to deploy the root
certificates to all of the clients, and they become another
unpredictable source of failure for people's laptop or new testing
server setups. There are problems they address, of certain high
security environments where you really, really care if someone spoofs
your DNS or IP address, but the resulting blocked communications are
far more likely to be blocking legitimate traffic, not intruders.

I'm also a bit snarky about the time spent to set up the service,
publish the relevant root certificates generally and adjust the
ssh_config settings, time far better spent elsewhere.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 2/25/23 21:18, Nico Kadel-Garcia wrote:
> On Sat, Feb 25, 2023 at 12:14?PM Demi Marie Obenour
> <demiobenour@gmail.com> wrote:
>>
>> On 2/25/23 07:50, Nico Kadel-Garcia wrote:
>>> On Fri, Feb 24, 2023 at 10:01 AM Jochen Bern <Jochen.Bern@binect.de> wrote:
>>>>
>>>> On 24.02.23 12:58, Keine Eile wrote:
>>>>> does any one of you have a best practice on renewing ssh host keys on
>>>>> cloned machines?
>>>>> I have a customer who never thought about that, while cloning all VMs
>>>>> from one template. Now all machines have the exact same host key.
>>>>> My approach would be to store a machines MAC address(es). Then when
>>>>> starting the sshd.service, check if this MAC has changed. If so, remove
>>>>> all host keys, let sshd create new ones.
>>>>
>>>> Strictly speaking, *if* you have an interest to make sure that *every*
>>>> VM gets unique host keypairs, then you should implement a cleanup
>>>> routine that takes care of "everything"¹ that matters to you.
>>>
>>> These vagaries are why many environments simply disable the validation
>>> of hostkeys in their .ssh/config settings and move on to work that is
>>> of some more effective use to their workplace. I've encountered,
>>> several times, when sites relied on extensive use of SSH key managed
>>> git access and shattered their deployment systems when the git server
>>> got moved and hostkeys were either incorrectly migrated or the IP was
>>> a re-used IP of a previously accessed SSH target. Hilarity ensued.
>>> This kind of hand-tuning of every deployment rapidly becomes a waste
>>> of admin time and serves little purpose without very tight control of
>>> the "known_hosts", which can be overridden by local users anyway.
>>
>> Are SSH host certificates the solution?
>> --
>> Sincerely,
>> Demi Marie Obenour (she/her/hers)
>
> Host key certificates are a solution to how to spend billable hours
> making someone feel good about their technological superiority without
> actually resolving the problem. You'd have to deploy the root
> certificates to all of the clients, and they become another
> unpredictable source of failure for people's laptop or new testing
> server setups. There are problems they address, of certain high
> security environments where you really, really care if someone spoofs
> your DNS or IP address, but the resulting blocked communications are
> far more likely to be blocking legitimate traffic, not intruders.
>
> I'm also a bit snarky about the time spent to set up the service,
> publish the relevant root certificates generally and adjust the
> ssh_config settings, time far better spent elsewhere.

What *is* a good solution, then? Disabling host key checking is not
a good solution. What about embedding the host key fingerprint in the
domain somehow?
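
(Something like SSHFP records plus VerifyHostKeyDNS, say:

    # on the host: print SSHFP resource records to publish in its DNS zone
    ssh-keygen -r host.example.com

    # on clients, in ssh_config; only really meaningful with DNSSEC validation
    VerifyHostKeyDNS yes

though that just moves the trust question into DNS.)
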
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Re: ssh host keys on cloned virtual machines
On Fri, 24 Feb 2023, Keine Eile wrote:

> does any one of you have a best practice on renewing ssh host keys on cloned
> machines?

Yes: not cloning machines.

There’s too many things to take care of for these. The VM UUID in
libvirt. The systemd machine ID. SSH hostkey and SSL private key.
The RNG seed. The various places where the hostname is written to
during software installation. The inode generation IDs, depending
on the filesystem. Other things that are created depending on the
machine and OS… such as the Debian popcon host ID, even.

The effort to clean/regenerate these and possibly more, which, in
addition, often needs new per-machine random bytes introduced, is
more than just installing fresh machines all the time, especially
if you script that (in which I personally even consider moving
away from d-i with preseed and towards debootstrap with (scripted)
manual pre- (partitioning, mkfs, …) and post-steps).

This is even more true as every new machine tends to get just the
little bit of difference from the old ones that is easier to make
when not cloning (such as different filesystem layout, software).

I know, this is not the answer you want to hear, but it’s the one
that works reliably, without investing too much while still being
not 100% sure you caught everything.

(Fun fact on the side, while doing admin stuff at $dayjob, I even
didn’t automate VM creation as clicking through d-i those times I
was installing some took less time in summary than creating
automation for it would’ve. I used xlax (like clusterssh, but for any
X11 window) for starting installation, then d-i network-console +
cssh for the remainder; a private APT repository with config
packages, to install dependencies and configure some things, rounded
it off.)

bye,
//mirabilos
--
Infrastrukturexperte • tarent solutions GmbH
Am Dickobskreuz 10, D-53121 Bonn • http://www.tarent.de/
Telephon +49 228 54881-393 • Fax: +49 228 54881-235
HRB AG Bonn 5168 • USt-ID (VAT): DE122264941
Geschäftsführer: Dr. Stefan Barth, Kai Ebenrett, Boris Esser, Alexander Steeg

****************************************************
The UTF-8 Ribbon Campaign against HTML eMail! Also, header encryption!
Don't miss anything with the tarent newsletter: https://www.tarent.de/newsletter
****************************************************
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On Sun, Feb 26, 2023 at 2:51 PM Thorsten Glaser <t.glaser@tarent.de> wrote:
>
> On Fri, 24 Feb 2023, Keine Eile wrote:
>
> > does any one of you have a best practice on renewing ssh host keys on cloned
> > machines?
>
> Yes: not cloning machines.

Good luck with *that*. Building VM's from media is a far, far too
lengthy process for production deployment, especially for auto-scaling
clusters.

> There’s too many things to take care of for these. The VM UUID in
> libvirt. The systemd machine ID. SSH hostkey and SSL private key.
> The RNG seed. The various places where the hostname is written to
> during software installation. The inode generation IDs, depending
> on the filesystem. Other things that are created depending on the
> machine and OS… such as the Debian popcon host ID, even.

That's what the "sysprep" procedure is for when generating reference
VM images, and "cloud-utils" for setting up new VMs from images, at
least for Linux and Windows and MacOS. I've no idea if OpenBSD has a
similar utility, I've never tried to cloud or enterprise deploy that.
I've done such cloning very effectively on very large scales, up to
roughly 20,000 servers at a time, and the relevant procedures are
decades old. It's a solved problem, all the way back to cloning
hardware images form CD's and USB sticks and re-imaging images
remotely. "virt-utils" has been handling host-specific boot-time
tuning for years, most of them are solved problems.
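
For the libvirt/KVM case that boils down to something like the following
(the image name is a placeholder); virt-sysprep's default operation set
already covers ssh host keys, machine-id, logfiles and so on, see its
--list-operations output:

    # reset a cloned image before its first boot (libguestfs tools)
    virt-sysprep -a clone.qcow2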

> The effort to clean/regenerate these and possibly more, which, in
> addition, often needs new per-machine random bytes introduced, is
> more than just installing fresh machines all the time, especially
> if you script that (in which I personally even consider moving
> away from d-i with preseed and towards debootstrap with (scripted)
> manual pre- (partitioning, mkfs, …) and post-steps).

If people really feel the need for robust random number services,
they've got other problems. I'd suggest they either apply an init
script to reset whatever they feel they need on every reboot, or find
a genuine physical RNG to tap.

The more host-by-host customization, admittedly, the more billable
hours and the more of yourself you put personally into each and every
step. But it doesn't scale, and you will eventually be told to stop
wasting your time if your manager is attentive to how much time you're
burning on each deployment.

> This is even more true as every new machine tends to get just the
> little bit of difference from the old ones that is easier to make
> when not cloning (such as different filesystem layout, software).

And *that* is one of the big reasons for virtualization based
deployments, so people can stop caring about the physical subtleties.
I've been the poor beggar negotiating all the distinct physical
platforms for deployment; I used to build production operating systems.
Getting the SSH setups straight was something I had to pay attention
to, as were the destabilizing effects of using .ssh/known_hosts in such
broadly and erratically deployed environments, where IP addresses in
remote data centers could not possibly be reliably controlled or
predicted, nor was reverse DNS likely to work at all, which was its own
distinct burden for logging *on* those remote servers.

> I know, this is not the answer you want to hear, but it’s the one
> that works reliably, without investing too much while still being
> not 100% sure you caught everything.

It Does Not Help when unpredictable assignment of IP addresses of
different classes of hosts inside the same VLAN leads to different
hostkeys migrating back and forth to the same IP address, which I'm
afraid occurs far too often with careless DHCP setups.

> (Fun fact on the side, while doing admin stuff at $dayjob, I even
> didn’t automate VM creation as clicking through d-i those times I
> was installing some took less time in summary than creating
> automation for it would’ve. I used xlax (like clusterssh, but for any
> X11 window) for starting installation, then d-i network-console +
> cssh for the remainder; a private APT repository with config
> packages, to install dependencies and configure some things, rounded
> it off.)

You probably don't work on the same scale I've worked, or had to
publish infrastructure-as-code so that the poor guy in India getting
the call at 2 AM to triple a cluster can push a few buttons and not go
through manual partitioning, or lots and lots and lots of AWS
consoles. It can be a very different world when you have that much
time to invest on individual servers. It's also a great way to stretch
our billable hours with very familiar tasks which only you know how to
do. I've been targeted before for layoffs because I successfully did
so much automation, but I know a few of my trainees who are doing
*very well* with quite large environments, so I'm glad to have taught
them well.

Nico Kadel-Garcia

> bye,
> //mirabilos
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On Mon, 27 Feb 2023, Nico Kadel-Garcia wrote:

>> > does any one of you have a best practice on renewing ssh host keys on cloned
>> > machines?
>>
>> Yes: not cloning machines.
>
>Good luck with *that*. Building VM's from media is a far, far too
>lengthy process for production deployment, especially for auto-scaling
>clusters.

(It’s “VMs”, no genitive apostrophe.)

What media? debootstrap + a local mirror = fast.
In fact, much faster than cloning, possibly large, filesystems,
unless you use CoW, which you don’t because then you’re overcommitting.
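
Concretely, the base install is essentially a single command against
the local mirror (suite, target and mirror URL here are placeholders):

    debootstrap --variant=minbase bookworm /mnt/target \
        http://mirror.internal/debian

plus whatever scripted pre-/post-steps you need around it.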

>> There’s too many things to take care of for these. The VM UUID in
[…]

>That's what the "sysprep" procedure is for when generating reference
>VM images, and "cloud-utils" for setting up new VMs from images, at

What guarantees you “sysprep” and “cloud-utils” find everything that
needs to be changed?

(I’m not sure where inode generation numbers are (still) a concern,
on what filesystems, anyway. They only come into play with NFS,
AFAIK, though, so that limits this issue. When they come into play,
however, they’re hard to change without doing a newfs(8)…)

>If people really feel the need for robust random number services,
>they've got other problems. I'd suggest they either apply an init
>script to reset whatever they feel they need on every reboot, or find

I think you’re downplaying a very real problem here, as an aside.

>The more host-by-host customization, admittedly the more billable
>powers and the more yourself personally into each and every stop. But
>it doesn't scale

Huh? Scripting that creation from scratch is a job done once that
scales very well. debootstrap is reasonably fast, installation of
additional packages can be fast as well (since it’s a new image,
use eatmydata or the dpkg option I’ve not yet remembered).

And, given the system’s all-new, I believe this is even more
reliable than cloning something customised, then trying to
adapt *that* to the new requirements.

>, and you will eventually be told to stop wasting your
>time if your manager is attentive to how much time you're burning on
>each deployment.

If I’ve scripted the image creation, it’s no more work than
a cloning approach.

>> This is even more true as every new machine tends to get just the
>> little bit of difference from the old ones that is easier to make
>> when not cloning (such as different filesystem layout, software).
>
>And *that* is one of the big reasons for virtualization based
>deployments, so people can stop caring about the physical subtleties.

?!?!?!

How does that translate into needing, say, 8 GiB HDD for some VMs but
32 GiB HDD for some others?

This has *NOTHING* to do with physical vs virtual platforms.

>predicted, nor was reverse DNS likely to work at all which was its own
>distinct burden for logging *on* those remote servers.

Maybe invest your time into fixing infrastructure then…

>> (Fun fact on the side, while doing admin stuff at $dayjob, I even
[…]

>You probably don't work on the same scale I've worked, or had to

No, not for that. If I had to do it at larger scale I would have
scripted it. I didn’t so it turned out to be cheaper, work-time-wise,
to do part of the steps by hand the few times I needed to do it.
I don’t admin stuff at work for others any more, so that point is
moot. But I did want to share this as an anecdote: when scaling very
small, the “stupid” solution may be better than a clever one.

>It's also a great way to stretch
>our billable hours with very familiar tasks which only you know how to
>do.

I don’t need to do that. Besides, I’m employed, not freelancing,
so I don’t even have to care about billable hours.

bye,
//mirabilos
--
Infrastrukturexperte • tarent solutions GmbH
Am Dickobskreuz 10, D-53121 Bonn • http://www.tarent.de/
Telephon +49 228 54881-393 • Fax: +49 228 54881-235
HRB AG Bonn 5168 • USt-ID (VAT): DE122264941
Geschäftsführer: Dr. Stefan Barth, Kai Ebenrett, Boris Esser, Alexander Steeg

****************************************************
The UTF-8 Ribbon Campaign against HTML eMail! Also, header encryption!
Don't miss anything with the tarent newsletter: https://www.tarent.de/newsletter
****************************************************
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On Mon, Feb 27, 2023 at 8:33 PM Thorsten Glaser <t.glaser@tarent.de> wrote:
>
> On Mon, 27 Feb 2023, Nico Kadel-Garcia wrote:
>
> >> > does any one of you have a best practice on renewing ssh host keys on cloned
> >> > machines?
> >>
> >> Yes: not cloning machines.
> >
> >Good luck with *that*. Building VM's from media is a far, far too
> >lengthy process for production deployment, especially for auto-scaling
> >clusters.
>
> (It’s “VMs”, no genitive apostrophe.)

OK, point.

> What media? debootstrap + a local mirror = fast.
> In fact, much faster than cloning, possibly large, filesystems,
> unless you use CoW, which you don’t because then you’re overcommitting.

Sure, I was doing that sort of "local build into a chroot cage" stunt
in 1999. It's re-inventing the wheel, and using a 3-D printer to spend
time making it, when you've already got a very broad variety of off-site
VM images, and well defined tools for deploying them directly. I
suspect that most of us have better things to do with our time than
maintaining a local mirror when our friends in every cloud center on
the planet have already done the work.

> >> There’s too many things to take care of for these. The VM UUID in
> […]
>
> >That's what the "sysprep" procedure is for when generating reference
> >VM images, and "cloud-utils" for setting up new VMs from images, at
>
> What guarantees you “sysprep” and “cloud-utils” find everything that
> needs to be changed?

What makes you think your customized, hand-written, internal versions
of such tools are better and will work more reliably than a
consistently and effectively used open source tool?

> (I’m not sure where inode generation numbers are (still) a concern,
> on what filesystems, anyway. They only come into play with NFS,
> AFAIK, though, so that limits this issue. When they come into play,
> however, they’re hard to change without doing a newfs(8)…)

They exist for other filesystems. I've not really gone digging into
them; they Just Work(tm) for the imaging tools applied to the VM
images.

> >If people really feel the need for robust random number services,
> >they've got other problems. I'd suggest they either apply an init
> >script to reset whatever they feel they need on every reboot, or find
>
> I think you’re downplaying a very real problem here, as an aside.

In the last 35 years, I've only seen anyone care much about the RNG
twice. And those hosts wound up with physical random number generators
in PCI slots; that was years ago.

> >The more host-by-host customization, admittedly the more billable
> >powers and the more yourself personally into each and every stop. But
> >it doesn't scale
>
> Huh? Scripting that creation from scratch is a job done once that
> scales very well. debootstrap is reasonably fast, installation of
> additional packages can be fast as well (since it’s a new image,
> use eatmydata or the dpkg option I’ve not yet remembered).

I've been the guy who had to do it with large deployments, up to about
20,000 hosts of quite varied hardware from different vendors and
different specs. I do believe they replaced my tools after about 20
years, when someone found an effective open source tool. That
especially included kernel updates to support the newer platforms. I
have stories when someone deployed out-of-date OS images remotely on
top of the vendor image we gave hardware vendors for initial
deployment.

Being able to use a vendor's already existing tools, such as every
cloud provider's tools, saves a *lot* of time having to re-invent such
wheels.

> And, given the system’s all-new, I believe this is even more
> reliable than cloning something customised, then trying to
> adapt *that* to the new requirements.

There are trade-offs. One is that skew of the OS building tools can
lead to skew among the images. Another is that what you describe does
not scale automatically and horizontally for commercial auto-scaling
structures, which are almost always VM image based these days, and
wind up with the identical host key or skewed host keys for the same
re-allocated IP address problem; it's work to try to resolve both.
Much, much simpler, and more stable, to simply ignore known_hosts and
spend your time on the management of user public keys, which is
generally the far greater risk.

> >, and you will eventually be told to stop wasting your
> >time if your manager is attentive to how much time you're burning on
> >each deployment.
>
> If I’ve scripted the image creation, it’s no more work than
> a cloning approach.

Been there, done that, and it keeps needing tweaking.

> >> This is even more true as every new machine tends to get just the
> >> little bit of difference from the old ones that is easier to make
> >> when not cloning (such as different filesystem layout, software).
> >
> >And *that* is one of the big reasons for virtualization based
> >deployments, so people can stop caring about the physical subtleties.
>
> ?!?!?!
>
> How does that translate into needing, say, 8 GiB HDD for some VMs but
> 32 GiB HDD for some others?

Consistently create small images. Expand them to include the
available disk space with an init script embedded in the image.
Remember when I mentioned 20,000 hosts at a time? It's admittedly
similar to the work needed for deploying images to new hardware.
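
In practice that init script is little more than, give or take the
partition layout:

    growpart /dev/sda 2    # from cloud-utils; assumes root is partition 2
    resize2fs /dev/sda2    # ext4 root; xfs_growfs / for XFS

run once at first boot.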

> This has *NOTHING* to do with physical vs virtual platforms.

Virtual platforms label the disks and partitions fairly consistently.
Convincing /etc/fstab to work for newly deployed hardware can be....
tricky, if the image deployment uses distinct drive labeling. Been
there, done that, have scar tissue from the Promise SATA controller
drivers that renumbered the /dev/sd* labeled drives in the kernel to
pretend that their add-on card had the first labeled drives. Drove me
*nuts* unfurling that one, because it depended on which kernel you
used.

> >predicted, nor was reverse DNS likely to work at all which was its own
> >distinct burden for logging *on* those remote servers.
>
> Maybe invest your time into fixing infrastructure then…

The reverse DNS was not my infrastructure to fix. When you host
servers remotely, convincing the remote datacenter to do reverse DNS
correctly is.... not always an effective use of time.

> >> (Fun fact on the side, while doing admin stuff at $dayjob, I even
> […]
>
> >You probably don't work on the same scale I've worked, or had to
>
> No, not for that. If I had to do it at larger scale I would have
> scripted it. I didn’t so it turned out to be cheaper, work-time-wise,
> to do part of the steps by hand the few times I needed to do it.
> I don’t admin stuff at work for others any more, so that point is
> moot. But I did want to share this as an anecdote: when scaling very
> small, the “stupid” solution may be better than a clever one.

Yeah, for one-offs or small tasks it can just be faster to use a few lines of
shell or even a manual step. I've been dealing with bulky environments
where infrastructure as code is vital. Spending the time to ensure
individualized hostkeys, when there's a significant chance of IP
re-use and conflicting host keys to clean up, is... well it's time
better spent elsewhere. It's why tools like "ansible" typically disable
the known_hosts file at run time with just the ssh_config settings I
mentioned. They don't have the time to manually validate SSH host key
conflicts when deploying new servers.

> >It's also a great way to stretch
> >our billable hours with very familiar tasks which only you know how to
> >do.
>
> I don’t need to do that. Besides, I’m employed, not freelancing,
> so I don’t even have to care about billable hours.
>
> bye,
> //mirabilos

Well, good for you. I'm sad to say I've seen people chortling over how
very, very, very clever they were with deliberately hand-tuned setups
to assert their complete mastery over their turf, and been brought in
a few times to stabilize the mess when they left. It's led me to a
lot of "keep it very, very simple" steps like "don't bother using
known_hosts".
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
Hi.

I think this thread has veered far enough from the discussion of
OpenSSH development to be considered off-topic.

--
Darren Tucker (dtucker at dtucker.net)
GPG key 11EAA6FA / A86E 3E07 5B19 5880 E860 37F4 9357 ECEF 11EA A6FA
Good judgement comes with experience. Unfortunately, the experience
usually comes from bad judgement.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On Tue, Feb 28, 2023 at 1:57 AM Darren Tucker <dtucker@dtucker.net> wrote:
>
> Hi.
>
> I think this thread has veered far enough from the discussion of
> OpenSSH development to be considered off-topic.

Fair enough, we got off into the weeds. The OpenSSH-specific summary
is, I think, that managing the host keys for image-based OS deployment
can be burdensome and confusing, and is made much, much easier by simply
discarding the reliance on .ssh/known_hosts on clients.
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: ssh host keys on cloned virtual machines
On 2/28/23 06:30, Nico Kadel-Garcia wrote:
> On Tue, Feb 28, 2023 at 1:57?AM Darren Tucker <dtucker@dtucker.net> wrote:
>>
>> Hi.
>>
>> I think this thread has veered far enough from the discussion of
>> OpenSSH development to be considered off-topic.
>
> Fair enough, we got off into the weeds. The OpenSSH specific summary
> is, I think, that managing the host keys for image based OS deployment
> can be burdensome and confusing, and much, much easier by simply
> discarding the reliance on .ssh/known_hosts on clients.

And that is a problem.

OpenSSH should include documentation about how to manage known_hosts with
very large numbers of machines. The obvious approach that comes to mind
is for whatever automation one is using to automatically issue an SSH
certificate to every new machine. Every public cloud, and I suspect every
private cloud too, provides enough infrastructure to implement this securely.
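
A minimal sketch of what that could look like as a first-boot step; the
signing endpoint here is entirely made up, and any cloud's provisioning
or metadata service could stand in for it:

    # Submit the host public key to the (hypothetical) internal CA and
    # install the certificate it returns.
    curl -sf --data-binary @/etc/ssh/ssh_host_ed25519_key.pub \
        -o /etc/ssh/ssh_host_ed25519_key-cert.pub \
        https://ca.internal.example.com/sign-host-key
    echo "HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub" \
        >> /etc/ssh/sshd_config
    systemctl restart sshd

Clients then need only a single @cert-authority entry instead of one
known_hosts line per machine.
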
--
Sincerely,
Demi Marie Obenour (she/her/hers)