Mailing List Archive

Human readable .ssh/known_hosts?
Hi list members,

just tried to get some old records out of my known_hosts, which is kept with 'HashKnownHosts yes'. Is there a way to unhash host names and/or IPs?
Google tells me how to add hosts, but not the opposite; maybe I am missing something.
If this does not work at all, is there a best practice for cleaning old hosts and keys out?

Thanks, Martin!

--

Martin
GnuPG Key Fingerprint, KeyID '4FBE451A':
'2237 1E95 8E50 E825 9FE8 AEE1 6FF4 1E34 4FBE 451A'
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
Re: Human readable .ssh/known_hosts?
On Tue, 29 Sep 2020, Martin Drescher wrote:

> Hi list members,
>
> just tried to get some old records out of my known_hosts, which is
> kept with 'HashKnownHosts yes'. Is there a way to unhash host names
> and/or IPs? Google tells me how to add hosts, but not the opposite;
> maybe I am missing something. If this does not work at all, is there
> a best practice for cleaning old hosts and keys out?

The hashing is intentionally one-way - you can't go backwards from a
hash to a hostname without an inordinate amount of work.

You can however find and delete hosts by name using ssh-keygen.

To find entries matching a hostname, use "ssh-keygen -F hostname", e.g.

$ ssh-keygen -lF haru.mindrot.org
# Host haru.mindrot.org found: line 146
haru.mindrot.org ECDSA SHA256:xjGrsgS6JzMojD3go1qULmh02LG8YpRirOwmoHnT/3M
# Host haru.mindrot.org found: line 165
haru.mindrot.org RSA SHA256:9nN+SOkKCQq6BLzybAUNlczAU0n+HbOIDxIrBIbPPmU
# Host haru.mindrot.org found: line 166
haru.mindrot.org ED25519 SHA256:43S30LGUkc2f9dDcLZG6O5KPKtPn7Xw2WkR2vCO/nnU

(the -l flag tells it to print fingerprints instead of full keys)

You can also delete entries using "ssh-keygen -R hostname".
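For example, against a throwaway file so your real known_hosts is untouched (the hostnames here are made up for the demo):

```shell
# Build a two-entry known_hosts in a temp dir with freshly generated
# throwaway keys, so the real ~/.ssh/known_hosts is never touched.
D=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$D/k1"
ssh-keygen -q -t ed25519 -N '' -f "$D/k2"
printf 'old.example.test %s\n'  "$(cut -d' ' -f1-2 "$D/k1.pub")" >  "$D/known_hosts"
printf 'keep.example.test %s\n' "$(cut -d' ' -f1-2 "$D/k2.pub")" >> "$D/known_hosts"

# Remove every entry for one host; the original is saved as known_hosts.old.
ssh-keygen -R old.example.test -f "$D/known_hosts"

grep -c 'example.test' "$D/known_hosts"   # only the kept host's line remains
```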

-d
Re: Human readable .ssh/known_hosts?
Martin Drescher wrote on Tuesday, 29 September 2020:
> Hi list members,
>
> just tried to get some old records out of my known_hosts, which is kept with 'HashKnownHosts yes'. Is there a way to unhash host names and/or IPs?
> Google tells me how to add hosts, but not the opposite; maybe I am missing something.
> If this does not work at all, is there a best practice for cleaning old hosts and keys out?

Unfortunately not. Hashing is a one-way process by definition, and you should not be able to recover the input from a hash.

- juice -

--
Sent from my SFOS/XperiaX
Re: Human readable .ssh/known_hosts?
On 29.09.20 12:44, Damien Miller wrote:
> On Tue, 29 Sep 2020, Martin Drescher wrote:
>
>> Hi list members,
[...]
> You can however find and delete hosts by name using ssh-keygen.
>
> To find entries matching a hostname, use "ssh-keygen -F hostname", e.g.

The point is, the file has over 600 hashed entries.

> $ ssh-keygen -lF haru.mindrot.org
> # Host haru.mindrot.org found: line 146
> haru.mindrot.org ECDSA SHA256:xjGrsgS6JzMojD3go1qULmh02LG8YpRirOwmoHnT/3M
> # Host haru.mindrot.org found: line 165
> haru.mindrot.org RSA SHA256:9nN+SOkKCQq6BLzybAUNlczAU0n+HbOIDxIrBIbPPmU
> # Host haru.mindrot.org found: line 166
> haru.mindrot.org ED25519 SHA256:43S30LGUkc2f9dDcLZG6O5KPKtPn7Xw2WkR2vCO/nnU
>
> (the -l flag tells it to print fingerprints instead of full keys)
>
> You can also delete entries using "ssh-keygen -R hostname".
>
> -d

At this point, my best practice would possibly be to start with an empty known_hosts and build a new one from all hosts in my .ssh/config.

Would a 'last_seen' column in known_hosts be a nice feature? I'm not sure.

--

Martin
Re: Human readable .ssh/known_hosts?
> At this point, my best practice would possibly be to start with an
> empty known_hosts and build a new one from all hosts in my .ssh/config.

You could move your user known_hosts file to the global location,
and empty yours.
That way new (and changed) entries get written to your new file, while
the old list still serves as a backup.

Perhaps that would be a feature request - "also look at this file,
and silently migrate to the user's file if identical".



> Would a 'last_seen' column in known_hosts be a nice feature? I'm
> not sure.

Not sure about that. Age doesn't tell you anything about validity.
Re: Human readable .ssh/known_hosts?
On 29.09.20 13:08, Philipp Marek wrote:
>
>> At this point, my best practice would possibly be to start with an
>> empty known_hosts and build a new one from all hosts in my .ssh/config.
>
> You could move your user-known hosts file to the global location,
> and empty yours.
> That way new (and changed) get written to your new file, but the old
> list is used as a backup.
>
> Perhaps that would be a feature request - "also look at this file,
> and silently migrate to the user's file if identical".

Never mind, it's easy to run ssh-keyscan for each host in my .ssh/config. What does not exist in that config does not exist in real life either.
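Roughly like this; the awk pattern and the demo config below are only a sketch that assumes plain "Host name" stanzas and skips wildcard patterns. Point CFG at the real ~/.ssh/config and uncomment the ssh-keyscan step (which needs network access) for actual use:

```shell
# Collect non-wildcard Host names from an ssh config.
# Throwaway demo config; use CFG=~/.ssh/config for real.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
Host web1 web2
    User martin
Host *.lab
    Port 2222
Host git.example.org
EOF

# "Host" lines may carry several patterns; keep each one without * or ?.
HOSTS=$(awk '$1 == "Host" { for (i = 2; i <= NF; i++) if ($i !~ /[*?]/) print $i }' "$CFG")
echo "$HOSTS"

# Then rebuild known_hosts from the live hosts, e.g.:
#   echo "$HOSTS" | xargs -r ssh-keyscan -t ed25519 2>/dev/null > ~/.ssh/known_hosts.new
```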


--

Martin
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 6:29 AM Martin Drescher <drescher@snafu.de> wrote:
>
> Hi list members,
>
> just tried to get some old records out of my known_hosts, which is kept with 'HashKnownHosts yes'. Is there a way to unhash host names and/or IPs?
> Google tells me how to add hosts, but not the opposite; maybe I am missing something.
> If this does not work at all, is there a best practice for cleaning old hosts and keys out?

I gave up on $HOME/.ssh/known_hosts a *long* time ago, because if
servers are distributed via DHCP without static IP addresses they can
wind up with overlapping IP addresses and mismatched hostkeys on an
internal address. Since most SSH traffic is to internal hosts,
known_hosts is usually far more likely to break valid services than to
provide useful filtering, and has been worse than useless since ssh-1
was written in 1995.

There are SSH targets it is useful for, primarily
external and consistently configured hosts with stable DNS and
hostkeys, such as github or gitlab. But for internal services, it's
generally far more trouble than it's worth. To disable it universally,
put these in $HOME/.ssh/config:

Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
    LogLevel=ERROR

These can be reset as desired for hosts that have reliably
consistent DNS and hostkeys, but hostkeys have long proven to cost far
more time and effort when changed for legitimate reasons than they have
helped in identifying malicious man-in-the-middle attacks. They've been
far more useful as critical components of providing an encrypted session
than for identifying the host.

This has been the case since SSH-1 was written in 1995.
Re: Human readable .ssh/known_hosts?
On 2020-09-29 09:12, Nico Kadel-Garcia wrote:
> On Tue, Sep 29, 2020 at 6:29 AM Martin Drescher <drescher@snafu.de> wrote:
>>
>> Hi list members,
>>
>> just tried to get some old records out of my known_hosts, which is kept with 'HashKnownHosts yes'. Is there a way to unhash host names and/or IPs?
>> Google tells me how to add hosts, but not the opposite; maybe I am missing something.
>> If this does not work at all, is there a best practice for cleaning old hosts and keys out?
>
> I gave up on $HOME/.ssh/known_hosts a *long* time ago, because if
> servers are distributed via DHCP without static IP addresses they can
> wind up with overlapping IP addresses and mismatched hostkeys on an
> internal address. Since most SSH traffic is to internal hosts,
> known_hosts is usually far more likely to break valid services than to
> provide useful filtering, and has been worse than useless since ssh-1
> was written in 1995.
>
> There are SSH targets it is useful for, primarily
> external and consistently configured hosts with stable DNS and
> hostkeys, such as github or gitlab. But for internal services, it's
> generally far more trouble than it's worth. To disable it universally,
> put these in $HOME/.ssh/config:
>
> Host *
>     StrictHostKeyChecking no
>     UserKnownHostsFile=/dev/null
>     LogLevel=ERROR
>
> These can be reset as desired for hosts that have reliably
> consistent DNS and hostkeys, but hostkeys have long proven to cost far
> more time and effort when changed for legitimate reasons than they have
> helped in identifying malicious man-in-the-middle attacks. They've been
> far more useful as critical components of providing an encrypted session
> than for identifying the host.
>
> This has been the case since SSH-1 was written in 1995.

There are several alternative solutions, all of which are better than
disabling host key verification:

- You can place SSH host keys in DNS, and sign them with DNSSEC.
You can then run a local recursive resolver, such as Unwind or
Unbound (both in the OpenBSD base system), which validates DNSSEC.
ssh(1) can then check the server’s host keys against the SSHFP
DNS records.
- You can use SSH certificates. Each server presents a certificate to
attest to its identity. The certificate contains both the public
host key and a signature made by a trusted certificate authority.
If the certificate authority only signs legitimate certificates,
the identity of a server cannot be forged.
- You can use GSSAPI key exchange. OpenSSH doesn’t support this
natively, but there is a widely available patch that adds support
for it. With GSSAPI key exchange, the client and server authenticate
each other using their Kerberos credentials. This works best if
the machines are joined to a domain in a directory service, such
as FreeIPA or Active Directory.
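For the first option, the zone-file records can be produced with ssh-keygen -r. A minimal sketch with a freshly generated throwaway key (the hostname and paths are made up; on a real server you would run this against the public keys in /etc/ssh):

```shell
# Generate a throwaway host key, then print the SSHFP resource records
# a DNS zone file would carry for it.
D=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$D/ssh_host_ed25519_key"
SSHFP=$(ssh-keygen -r host.example.org -f "$D/ssh_host_ed25519_key.pub")
echo "$SSHFP"
# Clients then enable lookups with "VerifyHostKeyDNS yes" in ssh_config;
# without a validating DNSSEC resolver this only prompts, it doesn't verify.
```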

Sincerely,

Demi
Re: Human readable .ssh/known_hosts?
On Tue, 29 Sep 2020, Nico Kadel-Garcia wrote:

> There are SSH targets it is useful for, primarily
> external and consistently configured hosts with stable DNS and
> hostkeys, such as github or gitlab. But for internal services, it's
> generally far more trouble than it's worth.

FWIW I think this is bad advice.

Services are only "internal" to the extent that you can trust your network.
Search "SSL added and removed here" for a practical demonstration of this
assumption yielding undesirable results.

Disabling hostkey checking is a big hammer, but occasionally useful for
lab environments. Generally I recommend that people who are having trouble
with hostkey management consider using host certificates.
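A minimal sketch of what that looks like, with a throwaway CA and made-up names (a real deployment keeps the CA key offline and distributes only ca.pub):

```shell
# Create a CA key and a host key, then sign the host key to produce a
# host certificate.
D=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$D/ca"
ssh-keygen -q -t ed25519 -N '' -f "$D/ssh_host_ed25519_key"
ssh-keygen -q -s "$D/ca" -I host1 -h -n host1.example.org \
    -V -5m:+52w "$D/ssh_host_ed25519_key.pub"

# Inspect the resulting certificate.
ssh-keygen -L -f "$D/ssh_host_ed25519_key-cert.pub"

# Clients trust the CA instead of individual hostkeys via a single
# known_hosts line:
#   @cert-authority *.example.org <contents of ca.pub>
# The server presents the cert via "HostCertificate" in sshd_config.
```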

-d
Re: Human readable .ssh/known_hosts?
On Tue, 29 Sep 2020 at 23:16, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
[...]
> I gave up on $HOME/.ssh/known_hosts a *long* time ago, because if
> servers are distributed via DHCP without static IP addresses they can
> wind up with overlapping IP addresses and mismatched hostkeys

You can set CheckHostIP=no in your config. As long as the names don't
change it'll do what you want, and it's far safer than what you
suggest.

[...]
> This has been the case since SSH-1 was written in 1995.

CheckHostIP was added to OpenSSH in 1999 before the first release and
has been in every release since:
https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/readconf.c.diff?r1=1.7&r2=1.8&f=h

--
Darren Tucker (dtucker at dtucker.net)
GPG key 11EAA6FA / A86E 3E07 5B19 5880 E860 37F4 9357 ECEF 11EA A6FA (new)
Good judgement comes with experience. Unfortunately, the experience
usually comes from bad judgement.
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 8:07 PM Damien Miller <djm@mindrot.org> wrote:
>
> On Tue, 29 Sep 2020, Nico Kadel-Garcia wrote:
>
> > There are SSH targets it is useful for, primarily
> > external and consistently configured hosts with stable DNS and
> > hostkeys, such as github or gitlab. But for internal services, it's
> > generally far more trouble than it's worth.

Sorry for typos. I'm due for eye surgery on my *other* eye, which got canceled.

> FWIW I think this is bad advice.
>
> Services are only "internal" to the extent that you can trust your network.
> Search "SSL added and removed here" for a practical demonstration of this
> assumption yielding undesirable results.

Stable hostkeys are theoretically useful, but in practice have caused
*far* more service disruption than they've prevented abuse. Few SSH
users are cautious enough and thorough enough to review mismatched
keys on an individual basis. The nearly universal "fix" is to simply
delete $HOME/.ssh/known_hosts to clear the cache and start over
from scratch. Clearing individual keys is hindered by the hashing of
the entries.

> Disabling hostkey checking is a big hammer, but occasionally useful for
> lab environments. Generally I recommend that people who are having trouble
> with hostkey management consider using host certificates.

Yes, it is a big hammer. But thinking back, I've needed it, or some
more arcane hammer, for the same problem professionally at least once
a year since.... 1995? When ssh-1 was first written?

Signing the hostkeys means the labor and management of signing the
hostkeys. It's time, work, and money for the servers, and time, work,
and money configuring the clients to require them. When you're
managing more than 10,000 servers with a crew of 30, and some fool
mishandles hostkey preservation for an OS deployment..... it's a nasty
problem. That was in.... 2000?

Signed hostkeys, available since OpenSSH 5.4, are a possibly better
approach if you're actually concerned about SSH hostkey consistency.
But the additional certificate signature is extra time, work, and money
that very few environments, even large ones, are willing to invest.
The problem is especially drastic for git servers. Github and
gitlab are consistent about their host keys, but many environments fail
to preserve hostkeys when they build or update their git servers. I
have..... stories about that one.
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 8:22 PM Darren Tucker <dtucker@dtucker.net> wrote:
>
> On Tue, 29 Sep 2020 at 23:16, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
> [...]
> > I gave up on $HOME/.ssh/known_hosts a *long* time ago, because if
> > servers are distributed via DHCP without static IP addresses they can
> > wind up with overlapping IP addresses and mismatched hostkeys
>
> You can set CheckHostIP=no in your config. As long as the names don't
> change it'll do what you want, and it's far safer than what you
> suggest.
>
> [...]
> > This has been the case since SSH-1 was written in 1995.
>
> CheckHostIP was added to OpenSSH in 1999 before the first release and
> has been in every release since:
> https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/readconf.c.diff?r1=1.7&r2=1.8&f=h

As I understand this option, it does not help at all with the nearly
inevitable re-use of the same IP address for a different host with a
different hostkey in, for example, a modest DHCP based environment.
Such environments are common both in smaller, private networks and in
large public networks, and it's perhaps startlingly common in cloud
environments: it's one of the reasons I'm so willing to disable
$HOME/.ssh/known_hosts.
Re: Human readable .ssh/known_hosts?
On Wed, 30 Sep 2020 at 11:09, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
> On Tue, Sep 29, 2020 at 8:22 PM Darren Tucker <dtucker@dtucker.net> wrote:
> > [...] CheckHostIP [...]
> As I understand this option, it does not help at all with the nearly
> inevitable re-use of the same IP address for a different host with a
> different hostkey in, for example, a modest DHCP based environment.

I don't follow your reasoning here. If you set CheckHostIP=no then
ssh does not store or check IP addresses at all, so how can a host's
reuse of IP addresses cause an issue, so long as the host's hostkey
and the client's idea of its name (or hostkeyalias) remain the same?
Or are your hostnames not stable either?

$ ssh -o checkhostip=no -o userknownhostsfile=/tmp/hosts somehost
$ cat /tmp/hosts
somehost ecdsa-sha2-nistp256 AAAA [etc]

$ ssh -o checkhostip=yes -o userknownhostsfile=/tmp/hosts somehost
Warning: Permanently added the ECDSA host key for IP address
'192.168.32.1' to the list of known hosts.
$ cat /tmp/hosts
somehost ecdsa-sha2-nistp256 AAAA [etc]
192.168.32.1 ecdsa-sha2-nistp256 AAAA [etc]

If I then replace the IP address entry with the fingerprint from another
host, it complains with CheckHostIP=yes as expected:

$ ssh -o checkhostip=yes -o userknownhostsfile=/tmp/hosts somehost echo it works
Warning: the ECDSA host key for 'somehost' differs from the key for the
IP address '192.168.32.1'
Offending key for IP in /tmp/hosts:2
Matching host key in /tmp/hosts:1

but works fine with CheckHostIP=no:

$ ssh -o checkhostip=no -o userknownhostsfile=/tmp/hosts gate echo it works
it works

The other thing you could do (and this is new in OpenSSH 8.4) is use
%-TOKEN expansion on UserKnownHostsFile to store hostkeys in
per-host files, eg:

UserKnownHostsFile ~/.ssh/known_hosts.d/%k

which will cause the host keys, including per-IP keys if enabled, to
be stored in separate files per host.

--
Darren Tucker (dtucker at dtucker.net)
GPG key 11EAA6FA / A86E 3E07 5B19 5880 E860 37F4 9357 ECEF 11EA A6FA (new)
Good judgement comes with experience. Unfortunately, the experience
usually comes from bad judgement.
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 9:56 PM Darren Tucker <dtucker@dtucker.net> wrote:
>
> On Wed, 30 Sep 2020 at 11:09, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
> > On Tue, Sep 29, 2020 at 8:22 PM Darren Tucker <dtucker@dtucker.net> wrote:
> > > [...] CheckHostIP [...]
> > As I understand this option, it does not help at all with the nearly
> > inevitable re-use of the same IP address for a different host with a
> > different hostkey in, for example, a modest DHCP based environment.
>
> I don't follow your reasoning here. If you set CheckHostIP=no then
> ssh does not store or check IP addresses at all, so how can a host's
> reuse of IP addresses cause an issue, so long as the host's hostkey
> and the client's idea of its name (or hostkeyalias) remain the same?
> Or are your hostnames not stable either?

Sadly, in many environments, SSH keys linked to hostnames change. In
Cloud environments where DNS is not thoroughly under the control of the
client, the primary DNS entry and hostname may be the vendor-provided
DNS entry linked to the IP address. These are not necessarily stable,
so SSH connections are made by a variety of means, particularly IP
addresses or the cloud assigned hostnames. Alternatively, and this is
one of my favorites, a service such as an SSH based rsync server or
git server can be upgraded by an engineer who fails to realize that
the hostkeys associated with the exposed service should be preserved
in the upgrade process, especially if it's not the same engineer who
did it *last* time.

I *wish* people would be consistent about associating SSH hostkeys
with services. Sadly, they're not.

> $ ssh -o checkhostip=no -o userknownhostsfile=/tmp/hosts somehost
> $ cat /tmp/hosts
> somehost ecdsa-sha2-nistp256 AAAA [etc]
>
> $ ssh -o checkhostip=yes -o userknownhostsfile=/tmp/hosts somehost
> Warning: Permanently added the ECDSA host key for IP address
> '192.168.32.1' to the list of known hosts.
> $ cat /tmp/hosts
> somehost ecdsa-sha2-nistp256 AAAA [etc]
> 192.168.32.1 ecdsa-sha2-nistp256 AAAA [etc]
>
> If I then replace the IP address entry with the fingerprint from another
> host, it complains with CheckHostIP=yes as expected:

I see your point. Uses vary: sadly, in many environments, DNS is not
complete or consistent, so IP-based SSH connections are commonplace,
and this step effectively provides just the "do not use known_hosts"
result I prescribed.

> $ ssh -o checkhostip=yes -o userknownhostsfile=/tmp/hosts somehost echo it works
> Warning: the ECDSA host key for 'somehost' differs from the key for the
> IP address '192.168.32.1'
> Offending key for IP in /tmp/hosts:2
> Matching host key in /tmp/hosts:1
>
> but works fine with CheckHostIP=no:
>
> $ ssh -o checkhostip=no -o userknownhostsfile=/tmp/hosts gate echo it works
> it works
>
> The other thing you could do (and this is new in OpenSSH 8.4) is use
> %-TOKEN expansion on UserKnownHostsFile to store hostkeys in
> per-host files, eg:

Interesting. These days, I work most frequently with Red Hat based
systems, and updating OpenSSH to a leading-edge version rather than
the vendor-provided version is.... well, it's work, and sometimes it
upsets people, even if I've been doing it successfully for decades.
On a jumpgate server or ansible server it could be awkward to clutter
a user-owned directory with thousands of such files. What this looks
most like is a hack to work around the hashing of entries in
known_hosts: to make key management more visible without having to
parse or edit .ssh/known_hosts dynamically, and to get filesystem
timestamps on the recorded public hostkeys. I need to think about that
one. I don't think it solves some of my use cases, for which I have a
longstanding published solution, but it's a potentially useful feature.

The more I think about this, the more I think "that could actually be
useful for locking down a deployable known_hosts configuration".

> UserKnownHostsFile ~/.ssh/known_hosts.d/%k
>
> which will cause the host keys, including per-IP keys if enabled, to
> be stored in separate files per host.

RHEL 8 is at openssh-8.0p1; I've not bothered to update my published
git repo for building backported RPMs to 8.4p1 yet. I'll post if I
find cycles to do that. Sadly, some of the repositories that were
willing to publish such updates for vendor-provided software are no
longer supported, so it can be awkward to find reliable packaging for
such updated tools.

Nico Kadel-Garcia

> --
> Darren Tucker (dtucker at dtucker.net)
> GPG key 11EAA6FA / A86E 3E07 5B19 5880 E860 37F4 9357 ECEF 11EA A6FA (new)
> Good judgement comes with experience. Unfortunately, the experience
> usually comes from bad judgement.
Re: Human readable .ssh/known_hosts?
On Tue, 29 Sep 2020, Nico Kadel-Garcia wrote:

> As I understand this option, it does not help at all with the nearly
> inevitable re-use of the same IP address for a different host with a
> different hostkey in, for example, a modest DHCP based environment.
> Such environments are common both in smaller, private networks and in
> large public networks, and it's perhaps startlingly common in cloud
> environments: it's one of the reasons I'm so willing to disable
> $HOME/.ssh/known_hosts.

Again, you should read the documentation for CheckHostIP. Turning it off
makes known_hosts bind solely to hostnames and, as long as you use names
to refer to hosts, avoids any problems caused by IP address reuse.
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 10:56 PM Damien Miller <djm@mindrot.org> wrote:
>
> On Tue, 29 Sep 2020, Nico Kadel-Garcia wrote:
>
> > As I understand this option, it does not help at all with the nearly
> > inevitable re-use of the same IP address for a different host with a
> > different hostkey in, for example, a modest DHCP based environment.
> > Such environments are common both in smaller, private networks and in
> > large public networks, and it's perhaps startlingly common in cloud
> > environments: it's one of the reasons I'm so willing to disable
> > $HOME/.ssh/known_hosts.
>
> Again, you should read the documentation for CheckHostIP. Turning it off
> makes known_hosts bind solely to hostnames and, as long as you use names
> to refer to hosts, avoids any problems caused by IP address reuse.

Have you used AWS? Unless you spend the time and effort, the hostname
registered in AWS DNS is based on the IP address. Many people do *not*
use consistent subnets for distinct classes of server or specific OS
images, so different servers wind up on the same IP address with
distinct hostkeys based on factors like autoscaling, for which IP
addresses are not predictable. You can work around it by locking down
and sharing hostkeys for your OS images, or by segregating subnets
based on application and corresponding OS image. These present other
burdens.

For small networks, you can manage the keys and/or the DNS sanely and
consistently. It's also much easier if the same person doing security
tools like SSH is also managing DNS. But this is rare for larger
environments. It's partly why I recommend the "disable known_hosts"
hammer: it ends fiddling with something that is likely to bite at an
extremely inopportune moment.
Re: Human readable .ssh/known_hosts?
On Tue, Sep 29, 2020 at 6:46 AM Damien Miller <djm@mindrot.org> wrote:
>
> On Tue, 29 Sep 2020, Martin Drescher wrote:
>
> > Hi list members,
> >
> > just tried to get some old records out of my known_hosts, which is
> > kept with 'HashKnownHosts yes'. Is there a way to unhash host names
> > and/or IPs? Google tells me how to add hosts, but not the opposite;
> > maybe I am missing something. If this does not work at all, is there
> > a best practice for cleaning old hosts and keys out?
>
> The hashing is intentionally one-way - you can't go backwards from a
> hash to a hostname without an inordinate amount of work.
>
> You can however find and delete hosts by name using ssh-keygen.
>
> To find entries matching a hostname, use "ssh-keygen -F hostname", e.g.
>
> $ ssh-keygen -lF haru.mindrot.org
> # Host haru.mindrot.org found: line 146
> haru.mindrot.org ECDSA SHA256:xjGrsgS6JzMojD3go1qULmh02LG8YpRirOwmoHnT/3M
> # Host haru.mindrot.org found: line 165
> haru.mindrot.org RSA SHA256:9nN+SOkKCQq6BLzybAUNlczAU0n+HbOIDxIrBIbPPmU
> # Host haru.mindrot.org found: line 166
> haru.mindrot.org ED25519 SHA256:43S30LGUkc2f9dDcLZG6O5KPKtPn7Xw2WkR2vCO/nnU
>
On a side note, I see *some* entries in .ssh/known_hosts
showing the hostname or IP, while others do not. What causes this lack
of consistency?

> (the -l flag tells it to print fingerprints instead of full keys)
>
> You can also delete entries using "ssh-keygen -R hostname".
>
> -d
Re: Human readable .ssh/known_hosts?
On 2020/09/30 06:35, Mauricio Tavares wrote:
> On Tue, Sep 29, 2020 at 6:46 AM Damien Miller <djm@mindrot.org> wrote:
> >
> > On Tue, 29 Sep 2020, Martin Drescher wrote:
> >
> > > Hi list members,
> > >
> > > just tried to get some old records out of my known_hosts, which is
> > > kept with 'HashKnownHosts yes'. Is there a way to unhash host names
> > > and/or IPs? Google tells me how to add hosts, but not the opposite;
> > > maybe I am missing something. If this does not work at all, is there
> > > a best practice for cleaning old hosts and keys out?
> >
> > The hashing is intentionally one-way - you can't go backwards from a
> > hash to a hostname without an inordinate amount of work.
> >
> > You can however find and delete hosts by name using ssh-keygen.
> >
> > To find entries matching a hostname, use "ssh-keygen -F hostname", e.g.
> >
> > $ ssh-keygen -lF haru.mindrot.org
> > # Host haru.mindrot.org found: line 146
> > haru.mindrot.org ECDSA SHA256:xjGrsgS6JzMojD3go1qULmh02LG8YpRirOwmoHnT/3M
> > # Host haru.mindrot.org found: line 165
> > haru.mindrot.org RSA SHA256:9nN+SOkKCQq6BLzybAUNlczAU0n+HbOIDxIrBIbPPmU
> > # Host haru.mindrot.org found: line 166
> > haru.mindrot.org ED25519 SHA256:43S30LGUkc2f9dDcLZG6O5KPKtPn7Xw2WkR2vCO/nnU
> >
> On a side note, I see *some* entries in .ssh/known_hosts
> showing the hostname or IP, while others do not. What causes this lack
> of consistency?

Changing between 'HashKnownHosts no' and 'HashKnownHosts yes' without
removing/rebuilding the file. See ssh-keygen -H.
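For the archive, a small demonstration of -H on a throwaway file and a freshly generated key (the hostname is made up):

```shell
# Hash a plain-text known_hosts entry in place with ssh-keygen -H.
D=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$D/k"
printf 'plain.example.test %s\n' "$(cut -d' ' -f1-2 "$D/k.pub")" > "$D/known_hosts"

ssh-keygen -H -f "$D/known_hosts"   # original is saved as known_hosts.old

grep -c '^|1|' "$D/known_hosts"     # hashed entries carry the |1| marker
```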

Re: Human readable .ssh/known_hosts?
Nico Kadel-Garcia wrote:
> Damien Miller wrote:
> > Again, you should read the documentation for CheckHostIP. Turning it off
> > makes known_hosts bind solely to hostnames and, as long as you use names
> > to refer to hosts, avoids any problems caused by IP address reuse.
>
> Have you used AWS? Unless you spend the time and effort, the hostname
> registered in AWS DNS is based on the IP address. Many people do *not*
> use consistent subnets for distinct classes of server or specific OS
> images, so different servers wind up on the same IP address with
> distinct hostkeys based on factors like autoscaling, for which IP
> addresses are not predictable. You can work around it by locking down
> and sharing hostkeys for your OS images, or by segregating subnets
> based on application and corresponding OS image. These present other
> burdens.

I use AWS and therefore will say a few words here. The general
default for an AWS EC2 node is that the hostname will look like this:

root@ip-172-31-29-33:~# hostname
ip-172-31-29-33

And note that this is in the RFC 1918 unroutable private IP space.
That is not, and cannot be, the public IP address. The public IP
address is routed to it through an edge router. It's mapped. (And
always IPv4, since Amazon has been slow to support IPv6 and AFAIK
Elastic IP addresses can still only be IPv4 to this day.)

And if one doesn't do anything, then by default Amazon will provide a
DNS name for it in their domain space. In the case of the above it
will be something like this. [Which I have obscured in an obvious
way. Do you immediately spot why this cannot be valid? :-) ]

ec2-35-168-278-321.compute-1.amazonaws.com

So in an auto-scale-out elastic configuration one might create a node
ip-172-31-29-33 at noon, might destroy that node by evening, and then
tomorrow recreate the node again as ip-172-31-10-42 but with the same
public IP address and associated amazonaws.com DNS name.

This host key collision with the AWS provided DNS name is only a
problem if 1) one uses the AWS provided DNS name and 2) one uses a
randomly generated ssh host key. Avoiding this problem can be done by
avoiding either of those two things. I don't do either of them.

I don't use the AWS-supplied DNS name; I use my own DNS name in my
own domain space. I know the stated problem was lack of control of
the DNS, but note that anyone can register a domain and then have
control of that namespace. I have used my own domain name for my
personal use when working with client systems. Nothing prevents this.

Also, for an elastic node that is just a template-produced machine, I
believe one should override the random ssh host key with a static
host key appropriate for the role it is performing. This can be done
automatically at instantiation time using cloud-init, Ansible, or any
other system configuration tool. Since the node then has a repeatable
host key for its role, it won't mismatch when connecting to it; when
re-created fresh, it will have the same host key it had before. I do
this.
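Bob's "static host key at instantiation" step can be sketched as
cloud-init user-data (the key material and comment are placeholders,
not real keys; cloud-init's cc_ssh module consumes the ssh_keys
mapping):

```yaml
#cloud-config
# Replace the image's freshly generated host keys with the role's
# fixed keypair.
ssh_deletekeys: true
ssh_keys:
  ed25519_private: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...role-specific private key, injected from your secret store...
    -----END OPENSSH PRIVATE KEY-----
  ed25519_public: ssh-ed25519 AAAA...fixed-role-key... role-webserver
```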

In a previous life I managed a bare-metal compute farm. When the
machines are simply srv001 through srv600, all exactly equivalent,
and smashed and recreated as needed, there is no need for them to
have unique host keys; that would be counterproductive. I set them
all the same, as appropriate for their role.

Bob
Re: Human readable .ssh/known_hosts?
On 2020-09-30 20:38, Bob Proulx wrote:
> [...]

I strongly recommend switching to SSH certificates instead. You can
write a program that provisions each node with a certificate based on
its AWS-provided identity information, which is cryptographically
signed by an AWS secret key and so cannot be forged.
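A minimal sketch of that flow with plain ssh-keygen (all names are
illustrative; in a real deployment the CA private key stays inside the
provisioning service, and only the signed certificate reaches the node):

```shell
# One-time: create the host CA keypair (keep the private half secret).
ssh-keygen -q -t ed25519 -N '' -f host_ca

# Per node: sign the node's host public key as a *host* certificate
# (-h), with a certificate identity (-I) and the names clients may
# use to reach it (-n).
ssh-keygen -q -t ed25519 -N '' -f node_key   # stand-in for /etc/ssh/ssh_host_ed25519_key
ssh-keygen -s host_ca -h -I node-i-0123456789abcdef \
    -n node.example.com,ip-172-31-29-33 node_key.pub

# sshd serves node_key-cert.pub via a HostCertificate directive;
# clients then need only one known_hosts line for the whole fleet:
printf '@cert-authority *.example.com %s\n' "$(cut -d' ' -f1-2 host_ca.pub)"
```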

Sincerely,

Demi