Mailing List Archive

Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
SSH List-dwellers:

I'm using OpenSSH in an environment with lots of clusters. These
clusters have IP addresses which are associated with a particular
application rather than with a particular host. Oftentimes
(especially for file transfers) it's helpful to ssh/scp to the IP
address associated with the application rather than the one associated
with the host. However, given that each host has its own host key, we
frequently get:

WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

Which of course panics users the first time they see it, and causes
them to ignore it from the second time onward-- neither of which is a
desired behavior...

I've thought about several solutions to this including:

1) Make all the host keys the same (hundreds of hosts, kind of
diminishes the value of a host key...)
2) Configure ssh to ignore host key changes (harder than you might
think since often new ssh clients are brought in)
3) Give each application its own dedicated ssh and host key (tricky to
set up and monitor, fairly high effort)
4) Tweak OpenSSH so that it will accept any host key from a list
(requires some programming effort, might not be a good idea)
5) Other?

What do you all think of option 4? In particular, I was thinking that
there might be a way to allow hosts on the same subnet to simply
prompt to add the additional key for the same DNS name rather than
popping up the man-in-the-middle warning. If there were multiple keys
present in known_hosts for a given hostname, any of them would be
accepted.
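
In other words, a known_hosts along these lines (names and keys as
placeholders, in the usual comma-separated format) would let either
cluster node answer for the application name:

  <appname>,<app_IP> <host1 key>
  <appname>,<app_IP> <host2 key>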

Could this be done without weakening the host security of OpenSSH?
Should I instead just hold The Great Re-Keying and go with option 1?

I appreciate any advice.

Thanks,

-- Steve Bonds
Re: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
On Thu, 2009-09-17 at 16:53 -0700, Steve Bonds wrote:
> I'm using OpenSSH in an environment with lots of clusters. [...]
>
> What do you all think of option 4? In particular, I was thinking that
> there might be a way to allow hosts on the same subnet to simply
> prompt to add the additional key for the same DNS name rather than
> popping up the man-in-the-middle warning. If there were multiple keys
> present in known_hosts for a given hostname, any of them would be
> accepted.

Maybe the issue doesn't really involve modifying OpenSSH at all. If you
have access to the hosts, wouldn't it be possible to pre-generate a
known_hosts file with all the host keys in your cluster? Then each
client would have every key in its known_hosts, so it wouldn't
matter which host the client was connecting to.

Then if one of the keys changes, you can generate a new known_hosts.
Users are still alerted if a key changes on its own.
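
For example, something like this gathers the whole cluster in one
shot (untested sketch; assumes the member names live in a file called
cluster-hosts.txt, one per line, and that ssh-keyscan is available):

  # collect every member's RSA host key into one file
  ssh-keyscan -t rsa -f cluster-hosts.txt > ssh_known_hosts 2>/dev/null

The result can go in /etc/ssh/ssh_known_hosts on each client, so the
per-user files never enter into it.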

Whatever your final solution, please remember to share with the
class. :]

~k
Re: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
On Fri, Sep 18, 2009 at 10:08 AM, H. Kurth Bemis <kurth-at-kurthbemis.com> wrote:

> Maybe the issue doesn't really involve modifying OpenSSH at all. If you
> have access to the hosts, wouldn't it be possible to pre-generate a
> known_hosts file with all the host keys in your cluster? Then each
> client would have every key in its known_hosts, so it wouldn't
> matter which host the client was connecting to.
>
> Then if one of the keys changes, you can generate a new known_hosts.
> Users are still alerted if a key changes on its own.

I don't have access to all the clients-- but that's not necessarily a
show-stopper. My understanding of how ssh works (and this would be a
great chance to be educated to the contrary) is that it only allows
one host key per hostname or IP, and that if the first key it finds in
known_hosts doesn't match, you get the MitM warning. If this is NOT
how it's supposed to work, I'll try my tests again-- maybe I mangled
the extra keys I put into known_hosts for testing...

> Whatever your final solution, please remember to share with the
> class. :]

Absolutely! I've been known to have the same problem twice, and it's
helpful to be able to go back and search for my solution from the last
time. To say nothing of helping out all the other people who end up
with the same problem. :-)

-- Steve
Re: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
I suspect the problem is that the application-specific IP addresses
move from server to server, and since the host key is tied to the
server rather than to the IP address, the SSH clients get worried.

If you have control over your users' SSH clients, then you could
implement something like #4 below. If you don't, you're hosed: you
have to run separate instances of sshd bound individually to each IP
address -- the management headache you're trying to avoid.
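
For reference, each per-application instance would need its own config
along these lines (paths and addresses invented for illustration):

  # /etc/ssh/sshd_config.app1 -- one sshd per application IP
  ListenAddress 10.0.0.15
  HostKey /etc/ssh/ssh_host_rsa_key.app1
  PidFile /var/run/sshd.app1.pid

and would be launched with its own config file:

  /usr/sbin/sshd -f /etc/ssh/sshd_config.app1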

I would suggest, as an enhancement to #4, that the SSH client be
allowed to define a named pool (of the server hosts) and a list of
"swimmers" (awful name) that would accept any of the host keys for the
pool. That way, you could have multiple distinct pools. Each pool
could be administered separately, and a "swimmer" could only be in a
single pool at a time.
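
Come to think of it, stock ssh_config can already approximate the pool
idea on the client side via HostKeyAlias (a sketch; the hostnames and
alias are made up):

  # ~/.ssh/config -- every pool member shares one known_hosts identity
  Host app1.example.com app2.example.com 10.0.0.15 10.0.0.16
      HostKeyAlias pool-a

known_hosts can then carry one "pool-a <key>" line per member, and any
member's key satisfies the lookup -- though distributing that config
to every client is its own chore.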

I don't see a good way around this that isn't a management nightmare.
Either you're managing sshd bound to an IP address rather than to a
host, or you're managing some client configuration. At least by
managing the sshds, you're not binding yourself to any particular
version of the ssh client, and you can take advantage of any bug fixes
or other enhancements that might occur in the client.


On Sep 18, 2009, at 1:08 PM, H. Kurth Bemis wrote:

> Maybe the issue doesn't really involve modifying OpenSSH at all. If
> you have access to the hosts, wouldn't it be possible to pre-generate
> a known_hosts file with all the host keys in your cluster? [...]
>
> Whatever your final solution, please remember to share with the
> class. :]
>
> ~k
RE: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
I didn't appreciate the flexibility of known_hosts or ssh_known_hosts until following up on Kurth's response, so thanks for that.

However, personally, I lean toward #1. When reading #1, it seems to suggest having one key for the whole site. This I wouldn't do. Rather, whenever adding a second node and creating a cluster, just copy the keys from node one to node two. Each cluster will have unique keys. All applications on a cluster will have the same key. Doing this preserves notifications for possible MitM attacks, but doesn't require coordinated updates across the entire infrastructure.
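
In practice that's just cloning the first node's keys -- a sketch,
assuming stock key locations and root access between nodes (init
script paths vary by platform):

  # run on the node joining the cluster: adopt node1's host identity
  scp root@node1:"/etc/ssh/ssh_host_*" /etc/ssh/
  /etc/init.d/sshd restart   # sshd reads its host keys at startup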

Implementing this solution does involve a great wipe, when you sync all existing clusters, but after that, it becomes merely procedural when you build new clusters or update existing ones. Conceivably, you could update the clusters one at a time, or in small batches, but I would plan this so customers only see one broadcast announcement regarding key changes. One great wipe doesn't inure users, but three mini-wipes on consecutive weekends could lull some.

All that said, some follow-up considerations:

If all your users are on relatively few systems, then implementing a client-side ssh_known_hosts is more straightforward. It would not require users to understand or mess with known_hosts themselves. Similarly, if you have NFS /home directories or CIFS Windows profiles, where the network homogenizes the known_hosts experience, this solution gains favor.

Speaking against updating known_hosts are differing clients on differing platforms. How does PuTTY handle known_hosts?

What are your current procedures for migrating applications? If an application move requires a server change, even if the IP is moved, what do customers do when the key changes?


-- Jess Males


-----Original Message-----
From: listbounce@securityfocus.com [mailto:listbounce@securityfocus.com] On Behalf Of Steve Bonds
Sent: Thursday, September 17, 2009 7:53 PM
To: secureshell@securityfocus.com
Subject: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"

[...]
Re: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
On Fri, Sep 18, 2009 at 11:25 AM, Males, Jess wrote:
> I didn't appreciate the flexibility of known_hosts or ssh_known_hosts until following up on
> Kurth's response, so thanks for that.

Yes, that was a good link. In particular, it appears that at least
part of #4 (and the important part, at that!) may already be
implemented. I should have read a more recent version of the OpenSSH
man pages before posting (too late now!), but it appears that one key
can have multiple aliases associated with it, and ssh will no longer
match just the first one but will keep looking through the whole list.

I just ran another test, being much more careful about my cut and
paste this time, and it seems to work now (whoops!). Here's what my
sample known_hosts looks like:

<host1name>,<host1_IP>,<app1name>,<app1_IP>,<app2name>,<app2_IP> <host1 key>
<host2name>,<host2_IP>,<app1name>,<app1_IP>,<app2name>,<app2_IP> <host2 key>
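
To double-check which lines match a given name, ssh-keygen can search
the file for you; with the placeholder name from the sample above,

  ssh-keygen -F <app1name> -f known_hosts

prints every matching line.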

> However, personally, I lean toward #1.  When reading #1, it seems to suggest having one
> key for the whole site.  This I wouldn't do.  Rather, whenever adding a second node and creating a
> cluster, just copy the keys from node one to node two.  Each cluster will have unique keys.  All
> applications on a cluster will have the same key.  Doing this preserves notifications for possible
> MitM attacks, but doesn't require coordinated updates across the entire infrastructure.

Since we frequently have hosts that move between clusters, I figured
that I'd probably have to keep the keys the same for all hosts. While
I could change a host's key when it changes clusters, that would
trigger the ol' MitM warning...

> All that said, some follow-up considerations:
>
> If all your users are on relatively few systems, then implementing a client-side ssh_known_hosts is more
> straightforward.  It would not require users to understand or mess with known_hosts themselves.
> Similarly, if you have NFS /home directories or CIFS Windows profiles, where the network homogenizes the
> known_hosts experience, this solution gains favor.

Sadly, most of what I have is a hodge-podge across many systems.
However, my users are savvier than most and can mess with known_hosts
when necessary. I can also hit about 80% of the clients using
centralized ssh_known_hosts files.

> Speaking against updating known_hosts are differing clients on differing platforms.  How does PuTTY
> handle known_hosts?

You'll love this-- it stores them in the Windows registry.

http://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-hostkeys

> What are your current procedures for migrating applications?  If an application move requires
> a server change, even if the IP is moved, what do customers do when the key changes?

They usually either panic or remove any key that generates a MitM
warning... ever. Again, not highly desirable behavior in either case.
:-)

I'm now leaning towards using scripted host-key scans to build some
global ssh_known_hosts files with the cluster IPs listed on the same
lines as the host keys. The script will have to collect some cluster
information, so it won't be quite as simple as the four-liners
floating around. :-)
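
A rough first cut (untested; clusters.txt and its host/IP/alias layout
are invented for illustration):

  #!/bin/sh
  # Build a global ssh_known_hosts where each host's key line also
  # lists the application names/IPs that can land on that host.
  # clusters.txt format (made up): host host_IP alias1,alias2,...
  while read host ip aliases; do
      # ssh-keyscan prints "name keytype base64key"; keep the key part
      key=$(ssh-keyscan -t rsa "$ip" 2>/dev/null | cut -d' ' -f2-)
      [ -n "$key" ] && echo "$host,$ip,$aliases $key"
  done < clusters.txt > ssh_known_hosts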

-- Steve