Remote forwarding gnupg extra-socket?
It used to be that we could tell gpg-agent to create an extra-socket
(with restricted functionality) and then tell ssh to forward that socket
to a remote machine, giving gpg agent-forwarding functionality
equivalent to that of ssh.

In fact, you can still configure all of the above, but doing so is now
next to useless. Since v2.1 there is no way to tell the remote gpg
instance to use a non-default socket, and there is no reliable way to
tell ssh to mount the forwarded socket on the default location -- the
default is now under XDG_RUNTIME_DIR which is unpredictable in
principle, and ssh does not allow the use of remote envars in
RemoteForward directives.
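
For concreteness, the kind of setup I mean looks something like the
sketch below (the remote uid 1000 is illustrative, and is exactly the
part that cannot be known in advance; Unix-domain socket forwarding
needs OpenSSH >= 6.7):

```
ssh -R /run/user/1000/gnupg/S.gpg-agent:"$(gpgconf --list-dirs agent-extra-socket)" \
    user@remote
```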

Not only that, but if you are already logged in to the remote machine by
other means you MUST NOT use the default socket location lest you break
the existing session.

Can we please PLEASE have GPG_AGENT_SOCK back in the short term?

In the long term, it might be more productive to overload ssh-agent. IFF
the forwarded ssh-agent is really a gnupg-agent with enable-ssh-support,
it could optionally support a protocol extension allowing it to tunnel
extra-socket commands back to the originating gpg-agent over the
ssh-agent connection. The gpg on the remote machine could then test for
this protocol extension and if found use the ssh-agent socket instead.
This would remove the need for any custom ssh shenanigans (which only
sort-of work, even now).

The ssh-agent protocol allows for vendor-specific protocol extensions,
which would appear to be perfectly suited for this:

https://tools.ietf.org/id/draft-miller-ssh-agent-01.html#rfc.section.4.7

What do we think?

--
Andrew Gallagher
Re: Remote forwarding gnupg extra-socket?
Hi Andrew--

On Mon 2020-03-09 13:02:55 +0000, Andrew Gallagher wrote:
> It used to be that we could tell gpg-agent to create an extra-socket
> (with restricted functionality) and then tell ssh to forward that socket
> to a remote machine, giving gpg agent-forwarding functionality
> equivalent to that of ssh.

the extra-socket is created automatically these days, and doesn't need
to be enabled explicitly.

> In fact, you can still configure all of the above, but doing so is now
> next to useless. Since v2.1 there is no way to tell the remote gpg
> instance to use a non-default socket, and there is no reliable way to
> tell ssh to mount the forwarded socket on the default location -- the
> default is now under XDG_RUNTIME_DIR which is unpredictable in
> principle, and ssh does not allow the use of remote envars in
> RemoteForward directives.

There has been some discussion about this over on the SSH bugtracker:

https://bugzilla.mindrot.org/show_bug.cgi?id=3018
https://bugzilla.mindrot.org/show_bug.cgi?id=2740
https://bugzilla.mindrot.org/show_bug.cgi?id=3014
https://bugzilla.mindrot.org/show_bug.cgi?id=3140

> Can we please PLEASE have GPG_AGENT_SOCK back in the short term?

I'm not convinced that this is a good idea; more configurability means
more ways that people can break their setups, and debugging is even
harder.

> In the long term, it might be more productive to overload ssh-agent. IFF
> the forwarded ssh-agent is really a gnupg-agent with enable-ssh-support,
> it could optionally support a protocol extension allowing it to tunnel
> extra-socket commands back to the originating gpg-agent over the
> ssh-agent connection. The gpg on the remote machine could then test for
> this protocol extension and if found use the ssh-agent socket instead.
> This would remove the need for any custom ssh shenanigans (which only
> sort-of work, even now).
>
> The ssh-agent protocol allows for vendor-specific protocol extensions,
> which would appear to be perfectly suited for this:
>
> https://tools.ietf.org/id/draft-miller-ssh-agent-01.html#rfc.section.4.7

this is a very interesting suggestion, but i'm not sure exactly how it
would work. can you describe it in more detail? At the beginning of
this message, it looks like you were talking about forwarding the
extra-socket, and now it looks like you're talking about forwarding the
ssh-agent emulation. Are you talking about the same concern here?
they're (at least subtly) different.

Also, how is the remote gpg-agent supposed to know that there is some
other backend it should talk to (for either ssh-agent or any of the
gpg-agent sockets)?

--dkg
Re: Remote forwarding gnupg extra-socket?
Hi, Daniel.

On 27/03/2020 04:09, Daniel Kahn Gillmor wrote:
>
> There has been some discussion about this over on the SSH bugtracker:

I think adding token support to ssh's forwarding directives is a good
plan. Thanks.

>> Can we please PLEASE have GPG_AGENT_SOCK back in the short term?
>
> I'm not convinced that this is a good idea; more configurability means
> more ways that people can break their setups, and debugging is even
> harder.

I absolutely agree that removing this is a good idea in the medium/long
term; the problem is that in the short term we have lost functionality.

>> The ssh-agent protocol allows for vendor-specific protocol extensions,
>> which would appear to be perfectly suited for this:
>>
>> https://tools.ietf.org/id/draft-miller-ssh-agent-01.html#rfc.section.4.7
>
> this is a very interesting suggestion, but i'm not sure exactly how it
> would work. can you describe it in more detail? At the beginning of
> this message, it looks like you were talking about forwarding the
> extra-socket, and now it looks like you're talking about forwarding the
> ssh-agent emulation. Are you talking about the same concern here?
> they're (at least subtly) different.

I'm suggesting that the ssh-agent emulation protocol could encapsulate
the extra-socket protocol using a vendor extension, removing any need to
forward the extra-socket.

So in pseudo-protocol, with "gnupg-agent@gnupg.org" as the vendor
extension, and the encoded gnupg-agent messages serialised as an array
of octets:

```
SSH_AGENTC_EXTENSION "gnupg-agent@gnupg.org" gnupg_agent_request
```

This would return:

```
SSH_AGENT_SUCCESS gnupg_agent_response
```

or

```
SSH_AGENT_EXTENSION_FAILURE gnupg_agent_response
```
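
On the wire (per the framing in the draft above) that would look
something like the following untested sketch. Nothing implements the
"gnupg-agent@gnupg.org" extension today, and treating the inner
gnupg-agent request as raw trailing octets is my assumption; it
requires bash and socat:

```
#!/usr/bin/env bash
# Untested sketch: frame one gpg-agent (Assuan) request inside an
# SSH_AGENTC_EXTENSION message (type byte 27) and send it to the agent.
# A current agent, lacking the extension, will answer SSH_AGENT_FAILURE.

u32() {  # emit a big-endian uint32 as four raw bytes
  printf '%b' "$(printf '\\x%02x\\x%02x\\x%02x\\x%02x' \
    $(($1>>24&255)) $(($1>>16&255)) $(($1>>8&255)) $(($1&255)))"
}

ext='gnupg-agent@gnupg.org'   # the proposed vendor extension name
req=$'GETINFO version\n'      # an example inner gnupg-agent request

{ # uint32 total length, then: byte 27, string ext name, inner octets
  u32 $(( 1 + 4 + ${#ext} + ${#req} ))
  printf '\x1b'
  u32 "${#ext}"; printf '%s' "$ext"
  printf '%s' "$req"
} | socat - UNIX-CONNECT:"$SSH_AUTH_SOCK" | xxd
```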

> Also, how is the remote gpg-agent supposed to know that there is some
> other backend it should talk to (for either ssh-agent or any of the
> gpg-agent sockets)?

It wouldn't be a remote gpg-agent, it would be a remote gpg client. If
it detected SSH_AUTH_SOCK in its environment, it would make an ssh `query
extension` request on SSH_AUTH_SOCK, as per
https://tools.ietf.org/id/draft-miller-ssh-agent-01.html#rfc.section.4.7.1

```
SSH_AGENTC_EXTENSION "query"
```

To which a successful reply would be something like

```
SSH_AGENT_SUCCESS "gnupg-agent@gnupg.org"
```

Otherwise it would fall back on normal behaviour.
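
The query can even be poked at by hand today; an agent without
extension support just answers SSH_AGENT_FAILURE (byte 5). A sketch,
assuming bash, socat and xxd are available:

```
# 10 bytes: uint32 length, byte 27 = SSH_AGENTC_EXTENSION, then the
# string "query" encoded as uint32 length 5 plus the five octets
printf '\x00\x00\x00\x0a\x1b\x00\x00\x00\x05query' |
    socat - UNIX-CONNECT:"$SSH_AUTH_SOCK" | xxd
# an extension-aware agent would instead reply SSH_AGENT_SUCCESS (6)
# followed by the names of its supported extensions
```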

--
Andrew Gallagher
Re: Remote forwarding gnupg extra-socket?
On Mon, 9 Mar 2020 13:02, Andrew Gallagher said:

> In fact, you can still configure all of the above, but doing so is now
> next to useless. Since v2.1 there is no way to tell the remote gpg
> instance to use a non-default socket, and there is no reliable way to

Actually --extra-socket was introduced with 2.1.1 and the /var/run
standard location was introduced with 2.1.13. So I don't understand why
you think anything has changed for --extra-socket except that it is now
always generated unless you configure "extra-socket /dev/null".


> tell ssh to mount the forwarded socket on the default location -- the
> default is now under XDG_RUNTIME_DIR which is unpredictable in
> principle, and ssh does not allow the use of remote envars in

XDG_RUNTIME_DIR is not used:

/* It has been suggested to first check XDG_RUNTIME_DIR envvar.
 * However, the specs state that the lifetime of the directory MUST
 * be bound to the user being logged in. Now GnuPG may also be run
 * as a background process with no (desktop) user logged in. Thus
 * we better don't do that. */

We use /run/user/<uid> instead.

[Side note: It seems that this directory is sometimes used for
XDG_RUNTIME_DIR or at least handled in that way. Which is a pretty
annoying misfeature. I often fall into this trap:

foo-box$ ssh -X bar-box
bar-box$ ssh foo-box
foo-box$ exit
bar-box$

After the "exit" elogind removes /run/user/<uid> and futher work (in
paricular new ssh connections) stop working because SSH_AUTH_SOCK
points to a now non-existing socket.
Mitigation is "loginctl enable-linger".
]


> Not only that, but if you are already logged in to the remote machine by
> other means you MUST NOT use the default socket location lest you break
> the existing session.

You mean you are running a gpg-agent on the remote box as well? Right,
in this case you should use a different home directory for the remote use
of gpg-agent. gpgconf has options to help you with that. And of course
you should use

--no-autostart

Do not start the gpg-agent or the dirmngr if it has not yet been
started and its service is required. This option is mostly useful
on machines where the connection to gpg-agent has been redirected
to another machine. If dirmngr is required on the remote machine,
it may be started manually using gpgconf --launch dirmngr.

with your remote gpg.
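
I.e. on the remote box put something like this into the configuration,
so that gpg never starts a competing agent there:

  # remote ~/.gnupg/gpg.conf
  no-autostart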

> Can we please PLEASE have GPG_AGENT_SOCK back in the short term?

No, the removal solved so many problems that it will definitely not be
added again. The rule is: one gnupg homedir - one socket directory.

> In the long term, it might be more productive to overload ssh-agent. IFF
> the forwarded ssh-agent is really a gnupg-agent with enable-ssh-support,

That is a misunderstanding: gpg-agent does not emulate ssh-agent but
implements the ssh-agent-protocol; in that protocol gpg-agent is the
server and ssh is the client.

> The ssh-agent protocol allows for vendor-specific protocol extensions,
> which would appear to be perfectly suited for this:

Yes, it would be nice if the client side (ssh) would send certain
environment variables via the ssh-agent-protocol, so that gpg-agent
knows where to pop up the pinentry (that is what gpg does).
It would also be very nice if ssh could be extended to call a configured
tool if it does not find an agent and then try again. This way we would
get auto start also via ssh.


Salam-Shalom,

Werner


--
Thoughts are free. Exceptions are regulated by a federal law.
Re: Remote forwarding gnupg extra-socket?
On 27/03/2020 13:20, Werner Koch wrote:
>
> Actually --extra-socket was introduced with 2.1.1 and the /var/run
> standard location was introduced with 2.1.13. So I don't understand why
> you think anything has changed for --extra-socket except that it is now
>> always generated unless you configure "extra-socket /dev/null".

It's the standard location that causes the issue, so it is since 2.1.13,
yes.

> XDG_RUNTIME_DIR is not used:
...
> We use /run/user/<uid> instead.

OK, but this is unpredictable for the same reasons that XDG_RUNTIME_DIR
is unpredictable - you cannot know in advance what $UID will be on the
remote side, so you don't know where to tell ssh to create the extra
socket.

> You mean you are running a gpg-agent on the remote box as well?

Maybe, depending on whether I left myself logged in on the physical console.

> Right,
> in this case you should use a different home directory for the remote use
> of gpg-agent.

Yes, but won't all gpgs on the remote machine expect the extra-socket to
be under /run/user/$UID, regardless of $GNUPGHOME? And even if we solve
the local vs remote issue, we don't solve the issue of two simultaneous
remote connections, unless we create many $GNUPGHOMEs and track them
manually (a slightly contrived example, but it shows that the "solution"
is only a workaround).

The extra-socket only works reliably if it is unique per-session, but it
is not stored in a per-session location.

> gpg-agent does not emulate ssh-agent but
> implements the ssh-agent-protocol

Yes, that's what I meant. Apologies for the sloppy terminology.

>> The ssh-agent protocol allows for vendor-specific protocol extensions,
>> which would appear to be perfectly suited for this:
>
> Yes, it would be nice if the client side (ssh) would send certain
> environment variables via the ssh-agent-protocol, so that gpg-agent
> knows where to pop up the pinentry (that is what gpg does).
> It would also be very nice if ssh could be extended to call a configured
> tool if it does not find an agent and then try again. This way we would
> get auto start also via ssh.

Sending environment variables would require code changes to ssh(d),
whereas vendor extensions would only require changes to gpg(-agent) -
they are treated as black boxes by ssh(d) and passed verbatim.

--
Andrew Gallagher
Re: Remote forwarding gnupg extra-socket?
On Fri, 27 Mar 2020 14:41, Andrew Gallagher said:
> On 27/03/2020 13:20, Werner Koch wrote:
>>
>> Actually --extra-socket was introduced with 2.1.1 and the /var/run
>> standard location was introduced with 2.1.13. So I don't understand why
>> you think anything has changed for --extra-socket except that it is now
>> always generated unless you configure "extra-socket /dev/null".
>
> It's the standard location that causes the issue, so it is since 2.1.13,
> yes.
>
>> XDG_RUNTIME_DIR is not used:
> ...
>> We use /run/user/<uid> instead.
>
> OK, but this is unpredictable for the same reasons that XDG_RUNTIME_DIR
> is unpredictable - you cannot know in advance what $UID will be on the
> remote side, so you don't know where to tell ssh to create the extra
> socket.

What's wrong with

$ ssh kerckhoffs.g10code.com gpgconf --list-dirs agent-ssh-socket
/run/user/1000/gnupg/S.gpg-agent.ssh
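
You can also script the whole thing around that; an untested sketch
(the sshd option StreamLocalBindUnlink helps with stale sockets, and
this assumes no gpg-agent is running on the remote box):

  #!/bin/sh
  # ask the remote box for its standard socket path, then forward the
  # local extra-socket onto it
  remote="$1"
  rsock=$(ssh "$remote" gpgconf --list-dirs agent-socket)
  lsock=$(gpgconf --list-dirs agent-extra-socket)
  exec ssh -R "$rsock:$lsock" "$remote"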

>> You mean you are running a gpg-agent on the remote box as well?
>
> Maybe, depending on whether I left myself logged in on the physical console.

The standard use case is to run gpg on a server which you don't trust to
hold your private key. When ssh-ing to another desktop (I now often
have to do this to reach my other office) you need to script something.
Yes, it would be cool if you could advise ssh with a simple option to send
some meta information to the server to be evaluated in .bashrc; but you
can do this also with an ssh wrapper and a dedicated envvar you allow in
sshd_config's AcceptEnv option.
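
For example (the variable name is made up for illustration):

  # client side, ~/.ssh/config:
  #     Host otherdesk
  #         SendEnv GNUPG_FORWARDED
  # server side, sshd_config:
  #     AcceptEnv GNUPG_FORWARDED
  GNUPG_FORWARDED=1 ssh otherdesk
  # the remote .bashrc can then test "$GNUPG_FORWARDED" and, say,
  # switch to a dedicated GNUPGHOME for the forwarded session.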


> Yes, but won't all gpgs on the remote machine expect the extra-socket to
> be under /run/user/$UID, regardless of $GNUPGHOME? And even if we solve

There is a mechanism which allows this. For example if I do a manual
test in a dedicated homedir:

mybox:~/b/gnupg/test-card(GnuPGTest)$ gpgconf --list-dirs agent-ssh-socket
/run/user/1000/gnupg/d.ex81qn9mjkp3y5c94htkx8hy/S.gpg-agent.ssh

That is, the homedir is hashed and the hash appended to the standard
socket dir.
Although things work automagically, gpgconf has two options to support
this:

--create-socketdir
Create a directory for sockets below /run/user or /var/run/user.
This command is only required if a non-default home directory is
used and the /run based sockets shall be used. For the default home
directory GnuPG creates a directory on the fly.

--remove-socketdir
Remove a directory created with command --create-socketdir.

[I just noticed that I should update the description; it is actually
only needed if you want to create the directory prior to starting any
GnuPG daemon.]
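
Put together, preparing a dedicated remote homedir might look like
this (uid and hash will of course differ):

  $ mkdir -m 700 ~/remote-gnupg
  $ export GNUPGHOME=~/remote-gnupg
  $ gpgconf --create-socketdir
  $ gpgconf --list-dirs agent-socket
  /run/user/1000/gnupg/d.ex81qn9mjkp3y5c94htkx8hy/S.gpg-agent
  $ gpgconf --kill gpg-agent      # when finished
  $ gpgconf --remove-socketdir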

> The extra-socket only works reliably if it is unique per-session, but it
> is not stored in a per-session location.

A session is defined by the GNUPGHOME envar or the --homedir option. That
works the same on all platforms.

> Sending environment variables would require code changes to ssh(d),
> whereas vendor extensions would only require changes to gpg(-agent) -
> they are treated as black boxes by ssh(d) and passed verbatim.

And how do you set these vendor extensions with ssh(1)?


Salam-Shalom,

Werner

--
Thoughts are free. Exceptions are regulated by a federal law.