Mailing List Archive

Upgrading from gpg1 to gpg2: lots of trouble, need help
Hi,

Happy Holidays!

I'm migrating from gpg1 to gpg2 and am having lots of
trouble. I apologise for the long email, but it's been a
saga, others may encounter the same problems I did, and I
have some (possibly stupid) suggestions and some questions
that I need answered.

For most of my decryption use cases I can't use a
pinentry program. Instead, I have to start gpg-agent in
advance (despite what its manpage says) with
--allow-preset-passphrase so that I can then use
gpg-preset-passphrase so that when gpg is run later, it
can decrypt unaided.

Previously, on ubuntu14 and debian8, with (I think)
gpg-1.4.x and gpg-agent-2.0.x, it worked fine, but I had
great trouble getting it to work on ubuntu16 (with
gpg-2.1.11), debian9 (with gpg-2.1.18) and macos-10.11.6
(with macports gpg-2.2.3).

Suggestion 1
------------

Some of my troubles were due to gpg-preset-passphrase
needing the keygrip and no longer working with the
fingerprint as the cache id. It would accept the
fingerprint without error but when I tried to decrypt,
gpg would just hang there until I killed it. It wasn't
until I discovered that I needed to use the keygrip
that gpg could decrypt. This happened on the mac with
gpg-2.2.3.

If gpg-preset-passphrase doesn't work with fingerprints
anymore, maybe it could identify when a fingerprint has
been used and let the user know that they need to use
the keygrip instead. An error message to that effect
would have saved me a lot of time. Or it could just
fetch the keygrip that corresponds to the supplied
fingerprint. But maybe this isn't possible.
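
For anyone else who hits this, here's roughly how I find
the keygrip now (the user id is a placeholder, and
gpg-preset-passphrase reads the passphrase from stdin):

gpg2 --fingerprint --with-keygrip user@domain.org
# note the last "Keygrip = ..." line (the encryption subkey), then:
/usr/lib/gnupg2/gpg-preset-passphrase --preset <keygrip>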

Suggestion 2
------------

I think much of the rest of my troubles came from the
keyring migration not having happened before gpg tried to
decrypt anything. I remember that at some point, while
testing something manually, the keyring migration happened
and then gpg started working. But it's all a bit of a
blur: I spent several days and nights on this and my brain
was quite frazzled at the time. Keyring migration seems to
happen automatically when performing some operations but
not all, possibly because I'm using gpg-preset-passphrase.
Maybe it could be triggered in more places.
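
In case it helps anyone, I believe the migration can be
forced by importing the old secret keyring explicitly
(treat this as a sketch; I'm not certain it's the
officially sanctioned way):

gpg2 --import ~/.gnupg/secring.gpg
# listing the secret keys also seems to trigger it on some versions:
gpg2 --list-secret-keys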

And another thing...
--------------------

I also discovered that I need to disable systemd's
handling of gpg-agent (on debian9 with gpg-2.1.18) if I
want to control when gpg-agent starts and stops and which
options are passed to it. I know this is not recommended,
but I've had so much trouble in the past with systemd
deciding that a "user" has "logged out" and then "cleaning
up" that I just can't bring myself to trust it to know
what it's doing.

I've disabled systemd's handling of gpg-agent on the
debian9 hosts with:

systemctl --global mask --now gpg-agent.service
systemctl --global mask --now gpg-agent.socket
systemctl --global mask --now gpg-agent-ssh.socket
systemctl --global mask --now gpg-agent-extra.socket
systemctl --global mask --now gpg-agent-browser.socket

(from /usr/share/doc/gnupg-agent/README.Debian)

I know someone on the internet has expressed unhappiness
about people doing this, and reluctance to support people
who do it, but please just pretend that it's a non-systemd
system. Not everything is Linux, after all. Gnupg should
still work.

Question 1
----------

The most important use case I have is where a host will
ssh to another host which performs decryption on its
behalf. The second host has to be prepared first by me
starting a gpg-agent and presetting the passphrase for
a limited time so that it is ready to decrypt when the
other host connects.

On the decrypting host, I run a command that does
something like:

sudo -u thing --set-home -- gpgconf --kill gpg-agent

screen -- \
sudo -u thing --set-home -- \
gpg-agent --homedir /etc/thing/.gnupg \
--allow-preset-passphrase \
--default-cache-ttl 3600 \
--max-cache-ttl 3600 \
--daemon -- \
bash --login

(Then /etc/thing/.bash_login runs gpg-preset-passphrase)

While these screen/sudo/gpg-agent/bash processes are
running, the first host can connect with ssh and run a
single command that will decrypt and retrieve some
data. I can detach from the screen session knowing that
this access will last for 3600 seconds or until I come
back and terminate the screen/sudo/gpg-agent/bash
session.
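
(The single command that the first host runs over ssh is
something like this; the uid, hostname and path are
placeholders:)

ssh thing@decrypting-host gpg2 --batch -d /etc/thing/data.gpg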

I've managed to get this working again on the ubuntu16
host with gpg-2.1.11, but on the debian9 host with
gpg-2.1.18 (with systemd handling of gpg-agent disabled),
it doesn't work. If I run the decryption command from
within the screen/bash session, it works, and the only
gpg-agent process is the one created by the above
commands:

gpg-agent --homedir /etc/store/.gnupg --allow-preset-passphrase \
--default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- \
/bin/bash --login

But as soon as the first host connects via ssh (and
tries to run gpg), there is a new gpg-agent process as
well as the one above:

gpg-agent --homedir /etc/store/.gnupg --use-standard-socket --daemon

And the decryption no longer works from the ssh
connection or from the screen/sudo/gpg-agent/bash
session.

I would have thought that, now that the use of the
standard socket is mandatory, this wouldn't happen. It
seems as though, when the ssh connection ran gpg, it
ignored the existing gpg-agent and started a new gpg-agent
which took over the standard socket. But maybe not: there
are several standard sockets, including what looks like an
ssh-specific one:

0 srwx------ 1 thing thing 0 Dec 18 14:23 S.gpg-agent
0 srwx------ 1 thing thing 0 Dec 18 14:23 S.gpg-agent.browser
0 srwx------ 1 thing thing 0 Dec 18 14:23 S.gpg-agent.extra
0 srwx------ 1 thing thing 0 Dec 18 14:23 S.gpg-agent.ssh

On the ubuntu16 host where this is working, there is
only the S.gpg-agent socket.

Previously, with gpg-agent-2.0.x, I would tell gpg-agent
to write its environment variables to a file that the
incoming ssh connection could use to connect to that
gpg-agent. That's no longer possible, and it seems that
gpg is starting a separate gpg-agent with a separate
socket for the incoming ssh connection.
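
(What I used to do with gpg-agent-2.0.x was roughly this,
from memory, so treat it as a sketch:)

gpg-agent --daemon --allow-preset-passphrase --write-env-file ~/.gpg-agent-info
# and the incoming ssh connection would then do:
. ~/.gpg-agent-info
export GPG_AGENT_INFO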

Can anyone help me to get this situation working on the
debian9 host?

Would this work?

ln -s S.gpg-agent S.gpg-agent.ssh

or is that just wishful/deranged thinking?

I'm delighted (i.e. able to stop panicking) that I
managed to get it working on the ubuntu16 host but I
really need to have this working on multiple hosts and
all the others are recently upgraded debian9 hosts
where it doesn't work. And eventually, the ubuntu host
will no doubt get a version of gpg that behaves like
the one on the debian9 host.

I really really need to get this working.

Any help would be greatly appreciated.

Question 2
----------

There is another thing that I don't understand that I'd
like to. I'd like to be able to tell, before running
gpg, whether or not gpg-agent currently has a cached
passphrase. I found a method on the internet that
became this:

gpg_userid="user@domain.org"
gpg_cache_id="`gpg2 --fingerprint --with-keygrip $gpg_userid | \
grep '^ ' | tail -1 | sed -e 's/^.*= *//'`"
echo "GET_PASSPHRASE --no-ask $gpg_cache_id Error Prompt Desc" | \
gpg-connect-agent --no-autostart | grep -q OK && echo OK || echo ERR

And it seemed to work OK until I realised that what it
reported did not always match whether or not gpg would
actually be able to decrypt unaided. That wasted a lot of
my time too. :-)

I set up something like the following shell functions:

export GPG_TTY="`tty`"

[ -d /usr/lib/gnupg2 ] && PATH="$PATH:/usr/lib/gnupg2" # debian/ubuntu
[ -d /opt/local/libexec ] && PATH="$PATH:/opt/local/libexec" # macports

gpg_userid="user@domain.org"
gpg_keygrip="`gpg2 --fingerprint --with-keygrip $gpg_userid | \
grep '^ ' | tail -1 | sed -e 's/^.*= *//'`"

function gpgcheck()
{
echo "GET_PASSPHRASE --no-ask $gpg_keygrip Error Prompt Desc" | \
gpg-connect-agent --no-autostart | grep -q OK && echo OK || echo ERR
ps auxwww | grep '[g]pg-agent'
}

function gpgstart()
{
gpgconf --kill gpg-agent
gpg-agent --allow-preset-passphrase --default-cache-ttl 3600 \
--max-cache-ttl 3600 --daemon
askpass | gpg-preset-passphrase --preset "$gpg_keygrip"
}

function gpgstop()
{
gpgconf --kill gpg-agent
}

And sure enough, after gpgstart, gpgcheck would report
that the passphrase was present and gpg could decrypt
unaided; but at some later point, gpgcheck would report
that the passphrase wasn't present even though gpg could
still decrypt unaided. It would be nice to have an
explanation of this behaviour, and to know how to reliably
check whether or not gpg-agent has the passphrase cached.
But it's not essential: as long as I know that I can't
trust this method, I know not to rely on it. It would just
be nice to have a method that I could rely on.

This might have something to do with the multiple
standard sockets being used by different processes.
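
(Since then, I've read that the agent's KEYINFO command
reports a "cached" flag that might be more reliable than
GET_PASSPHRASE; I haven't verified this on all versions:)

echo "KEYINFO $gpg_keygrip" | gpg-connect-agent --no-autostart
# the reply should look something like:
#   S KEYINFO <keygrip> D - - 1 P - - -
# where that '1' means the passphrase is cached and '-' means it isn't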


Question 3
----------

I have another use case that I also haven't managed to
get working. This is a new use case that I didn't have
working before migrating to gpg2. The above
gpgstart/gpgcheck/gpgstop functions were created while
trying to get this working.

I use ansible to do things on a small number of
servers. Each server has a different sudo password.
Ansible on its own doesn't cater for this situation but
it's possible to get ansible to run a program to get
sudo passwords for each host. I've set up the "pass"
program to store these passwords in individual
gpg-encrypted files so that ansible can fetch them
automatically.
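
(For each host, ansible therefore ends up invoking
something equivalent to the following, which is where all
the parallel gpg processes below come from; s2 is one of
my hosts:)

pass show ansible/s2
# which in turn runs something like the gpg2 -d command
# shown in the process listing later in this question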

Since ansible will start up many processes in parallel,
all needing to decrypt a sudo password without my
interaction, a pinentry program can't be used. I need
to preset the passphrase before running ansible but
when I do, it doesn't work. I run gpgstart and enter
the passphrase. Then I run gpgcheck and it reports that
the passphrase is present. Then I run ansible e.g.:

ansible all -b -m shell -a "echo yes"

However, it seems that as soon as I start ansible, the
gpg-agent loses the passphrase and I'm bombarded with
pinentry-curses processes. It all gets a bit crazy and
at best, my xterm's tty settings are all messed up
(i.e. if I type anything afterwards, it's all
gibberish) and I have to kill the xterm. At worst, my
laptop ends up filled with pinentry-curses processes,
all hammering the CPU, and I have to kill them as well
or force a shutdown.

Just before I start ansible, gpgcheck shows OK. As soon
as I start ansible, gpgcheck (in another xterm) shows
ERR (but the agent is still running). I know I said
that what gpgcheck reports doesn't always reflect gpg's
ability to access the passphrase to decrypt but in this
case (at least soon after gpgstart), it does seem to be
telling the truth.

This is on macos-10.11.6 with macports gpg-2.2.3.

Does anyone have any idea what might be going wrong
here?

An additional gpg-agent process does get automatically
started while this is happening:

gpg-agent --homedir /Users/me/.gnupg --use-standard-socket --daemon

Which no doubt has something to do with it. But I
don't understand why it refused to use the gpg-agent
process that already existed.

I just tried it again and managed to see this error
message:

gpg: waiting for lock (held by 20749)

The process with pid 20749 is:

gpg2 -d --quiet --yes --compress-algo=none --no-encrypt-to \
--batch --use-agent /Users/me/.password-store/ansible/s2.gpg

That would have been started by "pass".

And eventually I saw: "gpg: decryption failed: No secret key"

Some of ansible's subprocesses will work and some won't,
so maybe some are getting the passphrase before it
disappears.

I saw this in gpg-agent's manpage:

SIGHUP This signal flushes all cached passphrases...

Is it possible that something here is sending gpg-agent
a SIGHUP?

If so, is there a way to prevent that?

Or maybe it has to do with the multiple standard
sockets as well.


Question 4
----------

One last use case. I have a .vimrc config that
automatically decrypts gpg files upon opening and
encrypts them upon writing. With gpg1, I could enter
the passphrase each time I opened an encrypted file and
it was fine. Now that the use of gpg-agent is mandatory
and pinentry programs always get used, I have a
problem. As far as I am aware, no single pinentry
program will work for all of my uses of vim. I use vim
in xterm or Terminal, sometimes locally, sometimes over
ssh. I also use macports MacVim in the mac windowing
system and an X11 gvim in fullscreen X11.

I'd rather not use pinentry-mac because it will take me
out of fullscreen X11 mode if I'm there. And if I'm
logged into the host via ssh from elsewhere I imagine
it probably won't work at all. I don't want to use the
curses pinentry either because while it will work
inside vim in an xterm, it won't work in MacVim or in
an X11 gvim window which is my most common way of using
vim. What I'd really like is either the ability to not
use gpg-agent (unlikely) or a non-gui, non-curses pinentry
program that just prints a prompt to stdout and reads the
passphrase from stdin. That would work in vim and gvim and
MacVim windows whether I am logged in locally or remotely.
Macports won't let me install gpg1 and gpg2 at the same
time, and I get the impression that debian doesn't want me
installing gpg1 either. It says it's deprecated, which is
a great shame.

So if anyone knows of a non-gui non-curses pinentry
program, please let me know (preferably one that doesn't
hammer the CPU). I've had to resort to presetting a
passphrase in a gpg-agent before editing a gpg-encrypted
file, which is OK, but I'd rather be able to enter the
passphrase from within gvim like I used to.
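
(Since writing the above, I've seen mention of a
"loopback" pinentry mode in gpg-2.1+ that might be what
I'm after; a sketch, assuming gpg-agent.conf contains
allow-loopback-pinentry, which I gather some 2.1.x
versions require:)

gpg2 --pinentry-mode loopback -d file.gpg
# or, reading the passphrase from stdin rather than the terminal:
echo "$passphrase" | gpg2 --pinentry-mode loopback --passphrase-fd 0 -d file.gpg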



Thanks in advance,
raf


_______________________________________________
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users

Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
On Mon 2017-12-18 20:01:02 +1100, gnupg@raf.org wrote:
> For most of my decryption use cases I can't use a
> pinentry program. Instead, I have to start gpg-agent in
> advance (despite what its manpage says) with
> --allow-preset-passphrase so that I can then use
> gpg-preset-passphrase so that when gpg is run later, it
> can decrypt unaided.

can you explain more about this use case? it sounds to me like you
might prefer to just keep your secret keys without a passphrase in the
first place.

> I also discovered that I need to disable systemd's
> handling of gpg-agent (on debian9 with gpg-2.1.18) if I
> want to control when gpg-agent starts and stops and
> which options are passed to it. I know this is not
> recommended but I've had too much trouble in the past
> with systemd thinking that it knows when a "user" has
> "logged out" and then deciding to "clean up" causing me
> masses of grief that I just can't bring myself to trust
> it to know what it's doing.
>
> I've disabled systemd's handling of gpg-agent on the
> debian9 hosts with:
>
> systemctl --global mask --now gpg-agent.service
> systemctl --global mask --now gpg-agent.socket
> systemctl --global mask --now gpg-agent-ssh.socket
> systemctl --global mask --now gpg-agent-extra.socket
> systemctl --global mask --now gpg-agent-browser.socket
>
> (from /usr/share/doc/gnupg-agent/README.Debian)
>
> I know someone on the internet has expressed
> unhappiness about people doing this and not being happy
> about supporting people who do it but please just pretend
> that it's a non-systemd system. Not everything is Linux
> after all. Gnupg should still work.

i might be "someone on the internet" :)

I can pretend it's a non-systemd system if you like -- that means you
simply don't have functional per-user session management, and it's now
on you to figure out session management yourself.


Without going into detail on your many questions, it sounds to me like
your main concern has to do with pinentry not seeming well-matched to
the way that you connect to the machines you use, and the way you expect
user interaction to happen.

Let me ask you to zoom out a minute from the specific details you're
seeing and try to imagine what you *want* -- ideally, not just in terms
of what you've done in the past.

for example, do you really want to have keys stored on a remote machine,
or do you want them stored locally, with the goal of being able to *use*
them remotely? do you want to be prompted to confirm the use of each
private key? do you expect that confirmation to include a passphrase
entry? how do you conceive of your adversary in this context? are you
concerned about leaking private key material? auditing access? some
other constraints?

--dkg

Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
Hi Daniel,

Thanks for responding.

Daniel Kahn Gillmor wrote:

> On Mon 2017-12-18 20:01:02 +1100, gnupg@raf.org wrote:
> > For most of my decryption use cases I can't use a
> > pinentry program. Instead, I have to start gpg-agent in
> > advance (despite what its manpage says) with
> > --allow-preset-passphrase so that I can then use
> > gpg-preset-passphrase so that when gpg is run later, it
> > can decrypt unaided.
>
> can you explain more about this use case? it sounds to me like you
> might prefer to just keep your secret keys without a passphrase in the
> first place.

I'm assuming that you are referring to the use case in Question 1.

Definitely not. That would make it possible for the decryption to
take place at any time. I need it to only be able to take place
for short periods of time when I am expecting it.

> > I also discovered that I need to disable systemd's
> > handling of gpg-agent (on debian9 with gpg-2.1.18) if I
> > want to control when gpg-agent starts and stops and
> > which options are passed to it. I know this is not
> > recommended but I've had too much trouble in the past
> > with systemd thinking that it knows when a "user" has
> > "logged out" and then deciding to "clean up" causing me
> > masses of grief that I just can't bring myself to trust
> > it to know what it's doing.
> >
> > I've disabled systemd's handling of gpg-agent on the
> > debian9 hosts with:
> >
> > systemctl --global mask --now gpg-agent.service
> > systemctl --global mask --now gpg-agent.socket
> > systemctl --global mask --now gpg-agent-ssh.socket
> > systemctl --global mask --now gpg-agent-extra.socket
> > systemctl --global mask --now gpg-agent-browser.socket
> >
> > (from /usr/share/doc/gnupg-agent/README.Debian)
> >
> > I know someone on the internet has expressed
> > unhappiness about people doing this and not being happy
> > about supporting people who do it but please just pretend
> > that it's a non-systemd system. Not everything is Linux
> > after all. Gnupg should still work.
>
> i might be "someone on the internet" :)
>
> I can pretend it's a non-systemd system if you like -- that means you
> simply don't have functional per-user session management, and it's now
> on you to figure out session management yourself.

Which is exactly how I want it. I want to decide when gpg-agent
starts and when it stops. It is unrelated to per-user sessions.

> Without going into detail on your many questions, it sounds to me like
> your main concern has to do with pinentry not seeming well-matched to
> the way that you connect to the machines you use, and the way you expect
> user interaction to happen.

That's true for some of the use case problems I'm having but not
with this one. I could use pinentry-curses here because it's
happening over an ssh connection in an xterm, not inside a gvim
window where curses doesn't work. But I'm happy to keep using
gpg-preset-passphrase.

I think the real problem with this use case is that the incoming
ssh connections from the other hosts are starting their own
gpg-agent (I'm guessing using the S.gpg-agent.ssh socket) rather
than just connecting to the existing gpg-agent that I have put
the passphrase into (I'm guessing that gpg-agent uses the
S.gpg-agent socket).

> Let me ask you to zoom out a minute from the specific details you're
> seeing and try to imagine what you *want* -- ideally, not just in terms
> of what you've done in the past.

What I want *is* what I've done in the past. That's why I did it. :-)

> for example, do you really want to have keys stored on a remote machine,
> or do you want them stored locally, with the goal of being able to *use*
> them remotely? do you want to be prompted to confirm the use of each
> private key? do you expect that confirmation to include a passphrase
> entry? how do you conceive of your adversary in this context? are you
> concerned about leaking private key material? auditing access? some
> other constraints?
>
> --dkg

For the purposes of this use case, all the hosts are "remote".
i.e. None of this is happening on the host that I have
physically in front of me. They are all servers of different
kinds.

What I want is to have gpg and encrypted data and a
key-with-a-strong-passphrase on a small number of servers and
then, when needed and only when needed, I want to be able to
enable unassisted decryption by the uid that owns the
data/keys/gpg-agent. Other hosts that need access to the
decrypted data need to be able to ssh to the host that has
gpg/keys/data to get that data without my interaction.

I need to be able to ssh to the server with gpg/keys/data to set
things up. Then I need to be able to log out without gpg-agent
disappearing. Then the other servers need to be able to ssh to
that server and use the gpg-agent that I prepared earlier so as
to decrypt the data. Then I need to be able to ssh back in and
turn off gpg-agent.

The big picture is that there are some publically accessible
servers that need access to sensitive data (e.g. database
passwords and symmetric encryption keys and similar) that I
don't want stored on those servers at all. Instead there are
service processes that fetch the data from a set of several
other servers that are not publically accessible. This fetching
of data only needs to happen when the publically accessible
servers reboot or when the data fetching services are
restarted/reconfigured.

So, in answer to your questions:

> do you really want to have keys stored on a remote machine or do you
> want them stored locally, with the goal of being able to *use* them
> remotely?

I don't want the keys stored locally on my laptop. I don't want
the keys stored on the publically accessible remote hosts where
the data is ultimately needed. I want to store and use the keys
on a different set of non-publically accessible remote hosts.

> do you want to be prompted to confirm the use of each private key?
> do you expect that confirmation to include a passphrase entry?

No. The private key will be used four times for each host that
reboots. I don't want to have to be there to physically confirm
each use of the private key (or enter the passphrase each time).
After all, they may well happen at the same time and from within
ssh connections that I have nothing to do with. That would be
similar to my ansible use case.

I want to be able to enter the passphrase once (on each of the
gpg/data/key hosts) before I reboot the publically accessible
hosts, and I want that to be sufficient to enable multiple
incoming ssh connections from the rebooting hosts to get what
they need, and when the hosts have successfully rebooted I want
to be able to turn off gpg-agent.

If you prefer, the confirmation of the use of private keys is me
entering the passphrase into gpg-agent before the other hosts
make their ssh connections.

> how do you conceive of your adversary in this context?
> are you concerned about leaking private key material?
> auditing access? other constraints?

I'm concerned about everything. Physical theft of servers,
hackers, you name it. There are many, many defenses in place but
I have to assume that someone might be able to get past them
all. So making things as hard as possible for attackers is the
way to go. It seems like a good idea not to have the sensitive
data on the publically accessible hosts at all except in memory.

Someone suggested using gpg-agent forwarding but that, and the
first in your batch of questions above, seems to imply that the
expectation is for keys to be stored locally and that access to
those keys be made available to gpg processes on other hosts
that a human user has connected to (in much the same way as
ssh-agent forwarding works). That is not at all what I want to
happen. My local laptop should have nothing to do with any of
this except that it is where I ssh from to get everywhere else.
Also, for redundancy purposes, the data and keys need to be
stored on multiple servers in different locations. Even if I
consider those servers to be "local", it's still not what I want
because that assumes that it is the server with the keys that
connects to the other servers with data that needs to be
decrypted with those keys. In this case, it is those other servers
that will be making the connections to the server with the keys
(and the data). I don't want their rebooting to be delayed by my
having to log in to each of them with a passphrase or a
forwarded gpg-agent connection. I want them to make the
connection by themselves as soon as they are ready to, obtain
the data they need, and continue booting up.



I'm not sure I understand your reasons for asking all these
questions. Is it that you don't think that what I want to do is
still possible with gnupg2.1+ and are you trying to convince me
to fundamentally change what I'm doing?

I don't want to fundamentally change what I'm doing. I don't
have the time (unless there really is no alternative). I just
wanted to upgrade my servers from debian8 to debian9. I had no
idea this was going to happen.

Can incoming ssh connections use the existing gpg-agent that I
have already started and preset with a passphrase or not? Does
anyone know?

Is continuing to use gpg1 indefinitely an option? Will it
continue to work with recent versions of gpg-agent?

Debian says that gpg1 is deprecated but I've read that gpg1 is
now mostly only useful for embedded systems (or servers). Since
IoT and servers will never go away, does that mean that gpg1
will never go away? I'd be happy to keep using gpg1 if I knew
that it wouldn't go away and if I knew that it would keep
working with recent versions of gpg-agent.

cheers,
raf


Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
Hi raf--

On Wed 2017-12-20 14:11:26 +1100, gnupg@raf.org wrote:
> Daniel Kahn Gillmor wrote:
>> On Mon 2017-12-18 20:01:02 +1100, gnupg@raf.org wrote:
>> > For most of my decryption use cases I can't use a
>> > pinentry program. Instead, I have to start gpg-agent in
>> > advance (despite what its manpage says) with
>> > --allow-preset-passphrase so that I can then use
>> > gpg-preset-passphrase so that when gpg is run later, it
>> > can decrypt unaided.
>>
>> can you explain more about this use case? it sounds to me like you
>> might prefer to just keep your secret keys without a passphrase in the
>> first place.
>
> I'm assuming that you are referring to the use case in Question 1.
>
> Definitely not. That would make it possible for the decryption to
> take place at any time. I need it to only be able to take place
> for short periods of time when I am expecting it.

OK, so your preferred outcome is some way to enable a key for a limited
period of time. is that right?

> I think the real problem with this use case is that the incoming
> ssh connections from the other hosts are starting their own
> gpg-agent (I'm guessing using the S.gpg-agent.ssh socket) rather
> than just connecting to the existing gpg-agent that I have put
> the passphrase into (I'm guessing that gpg-agent uses the
> S.gpg-agent socket).

there should be only one S.gpg-agent.ssh socket, and therefore only one
agent. If you were using systemd and dbus user sessions, those system
management tools would make sure that these things exist. This is the
entire point of session management. It's complex to do by hand, and
choosing to abandon the tools that offer it to you seems gratuitously
masochistic. But ok…

> What I want is to have gpg and encrypted data and a
> key-with-a-strong-passphrase on a small number of servers and
> then, when needed and only when needed, I want to be able to
> enable unassisted decryption by the uid that owns the
> data/keys/gpg-agent. Other hosts that need access to the
> decrypted data need to be able to ssh to the host that has
> gpg/keys/data to get that data without my interaction.
>
> I need to be able to ssh to the server with gpg/keys/data to set
> things up. Then I need to be able to log out without gpg-agent
> disappearing. Then the other servers need to be able to ssh to
> that server and use the gpg-agent that I prepared earlier so as
> to decrypt the data. Then I need to be able to ssh back in and
> turn off gpg-agent.

I'm still not sure i understand your threat model -- apparently your
theorized attacker is capable of compromising the account on the
targeted host, but *only* between the times before you enable (and after
you disable) gpg-agent. Is that right?

Why do you need these multi-detached operations? by "multi-detached" i
mean that your sequence of operations appears to be:

* attach
* enable gpg-agent
* detach
* other things use…
* attach
* disable gpg-agent
* detach

wouldn't you rather monitor these potentially-vulnerable accounts (by
staying attached or keeping a session open while they're in use)?

> The big picture is that there are some publically accessible
> servers that need access to sensitive data (e.g. database
> passwords and symmetric encryption keys and similar) that I
> don't want stored on those servers at all. Instead there are
> service processes that fetch the data from a set of several
> other servers that are not publically accessible. This fetching
> of data only needs to happen when the publically accessible
> servers reboot or when the data fetching services are
> restarted/reconfigured.

so what is the outcome if the gpg-agent is disabled when these
reboots/restarts happen? how do you coordinate that access?

> I want to be able to enter the passphrase once (on each of the
> gpg/data/key hosts) before I reboot the publically accessible
> hosts, and I want that to be sufficient to enable multiple
> incoming ssh connections from the rebooting hosts to get what
> they need, and when the hosts have successfully rebooted I want
> to be able to turn off gpg-agent.
>
> If you prefer, the confirmation of the use of private keys is me
> entering the passphrase into gpg-agent before the other hosts
> make their ssh connections.

this approach seems congruent with my single-attach proposal:

* you log into "key management" host (this enables the systemd
gpg-agent user service)

* on "key management" host, enable key access using
gpg-preset-passphrase or something similar

* you trigger restart of public-facing service

* public-facing service connects to "key management" host, gets the
data it needs

* you verify that the restart of the public-facing service is successful

* you log out of "key management" host. dbus-user-session closes the
gpg-agent automatically with your logout, thereby closing the agent
and disabling access to those keys.

can you explain why that doesn't meet your goals?

> Also, for redundancy purposes, the data and keys need to be
> stored on multiple servers in different locations.

I think there are other ways to address your redundancy concerns that
don't involve giving each of the redundant backup servers access to the
cleartext of the secret key material at any time; so i'm not going to
address this redundancy concern.

> Even if I consider those servers to be "local", it's still not what I
> want because that assumes that it is the server with the keys that
> connects to the other servers with data that needs to be decrypted
> with those keys. In this case, it is those other servers that will be
> making the connections to the server with the keys (and the data). I
> don't want their rebooting to be delayed by my having to log in to
> each of them with a passphrase or a forwarded gpg-agent connection. I
> want them to make the connection by themselves as soon as they are
> ready to, obtain the data they need, and continue booting up.

Here, i think you're making an efficiency argument -- you want to
prepare the "key management" host in advance, so that during the boot
process of the public-facing service, it gets what it needs without
you needing to manipulate it directly.

> I'm not sure I understand your reasons for asking all these
> questions. Is it that you don't think that what I want to do is
> still possible with gnupg2.1+ and are you trying to convince me
> to fundamentally change what I'm doing?

I'm trying to extract high-level, security-conscious, sensible goals
from your descriptions, so that i can help you figure out how to meet
them. It's possible that your existing choices don't actually meet your
goals as well as you thought they did, and newer tools can help get you
closer to meeting your goals.

This may mean some amount of change, but it's change in the direction of
what you actually want, so hopefully it's worth the pain.

> Can incoming ssh connections use the existing gpg-agent that I
> have already started and preset with a passphrase or not? Does
> anyone know?

yes, i've tested it. it works.

> Is continuing to use gpg1 indefinitely an option? Will it
> continue to work with recent versions of gpg-agent?

gpg1 only "works" with versions of gpg-agent as a passphrase cache, but
modern versions of GnuPG use gpg-agent as an actual cryptographic agent,
which does not release the secret key at all.

This is actually what i think you want, as it minimizes exposure of the
secret key itself. gpg1 has access to the full secret key, while gpg2
deliberately does not.

gpg-preset-passphrase only unlocks access to secret key material in the
agent -- that is, it does *not* touch the passphrase cache. This means
that it is incompatible with gpg1, as noted in the manual page.

> Debian says that gpg1 is deprecated but I've read that gpg1 is
> now mostly only useful for embedded systems (or servers).

where did you read this? imho, gpg1 is now mostly only useful for
people with bizarre legacy constraints (like using an ancient, known-bad
PGP-2 key to maintain a system that is so crufty it cannot update the
list of administrator keys).

> Since IoT and servers will never go away, does that mean that gpg1
> will never go away? I'd be happy to keep using gpg1 if I knew that it
> wouldn't go away and if I knew that it would keep working with recent
> versions of gpg-agent.

i advise against this approach. please use the modern version. it is
well-maintained and should meet your needs.

--dkg

Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
Daniel Kahn Gillmor wrote:

> Hi raf--
>
> On Wed 2017-12-20 14:11:26 +1100, gnupg@raf.org wrote:
> > Daniel Kahn Gillmor wrote:
> >> On Mon 2017-12-18 20:01:02 +1100, gnupg@raf.org wrote:
> >> > For most of my decryption use cases I can't use a
> >> > pinentry program. Instead, I have to start gpg-agent in
> >> > advance (despite what its manpage says) with
> >> > --allow-preset-passphrase so that I can then use
> >> > gpg-preset-passphrase so that when gpg is run later, it
> >> > can decrypt unaided.
> >>
> >> can you explain more about this use case? it sounds to me like you
> >> might prefer to just keep your secret keys without a passphrase in the
> >> first place.
> >
> > I'm assuming that you are referring to the use case in Question 1.
> >
> > Definitely not. That would make it possible for the decryption to
> > take place at any time. I need it to only be able to take place
> > for short periods of time when I am expecting it.
>
> OK, so your preferred outcome is some way to enable a key for a limited
> period of time. is that right?

Yes.

> > I think the real problem with this use case is that the incoming
> > ssh connections from the other hosts are starting their own
> > gpg-agent (I'm guessing using the S.gpg-agent.ssh socket) rather
> > than just connecting to the existing gpg-agent that I have put
> > the passphrase into (I'm guessing that gpg-agent uses the
> > S.gpg-agent socket).
>
> there should be only one S.gpg-agent.ssh socket, and therefore only one
> agent. If you were using systemd and dbus user sessions, those system
> management tools would make sure that these things exist. This is the
> entire point of session management. It's complex to do by hand, and
> choosing to abandon the tools that offer it to you seems gratuitously
> masochistic. But ok…

There is only one S.gpg-agent.ssh socket (I think). I'm
pretty sure that I was mistaken when I guessed that
S.gpg-agent.ssh had something to do with the incoming ssh
connection using gpg which started up its own gpg-agent
process. I now think that S.gpg-agent.ssh has to do with
ssh-agent support and nothing to do with this.
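
(As far as I can tell, S.gpg-agent.ssh only matters if you
use gpg-agent as an ssh-agent replacement, along the lines
of:)

export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"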

With gnupg-2.1.11 on ubuntu16, there is only a single socket:

~/.gnupg/S.gpg-agent

With gnupg-2.1.18 on debian9, there are four sockets:

~/.gnupg/S.gpg-agent
~/.gnupg/S.gpg-agent.browser
~/.gnupg/S.gpg-agent.extra
~/.gnupg/S.gpg-agent.ssh

This may have something to do with why what I am trying to
do works with gnupg-2.1.11 but not with gnupg-2.1.18.

The incoming ssh connection did start its own gpg-agent
process (even though there already was one running), but I
no longer think that it had anything to do with
S.gpg-agent.ssh. In fact, since the "user session" in
which the first gpg-agent process was started could no
longer access the passphrase, it seems as though the new
gpg-agent process took over the sockets, so that all
attempts to communicate with gpg-agent via these sockets
reached the new gpg-agent process that knew nothing, while
the original gpg-agent process which knew the passphrase
was uncontactable. But again, I'm only guessing.

I saw a comment of yours in a mailing list archive saying
that one of the purposes of gpg-agent is to prevent any
process from accessing its contents, without alerting the
user, just because it has permission to use the sockets.
It sounds like that could be what is preventing my use
case from working. But again, I'm only guessing.

https://lists.gnupg.org/pipermail/gnupg-devel/2015-May/029804.html

Since you say that, if systemd were handling this, it
would make sure that these sockets exist, perhaps my
attempt to mask them had no effect, because as soon as I
start the first gpg-agent, all four sockets are created. I
assume that it is gpg-agent itself that creates them
rather than systemd. They disappear again when gpg-agent
terminates. But that's the same behaviour as on macos
without systemd: the sockets are created when gpg-agent
starts and they are deleted when it stops. Which seems
sensible. Hardly masochistic. But perhaps my masochism
threshold is too high. :-)

> > What I want is to have gpg and encrypted data and a
> > key-with-a-strong-passphrase on a small number of servers and
> > then, when needed and only when needed, I want to be able to
> > enable unassisted decryption by the uid that owns the
> > data/keys/gpg-agent. Other hosts that need access to the
> > decrypted data need to be able to ssh to the host that has
> > gpg/keys/data to get that data without my interaction.
> >
> > I need to be able to ssh to the server with gpg/keys/data to set
> > things up. Then I need to be able to log out without gpg-agent
> > disappearing. Then the other servers need to be able to ssh to
> > that server and use the gpg-agent that I prepared earlier so as
> > to decrypt the data. Then I need to be able to ssh back in and
> > turn off gpg-agent.
>
> I'm still not sure i understand your threat model -- apparently your
> theorized attacker is capable of compromising the account on the
> targeted host, but *only* between the times before you enable (and after
> you disable) gpg-agent. Is that right?

Well, for physical theft of the servers, yes.

> Why do you need these multi-detached operations? by "multi-detached" i
> mean that your sequence of operations appears to be:
>
> * attach
> * enable gpg-agent
> * detach
> * other things use…
> * attach
> * disable gpg-agent
> * detach
>
> wouldn't you rather monitor these potentially-vulnerable accounts (by
> staying attached or keeping a session open while they're in use)?

I usually do, but I want to be able to detach from the
screen session. But it's only for a few minutes. Being
able to detach is not important. Having the incoming ssh
connections communicate with the existing gpg-agent
process is what's important.

In my testing of this, I didn't actually detach from the screen
session so that is not what is causing this problem.

> > The big picture is that there are some publically accessible
> > servers that need access to sensitive data (e.g. database
> > passwords and symmetric encryption keys and similar) that I
> > don't want stored on those servers at all. Instead there are
> > service processes that fetch the data from a set of several
> > other servers that are not publically accessible. This fetching
> > of data only needs to happen when the publically accessible
> > servers reboot or when the data fetching services are
> > restarted/reconfigured.
>
> so what is the outcome if the gpg-agent is disabled when these
> reboots/restarts happen? how do you coordinate that access?

If gpg-agent is disabled when the reboots happen, the client servers
fail to obtain the data until I enable gpg-agent. The clients
keep trying until it works.

> > I want to be able to enter the passphrase once (on each of the
> > gpg/data/key hosts) before I reboot the publically accessible
> > hosts, and I want that to be sufficient to enable multiple
> > incoming ssh connections from the rebooting hosts to get what
> > they need, and when the hosts have successfully rebooted I want
> > to be able to turn off gpg-agent.
> >
> > If you prefer, the confirmation of the use of private keys is me
> > entering the passphrase into gpg-agent before the other hosts
> > make their ssh connections.
>
> this approach seems congruent with my single-attach proposal:
>
> * you log into "key management" host (this enables the systemd
> gpg-agent user service)
>
> * on "key management" host, enable key access using
> gpg-preset-passphrase or something similar
>
> * you trigger restart of public-facing service
>
> * public-facing service connects to "key management" host, gets the
> data it needs
>
> * you verify that the restart of the public-facing service is successful
>
> * you log out of "key management" host. dbus-user-session closes the
> gpg-agent automatically with your logout, thereby closing the agent
> and disabling access to those keys.
>
> can you explain why that doesn't meet your goals?

Sorry, I thought I already did. The 4th point above does not
work. When the public-facing host connects via ssh to the
key management host, and runs gpg, instead of it successfully
connecting to the existing gpg-agent process that I started
minutes earlier, it starts a new gpg-agent process which
doesn't know the passphrase and so the decryption fails.

Here are the gpg-agent processes after I start the first gpg-agent
process and preset the passphrase:

/usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
--default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login

Here are the gpg-agent processes after an incoming ssh connection that
attempts to use gpg:

/usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
--default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login
gpg-agent --homedir /etc/thing/.gnupg --use-standard-socket --daemon

That second gpg-agent process should not exist. The gpg
process that caused it to be started should have connected
to the existing gpg-agent process. The sockets for it
existed but perhaps there was some reason why it didn't use
them.

There must be some reason why gpg thinks it needs to start
gpg-agent. Perhaps it's because it's a different "user
session". They are from two different ssh connections after
all.

> > Even if I consider those servers to be "local", it's still not what I
> > want because that assumes that it is the server with the keys that
> > connects to the other servers with data that needs to be decrypted
> > with those keys. In this case, it is those other servers that will be
> > making the connections to the server with the keys (and the data). I
> > don't want their rebooting to be delayed by my having to log in to
> > each of them with a passphrase or a forwarded gpg-agent connection. I
> > want them to make the connection by themselves as soon as they are
> > ready to, obtain the data they need, and continue booting up.
>
> Here, i think you're making an efficiency argument -- you want to
> prepare the "key management" host in advance, so that during the boot
> process of the public-facing service, it gets what it needs without
> you needing to manipulate it directly.

That's correct.

> > I'm not sure I understand your reasons for asking all these
> > questions. Is it that you don't think that what I want to do is
> > still possible with gnupg2.1+ and are you trying to convince me
> > to fundamentally change what I'm doing?
>
> I'm trying to extract high-level, security-conscious, sensible goals
> from your descriptions, so that i can help you figure out how to meet
> them. It's possible that your existing choices don't actually meet your
> goals as well as you thought they did, and newer tools can help get you
> closer to meeting your goals.
>
> This may mean some amount of change, but it's change in the direction of
> what you actually want, so hopefully it's worth the pain.

I'm sure that's probably true and I do appreciate your efforts.

> > Can incoming ssh connections use the existing gpg-agent that I
> > have already started and preset with a passphrase or not? Does
> > anyone know?
>
> yes, i've tested it. it works.

That's hopeful but I wonder why it doesn't work for me.

> > Is continuing to use gpg1 indefinitely an option? Will it
> > continue to work with recent versions of gpg-agent?
>
> gpg1 only "works" with versions of gpg-agent as a passphrase cache, but
> modern versions of GnuPG use gpg-agent as an actual cryptographic agent,
> which does not release the secret key at all.

And I noticed that gpg1 can't use preset passphrases anymore anyway.
And gnupg-1.4.22 in macports says that it doesn't use the agent at all
anymore so that's not an option (probably for the best).

> This is actually what i think you want, as it minimizes exposure of the
> secret key itself. gpg1 has access to the full secret key, while gpg2
> deliberately does not.
>
> gpg-preset-passphrase only unlocks access to secret key material in the
> agent -- that is, it does *not* touch the passphrase cache. This means
> that it is incompatible with gpg1, as noted in the manual page.
>
> > Debian says that gpg1 is deprecated but I've read that gpg1 is
> > now mostly only useful for embedded systems (or servers).
>
> where did you read this?

I can't remember.

> imho, gpg1 is now mostly only useful for
> people with bizarre legacy constraints (like using an ancient, known-bad
> PGP-2 key to maintain a system that is so crufty it cannot update the
> list of administrator keys).
>
> > Since IoT and servers will never go away, does that mean that gpg1
> > will never go away? I'd be happy to keep using gpg1 if I knew that it
> > wouldn't go away and if I knew that it would keep working with recent
> > versions of gpg-agent.
>
> i advise against this approach. please use the modern version. it is
> well-maintained and should meet your needs.
>
> --dkg

Don't worry. I will. But it hasn't met many of my needs so far. :-)


Another reason that I disabled/masked systemd's handling
of the sockets is for consistency between the ubuntu16
host with gnupg-2.1.11 and the debian9 host with
gnupg-2.1.18. Only the debian9 host has the systemd
handling of sockets (it started with gnupg-2.1.17).

Ah, systemd puts the sockets in a completely different
place: /run/user/*/gnupg/ instead of ~/.gnupg/. So much for
a standard socket location :-). That might be relevant. But
it shouldn't be if systemd is not handling the sockets.
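
(A way to check where gpg actually expects the agent
socket to be, if I understand gpgconf correctly:)

gpgconf --list-dirs agent-socket
# this prints /run/user/<uid>/gnupg/S.gpg-agent when a systemd user
# runtime directory exists, and ~/.gnupg/S.gpg-agent otherwise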

Perhaps I didn't disable systemd's handling of the sockets
properly and it's still partially managing things. But it claims
to be masked so I don't think that's the problem.

No, something's not right. I've globally unmasked and enabled
the sockets but...

As my user, I can do:

> systemctl --global is-enabled gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket
static
enabled
enabled
enabled
enabled

And:

> systemctl --user is-enabled gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket
static
enabled
enabled
enabled
enabled

I had to specifically enable them with --user, otherwise
it said disabled with --user even though it said enabled
with --global. I might have done --user disable in the
past as well. It's all a bit of a blur.

But when I su to the user in question, I get:

> systemctl --user is-enabled gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket
Failed to connect to bus: No such file or directory

But it still reports as enabled with --global.
Maybe that's enough. I don't know.

And, as that user, gpg can --list-secret-keys, but when I
try to decrypt something, it doesn't ask for a passphrase
and it fails to decrypt. It does, however, start gpg-agent,
and sockets are created in ~/.gnupg even though systemd is
now supposed to be handling the sockets. This is without
me starting up the screen/sudo/gpg-agent/bash processes
first.

> gpg --list-secret-keys
/etc/thing/.gnupg/pubring.gpg
-----------------------------
sec rsa2048 2016-01-13 [SC]
25EB4337C3CA32DE46774E1B17B64F00CD3C41D1
uid [ultimate] user <user@domain.com>
ssb rsa2048 2016-01-13 [E]

Hmm, it mentions the old keyring above, not the
migrated one in ~/.gnupg/private-keys-v1.d.
Maybe that's why --list-secret-keys worked but
the rest below doesn't.

> echo OK | gpg -e --default-recipient-self | gpg -d
gpg: encrypted with 2048-bit RSA key, ID 6E76F4FAAE42FC15, created 2016-01-13
"user <user@domain.com>"
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: No secret key

> ls -alsp .gnupg/S*
0 srwx------ 1 thing thing 0 Dec 21 15:45 .gnupg/S.gpg-agent
0 srwx------ 1 thing thing 0 Dec 21 14:47 .gnupg/S.gpg-agent.browser
0 srwx------ 1 thing thing 0 Dec 21 14:47 .gnupg/S.gpg-agent.extra
0 srwx------ 1 thing thing 0 Dec 21 14:47 .gnupg/S.gpg-agent.ssh

I am completely failing to understand what's going on here. :-)
Is systemd handling the sockets or not? There's no /run/user
directory for this user so probably not. Maybe I don't
understand --user and --global or systemd in general.

Sorry for taking up so much of your time.
I appreciate your effort to help.

cheers,
raf


Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
On Thu 2017-12-21 16:19:00 +1100, raf wrote:
> Sorry, I thought I already did. The 4th point above does not
> work. When the public-facing host connects via ssh to the
> key management host, and runs gpg, instead of it successfully
> connecting to the existing gpg-agent process that I started
> minutes earlier, it starts a new gpg-agent process which
> doesn't know the passphrase and so the decryption fails.
>
> Here are the gpg-agent processes after I start the first gpg-agent
> process and preset the passphrase:
>
> /usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
> --default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login
>
> Here are the gpg-agent processes after an incoming ssh connection that
> attempts to use gpg:
>
> /usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
> --default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login
> gpg-agent --homedir /etc/thing/.gnupg --use-standard-socket --daemon
>
> That second gpg-agent process should not exist. The gpg
> process that caused it to be started should have connected
> to the existing gpg-agent process. The sockets for it
> existed but perhaps there was some reason why it didn't use
> them.
>
> There must be some reason why gpg thinks it needs to start
> gpg-agent. Perhaps it's because it's a different "user
> session". They are from two different ssh connections after
> all.

this is the part that i'm unable to reproduce.

Are both of these processes running as the same user account?

does something at some point destroy or mask the standard socket created
by the first process, so that a new gpg invocation decides to start up a
new instance of gpg-agent?

if your old session was being terminated, then you'd expect the first
agent to actually disappear. that's not happening.

and neither of these agents is being launched by systemd, because if it
were, it would have --supervised on its command line.

> But when I su to the user in question, I get:
>
> > systemctl --user is-enabled gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket
> Failed to connect to bus: No such file or directory
>
> But it still reports as enabled with --global.
> Maybe that's enough. I don't know.

are you su'ing with a login shell (i.e. with - or -l or --login), or
not?

> I am completely failing to understand what's going on here. :-)
> Is systemd handling the sockets or not? There's no /run/user
> directory for this user so probably not. Maybe I don't
> understand --user and --global or systemd in general.

why is there no /run/user for this user? if you're running a modern
version of systemd, and your user has actually started a session, there
should be a /run/user created automatically.

--dkg

Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
Daniel Kahn Gillmor wrote:

> On Thu 2017-12-21 16:19:00 +1100, raf wrote:
> > Sorry, I thought I already did. The 4th point above does not
> > work. When the public-facing host connects via ssh to the
> > key management host, and runs gpg, instead of it successfully
> > connecting to the existing gpg-agent process that I started
> > minutes earlier, it starts a new gpg-agent process which
> > doesn't know the passphrase and so the decryption fails.
> >
> > Here are the gpg-agent processes after I start the first gpg-agent
> > process and preset the passphrase:
> >
> > /usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
> > --default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login
> >
> > Here are the gpg-agent processes after an incoming ssh connection that
> > attempts to use gpg:
> >
> > /usr/bin/gpg-agent --homedir /etc/thing/.gnupg --allow-preset-passphrase \
> > --default-cache-ttl 3600 --max-cache-ttl 3600 --daemon -- /bin/bash --login
> > gpg-agent --homedir /etc/thing/.gnupg --use-standard-socket --daemon
> >
> > That second gpg-agent process should not exist. The gpg
> > process that caused it to be started should have connected
> > to the existing gpg-agent process. The sockets for it
> > existed but perhaps there was some reason why it didn't use
> > them.
> >
> > There must be some reason why gpg thinks it needs to start
> > gpg-agent. Perhaps it's because it's a different "user
> > session". They are from two different ssh connections after
> > all.
>
> this is the part that i'm unable to reproduce.
>
> Are both of these processes running as the same user account?

Yes. They are both owned by the user I am calling "thing".

> does something at some point destroy or mask the standard socket created
> by the first process, so that a new gpg invocation decides to start up a
> new instance of gpg-agent?

Nothing that I am aware of. The sockets are still there in the
file system. However, as soon as the incoming ssh connection
runs gpg, which starts its own new gpg-agent, the original
screen+sudo+gpg-agent+bash "session" can no longer decrypt the
data. It behaves as if the new gpg-agent has taken over the
sockets, so connections via them no longer reach the first
gpg-agent (which knows the passphrase) but instead reach the
second gpg-agent (which doesn't). I'm not saying that's what is
actually happening, only that it would look like what I'm
seeing.
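
One way I could test that theory, I suppose, is to ask whichever
agent is listening on the standard socket for its pid and
compare it against the two processes above. Something like this
(untested here, and assuming gpg-connect-agent honours GNUPGHOME
the way the other GnuPG tools do):

# ask the agent behind the standard socket for its pid
GNUPGHOME=/etc/thing/.gnupg gpg-connect-agent 'getinfo pid' /bye

If the pid that comes back belongs to the second gpg-agent, that
would support the takeover theory.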

> if your old session was being terminated, then you'd expect the first
> agent to actually disappear. that's not happening.
>
> and neither of these agents is being launched by systemd, because if it
> were it would have a --supervised flag.
>
> > But when I su to the user in question, I get:
> >
> > > systemctl --user is-enabled gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket
> > Failed to connect to bus: No such file or directory
> >
> > But it still reports as enabled with --global.
> > Maybe that's enough. I don't know.
>
> are you su'ing with a login shell (i.e. with - or -l or --login), or
> not?

I would have used "-" but I was only using su for the purpose of
checking the systemctl's gpg-agent enabled status. I just tried
it again with "-" and got the same result as above.
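
My (possibly wrong) understanding is that systemctl --user talks
to the per-user systemd instance via a socket under
/run/user/<uid>, found through XDG_RUNTIME_DIR, so with no such
directory it has nothing to connect to. In other words, something
like this would be needed (hypothetical here, since the directory
doesn't exist for this user on my machine):

# run systemctl --user as "thing" with the runtime dir it expects
sudo -u thing XDG_RUNTIME_DIR=/run/user/$(id -u thing) \
  systemctl --user is-enabled gpg-agent.socket

which would presumably explain the "Failed to connect to bus"
error above.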

For the actual decryption, I'm using sudo. From the original
post, the command to set things up contains something like:

/usr/bin/screen -- \
/usr/bin/sudo -u thing --set-home -- \
/usr/bin/gpg-agent --homedir /etc/thing/.gnupg \
--allow-preset-passphrase \
--default-cache-ttl 3600 \
--max-cache-ttl 3600 \
--daemon $gpg_agent_info -- \
/bin/bash --login

So the sudo doesn't have "-i" for a login shell (because
gpg-agent is run instead) but bash is run with "--login".
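
For what it's worth, I believe the standard socket that gpg will
try can be printed with gpgconf (assuming the version here is new
enough to accept a directory name as an argument):

# show the agent socket path gpg would use for this homedir
GNUPGHOME=/etc/thing/.gnupg gpgconf --list-dirs agent-socket

That should at least confirm that both gpg invocations agree on
which socket they ought to be using.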

> > I am completely failing to understand what's going on here. :-)
> > Is systemd handling the sockets or not? There's no /run/user
> > directory for this user so probably not. Maybe I don't
> > understand --user and --global or systemd in general.
>
> why is there no /run/user for this user? if you're running a modern
> version of systemd, and your user has actually started a session, there
> should be a /run/user created automatically.

I don't know why. It's systemd 232-25+deb9u1.

> --dkg

The main thing is that you can't reproduce the behaviour that
I'm seeing with the incoming ssh connection running gpg.

I take that as a good sign. It means that what I am trying to do
should work. When I get back to work, I'll do some tracing and
get a better look at what is happening when the incoming ssh
connection runs gpg and compare it to gpg when run from the
screen session before the incoming ssh connection takes place
(while it still works and can decrypt data).
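
In case it's useful to anyone else, the tracing I have in mind is
something along these lines (sketch only; somefile.gpg is a
stand-in for the real encrypted file):

# record every process gpg spawns and every socket it connects to
strace -f -o /tmp/gpg.trace -e trace=connect,execve \
  gpg --homedir /etc/thing/.gnupg --decrypt somefile.gpg

# then look for the agent socket path in the trace
grep -E 'connect|execve' /tmp/gpg.trace

That should show whether gpg even tries the existing standard
socket before deciding to launch its own agent.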

Thanks,
raf


Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
On Sun 2018-01-07 23:23:16 +1100, gnupg@raf.org wrote:
> For the actual decryption, I'm using sudo. From the original
> post, the command to set things up contains something like:
>
> /usr/bin/screen -- \
> /usr/bin/sudo -u thing --set-home -- \
> /usr/bin/gpg-agent --homedir /etc/thing/.gnupg \
> --allow-preset-passphrase \
> --default-cache-ttl 3600 \
> --max-cache-ttl 3600 \
> --daemon $gpg_agent_info -- \
> /bin/bash --login

this is deliberately launching a second agent, outside of the basic
supervision that should already be in place.

If you want to use the standard system agent, please do not launch a
separate agent.

This should be as simple as:

screen -- sudo -u thing --login

or, if you're doing this as root already, then you don't need sudo at
all, and it could just be:

screen -- su - thing

If this is run from cron, it will spawn a new session, and that session
will have a systemd session manager capable of spawning gpg-agent as
needed.

unfortunately, it will not spawn a new session if run from an existing
session, see the discussion at
https://github.com/systemd/systemd/issues/7451 .

if you want to manually start a new session for a new user from within
an existing session on a machine managed by systemd, apparently
machinectl may be the way to go, but i haven't explored that in full.
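
for what it's worth, i believe the invocation would be something
like this (untested sketch, so please verify against the
machinectl manpage):

# open a full login session for "thing" on the local host
# (".host" names the local machine rather than a container)
machinectl shell thing@.host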

hope this helps,

--dkg

Re: Upgrading from gpg1 to gpg2: lots of trouble, need help
Daniel Kahn Gillmor wrote:

> On Sun 2018-01-07 23:23:16 +1100, gnupg@raf.org wrote:
> > For the actual decryption, I'm using sudo. From the original
> > post, the command to set things up contains something like:
> >
> > /usr/bin/screen -- \
> > /usr/bin/sudo -u thing --set-home -- \
> > /usr/bin/gpg-agent --homedir /etc/thing/.gnupg \
> > --allow-preset-passphrase \
> > --default-cache-ttl 3600 \
> > --max-cache-ttl 3600 \
> > --daemon $gpg_agent_info -- \
> > /bin/bash --login
>
> this is deliberately launching a second agent, outside of the basic
> supervision that should already be in place.

No. It's starting the *first* agent. Remember, I had disabled
systemd's handling of gpg-agent so there is no supervising
gpg-agent process started by systemd.

When I showed the two gpg-agent processes that existed after the
incoming ssh connection ran gpg, they were the only two
gpg-agent processes owned by the 'thing' user. There was no
supervising one or I would have shown that one as well.

The problem is that the subsequent incoming ssh connection runs
gpg, and that gpg process starts a second gpg-agent process
(which has no knowledge of the passphrase) rather than
connecting to the first gpg-agent process (which does know the
passphrase - at least it does until the new gpg-agent starts,
possibly because it takes over the sockets created by the first
gpg-agent process).
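
If I wanted to check that, I suppose I could list which process
is actually holding the listening sockets, e.g. with ss from
iproute2 (assuming it's available on that host):

# show unix listening sockets and their owning processes
ss -xlp | grep gpg-agent

If the second gpg-agent's pid shows up against the socket paths
under the homedir, that would confirm the takeover.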

> If you want to use the standard system agent, please do not launch a
> separate agent.

As I stated some time ago, I don't want to use the "standard
system agent" because I don't trust systemd to know when it's ok
to remove resources. I have had too much trouble caused by
systemd concluding that it was time to remove crucial resources
to be able to trust it with anything that I need to rely on.

> This should be as simple as:
>
> screen -- sudo -u thing --login
>
> or, if you're doing this as root already, then you don't need sudo at
> all, and it could just be:
>
> screen -- su - thing

It's not run as root.

> If this is run from cron, it will spawn a new session, and that session
> will have a systemd session manager capable of spawning gpg-agent as
> needed.

It's not run from cron. It wouldn't make sense to run it from cron.

> unfortunately, it will not spawn a new session if run from an existing
> session, see the discussion at
> https://github.com/systemd/systemd/issues/7451 .
>
> if you want to manually start a new session for a new user from within
> an existing session on a machine managed by systemd, apparently
> machinectl may be the way to go, but i haven't explored that in full.

That must explain why systemd didn't create a /run/user
subdirectory for the 'thing' user during the sudo process (when
I re-enabled systemd's handling of gpg-agent).

But machinectl seems to be aimed at containers. Since I'm not
using containers, I'd rather not go there. It seems like a
hack.

I think this is just another argument/example to support my
preference for avoiding the additional complexity of systemd
here and just using gnupg by itself.

> hope this helps,
>
> --dkg

Thanks. I appreciate the effort and research but it doesn't
really help. It doesn't address the issue of the incoming ssh
connection's gpg process starting up a new gpg-agent process
rather than connecting to the existing one.

But don't worry. I'm sure I've wasted enough of your time. When
I get time, I'll debug what's happening and either realise what
needs to be done or work around it somehow.

Thanks,
raf

