Mailing List Archive

2.3 keydb problems
After switching to the new SQLite keydb, I can no longer list my keyring
and things are a little weird.

[rjh@localhost build]$ gpg -vvvv --list-keys
gpg: NOTE: THIS IS A DEVELOPMENT VERSION!
gpg: It is only intended for test purposes and should NOT be
gpg: used in a production environment or with production keys!
gpg: using character set 'utf-8'
gpg: Note: RFC4880bis features are enabled.
gpg: key 1E7A94D4E87F91D5: accepted as trusted key

... and there it hangs. No further output is generated.

Any ideas?

_______________________________________________
Gnupg-devel mailing list
Gnupg-devel@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-devel
Re: 2.3 keydb problems
On Fri, 26 Feb 2021 15:51, Robert J. Hansen said:

> gpg: key 1E7A94D4E87F91D5: accepted as trusted key
>
> ... and there it hangs. No further output is generated.

No immediate idea. What you can do is to put

log-file /somewhere/keyboxd.log
verbose
debug ipc,lookup

in keyboxd.conf, run gpgconf --kill keyboxd, and try again. This should
at least show where it hangs.
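
Concretely, something along these lines (assuming the default ~/.gnupg
home directory; replace /somewhere/keyboxd.log with a path of your
choice):

  $EDITOR ~/.gnupg/keyboxd.conf    # add the three lines above
  gpgconf --kill keyboxd
  gpg --list-keys                  # re-run the command that hangs
  tail -f /somewhere/keyboxd.log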


Shalom-Salam,

Werner
Re: 2.3 keydb problems
> in keyboxd.conf, run gpgconf --kill keyboxd, and try again. This should
> at least show where it hangs.

gpgconf hung while trying to kill keyboxd. Interesting.

I 'sudo kill'ed the keyboxd process and tried again. It appears to be
waiting forever for a stale lock:

[rjh@localhost ~]$ tail -f keyboxd.log
2021-02-27 12:17:04 keyboxd[65434] listening on socket
'/run/user/1000/gnupg/S.keyboxd'
2021-02-27 12:17:05 keyboxd[65435] waiting for lock (held by 535569) ...
2021-02-27 12:17:06 keyboxd[65435] waiting for lock (held by 535569) ...
2021-02-27 12:17:08 keyboxd[65435] waiting for lock (held by 535569) ...
2021-02-27 12:17:12 keyboxd[65435] waiting for lock (held by 535569) ...
2021-02-27 12:17:20 keyboxd[65435] waiting for lock (held by 535569) ...
2021-02-27 12:17:28 keyboxd[65435] waiting for lock (held by 535569) ...

There is no process with PID 535569.
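
(Verified the obvious way, e.g.:

  ps -p 535569
  kill -0 535569

-- neither finds a process with that PID.)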

The same stale lock is responsible for gpgconf not being able to kill
the process.

Re: 2.3 keydb problems
> I 'sudo kill'ed the keyboxd process and tried again.  It appears to
> be waiting forever for a stale lock:

The problem -- with the same error (waiting for a lock held by 535569)
-- persists across reboots. I'm comfortable saying this is a serious
bug. Want me to file it in git?



Re: 2.3 keydb problems
On Sat, 27 Feb 2021 12:20, Robert J. Hansen said:

> 2021-02-27 12:17:28 keyboxd[65435] waiting for lock (held by 535569) ...
>
> There is no process with PID 535569.

Can you please check the file

~/.gnupg/public-keys.d/pubring.db.lock

It should have two lines: The first with the pid and the second giving
the hostname. Can you delete that file?
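
For example:

  cat ~/.gnupg/public-keys.d/pubring.db.lock
  rm ~/.gnupg/public-keys.d/pubring.db.lock

(assuming the default ~/.gnupg home directory), then run gpg --list-keys
again.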

We use the same locking code all over GnuPG, so it can't be a problem
specific to keyboxd.

What you could also do is to run "strace -p 65435" to see the system
call errors. For reference, here is the code pertaining to this (from
common/dotlock.c):

  /* Check for stale lock files. */
  if ( (pid = read_lockfile (h, &same_node)) == -1 )
    {
      if ( errno != ENOENT )
        {
          saveerrno = errno;
          my_info_0 ("cannot read lockfile\n");
          my_set_errno (saveerrno);
          return -1;
        }
      my_info_0 ("lockfile disappeared\n");
      goto again;
    }
  else if ( pid == getpid() && same_node )
    {
      my_info_0 ("Oops: lock already held by us\n");
      h->locked = 1;
      return 0; /* okay */
    }
  else if ( same_node && kill (pid, 0) && errno == ESRCH )
    // This should trigger the removal of a stale lock file.
    // To cope with remote file systems this requires that the lock file
    // was created by this box (same_node). The question is why there is a
    // stale lock file at all.
    {
      /* Note: It is unlikely that we get a race here unless a pid is
         reused too fast or a new process with the same pid as the one
         of the stale file tries to lock right at the same time as we. */
      my_info_1 (_("removing stale lockfile (created by %d)\n"), pid);
      unlink (h->lockname);
      goto again;
    }

  if (lastpid == -1)
    lastpid = pid;
  ownerchanged = (pid != lastpid);

  if (timeout)
    {
      struct timeval tv;

      /* Wait until lock has been released. We use increasing retry
         intervals of 50ms, 100ms, 200ms, 400ms, 800ms, 2s, 4s and 8s
         but reset it if the lock owner meanwhile changed. */
      if (!wtime || ownerchanged)
        wtime = 50;
      else if (wtime < 800)
        wtime *= 2;
      else if (wtime == 800)
        wtime = 2000;
      else if (wtime < 8000)
        wtime *= 2;

      if (timeout > 0)
        {
          if (wtime > timeout)
            wtime = timeout;
          timeout -= wtime;
        }

      sumtime += wtime;
      if (sumtime >= 1500)
        {
          sumtime = 0;
          my_info_3 (_("waiting for lock (held by %d%s) %s...\n"),
                     pid, maybe_dead, maybe_deadlock(h)? _("(deadlock?) "):"");
        }

      tv.tv_sec = wtime / 1000;
      tv.tv_usec = (wtime % 1000) * 1000;
      select (0, NULL, NULL, NULL, &tv);
      goto again;
    }
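
By the way, the kill (pid, 0) probe in the ESRCH branch above is easy to
reproduce from the shell with the PID from your log:

  kill -0 535569

which should fail with "No such process" (ESRCH). If that is the case
and the lock file is still not removed, then -- judging only from this
snippet -- same_node must have come back false, i.e. read_lockfile did
not regard the lock file as created on this host.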




--
Thoughts are free. Exceptions are regulated by a federal law.