Mailing List Archive

Risk Assessment
First and foremost thanks to everybody for their assistance to a completely
ignorant gpg user, ESPECIALLY Werner who went out of his way to help me
with all sorts of stupid Unix/Windoze/NT/c/configure/make questions.

I ended up rebooting under Windows 95 rather than continue down the path of
trying to compile under CygWin32 on NT for myself. Actually, while I
couldn't get it to generate keys under Win NT, it does the decryption fine.
I'm a happy camper.

I've finally succeeded in transmitting a gpg-encrypted message and
decrypting it. I'd like to automate the process for my "e-commerce"
[sorry] system.

Along the way, I've realized that there is simply no viable way to get the
level of security that most of you expect, and I'm definitely fudging some
corners here on security. However, I'd appreciate a risk-assessment from
folks who understand this stuff, now that I have a better idea of what I'm
doing, particularly the wrong things... I know they're wrong, I just don't
know how wrong they are. :-)

The goal is to transmit online orders for CDs to my online clients without
actually spending any of their money, since they don't have any. They're
mostly starving musicians you never heard of... Yet. If they get famous
we'll farm out the web-site orders to somebody with more $ecurity. Not
this week.


So how risky is it, and how could it be hacked, when I:
----------------------------------------------------------------------------

Did everything via telnet, since my ISP is 1000 miles away.

Generated keys using insecure memory, since I can't chown the binary to root.

Exported all the keys and then the secret keys (--export-secret-keys isn't
listed in -h, btw).

Elected not to use a passphrase, since it would be in a web-site script,
which is publicly visible anyway. Yeah, I *could* create a third script in
a secure area to call that would spit the password out to the encryptor...
if I knew exactly how to do that...

Sneaker-netting the public and secret keyrings to my client's Windoze box
and importing there, with insecure memory, and no real random device.

Will be encrypting the data with insecure memory from a PHP web-script.
Or not upgrading as often as I should.
The ISP gave me compiler access, but I still can't chown the binary to
root, nor seriously expect them to do so every few days... Would there be
a way that your average paranoid ISP would be able to let me chown a
specific file to root?...

E-mailing the encrypted order to the client.

The decrypting is all being done by a user on a windows box, who
understands infinitely less of this stuff than I do, if you can believe
that. :-)


I'd appreciate any feedback on these points.


I suspect I'm still not using the whole secret/public keys properly... I
generated all the public and secret keyrings on the Unix box (via telnet)
and then exported them to the Windoze box... In retrospect, perhaps it
would have been better to generate the client's secret keyring on the
Windoze box and export only the public ones from each to the other. But
I'd be trading the telnet/RAM-sniffing risk for the crappy RNG (hey, that
stands for Random Number Generator, doesn't it?!) on Windoze. My hatred of
Windoze made me assume that it was still better to do it all on Unix.

Oh yeah, all those +s and -s that went by during the random generation...
Can any meaning be assigned to their occurrences? I mean, can y'all watch
them go by and say, "Uh oh, better do it again, not random enough."?

I can redo the key generation, since I'm not trying to automate that.


Why do I get the feeling that there's a lot of folks out there that are
just taking credit card orders on a "secure" server, and then transmitting
them in clear-text via e-mail to their storefront POS credit-card
machines?... There *have* to be people other than me who are
unable/unwilling to pay CyberCash rates...

THANKS AGAIN!!!

-- "TANSTAAFL" Rich lynch@cognitivearts.com webmaster@ and www. all of:
R&B/jazz/blues/rock - jademaze.com music industry org - chatmusic.com
acoustic/funk/world-beat - astrakelly.com sculptures - olivierledoux.com
my own company - l-i-e.com uncommon ground - uncommonground.com
Re: Risk Assessment
On Thu, 22 Oct 1998, Richard Lynch wrote:
> Did everything via telnet, since my ISP is 1000 miles away.

Any network in between can capture your packets. This means that if there
is a hostile (or compromised) network in between, your data could have
been captured.

This wouldn't necessarily need to be part of an attack against you
specifically; your key could be captured as a "lucky bonus" during a
compromise of an intermediary system.

Solution: use ssh for remote terminal access, instead of telnet. For
windows, you can use any number of products, such as stock ssh right up to
SecureCRT and others. This still implies a level of trust in the remote
system, but it ensures that an intermediary cannot collect data from your
connection.
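
For a plain remote shell the switch is as simple as something along these
lines (hostname and username here are invented for illustration):

    # ssh instead of telnet for the interactive session
    ssh rich@shell.example-isp.net

    # and scp instead of ftp for moving keyring files around
    scp client-pub.asc rich@shell.example-isp.net:~/.gnupg/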

> Generated keys using insecure memory, since I can't chown the binary to root.

"insecure memory"? It's only insecure if GPG doesn't clean memory properly
after generating a key (as other processes could then reuse the free()'d
memory later; protected memory spaces are exactly that: protected. Only
the process (or root) should be able to get at that memory space.

Unless I'm -really- missing something here? BUGTRAQ regulars, be kind. ;-)

If there really -is- an issue here, then solutions:

a) define a non-shared environment that you control for the purposes of
this project (ie a machine -you- control), or

b) social solutions. Find a way to get the access you need to make
changes, or find technical solutions to the "inconvenience issue" of
the administrator setting up your software as needed, and arrange for
social security of your data (through contracts).

> Exported all the keys and then the secret keys (--export-secret-keys isn't
> listed in -h, btw).

Reliance on security of the underlying operating system, and operator
cognizance of security issues. As long as you're aware that whoever owns
the system (hopefully you) can read the data, and can place explicit trust
in that, in addition to being aware of file permission issues, you're
fine.

But you've got an implicit level of trust with the administration of the
system you're doing this on. Don't put anything on the system you wouldn't
hand them anyway, unless it's password-protected before it gets on the
system.
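
At minimum, keep the keyring files readable by you alone; a quick sketch,
assuming the default ~/.gnupg layout:

    # shut out everyone but yourself (root can still read them, of course)
    chmod 700 ~/.gnupg
    chmod 600 ~/.gnupg/pubring.gpg ~/.gnupg/secring.gpg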

> Elected not to use a passphrase, since it would be in a web-site script,
> which is publicly visible anyway.

Not much you -can- do here. This is a case where, if you want automation,
you must rely on the security of underlying operating system (and the
trust issue between you and the administrator) to save you.

Assume, however, that if you have a root compromise, -all- of your keys
are compromised. Your "break-in recovery" procedure should cover a means
by which you'll regenerate those keys and get them to the customer.
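
The recovery itself is just a re-run of the setup; the key name and file
name below are only examples:

    # generate a fresh key pair, export the new public key, and get it to
    # the other side out-of-band (sneaker-net, phone-verified mail, etc.)
    gpg --gen-key
    gpg --armor --export "Order Key" > new-pubkey.asc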

> Yeah, I *could* create a third script in a secure area to call that
> would spit the password out to the encryptor... if I knew exactly how
> to do that...

Again, you're still relying on the security of the system anyway, so I'd
think you're better off with the original approach.

> Sneaker-netting the public and secret keyrings to my client's Windoze box
> and importing there, with insecure memory, and no real random device.

Lack of /dev/random shouldn't matter here, since they've already been
generated. Or am I missing something again?

However, you're now putting your security in the hands of Windows, and are
assuming you won't be compromised by the latest IE or Netscape exploit.
I'm assuming that this is a user's workstation?

Again, a trust issue with the underlying system operator, and their
intentions/competence.

Solutions: Define a trust level in the operator. Training, agreements, and
the customer's own self-interest can help here.

> Will be encrypting the data with insecure memory from a PHP web-script.

Again, insecure memory seems like an oxymoron to me, if post-processing
cleanup is done properly and you place trust in the system you're using.

However, if memory serves, PHP operates as a part of the web server
(running code as the web server user, not via something like suexec). If
this is the case, a compromise of the web server itself is a compromise
for you.

Solution: Don't use PHP (or SSI, or anything of that nature) for
security-critical code. Use an external script (preferably something
compiled so that you can ensure proper memory cleanup before process
termination) which runs as -your- userid, so that you minimize the
potential for compromise (you still have a weak link in suexec, but that's
hand-auditable code; I know, I've done it ;-).
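
A very rough sketch of such a script (untested, all names invented; a
compiled program would be better still, for the memory-cleanup reasons
above):

    #!/bin/sh
    # runs under suexec as your own uid; the POSTed order arrives on
    # stdin, is encrypted to the client's public key, and is mailed off
    # without ever touching the disk in the clear
    echo "Content-type: text/plain"
    echo ""
    # --always-trust because the key won't be signed/trusted in a batch setup
    gpg --batch --armor --always-trust --encrypt \
        --recipient "Client Name" \
        | mail -s "New order" client@example.com
    echo "Order received."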

> Or not upgrading as often as I should.

This is a problem; it sounds like you have a trust issue with the
administrator. There's nothing here that can help you; your best bet is an
NDA with the ISP, or some sort of legal restraint, because there isn't
much of a technical means of protecting yourself here.

> The ISP gave me compiler access, but I still can't chown the binary to
> root, nor seriously expect them to do so every few days... Would there be
> a way that your average paranoid ISP would be able to let me chown a
> specific file to root?...

Very unlikely, but a legally binding contract could ease their minds on
this, as well as yours. I am not an attorney, and this is not legal
advice; you should consult with your attorney on issues like this.

What is the root of the technical requirement here for root ownership (and
presumably suid permissions)? I must -really- be missing something here...

> E-mailing the encrypted order to the client.

If encrypted before transmission (assuming no breakdown earlier in the
chain), you're fine here. If there was any kind of breakdown earlier in
the chain (ie. the keys are compromised at any point), this serves as a
means of capturing the data (same problem as with telnet).

Solution: use ssh tunnels for transmission of the message off-site.
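
Something along these lines (hostnames invented) forwards a local port to
the mail host's SMTP port, so the message never crosses the wire outside
the encrypted channel:

    # point the mail client at localhost:2525 while this is running
    ssh -L 2525:mail.example-isp.net:25 rich@shell.example-isp.net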

> The decrypting is all being done by a user on a windows box, who
> understands infinitely less of this stuff than I do, if you can believe
> that. :-)

This isn't necessarily a problem, except for the potential of a compromise
of their own system.

Solution: proper care and feeding of your users. ;-) This means training,
as well as configuration and upgrade assistance.

> I suspect I'm still not using the whole secret/public keys properly... I
> generated all the public and secret keyrings on the Unix box (via telnet)
> and then exported them to the Windoze box... In retrospect, perhaps it
> would have been better to generate the client's secret keyring on the
> Windoze box and export only the public ones from each to the other. But
> I'd be trading the telnet/RAM-sniffing risk for the crappy RNG (hey, that
> stands for Random Number Generator, doesn't it?!) on Windoze. My hatred of
> Windoze made me assume that it was still better to do it all on Unix.

In this case, it actually was a bad assumption. You can get more
randomness using /dev/random under Linux, but that is heavily offset by
the sheer number of potential vulnerabilities in generating the keys on a
shared, untrusted, remote system.

> I can redo the key generation, since I'm not trying to automate that.

Might not be a bad idea. ;-)

> Why do I get the feeling that there's a lot of folks out there that are
> just taking credit card orders on a "secure" server, and then transmitting
> them in clear-text via e-mail to their storefront POS credit-card
> machines?... There *have* to be people other than me who are
> unable/unwilling to pay CyberCash rates...

Not us; we did the CyberCash thing, and provide it to our customers. It's
worked well, but you've still got a number of issues to work around, as
well as ensure that your own security is up-to-snuff.

Doing this via email would work well, but this kind of thing -only- works
when there is a specific level of trust in the systems you're using, and
their operators (both in terms of intent and skill), as well as a specific
level of trust in the security of those systems.

Without that, you've got no security. Technical solutions only go so far.

--
Edward S. Marshall <emarshal@logic.net> http://www.logic.net/~emarshal/ -o)
------------------------------------------------------ ----- ---- --- -- - /\\
Who'd have thought that we'd be freed from the Gates of hell by a penguin? _\_v

Linux labyrinth 2.1.125 #9 SMP Sat Oct 17 14:46:24 CDT 1998 i586 unknown
9:45pm up 5 days, 6:48, 5 users, load average: 0.01, 0.02, 0.00
Re: Risk Assessment
At 10:27 PM 10/22/98, Edward S. Marshall wrote:
>On Thu, 22 Oct 1998, Richard Lynch wrote:
>> Did everything via telnet, since my ISP is 1000 miles away.

>> Generated keys using insecure memory, since I can't chown the binary to root.
>
>"insecure memory"? It's only insecure if GPG doesn't clean memory properly

PGP seemed very adamant in its warnings that I was using "insecure memory".
I was earlier told that to fix it, I had to chmod the binary to 4xxx
(?)... My reading of the man page tells me that 4xxx means to suid before
running... My interpretation of that was that it would suExec (suid?) to
the owner, which is just me, who was running it in the first place...

>a) define a non-shared environment that you control for the purposes of
> this project (ie a machine -you- control), or

I do control the Windoze box... well, as much as one can be said to be in
control of a Windoze box. :-)

>But you've got an implicit level of trust with the administration of the
>system you're doing this on. Don't put anything on the system you wouldn't
>hand them anyway, unless it's password-protected before it gets on the
>system.

I trust the ISP. They are extremely competent, and seem to be really
on-the-ball security wise. They are even users of CyberCash. It's just
that my client's profit margin... well, I've already moaned enough about
that. :-)

>> Elected not to use a passphrase, since it would be in a web-site script,
>> which is publicly visible anyway.
>
>Not much you -can- do here. This is a case where, if you want automation,
>you must rely on the security of underlying operating system (and the
>trust issue between you and the administrator) to save you.
>
>Assume, however, that if you have a root compromise, -all- of your keys
>are compromised. Your "break-in recovery" procedure should cover a means
>by which you'll regenerate those keys and get them to the customer.

That's pretty easy... I can make new keys whenever I'm feeling insecure,
and getting them to the clients is not too terribly difficult.

>> Yeah, I *could* create a third script in a secure area to call that
>> would spit the password out to the encryptor... if I knew exactly how
>> to do that...
>
>Again, you're still relying on the security of the system anyway, so I'd
>think you're better off with the original approach.

The system itself is pretty secure... It's my attempting to use it on a $0
budget and with minimal understanding that's causing the holes.
Which is why I'm bugging you folks for help. :-^

>> Sneaker-netting the public and secret keyrings to my client's Windoze box
>> and importing there, with insecure memory, and no real random device.
>
>Lack of /dev/random shouldn't matter here, since they've already been
>generated. Or am I missing something again?

There was no real random device on the key I generated on the Windoze box
as part of the installation... A key I'm not actually using, as I
currently understand it.

>However, you're now putting your security in the hands of Windows, and are
>assuming you won't be compromised by the latest IE or Netscape exploit.
>I'm assuming that this is a user's workstation?

I'm not seeing IE or Netscape really involved... The script e-mails them
the encrypted data, and they run the secret decoder ring locally on their
Windoze box. I can even train them to disconnect from the network before
running the decoder, if it matters...

>> Will be encrypting the data with insecure memory from a PHP web-script.
>
>However, if memory serves, PHP operates as a part of the web server
>(running code as the web server user, not via something like suexec). If
>this is the case, a compromise of the web server itself is a compromise
>for you.

PHP can be set up as a Module or as a cgi. Currently, it's a cgi, since
I'm the only one using it, and the Micro$loth FrontPage Module all the
other virtual webmasters are using is incompatible.

My ISP set the web up to suExec to my shell login before executing cgis...
or, at least, that's how I understood what he e-mailed me when we set up
the account way back when... The other option, as I recall, was to run as
'nobody' or as 'www' or whatever, and then my scripts/cgi directory would
be... exploitable... by all my fellow clients.

>Solution: Don't use PHP (or SSI, or anything of that nature) for
>security-critical code. Use an external script (preferably something
>compiled so that you can ensure proper memory cleanup before process
>termination) which runs as -your- userid, so that you minimize the
>potential for compromise (you still have a weak link in suexec, but that's
>hand-auditable code; I know, I've done it ;-).

If it helps, the PHP scripts and such will be running from a secure
server... or at least that's how I think it's going to end up. My
understanding of secure servers is about on par with my understanding of
gpg. :-(

>> Or not upgrading as often as I should.
>
>This is a problem; it sounds like you have a trust issue with the
>administrator. There's nothing here that can help you; your best bet is an
>NDA with the ISP, or some sort of legal restraint, because there isn't
>much of a technical means of protecting yourself here.

It's a "reasonable expectation of service level" issue, not a trust issue,
really. I trust the ISP. I even am reasonably certain that they know what
they are doing when it comes to security. [They *seem* to know what they
are doing.]

I just can't expect them to compile gpg as often as it should be compiled
during alpha and beta phases, for what I'm paying them. I already
negotiated my compiler access so I could stay on top of gpg updates, but it
looks like it was a marginal win, since I still need to bug them all too
frequently to chown/chmod the binary to use "secure memory" =?= "chown to
root and chmod to 4xxx". If I'm even understanding this secure memory
thing right...

>What is the root of the technical requirement here for root ownership (and
>presumably suid permissions)? I must -really- be missing something here...

They unbent enough to give me compiler access. *root* access as a client
of an ISP virtual host would be something I simply wouldn't even ask for.
They're already trusting me way more than expected by turning me loose with
a c compiler on a virtual host box.

>Solution: use ssh tunnels for transmission of the message off-site.

I'll have to research ssh tunnels... unless somebody can confirm or deny
their [potential] existence over AOL/modem to the client's Windoze 95
box...

>> The decrypting is all being done by a user on a windows box, who
>> understands infinitely less of this stuff than I do, if you can believe
>> that. :-)
>
>This isn't necessarily a problem, except for the potential of a compromise
>of their own system.
>
>Solution: proper care and feeding of your users. ;-) This means training,
>as well as configuration and upgrade assistance.

Well, it will be the blind leading the blind, but I reckon I'll do what I
can. :-)

>> Why do I get the feeling that there's a lot of folks out there that are
>> just taking credit card orders on a "secure" server, and then transmitting
>> them in clear-text via e-mail to their storefront POS credit-card
>> machines?... There *have* to be people other than me who are
>> unable/unwilling to pay CyberCash rates...
>
>Not us; we did the CyberCash thing, and provide it to our customers. It's
>worked well, but you've still got a number of issues to work around, as
>well as ensure that your own security is up-to-snuff.
>
>Doing this via email would work well, but this kind of thing -only- works
>when there is a specific level of trust in the systems you're using, and
>their operators (both in terms of intent and skill), as well as a specific
>level of trust in the security of those systems.

The systems as they are seem reasonably secure, so far as I can tell from
what the ISP has told me. And I trust them. It's the things I'm trying to
do that worry me. :-)

I really appreciate all your help, everybody.

THANK YOU!!!

-- "TANSTAAFL" Rich lynch@cognitivearts.com webmaster@ and www. all of:
R&B/jazz/blues/rock - jademaze.com music industry org - chatmusic.com
acoustic/funk/world-beat - astrakelly.com sculptures - olivierledoux.com
my own company - l-i-e.com uncommon ground - uncommonground.com
Re: Risk Assessment
On Thu, Oct 22, 1998 at 09:06:16PM -0500, Richard Lynch wrote:
> Did everything via telnet, since my ISP is 1000 miles away.

telnet is snoopable, you should use ssh... :)

> Generated keys using insecure memory, since I can't chown the binary to root.

Probably not bad.

> Exported all the keys and then the secret keys (--export-secret-keys isn't
> listed in -h, btw).

Okay, then delete them. You don't need them. :)
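
Something like this should clear them off the server keyring; check the
exact option name against your build, since -h doesn't list everything:

    # remove the secret key from the server's keyring; the public key stays
    gpg --delete-secret-keys "Client Name"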

> Elected not to use a passphrase, since it would be in a web-site script,
> which is publicly visible anyway. Yeah, I *could* create a third script in
> a secure area to call that would spit the password out to the encryptor...
> if I knew exactly how to do that...

Ah, but then you're missing out on the cool part of Public Key
encryption. You don't NEED a key on the remote Unix box... well, you
do, but not the secret key.

Think of it as handing an unlocked padlock to someone and asking them to
lock the gate with it. They don't need the combination to use it; they
can just clamp it on and it's done.

In the same way, with PK, you can live happily with no secret keys at
all, yet still mail encrypted data to people. They have what's needed to
decrypt; you just need the open padlock (ie, their public key).
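
In gpg terms, the whole exchange boils down to something like this (names
and filenames are invented):

    # on the client's own machine: create the key pair; the secret key
    # never leaves this box
    gpg --gen-key
    gpg --armor --export "Client Name" > client-pub.asc

    # on the web server: import only the public key, then encrypt each
    # order to it -- no passphrase and no secret key needed there
    gpg --import client-pub.asc
    gpg --armor --encrypt --recipient "Client Name" order.txt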

> Will be encrypting the data with insecure memory from a PHP web-script.
> Or not upgrading as often as I should.

If you're running Stronghold or one of the other 'secure' variants of
Apache, your only real danger is if root is compromised on that machine,
and even then it's not going to yield more than a credit card or two
that may show up in the core dump.

This is on a 'secure' server, right?

> The ISP gave me compiler access, but I still can't chown the binary to
> root, nor seriously expect them to do so every few days... Would there be
> a way that your average paranoid ISP would be able to let me chown a
> specific file to root?...

Nope. :)

> E-mailing the encrypted order to the client.

Good. That is probably the weakest link in the chain, since it is a
nice repository of credit cards waiting to be collected.

> I suspect I'm still not using the whole secret/public keys properly... I
> generated all the public and secret keyrings on the Unix box (via telnet)
> and then exported them to the Windoze box... In retrospect, perhaps it
> would have been better to generate the client's secret keyring on the
> Windoze box and export only the public ones from each to the other. But
> I'd be trading the telnet/RAM-sniffing risk for the crappy RNG (hey, that
> stands for Random Number Generator, doesn't it?!) on Windoze. My hatred of
> Windoze made me assume that it was still better to do it all on Unix.

Actually, PGP under Windows should have a fine random number generator.
You just need noise from the real world like keypress timing, and other
events like moving the mouse. There's no reason these can't be done
on Windows.

Of course, Windows sucks, but not for that reason. :)

> Oh yeah, all those +s and -s that went by during the random generation...
> Can any meaning be assigned to their occurrences? I mean, can y'all watch
> them go by and say, "Uh oh, better do it again, not random enough."?

Nope, it's measuring the primeness of things, not the randomness. It's
more of a progress indicator: if the output stops, /dev/random is
'empty' (it hasn't seen enough real-world randomness to return anything,
so moving the mouse, causing net activity, and other random sources need
to be used to give it a source of data).
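
On Linux you can peek at how much entropy the kernel thinks it has, if
your kernel exposes the proc interface for it:

    # bits of entropy currently in the pool (kernel-dependent path)
    cat /proc/sys/kernel/random/entropy_avail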

> I can redo the key generation, since I'm not trying to automate that.

I'd do them as you said above: on the Windows machine (preferably on the
customers, so you leave no traces on your own) and then export just the
public key to the remote system.

> Why do I get the feeling that there's a lot of folks out there that are
> just taking credit card orders on a "secure" server, and then transmitting
> them in clear-text via e-mail to their storefront POS credit-card
> machines?... There *have* to be people other than me who are
> unable/unwilling to pay CyberCash rates...

There no doubt are. Most aren't even that automated: they just email it
in clear text for a human to pick up once a day. (The commercial
'server' license for PGP is several thousand dollars last I checked. I
doubt most people are paying that, so piles of credit cards end up
sitting in plain text in mailboxes.)

It's dangerous as can be to do that, but I have no doubt it's being done
all the time.

--
Brian Moore | "The Zen nature of a spammer resembles
Sysadmin, C/Perl Hacker | a cockroach, except that the cockroach
Usenet Vandal | is higher up on the evolutionary chain."
Netscum, Bane of Elves. Peter Olson, Delphi Postmaster
Re: Risk Assessment
On Thu, Oct 22, 1998 at 10:27:51PM -0500, Edward S. Marshall wrote:
> On Thu, 22 Oct 1998, Richard Lynch wrote:
> Solution: use ssh for remote terminal access, instead of telnet. For
> windows, you can use any number of products, such as stock ssh right up to
> SecureCRT and others. This still implies a level of trust in the remote
> system, but it ensures that an intermediary cannot collect data from your
> connection.

This is true, but not needed in this case: generate the keys on the
windows machine. The remote system only needs the public key, not the
secret key.

> "insecure memory"? It's only insecure if GPG doesn't clean memory properly
> after generating a key (as other processes could then reuse the free()'d
> memory later; protected memory spaces are exactly that: protected. Only
> the process (or root) should be able to get at that memory space.

Nope: any other process running under the same uid can get to it, too.

> Assume, however, that if you have a root compromise, -all- of your keys
> are compromised. Your "break-in recovery" procedure should cover a means
> by which you'll regenerate those keys and get them to the customer.

But that's PK: let them compromise the public keys. Heck, publish them
on the key servers if you want. They won't do anyone any good.

Let the secret keys live on the remote user's machine and life is
peachy: the only gaps in security are within the web server itself, PHP,
and GPG.

The only key lost is the public key, and that'll only allow people to
forge mail pretending they filled out a web form instead of email. BFD.

> What is the root of the technical requirement here for root ownership (and
> presumably suid permissions)? I must -really- be missing something here...

gdb gpg <pid>

--
Brian Moore | "The Zen nature of a spammer resembles
Sysadmin, C/Perl Hacker | a cockroach, except that the cockroach
Usenet Vandal | is higher up on the evolutionary chain."
Netscum, Bane of Elves. Peter Olson, Delphi Postmaster
Re: Risk Assessment
"Edward S. Marshall" <emarshal@logic.net> writes:

> "insecure memory"? It's only insecure if GPG doesn't clean memory properly
> after generating a key (as other processes could then reuse the free()'d
> memory later; protected memory spaces are exactly that: protected. Only
> the process (or root) should be able to get at that memory space.

With "insecure memory" I mean memory which may get swapped out to
harddisk and if you have physical access to the machine and you can
inspect the harddisk (read /dev/sd* ) - than there is a possibility,
that you will find a secret key or a passphrase on the disk (it is not
very complicated to localte a secret key on a huge harddisk: There
are some properties which makes it "easy": Look for a area on the
harddisk which is realy random and than do furthers tests.

Anyway, all memory which held some secret information is overwritten
- so the risk is very minimal. Non-swappable key storage is a request
by many cryptographers and that is the reason why it is there.
There is an option "--no-secmem-warning".
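
For completeness, the usual setup on a box where the admin will cooperate
(the path below is only an example): make the binary setuid root so it
can lock its pool of secure memory; gpg drops the root privilege again
right after allocating it. Otherwise you can simply silence the message:

    # run once as root; after this gpg can mlock() its secure memory pool
    chown root /usr/local/bin/gpg
    chmod u+s /usr/local/bin/gpg

    # or, if you accept the swapping risk, suppress the warning
    gpg --no-secmem-warning --armor --encrypt -r "Client Name" order.txt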

> > Exported all the keys and then the secret keys (--export-secret-keys isn't
> > listed in -h, btw).

On purpose - you should never need it. Yesterday I imported ~75000
RSA keys and found 30 secret keys among them - I think there is a pgpcrack
which does a dictionary attack on secret keys :-)

> Assume, however, that if you have a root compromise, -all- of your keys
> are compromised. Your "break-in recovery" procedure should cover a means

There is nothing you can do against the almighty Mr. Root: she can simply
install a trojan horse for everything, or, if you are verifying the
signatures of every binary and lib, why not install a "special" kernel
which makes backup copies of all data entered while "echo" is disabled
on a tty... and so on.

You need a hardware device for really good security (with an integrated
keyboard to enter the passphrase).


Werner
Re: Risk Assessment
On Thu, 22 Oct 1998, brian moore wrote:
> This is true, but not needed in this case: generate the keys on the
> windows machine. The remote system only needs the public key, not the
> secret key.

Good point. Going back and reading his original post, I really must have
been asleep at the wheel when I wrote that.

> > What is the root of the technical requirement here for root ownership (and
> > presumably suid permissions)? I must -really- be missing something here...
>
> gdb gpg <pid>

True, but to exploit this, you'd need to have already compromised the
system to the point where you could execute code as the user. That's a
pretty significant wedge to get ahold of.

--
Edward S. Marshall <emarshal@logic.net> http://www.logic.net/~emarshal/ -o)
------------------------------------------------------ ----- ---- --- -- - /\\
Who'd have thought that we'd be freed from the Gates of hell by a penguin? _\_v