The notion of "risk" in the Nessus report has never been formalized.
One can find "Critical", "High", "Serious", "Medium", "Low", "None",
and miscellaneous comments. AFAIK, nobody really knows if "serious" is
worse than "critical" or "high".
1. If we keep the current system, we should at least define a precise
scale, or even replace the labels with a numeric grade.
But some people will give more importance to the availability of their
service (e.g. an ISP with peering or QoS agreements), while others will
consider the confidentiality of their data as vital (e.g. the army).
2. Maybe we could add a "security objective", e.g. taken from "DICP"
= availability ("disponibilité"), integrity, confidentiality, proof /
accountability (= authentication & logs).
[I'm not sure that "DICP" is international. French bankers love it.]
So we might say, e.g., "Risk: D5 P4" for a denial of service where the
source address of the packets can be spoofed, or "Risk: 5" for a buffer
overflow that allows full root control of the machine (no need to add
an objective, since everything is affected).
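For illustration, a rough sketch (Python; everything below is a made-up
example of the notation proposed above, not an existing Nessus
interface) of how such per-objective risk strings could be represented
and parsed:

  # Sketch only: parse "Risk: D5 P4" style strings into per-objective
  # marks (0-5).  The format is just the proposal above.
  import re

  OBJECTIVES = "DICP"   # D(isponibilité/availability), Integrity,
                        # Confidentiality, Proof

  def parse_risk(risk):
      """'Risk: D5 P4' -> {'D': 5, 'P': 4};
      a bare 'Risk: 5' affects every objective."""
      body = risk.split(":", 1)[1].strip()
      if body.isdigit():
          return {o: int(body) for o in OBJECTIVES}
      return {letter: int(mark)
              for letter, mark in re.findall(r"([DICP])([0-5])", body)}

  # parse_risk("Risk: D5 P4") -> {'D': 5, 'P': 4}
  # parse_risk("Risk: 5")     -> {'D': 5, 'I': 5, 'C': 5, 'P': 5}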
However, I don't think that's great, because
i. "D1 I0 C5 P4" is not simple, most people will prefer a simple
grade.
ii. Only the consequence of the attack appears in the grade, not the
probability / ease.
iii. Environments are differents and what looks critical here will be
only medium there.
I'm afraid that we'll never be able to solve this point, though.
3. So I thought that the "risk" could be computed from several factors
and user-defined tables / functions.
First, the consequences, maybe expressed against the security objectives:
- access to "restricted" information (e.g. webroot)
- potential access to a few files
- potential access to sensitive user files
- potential access to any files
etc.
Note that an attack against the integrity of system data may lead to
availability, confidentiality or proof problems; an attack against the
confidentiality of system data may create proof problems...
In the end, most people just want to know whether their machine can be
compromised or not.
An overall "consequence" mark could be computed with just a mean of
the DICP marks.
e.g. somebody give more weight to confidentiality, others to
integrity...
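As a sketch (Python; all marks and weights below are arbitrary
examples, not proposed values), the consequence levels could be mapped
to DICP marks and folded into one weighted mean:

  # Sketch only: example consequence levels with DICP marks (0-5),
  # reduced to a single "consequence" grade by a user-weighted mean.
  CONSEQUENCES = {
      "access to restricted information": {"D": 0, "I": 0, "C": 2, "P": 0},
      "potential access to a few files":  {"D": 0, "I": 1, "C": 3, "P": 0},
      "potential access to any files":    {"D": 2, "I": 4, "C": 5, "P": 3},
      "full root control":                {"D": 5, "I": 5, "C": 5, "P": 5},
  }

  def consequence_mark(marks, weights=None):
      """Weighted mean of the DICP marks; all weights default to 1."""
      weights = weights or {o: 1 for o in marks}
      return (sum(weights[o] * marks[o] for o in marks)
              / sum(weights.values()))

  # An ISP might use weights {"D": 3, "I": 1, "C": 1, "P": 1},
  # the army rather {"D": 1, "I": 1, "C": 3, "P": 1}.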
Second, ease of attack:
- immediate (e.g. just use a browser)
- working exploit "in the wild"
- buggy exploit released, has to be fixed
- technical details known, no exploit released
- technical details unknown, but confirmed by "serious people".
- theoretical
A table would give the final mark. The idea is that "risk" is not
"linear": a theoretical attack that has the worst consequences is
usually considered much more dangerous than a very easy attack that
has only small consequences.
To avoid brain-damaged questions, maybe this table should be
hard-coded in Nessus.
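To make the "non-linear" idea concrete, a small sketch (Python; the
figures are invented only to show the shape such a table could have):

  # Sketch only: hard-coded table mapping (ease of attack, rounded
  # consequence mark 0-5) to a final risk grade.
  EASE = ["theoretical", "details unknown", "details known",
          "buggy exploit", "working exploit", "immediate"]

  RISK_TABLE = [           # rows = EASE order, columns = consequence 0..5
      [0, 1, 2, 3, 4, 4],  # theoretical: worst consequences still rate high
      [0, 1, 2, 3, 4, 5],
      [0, 1, 3, 4, 5, 5],
      [0, 2, 3, 4, 5, 5],
      [1, 2, 3, 5, 5, 5],
      [1, 2, 4, 5, 5, 5],  # immediate, but nearly harmless stays low
  ]

  def final_risk(ease, consequence):
      return RISK_TABLE[EASE.index(ease)][round(consequence)]

  # final_risk("theoretical", 5) -> 4  (still serious)
  # final_risk("immediate", 1)   -> 2  (very easy but small consequences)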
Is this a brain-damaged idea?
--
mailto:arboi@bigfoot.com
GPG Public keys: http://michel.arboi.free.fr/pubkey.txt
http://michel.arboi.free.fr/ http://arboi.da.ru/
FAQNOPI of fr.comp.securite: http://faqnopi.da.ru/