Mailing List Archive

Sys Admin question
This probably isn't the best place to ask, but it's Linux, and I trust the
natural order of internet flamethrowing to cordially correct me.
I have a Linux server with evil users looking to destroy it (ISP).
Without even trying, I find that I can consume all of the VM on the system.
The limits are set to something rather reasonable (10M of addressable
memory per process, 12 processes max per user). Stricter restrictions
make trivial tasks impossible (man, for one). Under this scheme, a single
user can still consume 120 megs of virtual memory.
This isn't what I had in mind. It'd be much easier to say "User A can only
use X megs of memory at most!" rather than to specify how much memory each
process can use and how many processes the user can spawn.
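For reference, the per-process scheme above is roughly what setrlimit() can
express; a rough sketch (assuming the kernel honours RLIMIT_AS, and using the
10M/12 numbers from above) of what effectively gets run before each user's
shell starts:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Sketch: impose the per-process scheme described above (10 MB of
 * address space per process, 12 processes per user) and then hand
 * control to the user's shell.  RLIMIT_AS caps a process's total
 * address space; RLIMIT_NPROC counts every process owned by the
 * real UID, so it acts as a per-user cap on process count. */
int main(void)
{
        struct rlimit mem   = { 10 * 1024 * 1024, 10 * 1024 * 1024 };
        struct rlimit procs = { 12, 12 };

        if (setrlimit(RLIMIT_AS, &mem) < 0)
                perror("setrlimit(RLIMIT_AS)");
        if (setrlimit(RLIMIT_NPROC, &procs) < 0)
                perror("setrlimit(RLIMIT_NPROC)");

        /* Limits are inherited across fork()/exec(), so everything
         * the user runs from here on is constrained. */
        execl("/bin/sh", "-sh", (char *) NULL);
        perror("execl");
        return 1;
}

The shell's ulimit builtin and the usual login-time limit mechanisms come
down to calls like these.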
Perusing the kernel source shows that such a framework is in place, but no
real meat is attached to it, so that rules the option out (unless I'm
misreading).
What can I do in the meantime? There's only so much swap space that I can
add, and I'm still vulnerable if enough users decide that they want to run
resource intensive tasks.
It seems like the system can't say that enough is enough for a greedy user
and start making mmap() fail for them; instead it chooses to axe some
processes (such as init(!)) to satisfy memory demand.
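A single process that hits its own limit does get sane behaviour: a toy loop
like the one below (assuming an address-space rlimit is actually in force)
eventually gets MAP_FAILED/ENOMEM back from mmap() instead of being killed.
It's the per-user and whole-system case where the axe comes out.

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/mman.h>

/* Toy demonstration: grab anonymous memory until the kernel refuses.
 * With a per-process address-space limit in force, mmap() eventually
 * returns MAP_FAILED with errno == ENOMEM instead of the process
 * being killed. */
int main(void)
{
        size_t chunk = 1024 * 1024;     /* 1 MB at a time */
        unsigned long mapped = 0;

        for (;;) {
                void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        printf("mmap() failed after %lu MB: %s\n",
                               mapped, strerror(errno));
                        return 0;
                }
                mapped++;
        }
}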
Am I missing something bluntly obvious?
Thanks
-Michael Bacarella
Re: Sys Admin question
On Tue, Oct 26, 1999 at 09:37:00AM -0400, Michael Bacarella wrote:
> This probably isn't the best place to ask, but it's Linux, and I trust the
> natural order of internet flamethrowing to cordially correct me.
>
> I have a Linux server with evil users looking to destroy it (ISP).
> Without even trying, I find that I can consume all of the VM on the system.
>
> The limits are set to something rather reasonable (10M of addressable
> memory per process, 12 processes max per user). Stricter restrictions
> make trivial tasks impossible (man, for one). Under this scheme, a single
> user can still consume 120 megs of virtual memory.
Yes, memory accounting is per process. We don't have
(nor does BSD have) a per-*user* memory limit, although
that is something people like you would love to see.
> Perusing the kernel source shows that such a framework is in place, but no
> real meat is attached to it, so that rules the option out (unless I'm
> misreading).
>
> What can I do in the meantime? There's only so much swap space that I can
> add, and I'm still vulnerable if enough users decide that they want to run
> resource intensive tasks.
With 2.2 kernels (and fresh utilities) you can create 2 GB swap
partitions, and (if I read mm/swapfile.c correctly) there is no
hard upper limit on the number of swap files/partitions allowed
in the system. (That is, after some fast swap, throw in a 20 GB
IDE disk sliced into 10 partitions of 2 GB each; I have heard
that such monsters are around USD 400 apiece.)
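If you go that route: swapon(2) takes a priority, and swap areas given the
same priority are used round-robin. A rough sketch, with made-up device
names, assuming each partition has already been prepared with mkswap:

#include <stdio.h>
#include <sys/swap.h>

/* Rough sketch: enable a handful of swap partitions at the same
 * priority so the kernel uses them round-robin.  The device names
 * here are made up; each one must already have been set up with
 * mkswap, and this needs to run as root. */
int main(void)
{
        static const char *parts[] = { "/dev/hdc1", "/dev/hdc2", "/dev/hdc3" };
        int prio = 1;           /* equal priority for every area */
        unsigned int i;

        for (i = 0; i < sizeof(parts) / sizeof(parts[0]); i++) {
                int flags = SWAP_FLAG_PREFER |
                            ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

                if (swapon(parts[i], flags) < 0)
                        perror(parts[i]);
        }
        return 0;
}

In practice that is just swapon(8) plus /etc/fstab entries; the system call
is what it all boils down to.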
> It seems like the system can't say that enough is enough for a greedy user
> and start making mmap() fail for them; instead it chooses to axe some
> processes (such as init(!)) to satisfy memory demand.
>
> Am I missing something bluntly obvious?
No. (Aside from not mentioning which kernel you are now running.)
> Thanks
> -Michael Bacarella
/Matti Aarnio <matti.aarnio@sonera.fi>
Re: Sys Admin question
> It seems like the system can't say that enough is enough for a greedy user
> and start making mmap() fail for them; instead it chooses to axe some
> processes (such as init(!)) to satisfy memory demand.
You can always start axing users... although the law kinda frowns upon this
method. Obviously I mean some kind of lesser threat: "Don't screw with the
system or you'll find yourself not using the system anymore," or something
to that effect.
-Cygnus
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
admin@intergrafix.net Intergrafix Internet Services
"Dream as if you'll live forever, live as if you'll die today"
http://cygnus.ncohafmuta.com http://www.intergrafix.net
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Re: Sys Admin question
> Obviously I mean some kind of lesser threat: "Don't screw with the
> system or you'll find yourself not using the system anymore," or something
> to that effect.
>
One problem we do have is that running clean out of memory can deadlock a
2.2 box. This is fixed in 2.2.14pre1 by Andrea.

Re: Sys Admin question
On Tue, Oct 26, 1999 at 05:01:10PM +0300, Matti Aarnio wrote:
> Yes, memory accounting is per process. We don't have
> (nor does BSD have) a per-*user* memory limit, although
> that is something people like you would love to see.
Per-user limits sound like they're going to be fun. Either the limits
overcommit resources, which makes them rather useless, or user processes
are going to starve each other. Think of funnies like x11amp crashing
your shell, and more ...
Ralf
Re: Sys Admin question
>On Tue, Oct 26, 1999 at 05:01:10PM +0300, Matti Aarnio wrote:
>
>> Yes, memory accounting is per process. We don't have
>> (nor does BSD have) a per-*user* memory limit, although
>> that is something people like you would love to see.
>
>Per-user limits sound like they're going to be fun. Either the limits
>overcommit resources, which makes them rather useless, or user processes
>are going to starve each other. Think of funnies like x11amp crashing
>your shell, and more ...
Per-user limits do not have to overcommit unless the total resources specified
are greater than what really exists.
Better to have users starve each other at the user level. If a user wants to
overload the system, at least the user is limited to his/her own processes.
And that is still better than crashing the system. (Besides, if you are
charging for usage, then you can make a case for more resources to upper
management, or the user must justify the load to upper management.)
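To put made-up numbers on it: a box with 256 MB of RAM and 512 MB of swap has
768 MB to hand out. With 50 users, a 15 MB per-user cap commits at most
750 MB and never overcommits; a 25 MB cap commits up to 1.25 GB, does
overcommit, and puts you right back in the starvation case.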
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil
Any opinions expressed are solely my own.