Subject: Re: Definitions

On Sun, Aug 13, 2000 at 11:59:17PM -0400, Horst von Brand wrote:
> Great! So the Evil Bastard can only eat up 60Mb of RAM (/tmp, /var, /home
> have pieces that are writable to a user). He just needs a buddy to kill
> this machine (128Mb RAM) very much dead then. Now I can sleep well. Or
> perhaps have half a dozen users doing nothing in particular, and the
> machine is OOM.
>
> 20Mb RAM for a filesystem per user is _ridiculous_, AFAIAC. Most users
> around here have much less for a quota!

First, the limit is per file system, not per file system per user.

Better forbid the users mmap and TCP/unix sockets then, because they can
easily pin more memory in their socket buffers or page tables (and the same
goes for a lot of other subsystems). As long as there is no bean counter,
every user can relatively easily tie up arbitrary amounts of memory in Linux.
Alternatively you have to set their process/file descriptor limits so low
that the machine becomes unusable for the user:

e.g. with a 64K socket buffer per fd, if you want to limit a user to 5MB max:
5MB / 64K = 80 sockets. So do you give them 10 processes with 8 file
descriptors each, or 8 processes with 10 file descriptors? How many of the
programs your users use will still run with such low limits?

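And for comparison, imposing the 10-process/8-fd split from the arithmetic
above would amount to something like the following in a login wrapper (again
only a sketch with hypothetical values; in practice you would set this at
login time):

/* limits_wrapper.c - illustrative only */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
        struct rlimit fds  = { 8, 8 };    /* RLIMIT_NOFILE: 8 open fds  */
        struct rlimit proc = { 10, 10 };  /* RLIMIT_NPROC: 10 processes */

        if (setrlimit(RLIMIT_NOFILE, &fds) < 0 ||
            setrlimit(RLIMIT_NPROC, &proc) < 0) {
                perror("setrlimit");
                return 1;
        }

        /* everything the user runs from here inherits the limits */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
}

With limits that low, plenty of ordinary programs will fail before they even
finish starting up.
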
The only more or less reliable way currently to prevent local DoS by users
is to separate them into UML (user-mode Linux) virtual machines.

-Andi

