 
Subject: Re: Overcommitable memory??
On Mon, 20 Mar 2000, Jesse Pollard wrote:

>On Mon, 20 Mar 2000, David Whysong wrote:
>>On Mon, 20 Mar 2000, James Sutherland wrote:
>>>On Mon, 20 Mar 2000 05:39:48 -0600, you wrote:

>>>>>Once we are OOM, you can't give user-space any choices.
>>>>YOU AREN'T OOM - a specific user is out of resources, not a catastrophic
>>>>failure.
>>A user being out of (memory) resources shares many of the same problems as
>>a system that is OOM. You still need to kill one or more user processes,
>>and you can't give the user the choice, because the user might decide to
>>keep on running.
>The user process (one of them) has been aborted. As long as the user
>remains below the limit, the user can keep on running.

The user task has been aborted. The user has hit his OOM limit, and his
process has been killed. It's the same OOM problem from a user's
perspective.
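
To make the distinction concrete, here is a minimal sketch (not from this
thread) using setrlimit()/RLIMIT_AS as a stand-in for a per-user quota.
RLIMIT_AS is per-process rather than per-user, and the 64 MB cap is an
arbitrary number, but it shows where the failure surfaces when a hard limit
is enforced instead of overcommitting:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	size_t mb = 0;

	rl.rlim_cur = 64 * 1024 * 1024;		/* hypothetical 64 MB cap */
	rl.rlim_max = 64 * 1024 * 1024;
	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	for (;;) {
		char *p = malloc(1024 * 1024);	/* 1 MB at a time */
		if (!p) {
			/* With a hard limit the failure shows up here as a
			 * failed allocation, not as a kernel OOM kill. */
			printf("allocation failed after %lu MB\n",
			       (unsigned long)mb);
			return 0;
		}
		memset(p, 0, 1024 * 1024);	/* touch the pages for real */
		mb++;
	}
}

Either way the user's work stops once the quota is exhausted; the only
question is whether it stops with a failed allocation or a killed process.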

>>Preventing system OOM using resource limits is equivalent to disabling
>>overcommit. You have to restrict each of N users to 1/N of the total
>>system memory.
>
>NO. Different users can have different limits. The total for all concurrent
>users must not exceed the amount of the resource. If you decide to exceed
>it, then you are overcommitting the resource, which may be valid in many
>cases (single-user workstations are one, but there you may not want to
>enable quotas anyway).
>
>1/N is the worst way to set quotas. It is one way to divide a user's
>quota over his process group, but I'm not exactly in favor of that either.

Sure, different users can have different limits. But for N possible users
on a machine, the average limit MUST be 1/N of total memory in order to
prevent system-wide failure.
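
A toy example of the arithmetic (made-up numbers): however you skew the
individual quotas, if their sum has to stay within physical memory plus
swap, the average quota cannot exceed total/N:

#include <stdio.h>

int main(void)
{
	double total_mb = 512.0;	/* hypothetical RAM + swap */
	double limit_mb[] = { 256, 128, 64, 32, 16, 8, 4, 4 };	/* 8 users, skewed */
	int n = sizeof(limit_mb) / sizeof(limit_mb[0]);
	double sum = 0.0;
	int i;

	for (i = 0; i < n; i++)
		sum += limit_mb[i];

	printf("sum of limits = %.0f MB (must stay <= %.0f MB total)\n",
	       sum, total_mb);
	printf("average limit = %.0f MB, total/N = %.0f MB\n",
	       sum / n, total_mb / n);
	return 0;
}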

If you don't do that, then what's the point of per-user limits? They don't
prevent system-level OOM situations; they promote user-level OOM
(OOResources), and they add overhead.

>1. Determine how large the individual processes have to be to work. Use
> the worst case processes.

Translation: Pick some value that is not always constant or known...

(If you always knew in advance how much memory the system will ever need,
we wouldn't be having this discussion.)

>2. Determine how many users use the worst case process at the same time.
>3. Determine how many users can run at the same time.

...multiply by N. Since the sum of the limits still has to fit in total
memory, you end up with an average limit proportional to 1/N.

You've optimized for worst-case scenarios at the expense of the common
case.
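
For illustration, here is the quoted sizing recipe (steps 1-3) worked
through with invented numbers; it shows how much of the reservation sits
idle whenever users run typical workloads instead of worst-case ones:

#include <stdio.h>

int main(void)
{
	double worst_case_mb  = 40.0;	/* step 1: worst-case process size (assumed) */
	double procs_per_user = 4.0;	/* step 2: worst-case concurrent processes */
	double users          = 20.0;	/* step 3: concurrent users */
	double typical_mb     = 5.0;	/* what a process usually needs (assumed) */

	double per_user_limit = worst_case_mb * procs_per_user;
	double reserved_total = per_user_limit * users;
	double typical_total  = typical_mb * procs_per_user * users;

	printf("per-user limit: %.0f MB\n", per_user_limit);
	printf("memory needed to honour every limit at once: %.0f MB\n",
	       reserved_total);
	printf("typical usage: %.0f MB, leaving %.0f MB reserved but idle\n",
	       typical_total, reserved_total - typical_total);
	return 0;
}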

Dave

David Whysong dwhysong@physics.ucsb.edu
Astrophysics graduate student University of California, Santa Barbara
My public PGP keys are on my web page - http://www.physics.ucsb.edu/~dwhysong
DSS PGP Key 0x903F5BD6 : FE78 91FE 4508 106F 7C88 1706 B792 6995 903F 5BD6
D-H PGP key 0x5DAB0F91 : BC33 0F36 FCCD E72C 441F 663A 72ED 7FB7 5DAB 0F91





