Subject: Re: On the issue of low memory situations
From:    Jesse I Pollard, II <pollard@cats-chateau.net>
Date:    18 Mar 2000
On Thu, 16 Mar 2000, Nicholas Vinen wrote:
>I can't pretend to be an expert, but I'd like to throw a comment or two
>into the overcommitment and out-of-memory issues being discussed.
>
> First, with overcommitment, a process can find itself about to write to
>memory when there is no longer any space left to back its allocation. In
>that case, I can think of a few things that could happen: the write could
>fault; the write could block until memory becomes available; or the
>allocation of the page could even block when memory is merely low, rather
>than out. I think being able to select this behaviour is going to be
>important, perhaps in combination with a userland daemon.
> For example, if root could assign one of these three behaviours to each
>process, a system could work like this: the kernel has its usual extra
>pages reserved for itself. Important processes like init, cron, kflushd
>etc. would be allowed to exhaust all of memory and sleep when it runs out
>(would any of them sleeping cause the system to die?). Perhaps some
>processes like X would also sleep rather than segfault when memory is
>gone, though X might be smart enough to shut down cleanly if it notices
>there is not enough memory left to do much more than that. Most user
>programs would simply fault if they came too close to exhausting memory.
> Then, your userland daemon, which allocates all of its memory at init,
>would sit there and take down processes that misbehave: any time the
>minimum amount of free memory is reached, a heuristic picks a process to
>kill based on a combination of its memory usage, its owner, its memory
>"priority" (perhaps another assignable value, like the low-memory
>behaviour above) and perhaps a specific rule as well. This way, when I run
>gcc on some file so complicated that it tries to take up all my RAM and
>swap space to compile, it will fault before it manages to do so. My apache
>would probably be put to sleep if it got too close to exhausting memory,
>at which point the daemon would kill off anything of lower priority. Maybe
>nothing else is running at a lower memory priority, in which case apache
>would continue to sleep and eventually be killed itself if nothing else
>can be and the memory is needed. Even when most of memory is exhausted,
>cron and the other important processes should still have enough room to
>continue.
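
Just to make that concrete, here is a rough userland sketch of the kind of
victim-selection heuristic you describe. The "mem_prio" field, the weights,
and the sample numbers are all invented for illustration; a real daemon
would pull the usage figures out of /proc rather than a hard-coded table.

#include <stdio.h>
#include <sys/types.h>

struct proc_info {
        pid_t pid;
        uid_t uid;      /* owner */
        long  rss_kb;   /* resident memory, in KB */
        int   mem_prio; /* hypothetical per-process "memory priority";
                           higher = more important, assigned by root */
};

/* Higher score = better candidate to kill. */
static long badness(const struct proc_info *p)
{
        long score = p->rss_kb;

        if (p->uid == 0)                /* spare root-owned daemons */
                score /= 4;
        score /= (p->mem_prio + 1);     /* important processes score lower */
        return score;
}

static const struct proc_info *pick_victim(const struct proc_info *tab, int n)
{
        const struct proc_info *victim = NULL;
        long best = -1;
        int i;

        for (i = 0; i < n; i++) {
                long s = badness(&tab[i]);
                if (s > best) {
                        best = s;
                        victim = &tab[i];
                }
        }
        return victim;
}

int main(void)
{
        /* made-up example: a runaway gcc, apache, and cron */
        struct proc_info tab[] = {
                { 1234, 500, 180000, 0 },       /* gcc         */
                {  321,  99,  40000, 2 },       /* apache      */
                {   77,   0,    800, 3 },       /* cron (root) */
        };
        const struct proc_info *v = pick_victim(tab, 3);

        if (v)
                printf("would kill pid %d (score %ld)\n",
                       (int)v->pid, badness(v));
        return 0;
}

With a table like that, the runaway gcc scores highest and is the one taken
down, while cron and apache survive - which is the ordering you want.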

The daemon could not live within a fixed amount of memory. Each time it
records more data it has to grow, and every new process requires more
accounting entries...

> This certainly seems like a much more acceptable alternative to me than
>what currently happens. E.g. I did have the gcc problem I mentioned: gcc
>used all of memory and then faulted, leaving X in an unusable state. If X
>had a higher "memory priority" than gcc (as it probably should), I
>wouldn't have had this problem.
>
> What do you think? OOM is not unavoidable, and overcommitment is pretty
>much a requirement. But I don't see any reason that, when an overcommitted
>allocation is honored, it necessarily has to be honored down to the last
>page in the system for every process. It seems to me that this is the main
>cause of the problem. And perhaps some processes would rather wait for
>memory to become available than die.
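
Here is a minimal sketch of what "not honouring overcommit down to the last
page" could mean, combined with the per-process low-memory behaviour
suggested above. All of the names, fields, and numbers are assumptions for
the sake of argument, not anything the kernel actually provides today:

#include <stdio.h>

/* the per-process choice from above: fault outright, or sleep and wait */
enum lowmem_behaviour { LM_FAULT, LM_SLEEP };

struct vm_state {
        unsigned long ram_pages;
        unsigned long swap_pages;
        unsigned long committed;        /* pages promised to all processes   */
        unsigned long reserve;          /* held back for init, cron, kflushd */
};

/* Can this request still be committed, given who is asking? */
static int commit_ok(const struct vm_state *vm, unsigned long pages,
                     int privileged)
{
        unsigned long limit = vm->ram_pages + vm->swap_pages;

        if (!privileged)
                limit -= vm->reserve;   /* ordinary processes stop earlier */
        return vm->committed + pages <= limit;
}

int main(void)
{
        struct vm_state vm = { 16384, 32768, 47000, 512 };

        printf("unprivileged request for 2000 pages: %s\n",
               commit_ok(&vm, 2000, 0) ? "granted" : "refused or sleep");
        printf("privileged request for 2000 pages:   %s\n",
               commit_ok(&vm, 2000, 1) ? "granted" : "refused or sleep");
        return 0;
}

Whether a refused request then turns into a fault or a sleep would be
decided by the lowmem_behaviour that root assigned to the process.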

OOM can be avoided in most cases. Using resource accounting to determine
where, and how much, to allocate allows the finite resources to be managed.
When the OOR (Out-Of-Reserve) condition occurs, the accounting records can
be used to determine where to reduce quota and where to expand it. When that
is no longer possible, they provide the justification for expanding the
resources. One of the systems my site had was oversubscribed at a 3:1 ratio.
When the system did finally hang, we could determine that too much resource
was allocated for interactive use (it wasn't being used). It was reallocated
to the batch system, and the problem didn't occur again for a long while.
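
To give a feel for what such an accounting record can look like, here is a
toy sketch (the structure, names, and policy are my own assumptions for
illustration, not the interface of that system): every allocation is charged
to its owner, so when the reserve runs out the records show exactly who is
holding what, and whose quota to trim or expand.

#include <stdio.h>

#define MAX_USERS 8

struct acct_rec {
        unsigned int  uid;
        unsigned long quota_kb;         /* how much this user may commit */
        unsigned long used_kb;          /* how much is currently charged */
};

static struct acct_rec table[MAX_USERS] = {
        { 100, 65536, 0 },              /* one sample user and quota */
};

/* Charge an allocation; returns 0 on success, -1 if it would exceed quota. */
static int charge(unsigned int uid, unsigned long kb)
{
        int i;

        for (i = 0; i < MAX_USERS; i++) {
                if (table[i].uid == uid) {
                        if (table[i].used_kb + kb > table[i].quota_kb)
                                return -1;  /* over quota: refuse, queue, or trim */
                        table[i].used_kb += kb;
                        return 0;
                }
        }
        return -1;      /* no accounting record for this user */
}

int main(void)
{
        if (charge(100, 40000) == 0 && charge(100, 30000) != 0)
                printf("second request pushed uid 100 over quota; the record\n"
                       "shows exactly who was asking and for how much\n");
        return 0;
}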

At no time did the system kill "random" processes. It was always known
exactly which process to kill, and what that process was trying to do. The
accounting gave us a way to track the resource and to reallocate it.

When it started occurring with some frequency (once a month was considered
"frequent"), we could justify expanding the swap space. The oversubscription
dropped to 2.5:1 and the problem again became rarer. This was on a system
with 16GB of memory, and the swap space ended up around 40GB (if I remember
the system correctly - it was 3 years ago).
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@cats-chateau.net

Any opinions expressed are solely my own.

