Date: 16 Jul 1999
From: Claus Fischer <claus.fischer@intel.com>
Subject: OOM


Bernd Paysan wrote:

: I'll try over the weekend. The algorithm I have in mind looks as
: follows:
:
: Every process has a score.
[snip complicated point-and-scoring]


Why don't you look up Rik's OOM job-to-kill selector first? It has
the following advantages (a rough sketch of the idea follows the list):
o it's stateless (no additional information to be saved
  per process)
o it works solely from information available in the kernel
  at the time of OOM
o it has just the bare minimum of what people call 'policy'
  ('intelligence' is a better word) to protect the system
  from severe harm
o it does not consider which process requested the page
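
For illustration only, here is a userspace paraphrase of the kind of
score such a selector computes; the struct fields, weights and numbers
are my own inventions, not Rik's code:

#include <stdio.h>

struct job {
	const char   *name;
	unsigned long vm_pages;   /* memory footprint in pages */
	unsigned long cpu_secs;   /* CPU time already invested (work lost if killed) */
	unsigned long run_secs;   /* wall-clock age */
	int           is_root;    /* privileged jobs get extra protection */
};

/* integer square root; good enough for a score */
static unsigned long isqrt(unsigned long x)
{
	unsigned long r = 0;
	while ((r + 1) * (r + 1) <= x)
		r++;
	return r;
}

/* Higher score means better candidate to kill. */
static unsigned long badness(const struct job *j)
{
	unsigned long points = j->vm_pages;

	/* Discounting by roots of CPU and run time spares long-lived jobs
	   that have already done a lot of work, which is exactly the
	   "minimize work loss" idea mentioned just below. */
	points /= isqrt(j->cpu_secs) + 1;
	points /= isqrt(isqrt(j->run_secs)) + 1;

	if (j->is_root)           /* bias away from system daemons */
		points /= 4;

	return points;
}

int main(void)
{
	struct job jobs[] = {
		{ "Xserver",     8000, 3600, 864000, 1 },
		{ "simulation", 24000, 7200,  86400, 0 },
		{ "memory-hog", 60000,    5,     30, 0 },
	};
	unsigned long best = 0;
	int i, victim = 0;

	for (i = 0; i < 3; i++) {
		unsigned long b = badness(&jobs[i]);
		printf("%-12s badness %lu\n", jobs[i].name, b);
		if (b > best) {
			best = b;
			victim = i;
		}
	}
	printf("would kill: %s\n", jobs[victim].name);
	return 0;
}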

OOM can be caused by malicious DoS attacks or by harmless lusers [2] using a
desktop machine to run large programs. Rik's selector offers quite good
protection against the malicious case and will almost never fail to pick the
right job in the careless-user case (the idea of minimizing work loss turns
out to be good protection against a malicious adversary as well).

It requires reliable detection that Linux is out of memory. I do not know
anything about that aspect; let the gurus look into it. And it has a small
issue (IMHO) with jiffies wraparound when it calculates, in wall-clock time,
how long the process has been around. [1]
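
A minimal sketch of the jiffies arithmetic involved (my own illustration,
assuming 32-bit jiffies and HZ=100, not kernel code): an unsigned difference
survives a single wrap, but a job older than one full wrap period aliases
to a young one, and that residual ambiguity is what footnote [1] tries to
address.

#include <stdio.h>

typedef unsigned int u32;            /* stand-in for 32-bit jiffies */

int main(void)
{
	u32 start = 0xfffffff0u;         /* job started just before the wrap */
	u32 now   = 0x00000010u;         /* jiffies has since wrapped around  */

	/* The unsigned difference is the age modulo 2^32, so it comes out
	   right (0x20) even though now < start numerically.  A job that is
	   genuinely older than 2^32 jiffies (about 497 days at HZ=100)
	   would produce the same small number; that ambiguity is what
	   footnote [1] is about. */
	printf("age in jiffies: %u\n", now - start);
	return 0;
}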

And, as Andrea pointed out to me, the selected job might not be scheduled
soon to handle the signal. [5]

****

I'm sure that for special cases much better algorithms can be constructed
with the help of userspace interaction (daemons, /proc, config files).

If these fail or are not there, however, the kernel should never simply kill
the job which requests the page. That results in wild killing of jobs until
by chance the right one is hit, rendering the machine mostly useless [4] (or
at least leaving it in a state too unstable to recommend for batch
operation). Even rebooting is better.

This problem is somewhat hidden by the fact that in most cases the memory
hog requests a series of pages in sequence and so things tend to work out
fine. That's good luck, but you can't rely on that.



Best regards,

Claus

****

Footnotes:

[1] The problem with jiffies wraparound could be attacked in the following
ways:

a) You accept that you have to reboot the machine before jiffies
   wraparound
b) You accept that after jiffies wraparound some
   have-been-around-forever jobs might get killed
c) You find a way of detecting whether a job has gone through a jiffies
   wraparound

A simple suggestion for c) would be that whenever jiffies passes 0x00000000
or 0x80000000, all jobs which have been around longer than 0x80000000
jiffies are marked with a special state bit. This quantizes the job's time
of existence as 0,1,2,...,0x7fffffff jiffies or "very long". That should be
enough.
It does, however, add per-job state purely for OOM's benefit, and is
probably not clean enough for the gurus.
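
A userspace mock-up of that suggestion, with types and names of my own
invention (nothing here is kernel code):

#include <stdio.h>

typedef unsigned int u32;

#define HALF_WRAP 0x80000000u

struct job {
	const char *name;
	u32         start_jiffies;
	int         very_old;          /* the proposed extra state bit (sticky) */
};

/* To be called whenever jiffies reaches 0x00000000 or 0x80000000. */
static void mark_veterans(struct job *jobs, int n, u32 jiffies_now)
{
	int i;
	for (i = 0; i < n; i++) {
		/* The unsigned difference is the age modulo one wrap.  Because
		   a sweep happens every half wrap, each job is caught while its
		   true age lies between half a wrap and a full wrap, and the
		   bit then sticks, so "very long" is never lost. */
		if (jiffies_now - jobs[i].start_jiffies >= HALF_WRAP)
			jobs[i].very_old = 1;
	}
}

int main(void)
{
	struct job jobs[] = {
		{ "ancient-daemon", 0x00001000u, 0 },   /* started nearly a wrap ago */
		{ "fresh-job",      0xfff00000u, 0 },   /* started recently */
	};
	int i;

	mark_veterans(jobs, 2, 0x00000000u);        /* sweep as jiffies wraps to zero */

	for (i = 0; i < 2; i++)
		printf("%-14s very_old=%d\n", jobs[i].name, jobs[i].very_old);
	return 0;
}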

[2] Personally I'm mostly hit by the "I'm a stupid user" case. The stupid
user is myself. Unfortunately I can't predict my own job sizes, and so
I end up with this memory configuration:
             total       used       free     shared    buffers     cached
Mem:         62936      55508       7428      29228       4296      21544
-/+ buffers/cache:      29668      33268
Swap:      2104528       9172    2095356
Before I run a large job, I figure out how many people are on my
desktop box and set a proper ulimit[3]. Sigh. But I have been bitten
once too often.

[3] Ulimits are not a good solution. They work on a single-process basis
and not on the global machine level.
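
A small sketch of that per-process nature (the program name
"./big-simulation" and the 64 MB figure are made up): a parent can fence
in one large job with setrlimit() before the exec, but the cap is
accounted per process, so several such jobs, each comfortably inside its
own limit, can still sink the machine together.

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
	struct rlimit cap;
	pid_t pid;

	cap.rlim_cur = 64UL * 1024 * 1024;   /* 64 MB address-space cap */
	cap.rlim_max = 64UL * 1024 * 1024;

	pid = fork();
	if (pid == 0) {
		/* Installed in the child only; descendants inherit it, but the
		   accounting is per process, not per machine. */
		if (setrlimit(RLIMIT_AS, &cap) != 0) {
			perror("setrlimit");
			_exit(1);
		}
		execlp("./big-simulation", "big-simulation", (char *)NULL);
		perror("execlp");   /* only reached if the exec failed */
		_exit(1);
	}
	if (pid > 0)
		waitpid(pid, NULL, 0);
	return 0;
}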

[4] Expecting all typical system programs to handle tight memory
gracefully is probably an illusion.

[5] That a job is not easily killable is IMHO not a valid excuse to let
it continue hogging memory. It just makes the solution harder.

--
claus.fischer@intel.com    Intel Corporation SC12-205     ... not speaking
phone +1-408-765-6808      2200 Mission College Blvd.         for Intel
fax   +1-408-765-9322      Santa Clara, CA 95052-8119


