Subject: Re: Avoiding OOM on overcommit...?

On Sun, 26 Mar 2000, Richard Gooch wrote:

> Linda Walsh writes:
> > Richard Gooch wrote:
> > > Because it's different (read harder)? I still haven't seen a
> > > description of how we handle stack exhaustion properly. All we can do
> > > there is kill the offending process.
> > >
> >
> > Naw....we have lots of options. The first and best: 1) A gram
> > of prevention is worth a kilogram of cure. If the developer or user
> > knows that a given program uses a lot of stack space, then an option
> > in 'ld' to specify the size of stack to commit, or 2) the user says
> > "runprog -stacksize=1M <programname>" or some such option -- and 1M of
> > stack space is committed/reserved when loading the program. 3) We
> > use a signal -- I sorta like overloading SIGSTKFLT, but there may be
> > reasons not to use it. If left at the default, the program dies as
> > from a SEGV, and 4) if ignored, the program is put to sleep waiting
> > on free pages.
>
> Each of these options is flawed:
>
> 1&2) you have to do this for all processes

Only critical processes -- only the ones that system stability depends
on. The whole point of this is that when memory runs out, less important
things die: things that won't lose data, like a compiler or xosview or
maybe even Netscape (better than having X, or init, be the one that
dies).
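
For 1) and 2), a program can get most of the way there from user space
already: touch the stack pages at startup, so the stack is grown while
memory is still available instead of in the middle of a deep call chain.
A rough sketch -- the 1M figure is just the example from above, and none
of this is a proposed interface:

/* precommit_stack.c -- rough sketch: fault in a chunk of stack at
 * startup, roughly what a "runprog -stacksize=1M" wrapper or an ld
 * option would arrange for you.  The 1M figure is purely illustrative.
 */
#include <stddef.h>

#define STACK_RESERVE   (1024 * 1024)   /* 1M */
#define PAGE_STEP       4096            /* one write per page is enough */

static void touch_stack(void)
{
        /* A large local array forces the stack to grow now; writing one
         * byte per page makes sure every page is really faulted in.
         * 'volatile' keeps the compiler from optimising the writes away. */
        volatile char buf[STACK_RESERVE];
        size_t i;

        for (i = 0; i < sizeof(buf); i += PAGE_STEP)
                buf[i] = 0;
}

int main(void)
{
        /* Grow and fault the stack while memory is still available; the
         * kernel never shrinks it again, so later deep calls within the
         * reserve can't hit a failed stack fault. */
        touch_stack();

        /* ... the critical program's real work starts here ... */
        return 0;
}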

>
> 3) you can't handle a signal caused by stack exhaustion

You CAN handle a signal caused by NEAR stack exhaustion.
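
Concretely: install the handler on an alternate stack with sigaltstack(),
and it can still run when the ordinary stack hits its limit, so the
process gets a chance to back out instead of just dying. A minimal
sketch -- the recursion is only there to run the stack out on purpose,
and jumping back to main() is just one possible recovery policy:

/* nearstack.c -- minimal sketch: run the SIGSEGV handler on an
 * alternate stack so it still works when the ordinary stack is gone.
 */
#include <signal.h>
#include <setjmp.h>
#include <stdio.h>

static char altstack[64 * 1024];        /* plenty for a small handler */
static sigjmp_buf recover;

static void segv_handler(int sig)
{
        (void)sig;
        /* Running on 'altstack' here, so there is room to do something.
         * (Only async-signal-safe work really belongs in a handler;
         * this jump is just one possible policy.) */
        siglongjmp(recover, 1);
}

static int eat_stack(int depth)
{
        char pad[4096];

        pad[0] = (char)depth;
        /* Not a tail call, so every level keeps its 4K frame alive. */
        return eat_stack(depth + 1) + pad[0];
}

int main(void)
{
        stack_t ss;
        struct sigaction sa;

        ss.ss_sp = altstack;
        ss.ss_size = sizeof(altstack);
        ss.ss_flags = 0;
        sigaltstack(&ss, NULL);

        sa.sa_handler = segv_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_ONSTACK;       /* deliver SIGSEGV on the alternate stack */
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover, 1) == 0)
                eat_stack(0);           /* blow the stack on purpose */
        else
                fprintf(stderr, "caught stack exhaustion, still running\n");

        return 0;
}

Getting the signal truly "near" exhaustion rather than "at" it can be
faked the same way by mprotect()ing a red-zone page at a chosen depth,
but today that part is up to the application.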

>
> 4) deadlock.

Obviously some things that are important for system function will have
to be impervious to this and have memory reserved for them. Otherwise the
whole exercise is pointless. You can't have all processes be equal.
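
For the heap side of "memory reserved for them", that much is already
possible from user space: grab the working memory up front, touch it so
overcommit can't defer the allocation, and pin it. A rough sketch (the
4M pool size is purely illustrative, and mlockall() needs root):

/* reserve_pool.c -- rough sketch: a critical process grabs and pins its
 * working memory at startup, so OOM elsewhere can't take it away later.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define POOL_SIZE (4 * 1024 * 1024)     /* illustrative only */

int main(void)
{
        char *pool = malloc(POOL_SIZE);

        if (pool == NULL) {
                perror("malloc");
                return 1;
        }

        /* Touch every page: with overcommit, a successful malloc() only
         * means the address space exists, not that the pages do. */
        memset(pool, 0, POOL_SIZE);
        printf("reserved %d bytes at %p\n", POOL_SIZE, (void *)pool);

        /* Pin what we have (and, with MCL_FUTURE, anything mapped later)
         * so it stays resident.  Needs root / CAP_IPC_LOCK. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

        /* From here on, carve allocations out of 'pool' rather than
         * calling malloc() at the moment memory may already be gone. */

        free(pool);
        return 0;
}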

>
> > Ok, maybe that wasn't lots, but it was at least 4! :-)
>
> That was 4 null options. I'd settle for one that could be shown to
> work, or at least reasonably expected to.
>
> > Not to sound repetitious, but why is lack of memory (a resource, as
> > David puts it) so different from #processes, #open files (per system /
> > per process), running out of disk space on a write call, being unable
> > to acquire a 'lock' resource...
> >
> > Those just return errors or 'sleep'. Why should we come up with
> > a new paradigm for memory?
>
> Because there isn't always a syscall interface for getting memory.
>
> Look, I think it would be nice if you could provably ensure that
> processes can be made safe from OOM. Some people will like that. But
> until a lightweight, effective scheme is proposed that can support it,
> I think we should steer clear of ad-hoc solutions that only give half
> guarantees. It's better not to claim something at all than claim it
> and people find out later it's not true.
>
> Regards,
>
> Richard....
> Permanent: rgooch@atnf.csiro.au
> Current: rgooch@ras.ucalgary.ca


