Subject: Re: Out Of Memory in v. 2.1
Date: 6 Oct 1998

Hi,

On Sun, 04 Oct 1998 14:23:25 -0600 (MDT), Kurt Fitzner
<kf_bulk@nexus.v-wave.com> said:

> On 04-Oct-98 Carlos Morgado wrote:
>> The OOM killers should stay outside the main tree until a good working
>> solution comes along.

> There's a perfectly good solution. A little revolutionary. Something like
> this... when RAM + SWAP is all allocated, and when a program goes to allocate
> more, then malloc() et al could actually return a null pointer.

What happens when your user program has done this, used up all memory,
and a system daemon asks for more memory? Say, named or (shock) even
init? The system daemon dies --- that's OK, is it? And do we let the
user space memory hog prevent networking allocations?

It's a lot more complex than this, really.

> I mean, for heaven's sake... when the Pentium f00f bug was announced,
> everyone gasped and said "Oh no, now any user on my system can lock
> up my machine and I can't do anything about it". Yet, the memory
> allocation scheme in Linux is so poorly designed that any user can
> lock up a machine, and there is nothing you can do about it. No one
> is jumping to fix that problem, so why bother with the Pentium f00f
> bug?

If you let user processes allocate all memory then _something_ has got
to die. There's no way round that. You are not suggesting any
solution to that fundamental problem.

> If the allocation functions returned null instead of overallocating, then
> there would be no problem. What's the deal with overallocating anyways...
> did someone figure that most programs allocate memory that they're never
> going to use?

Yes. Every single time you make a writable private mapping of a file
you are declaring a potential request to allocate all of that memory;
in fact, only a small portion of it is ever likely to get dirtied.
True overcommit protection would have to pessimistically assume that
*every* such process has the potential to dirty every such page.
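To make that concrete, here's a rough sketch in C (the file path is
made up; this is an illustration, not code from anywhere in the
tree):

/* Sketch: a writable MAP_PRIVATE mapping of a file.  Under strict
 * no-overcommit accounting the kernel would have to reserve backing
 * store for the WHOLE mapping at mmap() time, because every page
 * could be dirtied via copy-on-write, even though we dirty exactly
 * one. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        char *p;
        int fd = open("/some/large/file", O_RDONLY);

        if (fd < 0 || fstat(fd, &st) < 0) {
                perror("open/fstat");
                return 1;
        }

        /* MAP_PRIVATE writes never reach the file, so O_RDONLY is
         * enough; but PROT_WRITE means any page may be dirtied. */
        p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        p[0] = 0;       /* dirty one page of the whole file... */
        /* ...yet pessimistic accounting charged st.st_size bytes. */

        munmap(p, st.st_size);
        close(fd);
        return 0;
}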

Think about other common cases such as HTTP servers, which commonly
fork off many child processes. In fact, much of the
data space inherited by the children will remain shared with the
parent process. However, to do overcommit protection, we'd have to
assume that every child might dirty every such shared page.
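Here's the fork() half of the problem, in miniature (the 64MB figure
is invented for the sake of the example):

/* Sketch: after fork(), parent and child share data pages
 * copy-on-write.  Strict accounting would have to charge the child
 * for a private copy of all 64MB up front, although it dirties only
 * a single page before exiting. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BIG (64 * 1024 * 1024)  /* made-up 64MB "data space" */

int main(void)
{
        pid_t pid;
        char *data = malloc(BIG);

        if (!data) {
                perror("malloc");
                return 1;
        }
        memset(data, 1, BIG);   /* the parent really uses it all */

        pid = fork();           /* nothing is copied here: COW */
        if (pid < 0) {
                perror("fork");
                return 1;
        }
        if (pid == 0) {
                data[0] = 2;    /* the child dirties ONE page; the
                                 * rest stays shared with the parent */
                _exit(0);
        }
        waitpid(pid, NULL, 0);
        free(data);
        return 0;
}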

Overcommit protection simply *must* make these pessimistic
assumptions, and doing so seriously degrades our ability to use memory
efficiently: we'd need to reserve large amounts of swap for the
worst-case scenario and hence give up on using substantial amounts of
disk. (Twenty forked server children each inheriting a 10MB data
segment would mean reserving 200MB of swap, even if only a handful of
pages per child ever gets dirtied.)

--Stephen

