Date: 27 Mar 2000
From: Linda Walsh <law@sgi.com>
Subject: Re: Avoiding OOM on overcommit...?

Horst von Brand wrote:
> AFAIKS, the kernel is careful to ask for memory (real RAM, not swap space!)
> only if it really needs it. But it also uses "otherwise unused" memory for
> various caches, which can be cut back if the need arises.
---
Thank goodness! I'd hate to think of the problems if it were
otherwise! :-)

>
> > Specifically I
> > was thinking of calls that used overcommit -- meaning allocing space that
> > they really didn't intend to use, but you are right -- all of those cases
> > would need to be handled as far as memory allocation bookkeeping. But we
> > already do bookkeeping for 'free' memory, 'used' memory, 'shared' memory
> > -- would adding 'committed' or 'reserved' memory really be that much more
> > difficult or costly?
>
> Not in itself, the problem is that if you don't ever want to overcommit
> anything you must know exactly how much memory each activity could use, in
> the very worst case.
---
No...you are confusing the concept of OS overcommitment with
prediction of an application's future requests for memory (which can be
denied).

The only thing a program has to "predict" is a maximum stack
size -- which is physically reserved as a *minimum* at run time. All
other requests for memory can be denied with an error code.
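
To illustrate, a minimal sketch of what "denied with an error
code" looks like from the application side: under a no-overcommit
policy the failure surfaces at malloc() time, where the program can
handle it, instead of as a fault on first touch (the size is
illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;   /* 2Meg, illustrative */
        char *buf = malloc(len);

        if (buf == NULL) {
            /* Strict accounting: the denial shows up here, as an
             * error code the program can act on... */
            perror("malloc");
            return 1;
        }
        memset(buf, 0, len);    /* ...not as a SEGV on first access. */
        free(buf);
        return 0;
    }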


> I can understand there are people worried by stuff like C2 security, but in
> that case you can work with overcommitment, just make sure the tasks
> crucial for C2 can't run out of resources (unless they are broken or the
> sysadmin is a complete idiot, that is), and then do as you say: If they do
> run out, take the whole system down.
---
Well -- that's sorta the point -- everything from 'atd' to 'vi'
would need to be rewritten to 'touch' pages of alloc'ed memory. If you
want to promise integrity, then you can choose to run with no 'virtual
swap' and guaranteed _minimum_ stack sizes allocated at run time. With
the current model, say, auditd could think it malloc'ed a 2Meg buffer --
thus it thinks it has its space guaranteed. If we are in an OOM state,
when auditd goes to access that buffer, it will SEGV -- can't map the
address to a physical object -- or an "OOM" killer routine runs and
kills another process pseudo-randomly. What I'm saying is that we need
to provide a model that doesn't overcommit. Neither you nor anyone else
has to use that model. But such a model, if in the kernel, would allow
for operational assurance (allowing failures to occur predictably).
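
To make that concrete, here is the kind of 'touch' loop every
such program would need under the current model -- dirty every page up
front so any failure hits at a known point (the helper name is mine;
sketch only):

    #include <stdlib.h>
    #include <unistd.h>

    /* Dirty every page at allocation time so the commitment is real:
     * an OOM condition then surfaces here, at a known point, rather
     * than at some arbitrary later access. */
    void *alloc_touched(size_t len)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        char *buf = malloc(len);
        size_t i;

        if (buf == NULL)
            return NULL;
        for (i = 0; i < len; i += page)
            buf[i] = 0;
        return buf;
    }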

The idea here is to *prevent* overcommitment. OOM itself can't
be prevented, but if you have eliminated overcommitment, how OOM is
handled can be predicted to a certain level. Otherwise, you end up with
a completely untrusted (non-predictable) state after an OOM event.
That's fine on some systems, but not on others. The idea is
configurability -- is that such a bad thing? The *ability* to not
overcommit would change nothing for you, but for me, it would limit OOM
failures to a determinate, finite class.
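
A sketch of the sort of switch I have in mind -- assuming a
/proc/sys/vm/overcommit_memory knob where writing '2' selects strict,
no-overcommit accounting (the path and value are assumptions here, and
flipping it would need root):

    #include <stdio.h>

    /* Flip the (assumed) sysctl to strict accounting: the kernel
     * would then refuse to commit address space it cannot back. */
    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

        if (f == NULL) {
            perror("overcommit_memory");
            return 1;
        }
        fputs("2\n", f);
        return fclose(f) == 0 ? 0 : 1;
    }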

-l

--
Linda A Walsh | Trust Technology, Core Linux, SGI
law@sgi.com | Voice: (650) 933-5338

