Subject: Re: Overcommitable memory??
On Mon, 20 Mar 2000, Horst von Brand wrote:

> They are a way of killing processes without declaring OOM (OOM effect
> without OOM :). If they are set up to not overcommit, ever. Rarely done, as
> it is a huge waste. Start over.
>

If the app is bad, then the app is bad. At least with per-user limits you
can set things up so that bad or untrustworthy apps only affect themselves.

I.e., it's no longer a kernel problem.

> Very much agree. It would be nice if we were given the alternative,
> which we aren't right now. First question is, who is going to use
> this?

A lot of people. Me, for one. I already make heavy use of process limits.
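
For what it's worth, here's a minimal sketch of what "process limits" look
like from userspace, via setrlimit(2) with RLIMIT_AS; the 64 MB cap and the
oversized malloc() probe are just illustrative values I picked, not anything
from this thread:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* Illustrative cap: limit this process's address space to 64 MB.
	 * Children inherit the limit across fork()/exec(). */
	rl.rlim_cur = 64UL << 20;
	rl.rlim_max = 64UL << 20;
	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	/* An allocation past the cap now fails up front with ENOMEM,
	 * instead of being overcommitted and blowing up later. */
	void *p = malloc(128UL << 20);
	if (p == NULL)
		perror("malloc of 128 MB");
	else
		free(p);

	return 0;
}

A badly behaved process run under such a limit hurts only itself, which is
the point above.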

> How extensively? For what kinds of uses? Next question is, how
> much does this cost, when it is used and when it isn't? How much
> developer/maintainer/ tester manpower it consumes has to be counted
> in here. Both in the kernel configuration case "Overcommit memory
> (Y|n)",

Are we talking about the same thing? I'm talking about per-user limits,
something we might eventually see in Linux.

You seem to be talking about a non-overcommitting VM (something which I
think we'll never see in Linux - but let's not argue about that).

-paul jakma.


