    Subject: Re: Linux 2.2.15pre12
    On Mon, 6 Mar 2000, Rik van Riel wrote:

    >
    > Think about running a big simulation process that fork()s to
    > exec() /bin/mail in order to email its status or a partial
    > solution to the person that started the simulation.
    >
    > A big rendering process that fork()/exec()s lpr.
    >
    > Without overcommit you'd need to have the 500 MB of swap free
    > that the big simulation is using, even though it'll only use
    > 1 MB for the little process that's being exec()ed...
    >

    of course.. yes..
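
    (to make rik's scenario concrete, here is roughly the pattern in C.
    this is only an illustrative sketch: the 500 MB figure, the recipient
    address and the report_status() helper are made up. the point is that
    between fork() and exec() the child nominally owns a copy-on-write
    copy of the parent's whole address space, so a strictly-accounting
    kernel would have to reserve another 500 MB of swap for a helper
    that never touches it.)

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* fork() + exec() a tiny helper from a large process */
        static void report_status(const char *addr)
        {
            pid_t pid = fork();     /* child briefly "owns" ~500 MB, copy-on-write */

            if (pid == 0) {
                /* child: give mail an empty body so it doesn't wait on a tty */
                int devnull = open("/dev/null", O_RDONLY);
                if (devnull >= 0)
                    dup2(devnull, STDIN_FILENO);

                /* the big image is discarded the moment exec() succeeds */
                execl("/bin/mail", "mail", "-s", "simulation status", addr,
                      (char *)NULL);
                _exit(127);         /* only reached if exec() failed */
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);
            } else {
                perror("fork");     /* where strict swap accounting would bite */
            }
        }

        int main(void)
        {
            /* stand-in for the simulation's ~500 MB working set */
            size_t sz = 500UL * 1024 * 1024;
            char *big = malloc(sz);

            if (big == NULL)
                return 1;
            memset(big, 1, sz);     /* actually dirty the pages */

            report_status("someone@example.org");

            free(big);
            return 0;
        }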

    > > (i'm having a discussion with some other unix users at the
    > > moment about linux's OOM vulnerability).
    >
    > It's not a vulnerability as such, it's more of a design
    > decision. Overcommit makes sense in a *lot* of situations.
    >

    it makes sense, yes, but it means linux is more vulnerable to leaky
    applications than other unices seem to be.

    > Running more programs than can fit in your memory
    > never makes sense; the difference between overcommitting
    > and non-overcommitting systems is how they handle an
    > overload situation.
    >

    the question is: how do you guarantee stability under overload?
    it's no good that linux is generally more efficient under load than Unix
    xyz if, under severe load, linux goes OOM and dies while Unix xyz manages
    to keep going.

    linux deadlocking because netscape was left running a java applet over a
    weekend does not give it a good reputation for stability. And no amount
    of clever OOM-killing heuristics can recover that reputation, because
    Unix xyz doesn't get into that OOM situation in the first place.

    basically, what are these other Unices doing that linux does not? For
    example, Solaris configured for 'lazy' VM, which I understand overcommits
    on memory allocation just as linux does.
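
    (a quick way to see the difference, as a rough sketch: on an
    overcommitting kernel the malloc() below succeeds and the trouble
    only starts when the pages are actually written; a strictly
    accounting kernel refuses the allocation up front with ENOMEM.
    the 1 GB figure is just a number meant to exceed RAM+swap on a
    typical box here.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            size_t sz = 1024UL * 1024 * 1024;   /* deliberately more than RAM+swap */
            char *p = malloc(sz);

            if (p == NULL) {
                /* non-overcommitting behaviour: refused at allocation time */
                perror("malloc");
                return 1;
            }
            printf("malloc of %lu bytes succeeded (overcommitted)\n",
                   (unsigned long)sz);

            /* overcommitting behaviour: the cost is paid page by page as the
             * memory is written, and the OOM hit comes here instead */
            memset(p, 1, sz);

            puts("touched every page without running out of memory");
            free(p);
            return 0;
        }

    (and which process actually gets killed at that memset() is exactly
    the "slightly less predictable" part mentioned below.)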

    > Non-overcommitting systems will refuse further allocations
    > when memory _could_ be filled up by all current processes.
    > Overcommitting systems will allow the system to go much
    > further, possibly processing the same load that a non-overcomm.
    > system would not be able to process. However, when it breaks
    > down it breaks down in a slightly less predictable way ...
    >

    which really does not look good. I know setting ulimits is a good way to
    prevent this, but ulimits are still not the complete answer.

    (i.e. they are inflexible, and even if one user can't bring down a
    machine, a group of users still could)
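
    (for what it's worth, a rough sketch of the kind of ulimit I mean,
    using setrlimit(RLIMIT_AS, ...), which is what "ulimit -v" sets in
    bash; the 64 MB cap is an arbitrary figure. it makes a leaky process
    fail its own allocations cleanly instead of dragging the whole box
    into OOM, but it does nothing about several processes adding up.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;
            size_t total = 0;

            rl.rlim_cur = 64UL * 1024 * 1024;   /* cap the address space at 64 MB */
            rl.rlim_max = 64UL * 1024 * 1024;
            if (setrlimit(RLIMIT_AS, &rl) != 0) {
                perror("setrlimit");
                return 1;
            }

            /* a deliberately leaky loop: with the cap in place, malloc()
             * starts returning NULL here long before the rest of the
             * system feels any pressure */
            for (;;) {
                if (malloc(1024 * 1024) == NULL) {
                    printf("allocation refused after ~%lu MB\n",
                           (unsigned long)(total / (1024 * 1024)));
                    return 0;
                }
                total += 1024 * 1024;
            }
        }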

    > regards,
    >
    > Rik

    -paul.


