Date: 24 Mar 2000
From: James
Subject: Re: Overcommitable memory??
    On Thu, 23 Mar 2000 14:54:49 -0800, you wrote:

    >
    >> >BTW, could you please point at a clean example of a program in actual use
    >> >that handles freeing up memory when OOM looms? Would be useful to have
    >> >handy...
    >>
    >> A description of one was just posted to this thread. On malloc()
    >> failure, it starts diverting new connections elsewhere and garbage
    >> collecting itself. Quite nice, although it should do so before
    >> malloc() starts failing...
    >
    > If there were a portable way to detect imminent resource exhaustion before
    >malloc starts to fail, I'd happily use it. On Linux, I don't even have
    >malloc failing. And on no platform do I have a good way to detect that I'm
    >starting to force the system to swap, and that shrinking my memory usage
    >might help performance.
    >
    > What I tell our Linux customers to do is use resource limits to force
    >'malloc' to fail. That requires some manual tuning: you have to think about
    >how much RAM you have, how much the kernel will be using for network stuff,
    >and how much other applications will be using. It's still better than
    >grinding the system to a crawl, though.
    >
    > DS
    >
    > PS: The issue of helping applications detect resource issues and respond to
    >them is completely orthogonal to the issue of overcommitment. If a system
    >has 90Gb of swap, I still want to know that I'm forcing the system to use
    >it, because performance is going to start going down the toilet.

    Yes, I know. Examining Squid may help you here (or just make you bring
    your last meal back - portable: yes; stable: yes; fast and efficient:
    yes; nice, clear code? Not quite...)
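
    (On the resource-limit trick you mention: on Linux that's one
    setrlimit() call before the allocations start. A rough sketch - the
    64Mb cap is a number I've pulled out of the air, and as you say it
    has to be tuned per machine:)

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>
        #include <sys/resource.h>

        int main(void)
        {
                struct rlimit rl;

                /* Made-up cap: 64Mb of address space.  RLIMIT_AS covers
                 * mmap()-backed allocations too; RLIMIT_DATA may not. */
                rl.rlim_cur = rl.rlim_max = 64ul * 1024 * 1024;
                if (setrlimit(RLIMIT_AS, &rl) < 0)
                        perror("setrlimit");

                /* With the cap in place malloc() fails outright instead
                 * of the machine being forced into swap. */
                if (malloc(128ul * 1024 * 1024) == NULL)
                        fprintf(stderr, "malloc failed - time to shed load\n");
                return 0;
        }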

    Really, the kernel should "talk" to applications a lot more about
    things like this. You could probably get that sort of early warning
    today via a /proc entry, but that's not exactly portable...
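
    Something like the fragment below is about as close as you can get
    today - a rough sketch, Linux-only, tied to the current /proc/meminfo
    format, and with thresholds I've made up:

        #include <stdio.h>
        #include <string.h>

        /* Pull one "Tag:   12345 kB" value out of /proc/meminfo.
         * Returns -1 if the file or the tag can't be found. */
        static long meminfo_kb(const char *tag)
        {
                char line[128];
                long kb = -1;
                FILE *f = fopen("/proc/meminfo", "r");

                if (!f)
                        return -1;
                while (fgets(line, sizeof(line), f))
                        if (strncmp(line, tag, strlen(tag)) == 0) {
                                sscanf(line + strlen(tag), " %ld", &kb);
                                break;
                        }
                fclose(f);
                return kb;
        }

        int main(void)
        {
                long mem = meminfo_kb("MemFree:");
                long swap = meminfo_kb("SwapFree:");

                /* Thresholds are made up; tune to taste. */
                if (mem >= 0 && mem < 4096)
                        fprintf(stderr, "low on core - start shrinking\n");
                if (swap >= 0 && swap < 16384)
                        fprintf(stderr, "eating swap - performance suffering\n");
                return 0;
        }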

    What sort of applications are these, by the way?

    I suppose Unix has evolved this way because most applications can't
    be very flexible about memory usage; if an application requests 10Mb
    to do something, 9Mb probably won't do. Then along came WWW servers
    and the like, with almost the opposite requirements - the more memory
    the better, but they can operate on fairly trivial amounts if needed
    (thanks to overcommit).
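
    The degrade-gracefully pattern those servers use boils down to
    something like this (a sketch; free_some_cache() is a made-up
    stand-in for whatever the application can throw overboard):

        #include <stdio.h>
        #include <stdlib.h>

        /* Made-up hook: release some cached data and return how many
         * bytes were freed.  A real server would drop idle connections,
         * cached objects, and so on.  Stubbed out here. */
        static size_t free_some_cache(void)
        {
                return 0;
        }

        /* malloc() that degrades gracefully: shed cache and retry
         * rather than falling over on the first failure. */
        static void *xmalloc(size_t n)
        {
                void *p;

                while ((p = malloc(n)) == NULL)
                        if (free_some_cache() == 0)
                                return NULL;    /* nothing left to shed */
                return p;
        }

        int main(void)
        {
                char *p = xmalloc(1024);

                printf(p ? "got it\n" : "out of memory\n");
                free(p);
                return 0;
        }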


    James.

    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.rutgers.edu
    Please read the FAQ at http://www.tux.org/lkml/
