Date: 17 Dec 2001
Subject: Re: [patch] mempool-2.5.1-D2
From: Stephan von Krawczynski
    > On Mon, 17 Dec 2001, Stephan von Krawczynski wrote:
    >
    > > [...] You will obviously _not_ shoot down allocated and still used
    > > bios, no matter how long they are going to take. So your fixed size
    > > pool will run out in certain (maybe weird) conditions. If you cannot
    > > resize (alloc additional mem from standard VM) you are just dead.
    >
    > sure, the pool will run out under heavy VM load. Will it stay empty
    > forever? Nope, because all mempool users are *required* to deallocate
    > the buffer after some (reasonable) timeout. (such as IO latency.) This
    > is pretty much by definition. (Sure there might be weird cases like IO
    > failure timeouts, but sooner or later the buffer will be returned, and
    > it will be reused.)
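
    (For reference, the usage pattern described above looks roughly like
    this. It is only a sketch against the mempool_create / mempool_alloc /
    mempool_free interface from the patch, not code taken from it; the
    slab cache "my_cache" and RESERVED_NR are made-up placeholders.)

        static void *my_pool_alloc(int gfp_mask, void *pool_data)
        {
                /* back the pool with an ordinary slab cache */
                return kmem_cache_alloc((kmem_cache_t *) pool_data, gfp_mask);
        }

        static void my_pool_free(void *element, void *pool_data)
        {
                kmem_cache_free((kmem_cache_t *) pool_data, element);
        }

        /* at init time: reserve RESERVED_NR objects up front */
        pool = mempool_create(RESERVED_NR, my_pool_alloc, my_pool_free, my_cache);

        /* per request: mempool_alloc() may block until some other user
         * returns an element, but it cannot fail for good as long as
         * every user frees within bounded time (IO latency) */
        obj = mempool_alloc(pool, GFP_NOIO);
        /* ... do the IO ... */
        mempool_free(obj, pool);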

    Hm, and where is the real-world difference from the standard VM? I
    mean, today your bad-ass application gets shot down by L's oom-killer
    and your VM will "refill". So you are not going to stay dead for long
    in the current situation either.

    I have yet to see the brilliance in mempools. Sure, I can imagine
    systems that are going to like it a _lot_ (e.g. embedded), but these
    are far off the "standard" system profile.

    I have asked this several times now, and I will continue to ask: where
    is the VM _design_ guru who explains the designed short path for
    dropping page caches when in need of allocatable mem, on a system with
    aggressive caching like 2.4? This _must_ exist. If it does not, the
    whole issue is broken, and it is obvious that nobody will ever find an
    acceptable implementation.
    I have turned this problem around about a hundred times now, and as
    far as I can see everything comes down to the simple fact that the VM
    has to _know_ the difference between an only-cached page and a
    _really-used_ one. And I do agree with Rik that the only-cached pages
    need an aging algorithm, probably a most simple approach (it could be
    plain list ordering). This should answer the question: who gets
    dropped next?

    On the other hand you have aging of the used pages for finding out who
    gets swapped out next. BUT I would say that swapping should only start
    when the only-cached pages are down to a minimum level (like 5% of
    memtotal).
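
    Something like this (pure pseudo-code, all names invented here for
    illustration, nothing to do with the real 2.4 VM code) is what I have
    in mind:

        #define CACHE_MIN_PERCENT 5   /* keep >= ~5% of memtotal as cache */

        static void reclaim(int nr_to_free)
        {
                /* 1. only-cached pages: drop them in simple list (aging)
                 *    order, oldest first */
                while (nr_to_free > 0 &&
                       nr_cached_pages * 100 > CACHE_MIN_PERCENT * nr_total_pages) {
                        drop_page(oldest(&cached_list));
                        nr_to_free--;
                }

                /* 2. only when the cache is down to the minimum do we touch
                 *    really-used pages and start swapping, again oldest first */
                while (nr_to_free > 0) {
                        swap_out(oldest(&used_list));
                        nr_to_free--;
                }
        }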

    Forgive my simplistic approach; where are the guys to shoot me down?
    And where the hell is the need for mempool in this rough design idea?

    Regards,
    Stephan

