    Subject: Re: Strange interrupt behaviour
    From: Linus Torvalds
    Date: Tue, 14 Jul 1998


    On Tue, 14 Jul 1998, Gerard Roudier wrote:
    >
    > If your program seems to demonstrate that having even up to 5% memory
    > free does not help a lot and that the ratio of pages you will throw away

    No, read the math again.

    The only reason I wrote a program to do the calculations was that I was
    too lazy (and possibly too inept) to symbolically solve the value of

    x * (x-2) * (x-4) * ... (y factors)
    -----------------------------------
                  x ^ y

    so I just wrote a program to iterate and do the calculations for me.
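
    For the curious, such a program is only a few lines. This is a
    rough sketch of the idea (a reconstruction, not the original
    program):

        /*
         * Iterate  x * (x-2) * (x-4) * ... / x^y  -- an estimate of
         * the probability that y randomly freed pages out of x total
         * pages contain NO two adjacent pages.
         */
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
                long x, y, i;
                double p = 1.0;

                if (argc != 3) {
                        fprintf(stderr, "usage: %s total-pages free-pages\n",
                                argv[0]);
                        return 1;
                }
                x = atol(argv[1]);
                y = atol(argv[2]);

                for (i = 0; i < y; i++)
                        p *= (double) (x - 2*i) / (double) x;

                printf("P(no adjacent free pair)      = %g\n", p);
                printf("P(at least one adjacent pair) = %g\n", 1.0 - p);
                return 0;
        }

    You feed it the total number of pages and the number of free pages;
    1-p is then the chance of finding at least one consecutive pair.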

    Having 5% of memory free almost guarantees that you'll have
    consecutive pages on an 8MB machine. The numbers also showed very
    clearly that on a 4MB machine it was getting painful - you needed
    to keep a fair amount of your memory free to give the same kinds
    of guarantees.
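
    To put rough numbers on that (assuming 4kB pages): 8MB is 2048
    pages, and 5% free is about 102 pages, for which the product above
    comes out around 0.005 - i.e. better than 99% odds that at least
    two free pages are adjacent. On a 4MB machine (1024 pages, ~51
    free) it comes out around 0.08, so almost one allocation in twelve
    finds no adjacent pair at all - and that's just for two-page areas,
    larger ones are worse.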

    Essentially, the basic rule is that as you increase your memory
    size, you do need to increase the number of pages you keep free,
    but the number of free pages can grow much more slowly than the
    number of used pages. And this is why current 2.1.x kernels are
    painful on small-memory machines but not on large-memory machines -
    on a large-memory machine, finding consecutive pages with even just
    1% of all pages free is almost a certainty.
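
    Again in rough numbers (4kB pages and a 256MB machine assumed as an
    example): 1% free is about 655 pages out of 65536, and the product
    above comes out around 0.001 - in other words an adjacent pair is
    practically always there, with a far smaller fraction of memory
    kept free than the small machine needed.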

    Anyway, the point of the exercise was more to show that you don't
    actually have to page out all that much of your memory to get consecutive
    areas, even if you page out randomly. We already (for completely unrelated
    reasons) try to keep a certain amount of memory free, and that amount has
    been hovering at around the 5% mark anyway.

    The math just goes to show that 5% free should be plenty for an
    8MB machine.

    The reason it obviously is _not_ enough in practice is that while
    kswapd does try to keep something like 5% free, it does so over
    time, not "locally". So locally the number of free pages can dip
    well below 5%, and that's when the problems happen.

    To re-iterate my argument: the basic approach of keeping a random
    set of pages free seems to be mathematically sound. That's all the
    math I did says (with the caveat I had in my original mail about
    the model not being exact). As such, the theory says that if my
    model is accurate enough, then it _should_ be enough to just make
    sure that allocations are synchronized with the code that keeps the
    free pages available.
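
    In user-level C, the control flow I mean is no more than something
    like this toy model (every name and number here is invented for
    illustration - this is not kernel code):

        #include <stdio.h>

        #define TOTAL_PAGES  2048                /* 8MB of 4kB pages */
        #define FREE_TARGET  (TOTAL_PAGES / 20)  /* ~5% */

        static int nr_free = FREE_TARGET;

        /* Stand-in for the work kswapd does: reclaim one page. */
        static int swap_out_one_page(void)
        {
                if (nr_free >= TOTAL_PAGES)
                        return 0;               /* nothing to reclaim */
                nr_free++;
                return 1;
        }

        /*
         * "Synchronized" allocation: instead of taking pages and hoping
         * kswapd catches up over time, top the pool back up to the
         * target before taking a page, so the free count never dips
         * below the target "locally".
         */
        static int alloc_page(void)
        {
                while (nr_free <= FREE_TARGET)
                        if (!swap_out_one_page())
                                return -1;
                nr_free--;
                return 0;
        }

        int main(void)
        {
                int i;

                for (i = 0; i < 100000; i++)
                        if (alloc_page() < 0)
                                return 1;
                printf("free pages never dipped below %d\n", FREE_TARGET);
                return 0;
        }

    The only interesting part is the while loop in alloc_page(): the
    allocator itself drives the freeing when the pool is low, instead
    of assuming kswapd already got there.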

    That's all I'm saying. Essentially, I've tried to convince people
    with raw numbers that the VM layer doesn't actually need any major
    overhaul; it only needs some slight fixes. The people who have
    talked about major overhauls haven't shown me either code _or_
    reasoning, so..

    Linus

