Date:	Tue, 14 Jul 1998 11:13:53 -0700 (PDT)
From:	Linus Torvalds <>
Subject: Re: Strange interrupt behaviour
On Tue, 14 Jul 1998, Gerard Roudier wrote:
>
> A blind algorithm that would ensure that 1 dual page is available should
> try to keep free half of the memory + 1 PAGE.
Nope. It would be stupid to be blind, when it's so easy to not be blind. See mm/page_alloc.c - free_memory_available().
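Roughly, the shape of that check - a sketch with illustrative names only,
not the exact mm/page_alloc.c source:

/*
 * Instead of blindly keeping half of memory free, keep the free page
 * count above a small watermark and let kswapd start paging out as
 * soon as we dip below it.  Stand-in names, for illustration.
 */
extern int nr_free_pages;	/* pages currently on the free lists    */
extern int freepages_low;	/* watermark where reclaim should start */

static int free_memory_available(void)
{
	return nr_free_pages > freepages_low;
}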
I actually tried to force keeping large chunks available (128kB areas instead of 8kB areas), and that certainly did not work due to fragmentation. But keeping an 8kB area free even with random replacement is fairly trivial (*): do the math, and you'll see that the likelihood of not finding two contiguous pages when you have 10% free memory is minuscule.
Note that people always slam the buddy allocator, but they do it without ever giving any alternative. David used to do this, and I think he finally tried out some alternatives - I haven't seen him complain about buddy for some time now. It's simply the best scheme there is for avoiding fragmentation (buddy together with directed swap-out would obviously be better still, but directed page-outs are hard).
These days it's Alan who slams buddy, and I hereby charge him with the holy goal of coming up with something better before he complains. Not just theory, but implementation.
> We do not need dual pages very often.
Actually, we do. But not often enough that keeping up with them would be a problem for kswapd.
The problem is not that we cannot keep up with the average rate; the problem is that we currently don't even _try_ to keep up with peak allocations, because we never synchronize with kswapd. Hence the current problems at even very fleeting peak times.
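What "synchronizing" could look like, in hand-waving form (every name
here is hypothetical - this is the shape of the idea, not a real kernel
interface):

struct page;
extern struct page *try_alloc_pages(int order);	/* hypothetical */
extern void wake_up_kswapd(void);		/* hypothetical */
extern void wait_for_free_pages(void);		/* hypothetical */

/* On failure, kick the daemon and wait for it to free something,
 * instead of failing outright at the first fleeting peak. */
struct page *alloc_pages_sync(int order, int can_sleep)
{
	struct page *page;

	while (!(page = try_alloc_pages(order))) {
		wake_up_kswapd();	/* start page-out right now...  */
		if (!can_sleep)
			return NULL;	/* atomic callers still fail    */
		wait_for_free_pages();	/* ...and block until it helped */
	}
	return page;
}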
Linus
(*) For the math challenged: imagine that you have x pages, and y of those pages are free. What is the likelihood of not finding a single contiguous two-page area?
This boils down to how you can place the free pages so that no two of them end up adjacent. Place them one at a time: each new free page must avoid the two neighbours of every page already freed, so the k-th page has roughly x - 2(k-1) safe positions out of x. That makes the likelihood essentially:

    x * (x-2) * (x-4) * (x-6) * ... (y factors)
    -------------------------------------------
    x * x * x * x * ...    (y factors - ie x^y)
and this likelihood shrinks very fast indeed as the number of free pages grows.
For the case where we have 8MB of RAM (x = 2000) and 2% of that is free (y = 40), it's still about 50% likely that you won't find a double page; but at just 5% free pages it's down to about half a percent, and at 10% free memory we're talking exponents of -10 or so...
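If you don't trust the arithmetic, the product is trivial to evaluate
directly. A throw-away userland program (not kernel code) that
reproduces the figures above:

#include <stdio.h>

/* Probability of NO two adjacent free pages: each successive free
 * page must avoid ~2 slots per page already freed, per the product
 * formula above. */
static double p_no_pair(int x, int y)
{
	double p = 1.0;
	int k;

	for (k = 0; k < y; k++)
		p *= (double)(x - 2*k) / x;
	return p;
}

int main(void)
{
	printf("8MB,  2%% free: %g\n", p_no_pair(2000, 40));	/* ~0.45  */
	printf("8MB,  5%% free: %g\n", p_no_pair(2000, 100));	/* ~0.006 */
	printf("8MB, 10%% free: %g\n", p_no_pair(2000, 200));	/* ~5e-10 */
	return 0;
}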
Now, the above is with the assumption that free page placement is completely random physically. That's not true: the buddy allocator tries to coalesce pages and tends to re-use the "scattered" pages first, which works in our favour. But at the same time each 2-page allocation works to scatter the pages again. Somebody would have to do a real simulation to see which influence is stronger (a trivial starting point is sketched below), but I'd expect the two forces to roughly cancel, leaving a net result not too far from the "simple" answer.
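The purely random half of that simulation is a five-minute job; here is
a throw-away sketch (uniform placement only - it models neither buddy
coalescing nor the scattering from 2-page allocations, which is exactly
the part somebody would have to add):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGES	2000	/* 8MB of 4kB pages, as above */
#define NFREE	100	/* 5% free                    */
#define TRIALS	100000

/* One trial: scatter NFREE free pages uniformly at random, then
 * look for any two adjacent free pages. */
static int has_pair(char *map)
{
	int n, p;

	memset(map, 0, PAGES);
	for (n = 0; n < NFREE; n++) {
		do
			p = rand() % PAGES;
		while (map[p]);
		map[p] = 1;
	}
	for (p = 0; p + 1 < PAGES; p++)
		if (map[p] && map[p + 1])
			return 1;
	return 0;
}

int main(void)
{
	static char map[PAGES];
	int t, found = 0;

	for (t = 0; t < TRIALS; t++)
		found += has_pair(map);
	printf("found a pair in %d of %d trials\n", found, TRIALS);
	return 0;
}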
Note that yes, Gerard, with 512MB of RAM even just 1% free means that we essentially always have 8kB areas free. And even just having 32MB instead of 8MB turns the 2% free case (which was fifty-fifty with 8MB) into a 96% chance of having contiguous pages.
Also note that we don't actually have to say "it's very unlikely". The above essentially means that even with random page-out, we can just continue until we get a contiguous area - and the math tells us that we'll essentially never have to page out for very long.