Date: Thu, 3 Nov 2005
From: Linus Torvalds
Subject: Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19


On Thu, 3 Nov 2005, Martin J. Bligh wrote:
> >
> > These days we have things like per-cpu lists in front of the buddy
> > allocator that will make fragmentation somewhat higher, but it's still
> > absolutely true that the page allocation layout is _not_ random.
>
> OK, well I'll quit torturing you with incorrect math if you'll concede
> that the situation gets much much worse as memory sizes get larger ;-)

I don't remember the specifics (I did the stats several years ago), but if
I recall correctly, the low-order allocations actually got _better_ with
more memory, assuming you kept a fixed percentage of memory free. So you
actually needed _less_ memory free (in percentages) to get low-order
allocations reliably.

But for the higher orders, more memory didn't much matter. Basically, it
gets exponentially more difficult to keep higher-order allocations
available, and a merely linear improvement from having more memory
doesn't help one whit.
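
A back-of-the-envelope sketch, not from the original mail: treat every page
as free independently with probability p. That is a pessimistic baseline
(as the quoted text at the top says, real page placement is _not_ random),
but it shows the shape of the argument: an aligned order-n block is free
only if all 2^n of its pages are, so the chance is p^(2^n), while the
expected number of free order-n blocks in M pages is (M / 2^n) * p^(2^n),
which grows linearly with M at a fixed free fraction. The memory sizes and
the 20% free fraction below are made-up example numbers.

/* toy_buddy_odds.c: independent-free-page model (pessimistic baseline). */
#include <math.h>
#include <stdio.h>

static void report(double total_pages, double free_fraction)
{
        printf("M = %.0f pages, free fraction p = %.2f\n",
               total_pages, free_fraction);
        for (int order = 0; order <= 10; order++) {
                double pages_per_block = ldexp(1.0, order);   /* 2^order */
                /* P(aligned order-n block entirely free) = p^(2^n) */
                double p_block = pow(free_fraction, pages_per_block);
                /* Expected free order-n blocks = (M / 2^n) * p^(2^n) */
                double expected = (total_pages / pages_per_block) * p_block;
                printf("  order %2d: P(block free) = %.2e, "
                       "expected free blocks = %.2e\n",
                       order, p_block, expected);
        }
}

int main(void)
{
        /* 4 KiB pages: 1 GiB = 262144 pages, 64 GiB = 16777216 pages. */
        report(262144.0, 0.20);
        report(16777216.0, 0.20);
        return 0;
}

With those made-up numbers, going from 1 GiB to 64 GiB takes the expected
count of free order-3 blocks from well under one to a handful, while
order-10 stays at zero either way - which is the "low orders get better,
high orders never happen" shape described above.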

So it doesn't get _harder_ with lots of memory, but

- you need to keep the "minimum free" watermarks growing at the same rate
  the memory sizes grow (and on x86, I don't think we do: at least at
  some point, the HIGHMEM zone had a much lower low-water-mark because it
  made the balancing behaviour much nicer. But I didn't check that). A
  quick numeric sketch of this point follows the list.

- with lots of memory, you tend to want to get higher-order pages, and
  that gets harder much much faster than your memory size grows. So
  _effectively_, the kinds of allocations you care about are much harder
  to get.
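
The quick numeric sketch promised in the first point, again not from the
original mail and with invented numbers: if the "minimum free" watermark is
a fixed number of pages, the free fraction it guarantees shrinks as the
zone grows, and in the toy model above it is exactly that fraction the low
orders depend on. A watermark that scales with the zone keeps the fraction
constant.

/* watermark_fraction.c: fixed vs. zone-proportional "min free" watermark. */
#include <stdio.h>

int main(void)
{
        /* Zone sizes in 4 KiB pages: 1 GiB and 64 GiB (example sizes). */
        const double zone_pages[] = { 262144.0, 16777216.0 };
        const double fixed_watermark = 8192.0; /* fixed: 32 MiB worth of pages */
        const double scaled_fraction = 0.03;   /* scaled: always 3% of the zone */

        for (int i = 0; i < 2; i++) {
                double z = zone_pages[i];
                printf("zone of %8.0f pages: fixed watermark leaves %.3f%% free, "
                       "scaled one leaves %.1f%%\n",
                       z, 100.0 * fixed_watermark / z, 100.0 * scaled_fraction);
        }
        return 0;
}

At 1 GiB the two policies are nearly the same (about 3% free); at 64 GiB
the fixed watermark leaves roughly 0.05% free, and with p that small even
order-2 and order-3 become scarce in the toy model above. The 3% and the
8192 pages are purely illustrative numbers, not anything the kernel uses.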

If you look at get_free_pages(), you will note that we actually
_guarantee_ memory allocations up to order-3:

	...
	if (!(gfp_mask & __GFP_NORETRY)) {
		if ((order <= 3) || (gfp_mask & __GFP_REPEAT))
			do_retry = 1;
	...

and nobody has ever even noticed. In other words, low-order allocations
really _are_ dependable. It's just that the kinds of orders you want for
memory hotplug or hugetlb (ie not orders <=3, but >=10) are not, and never
will be.
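
For concreteness, a minimal sketch, not from the original mail, of what
relying on that guarantee looks like from the caller's side, assuming the
2.6-era __get_free_pages()/free_pages() interfaces and standard module
boilerplate; the module and symbol names here are made up:

/* order3_alloc.c: request an order-3 block (8 pages) from the buddy allocator. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static unsigned long order3_buf;

static int __init order3_init(void)
{
        /* Order-3 = 2^3 = 8 contiguous, naturally aligned pages. */
        order3_buf = __get_free_pages(GFP_KERNEL, 3);
        if (!order3_buf)
                return -ENOMEM; /* defensive; per the retry code quoted above,
                                   order <= 3 without __GFP_NORETRY keeps retrying */
        return 0;
}

static void __exit order3_exit(void)
{
        free_pages(order3_buf, 3);
}

module_init(order3_init);
module_exit(order3_exit);
MODULE_LICENSE("GPL");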

(Btw, my statistics did depend on the fact that the _usage_ was an even
higher exponential, ie you had many many more order-0 allocations than you
had order-1. You can always run out of order-n (n != 0) pages if you just
allocate enough of them. The buddy thing works well statistically, but it
obviously can't do wonders.)
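
One more hedged bit of arithmetic, not from the original mail, for that
last caveat: coalescing cannot create pages, so M pages can hold at most
M / 2^n simultaneously-live order-n allocations, and any workload that
tries to keep more than that alive has to fail no matter how well the
buddy allocator behaves. The 64 GiB figure is just an example size.

/* order_capacity.c: hard ceiling on live order-n allocations in M pages. */
#include <stdio.h>

int main(void)
{
        const unsigned long total_pages = 16777216UL; /* 64 GiB of 4 KiB pages */

        for (unsigned int order = 0; order <= 10; order++)
                printf("order %2u: at most %lu simultaneously-live allocations\n",
                       order, total_pages >> order);
        return 0;
}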

Linus
