Subject: Re: [PATCH 00/22] Cleanup and optimise the page allocator V7
On Mon, 2009-04-27 at 15:38 +0100, Mel Gorman wrote:
> On Mon, Apr 27, 2009 at 03:58:39PM +0800, Zhang, Yanmin wrote:
> > On Wed, 2009-04-22 at 14:53 +0100, Mel Gorman wrote:
> > > Here is V7 of the cleanup and optimisation of the page allocator and
> > > it should be ready for wider testing. Please consider a possibility for
> > > merging as a Pass 1 at making the page allocator faster. Other passes will
> > > occur later when this one has had a bit of exercise. This patchset is based
> > > on mmotm-2009-04-17 and I've tested it successfully on a small number of
> > > machines.
> > We ran some performance benchmarks against the V7 patch on top of 2.6.30-rc3.
> > It seems some counters in the kernel are incorrect after we run ffsb (a disk I/O
> > benchmark) and swap-cp (a simple swap-memory test using cp on tmpfs). Free memory
> > is reported as bigger than total memory.
> >
>
> oops. Can you try this patch please?
>
> ==== CUT HERE ====
>
> Properly account for freed pages in free_pages_bulk() and when allocating high-order pages in buffered_rmqueue()
>
> free_pages_bulk() updates the number of free pages in the zone but it is
> assuming that the pages being freed are order-0. While this is currently
> always true, it's wrong to assume the order is 0. This patch fixes the
> problem.
>
> buffered_rmqueue() is not updating NR_FREE_PAGES when allocating pages with
> __rmqueue(). This means that any high-order allocation will appear to increase
> the number of free pages leading to the situation where free pages appears to
> exceed available RAM. This patch accounts for those allocated pages properly.
>
> This is a candidate fix to the patch
> page-allocator-update-nr_free_pages-only-as-necessary.patch. It has yet to be
> verified as fixing a problem where the free pages count is getting corrupted.
>
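Just to spell the arithmetic out: an order-N block covers 1 << N base pages, so
the counter has to move by count << order when freeing count blocks, and by
1 << order when a single high-order page is allocated. A tiny stand-alone sketch
of that bookkeeping (names made up for illustration, not the kernel API):

#include <stdio.h>

/* Illustration only: zone_stat/free_bulk/alloc_one are not kernel names. */
struct zone_stat { long nr_free_pages; };

/* Freeing 'count' blocks of order 'order' returns count << order base pages. */
static void free_bulk(struct zone_stat *z, int count, unsigned int order)
{
	z->nr_free_pages += (long)count << order;
}

/* Allocating one order-'order' block consumes 1 << order base pages. */
static void alloc_one(struct zone_stat *z, unsigned int order)
{
	z->nr_free_pages -= 1L << order;
}

int main(void)
{
	struct zone_stat z = { .nr_free_pages = 256 };

	free_bulk(&z, 16, 2);	/* +64 base pages, not +16 */
	alloc_one(&z, 3);	/* -8 base pages */
	printf("nr_free_pages = %ld\n", z.nr_free_pages);	/* 312 */
	return 0;
}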
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> mm/page_alloc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3db5f57..dd69593 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -545,7 +545,7 @@ static void free_pages_bulk(struct zone *zone, int count,
>  	zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
>  	zone->pages_scanned = 0;
>
> -	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
> +	__mod_zone_page_state(zone, NR_FREE_PAGES, count << order);
>  	while (count--) {
>  		struct page *page;
>
> @@ -1151,6 +1151,7 @@ again:
>  	} else {
>  		spin_lock_irqsave(&zone->lock, flags);
>  		page = __rmqueue(zone, order, migratetype);
> +		__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
Shouldn't 'i' be 1 here? Only one order-'order' page is allocated on this path.

>  		spin_unlock(&zone->lock);
>  		if (!page)
>  			goto failed;
I ran a cp of the kernel source files and the swap-cp workload, and didn't
see the bad counters this time.
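For reference, the check itself amounts to comparing MemFree against MemTotal
in /proc/meminfo. A rough stand-alone version of that sanity check (only a
sketch of the idea, not the actual test setup):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	unsigned long total_kb = 0, free_kb = 0;

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}

	/* Pick out the MemTotal: and MemFree: lines, values are in kB. */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "MemTotal: %lu kB", &total_kb) == 1)
			continue;
		sscanf(line, "MemFree: %lu kB", &free_kb);
	}
	fclose(f);

	printf("MemTotal: %lu kB  MemFree: %lu kB\n", total_kb, free_kb);
	if (free_kb > total_kb)
		printf("BUG: free memory exceeds total memory\n");
	return 0;
}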



