Subject: [patch -mm 2/2] mm, compaction: abort free scanner if split fails
If the memory compaction free scanner cannot successfully split a free
page (only possible due to per-zone low watermark), terminate the free
scanner rather than continuing to scan memory needlessly.

If the per-zone watermark is insufficient for a free page of
order <= cc->order, then terminate the scanner since future splits will
also likely fail.
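
For context (and not part of this patch), the split fails because
__isolate_free_page() obeys the watermark as though the page were being
allocated.  Paraphrasing mm/page_alloc.c from around this time (the exact
form may differ slightly), the check is roughly:

	/* sketch of the check inside __isolate_free_page(), not this patch */
	if (!is_migrate_isolate(mt)) {
		/* Obey watermarks as if the page was being allocated */
		watermark = low_wmark_pages(zone) + (1 << order);
		if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
			return 0;
	}

Nothing the free scanner does will push free memory back above that
watermark, so once a split fails, later splits will likely keep failing for
the rest of the scan, which is why the scanner now gives up instead of
continuing.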

This prevents the compaction freeing scanner from scanning all memory on
very large zones (very noticeable for zones > 128GB, for instance) when
all splits will likely fail.

Signed-off-by: David Rientjes <rientjes@google.com>
---
Note: I think we may want to backport this to -stable since this problem
has existed since at least 3.11. This patch won't cleanly apply to any
stable tree, though. If people think it should be backported, let me know
and I'll handle the failures as they arise and rebase.

mm/compaction.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -496,7 +496,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		order = page_order(page);
 		isolated = __isolate_free_page(page, order);
 		if (!isolated)
-			goto isolate_fail;
+			break;
 		set_page_private(page, order);
 		total_isolated += isolated;
 		list_add_tail(&page->lru, freelist);
@@ -518,6 +518,9 @@ isolate_fail:
 
 	}
 
+	if (locked)
+		spin_unlock_irqrestore(&cc->zone->lock, flags);
+
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),
 	 * so be careful to not go outside of the pageblock.
@@ -539,9 +542,6 @@ isolate_fail:
 	if (strict && blockpfn < end_pfn)
 		total_isolated = 0;
 
-	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
-
 	/* Update the pageblock-skip if the whole pageblock was scanned */
 	if (blockpfn == end_pfn)
 		update_pageblock_skip(cc, valid_page, total_isolated, false);
@@ -1068,6 +1068,7 @@ static void isolate_freepages(struct compact_control *cc)
 				block_end_pfn = block_start_pfn,
 				block_start_pfn -= pageblock_nr_pages,
 				isolate_start_pfn = block_start_pfn) {
+		unsigned long isolated;
 
 		/*
 		 * This can iterate a massively long zone without finding any
@@ -1092,8 +1093,12 @@ static void isolate_freepages(struct compact_control *cc)
 			continue;
 
 		/* Found a block suitable for isolating free pages from. */
-		isolate_freepages_block(cc, &isolate_start_pfn,
-					block_end_pfn, freelist, false);
+		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
+						block_end_pfn, freelist, false);
+		/* If isolation failed, do not continue needlessly */
+		if (!isolated && isolate_start_pfn < block_end_pfn &&
+		    cc->nr_freepages <= cc->nr_migratepages)
+			break;
 
 		/*
 		 * If we isolated enough freepages, or aborted due to async