From: Mel Gorman <mel@csn.ul.ie>
Subject: [PATCH 19/20] Batch free pages from migratetype per-cpu lists
Date: 22 Feb 2009
When the PCP lists are too large, a number of pages are freed in bulk.
Currently the free lists are examined in a round-robin fashion, but it is
not unusual for the PCP lists to contain pages of only one migratetype, so
a significant amount of time is spent checking empty lists. This patch
still frees pages in a round-robin fashion, but it frees multiple pages
from each migratetype's list at a time.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/page_alloc.c | 36 ++++++++++++++++++++++++------------
1 files changed, 24 insertions(+), 12 deletions(-)
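
For illustration (not part of the patch), here is a minimal standalone
userspace sketch of the batched round-robin drain described above. The
names drain_batched, NUM_TYPES and BATCH are made up for the sketch and
are not kernel identifiers; only the control flow mirrors the change.

/*
 * Userspace sketch: drain "count" items from NUM_TYPES per-type lists
 * in a round-robin fashion, taking up to BATCH items from one list
 * before rotating to the next. The caller must not ask for more items
 * than the lists hold in total.
 */
#include <stdio.h>

#define NUM_TYPES 3	/* stand-in for MIGRATE_PCPTYPES */
#define BATCH     8	/* items taken from one list before rotating */

static void drain_batched(int lists[NUM_TYPES], int count)
{
	int type = 0;

	while (count) {
		int batch;

		/* Rotate to the next per-type list, wrapping around */
		if (++type == NUM_TYPES)
			type = 0;

		/*
		 * Take up to BATCH items before moving on, so a list that
		 * holds most of the pages is not revisited once per page
		 * while the other, empty lists are rechecked each time.
		 */
		for (batch = 0; batch < BATCH && count; batch++) {
			if (lists[type] == 0)
				break;
			lists[type]--;	/* "free one page" */
			count--;
		}
	}
}

int main(void)
{
	/* Mostly one type on the lists, as the changelog describes */
	int lists[NUM_TYPES] = { 20, 0, 4 };

	drain_batched(lists, 12);
	printf("remaining: %d %d %d\n", lists[0], lists[1], lists[2]);
	return 0;
}

As in the patch, the batch size of 8 is somewhat arbitrary, but it keeps
the cost of cycling past empty lists small relative to the work of
freeing pages.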

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 50e2fdc..627837c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -532,22 +532,34 @@ static void free_pcppages_bulk(struct zone *zone, int count,
spin_lock(&zone->lock);
zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
zone->pages_scanned = 0;
- while (count--) {
+
+ /* Remove pages from lists in a semi-round-robin fashion */
+ while (count) {
struct page *page;
struct list_head *list;
+ int batch;

- /* Remove pages from lists in a round-robin fashion */
- do {
- if (migratetype == MIGRATE_PCPTYPES)
- migratetype = 0;
- list = &pcp->lists[migratetype];
- migratetype++;
- } while (list_empty(list));
+ if (++migratetype == MIGRATE_PCPTYPES)
+ migratetype = 0;
+ list = &pcp->lists[migratetype];

- page = list_entry(list->prev, struct page, lru);
- /* have to delete it as __free_one_page list manipulates */
- list_del(&page->lru);
- __free_one_page(page, zone, 0, page_private(page));
+ /*
+ * Free from the lists in batches of 8. Batching avoids
+ * the case where the pcp lists contain mainly pages of
+ * one type and constantly cycling around checking empty
+ * lists. The choice of 8 is somewhat arbitrary but based
+ * on the expected maximum size of the PCP lists
+ */
+ for (batch = 0; batch < 8 && count; batch++) {
+ if (list_empty(list))
+ break;
+ page = list_entry(list->prev, struct page, lru);
+
+ /* have to delete as __free_one_page list manipulates */
+ list_del(&page->lru);
+ __free_one_page(page, zone, 0, page_private(page));
+ count--;
+ }
}
spin_unlock(&zone->lock);
}
--
1.5.6.5

