Subject: [PATCH 04/16] mm: balance zone aging in direct reclaim path
Add 10 extra priorities to the direct page reclaim path, which makes 10
rounds of balancing effort (reclaim only from the least aged local/headless
zone) before falling back to the reclaim-all scheme.

Ten rounds should be enough to reclaim sufficient free pages in normal
cases, which avoids unnecessarily disturbing remote nodes. If we further
restrict the first round of page allocation to local zones, we may get what
the early zone reclaim patch wanted: memory affinity/locality.
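
For illustration, the intended loop structure looks roughly like the
stand-alone userspace sketch below. Everything named toy_*, the
BALANCE_ROUNDS macro, the per-zone age field and the sample numbers are
made up for the demo, and the node-locality/headless-node check is left
out; the real logic is in the mm/vmscan.c hunks further down.

/* toy-balance.c: userspace sketch of the balancing rounds, not kernel code */
#include <stdio.h>

#define DEF_PRIORITY	12	/* same value as in mm/vmscan.c */
#define BALANCE_ROUNDS	10	/* the 10 extra priorities added here */

struct toy_zone {
	const char *name;
	unsigned long age;	/* stand-in for the per-zone aging clock */
};

static struct toy_zone zones[] = {
	{ "Normal",  40 },
	{ "DMA",     10 },
	{ "HighMem", 25 },
};
#define NR_ZONES (sizeof(zones) / sizeof(zones[0]))

/* stand-in for shrink_zone(): scanning a zone advances its age */
static void toy_shrink_zone(struct toy_zone *z)
{
	printf("shrink %s (age %lu)\n", z->name, z->age);
	z->age++;
}

int main(void)
{
	int priority;
	size_t i;

	for (priority = DEF_PRIORITY + BALANCE_ROUNDS; priority >= 0; priority--) {
		if (priority > DEF_PRIORITY) {
			/* balancing round: shrink only the least aged zone,
			 * the one the real patch picks via age_gt() */
			struct toy_zone *least = &zones[0];

			for (i = 1; i < NR_ZONES; i++)
				if (zones[i].age < least->age)
					least = &zones[i];
			toy_shrink_zone(least);
		} else {
			/* out of balancing rounds: fall back to reclaim-all */
			for (i = 0; i < NR_ZONES; i++)
				toy_shrink_zone(&zones[i]);
			break;	/* one reclaim-all pass is enough for the demo */
		}
	}
	return 0;
}

In the real path the priority loop exits early as soon as enough pages have
been reclaimed, so the reclaim-all rounds are only reached when the
balancing rounds did not free enough memory.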

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

mm/vmscan.c | 31 ++++++++++++++++++++++++++++---
1 files changed, 28 insertions(+), 3 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1194,6 +1194,7 @@ static void
 shrink_caches(struct zone **zones, struct scan_control *sc)
 {
 	int i;
+	struct zone *z = NULL;
 
 	for (i = 0; zones[i] != NULL; i++) {
 		struct zone *zone = zones[i];
@@ -1208,11 +1209,34 @@ shrink_caches(struct zone **zones, struc
 		if (zone->prev_priority > sc->priority)
 			zone->prev_priority = sc->priority;
 
-		if (zone->all_unreclaimable && sc->priority != DEF_PRIORITY)
+		if (zone->all_unreclaimable && sc->priority < DEF_PRIORITY)
 			continue;	/* Let kswapd poll it */
 
+		/*
+		 * Balance page aging in local zones and following headless
+		 * zones.
+		 */
+		if (sc->priority > DEF_PRIORITY) {
+			if (zone->zone_pgdat != zones[0]->zone_pgdat) {
+				cpumask_t cpu = node_to_cpumask(
+						zone->zone_pgdat->node_id);
+				if (!cpus_empty(cpu))
+					break;
+			}
+
+			if (!z)
+				z = zone;
+			else if (age_gt(z, zone))
+				z = zone;
+
+			continue;
+		}
+
 		shrink_zone(zone, sc);
 	}
+
+	if (z)
+		shrink_zone(z, sc);
 }
 
 /*
@@ -1256,7 +1280,8 @@ int try_to_free_pages(struct zone **zone
 		lru_pages += zone->nr_active + zone->nr_inactive;
 	}
 
-	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
+	/* The added 10 priorities are for scan rate balancing */
+	for (priority = DEF_PRIORITY + 10; priority >= 0; priority--) {
 		sc.nr_mapped = read_page_state(nr_mapped);
 		sc.nr_scanned = 0;
 		sc.nr_reclaimed = 0;
@@ -1290,7 +1315,7 @@ int try_to_free_pages(struct zone **zone
 		}
 
 		/* Take a nap, wait for some writeback to complete */
-		if (sc.nr_scanned && priority < DEF_PRIORITY - 2)
+		if (sc.nr_scanned && priority < DEF_PRIORITY)
 			blk_congestion_wait(WRITE, HZ/10);
 	}
 out:
--