Subject: [PATCH v4 3/5] memcg: stop scanning if enough
memcg: avoid node fallback scan if possible.

Currently, try_to_free_pages() scans all zones in the zonelist, because
the page allocator must be able to visit all of them... but that behavior
is harmful for memcg. Memcg scans memory only because it hit its limit;
there is no memory shortage in the passed zonelist.
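
For reference, shrink_zones() (called from try_to_free_pages()) walks
every zone in the passed zonelist with no early exit. A minimal sketch of
the pre-patch structure (zone-level checks elided; names as in the era's
mm/vmscan.c, details may differ):

static void shrink_zones(int priority, struct zonelist *zonelist,
                         struct scan_control *sc)
{
        struct zoneref *z;
        struct zone *zone;

        for_each_zone_zonelist_nodemask(zone, z, zonelist,
                        gfp_zone(sc->gfp_mask), sc->nodemask) {
                /* ... populated/unreclaimable checks elided ... */
                shrink_zone(priority, zone, sc);
                /* no early exit: every node/zone is visited */
        }
}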

For example, with the following unbalanced nodes:

       Node 0    Node 1
File   1G        0
Anon   200M      200M

memcg will cause swap-out from Node 1 at every vmscan, because reclaim
falls back to Node 1, which has only anonymous pages, even when Node 0's
file cache alone could satisfy the target.

As another example, assume a system with 1024 nodes. memcg will visit
all 1024 nodes and scan pages on each of them at every vmscan... This
is overkill.

This is why memcg's victim node selection logic doesn't work
as expected.

This patch helps by stopping vmscan early once enough pages have been reclaimed.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/vmscan.c | 10 ++++++++++
1 file changed, 10 insertions(+)
Index: mmotm-0710/mm/vmscan.c
===================================================================
--- mmotm-0710.orig/mm/vmscan.c
+++ mmotm-0710/mm/vmscan.c
@@ -2058,6 +2058,16 @@ static void shrink_zones(int priority, s
 		}
 
 		shrink_zone(priority, zone, sc);
+		if (!scanning_global_lru(sc)) {
+			/*
+			 * When we scan for a memcg's limit, it is bad to
+			 * fall back into more nodes/zones because there
+			 * is no memory shortage.  Quit as soon as we
+			 * reach the target.
+			 */
+			if (sc->nr_to_reclaim <= sc->nr_reclaimed)
+				break;
+		}
 	}
 }
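
Note: scanning_global_lru() is what distinguishes limit-triggered
(memcg) reclaim from global reclaim above. In trees of this vintage it
is roughly the following (a sketch; the exact definition in mmotm may
differ):

#ifdef CONFIG_CGROUP_MEM_RES_CTLR
/* memcg reclaim attaches a target cgroup to the scan_control */
#define scanning_global_lru(sc)	(!(sc)->mem_cgroup)
#else
#define scanning_global_lru(sc)	(1)
#endif

So the new break fires only for limit-triggered reclaim, where
sc->nr_to_reclaim is a small fixed target (typically SWAP_CLUSTER_MAX)
rather than a zonelist-wide shortage.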


