Subject: [PATCH] memcg: do not recalculate section unnecessarily in init_section_page_cgroup
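Initialize 'section' at declaration time and drop the redundant second
__pfn_to_section(pfn) lookup at the end of init_section_page_cgroup():
pfn does not change within the function, so the section it maps to
cannot change either. No functional change intended.
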
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
---

diff -urNp linux-2.6.28-rc2-mm1-orig/mm/page_cgroup.c linux-2.6.28-rc2-mm1/mm/page_cgroup.c
--- linux-2.6.28-rc2-mm1-orig/mm/page_cgroup.c 2008-10-30 12:49:27.000000000 +0900
+++ linux-2.6.28-rc2-mm1/mm/page_cgroup.c 2008-10-30 12:52:41.000000000 +0900
@@ -99,13 +99,11 @@ struct page_cgroup *lookup_page_cgroup(s

 int __meminit init_section_page_cgroup(unsigned long pfn)
 {
-        struct mem_section *section;
+        struct mem_section *section = __pfn_to_section(pfn);
         struct page_cgroup *base, *pc;
         unsigned long table_size;
         int nid, index;
 
-        section = __pfn_to_section(pfn);
-
         if (section->page_cgroup)
                 return 0;

@@ -131,7 +129,6 @@ int __meminit init_section_page_cgroup(u
                 __init_page_cgroup(pc, pfn + index);
         }
 
-        section = __pfn_to_section(pfn);
         section->page_cgroup = base - pfn;
         total_usage += table_size;
         return 0;


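For reference, a rough sketch of how init_section_page_cgroup() reads with
this patch applied. Only the lines touched by the diff are reproduced; the
code between the two hunks (nid lookup, table allocation, error handling and
the __init_page_cgroup() loop) is unchanged by this patch and is elided below.

int __meminit init_section_page_cgroup(unsigned long pfn)
{
        struct mem_section *section = __pfn_to_section(pfn);   /* single lookup */
        struct page_cgroup *base, *pc;
        unsigned long table_size;
        int nid, index;

        if (section->page_cgroup)
                return 0;

        /* ... allocate the per-section page_cgroup table and run the
         *     __init_page_cgroup() loop; unchanged, elided here ... */

        /*
         * pfn has not changed since the lookup above, so 'section' is
         * still the right one; the second __pfn_to_section() call that
         * used to sit here is gone.
         */
        section->page_cgroup = base - pfn;
        total_usage += table_size;
        return 0;
}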
