    Subject: Re: boot panic with memcg enabled (Was [PATCH 3/4] memcg: don't use bootmem allocator in setup code)
    Li Zefan wrote:
    > (This patch should have CCed memcg maintainers)
    >
    > My box failed to boot due to initialization failure of page_cgroup, and
    > it's caused by this patch:
    >
    > + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
    >
    > I added a printk, and found that order == 11 == MAX_ORDER.
    >
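    For reference, a rough back-of-the-envelope (numbers purely illustrative,
    assuming 4 KiB pages and a 32-byte struct page_cgroup) shows how easily
    get_order() hits the buddy allocator's limit here:

        nr_pages              = 256 * 1024        /* node spans 1 GiB      */
        table_size            = 32 * nr_pages     /* 8 MiB                 */
        get_order(table_size) = 11                /* == MAX_ORDER          */

    Since the page allocator rejects any request with order >= MAX_ORDER,
    alloc_pages_node() returns NULL and page_cgroup initialization fails.
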
    > Pekka J Enberg wrote:
    >> From: Yinghai Lu <yinghai@kernel.org>
    >>
    >> The bootmem allocator is no longer available for page_cgroup_init() because we
    >> set up the kernel slab allocator much earlier now.
    >>
    >> Cc: Ingo Molnar <mingo@elte.hu>
    >> Cc: Johannes Weiner <hannes@cmpxchg.org>
    >> Cc: Linus Torvalds <torvalds@linux-foundation.org>
    >> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    >> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    >> ---
    >> mm/page_cgroup.c | 12 ++++++++----
    >> 1 files changed, 8 insertions(+), 4 deletions(-)
    >>
    >> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
    >> index 791905c..3dd4a90 100644
    >> --- a/mm/page_cgroup.c
    >> +++ b/mm/page_cgroup.c
    >> @@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
    >>  	struct page_cgroup *base, *pc;
    >>  	unsigned long table_size;
    >>  	unsigned long start_pfn, nr_pages, index;
    >> +	struct page *page;
    >> +	unsigned int order;
    >>
    >>  	start_pfn = NODE_DATA(nid)->node_start_pfn;
    >>  	nr_pages = NODE_DATA(nid)->node_spanned_pages;
    >> @@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
    >>  		return 0;
    >>
    >>  	table_size = sizeof(struct page_cgroup) * nr_pages;
    >> -
    >> -	base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
    >> -			table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
    >> -	if (!base)
    >> +	order = get_order(table_size);
    >> +	page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
    >> +	if (!page)
    >> +		page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);

    This should probably come with a KERN_WARNING indicating that page_cgroup is
    now being allocated from the current node rather than the desired node; that
    will help debug potential issues later.
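
    Something along these lines is what I have in mind (an untested sketch
    against the hunk above; the warning text is just an example):

    	order = get_order(table_size);
    	page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
    	if (!page) {
    		/* flag the fallback so NUMA placement surprises stay debuggable */
    		printk(KERN_WARNING
    		       "page_cgroup: node %d allocation failed, falling back "
    		       "to any node\n", nid);
    		page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
    	}
    	if (!page)
    		return -ENOMEM;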

    >> +	if (!page)
    >>  		return -ENOMEM;
    >> +	base = page_address(page);
    >>  	for (index = 0; index < nr_pages; index++) {
    >>  		pc = base + index;
    >>  		__init_page_cgroup(pc, start_pfn + index);

    Looks good to me. Does it work for you, Yinghai? Kamezawa-san, could you take a look?


    --
    Balbir

