    Subject: [tip:x86/urgent] x86, memblock: Fix early_node_mem with big reserved region.
    Commit-ID:  419db274bed4269f475a8e78cbe9c917192cfe8b
    Gitweb: http://git.kernel.org/tip/419db274bed4269f475a8e78cbe9c917192cfe8b
    Author: Yinghai Lu <yinghai@kernel.org>
    AuthorDate: Thu, 28 Oct 2010 09:50:17 -0700
    Committer: H. Peter Anvin <hpa@linux.intel.com>
    CommitDate: Thu, 28 Oct 2010 15:52:36 -0700

    x86, memblock: Fix early_node_mem with big reserved region.

    Xen can reserve huge amounts of memory for pre-ballooning, but that
    memory still shows up as RAM in the e820 memory map. Because of the
    start/end adjusting, early_node_mem() cannot find a suitable range on
    the node and goes through the fallback path. However, the fallback
    path still uses memblock_x86_find_in_range_node(), which is only
    partially top-down because it walks the active_range entries from low
    to high.

    Let's use memblock_find_in_range() instead of
    memblock_x86_find_in_range_node(), so the fallback path does a real
    top-down search.

    We may still need to make memblock_x86_find_in_range_node() do a truly
    top-down search overall.
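
    (Illustration, not part of the patch: a minimal user-space sketch of
    why a low-to-high walk over per-node ranges is only "partially"
    top-down. The range values and helper names below are made up for the
    example; the real memblock helpers differ in detail.)

	#include <stdio.h>

	struct range { unsigned long start, end; };

	/* two free ranges, standing in for active_range entries */
	static struct range ranges[] = {
		{ 0x01000000UL, 0x02000000UL },	/* low range  */
		{ 0x40000000UL, 0x80000000UL },	/* high range */
	};

	/*
	 * Walk entries low to high and allocate from the top of the first
	 * one that fits: the result lands in the LOW range even though a
	 * higher range is available.
	 */
	static unsigned long find_partially_top_down(unsigned long size)
	{
		for (int i = 0; i < 2; i++)
			if (ranges[i].end - ranges[i].start >= size)
				return ranges[i].end - size;
		return 0;
	}

	/* Walk entries high to low: a genuinely top-down search. */
	static unsigned long find_top_down(unsigned long size)
	{
		for (int i = 1; i >= 0; i--)
			if (ranges[i].end - ranges[i].start >= size)
				return ranges[i].end - size;
		return 0;
	}

	int main(void)
	{
		unsigned long size = 0x00400000UL;	/* 4 MiB */

		printf("low-to-high walk:  %#lx\n", find_partially_top_down(size));
		printf("high-to-low walk:  %#lx\n", find_top_down(size));
		return 0;
	}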

    Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Tested-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    LKML-Reference: <4CC9A9C9.8020700@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    ---
    arch/x86/mm/numa_64.c | 7 ++-----
    1 files changed, 2 insertions(+), 5 deletions(-)

    diff --git a/arch/x86/mm/numa_64.c b/arch/x86/mm/numa_64.c
    index 60f4985..7ffc9b7 100644
    --- a/arch/x86/mm/numa_64.c
    +++ b/arch/x86/mm/numa_64.c
    @@ -178,11 +178,8 @@ static void * __init early_node_mem(int nodeid, unsigned long start,
     
     	/* extend the search scope */
     	end = max_pfn_mapped << PAGE_SHIFT;
    -	if (end > (MAX_DMA32_PFN<<PAGE_SHIFT))
    -		start = MAX_DMA32_PFN<<PAGE_SHIFT;
    -	else
    -		start = MAX_DMA_PFN<<PAGE_SHIFT;
    -	mem = memblock_x86_find_in_range_node(nodeid, start, end, size, align);
    +	start = MAX_DMA_PFN << PAGE_SHIFT;
    +	mem = memblock_find_in_range(start, end, size, align);
     	if (mem != MEMBLOCK_ERROR)
     		return __va(mem);

