Subject: RE: [PATCH] [RESEND] arm: limit memblock base address for early_pte_alloc
> From: Minchan Kim [mailto:minchan@kernel.org]
> Sent: Tuesday, June 05, 2012 4:12 PM
>
> If we do arm_memblock_steal with a page which is not aligned to the
> section size, a panic can happen during boot from a page fault in
> map_lowmem.
>
> Detail:
>
> 1) mdesc->reserve can steal a page which is allocated at 0x1ffff000 by
>    memblock, which prefers the tail pages of regions.
> 2) map_lowmem maps 0x00000000 - 0x1fe00000.
> 3) map_lowmem tries to map 0x1fe00000, but it's not section-aligned due to 1.
> 4) Calling alloc_init_pte allocates a new page for the new pte via
>    memblock_alloc.
> 5) The memory allocated for the pte is 0x1fffe000 -> it's not mapped yet.
> 6) memset(ptr, 0, sz) in early_alloc_aligned got PANICed!
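
For reference, the memset that panics in step 6 is the one in early_alloc_aligned().
Roughly paraphrased from arch/arm/mm/mmu.c (from memory, so details may differ
slightly):

/*
 * memblock_alloc() returns a physical address, __va() converts it to its
 * linear-map virtual address, and the memset() faults if map_lowmem() has
 * not yet created a mapping for it -- exactly what happens when the page
 * comes from the not-yet-mapped tail above 0x1fe00000.
 */
static void __init *early_alloc_aligned(unsigned long sz, unsigned long align)
{
	void *ptr = __va(memblock_alloc(sz, align));

	memset(ptr, 0, sz);	/* step 6: faults on unmapped memory */
	return ptr;
}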

May I suggest another simple approach? The first contiguous group of sections is
always safely section-mapped inside the alloc_init_section function. So, by
limiting memblock_alloc to the end of this first contiguous group of sections at
the start of map_lowmem, map_lowmem can safely memblock_alloc and memset even if
one or more memory regions are not section-aligned. The limit can be extended
back to arm_lowmem_limit after map_lowmem is done.
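
To illustrate with the numbers from the report (a single bank starting at
0x00000000 whose last page was stolen, so regions[0] ends at 0x1ffff000), the
limit computed by the patch below works out as in this standalone sketch;
PMD_SIZE is assumed to be 2MB as with 2-level ARM page tables:

#include <stdio.h>
#include <stdint.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)	/* 2MB: a pair of 1MB sections */
#define PMD_MASK	(~(PMD_SIZE - 1))

int main(void)
{
	/* regions[0] after the steal: base 0x00000000, size 0x1ffff000 */
	uint32_t base = 0x00000000;
	uint32_t size = 0x1ffff000;
	uint32_t pmd_map_end = (base + size) & PMD_MASK;

	/* prints 0x1fe00000: everything below this address is already
	 * section-mapped by the time alloc_init_pte needs a pte page,
	 * so an allocation capped here can always be safely memset */
	printf("pmd_map_end = %#x\n", pmd_map_end);
	return 0;
}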

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e5dad60..edf1e2d 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1094,6 +1094,11 @@ static void __init kmap_init(void)
 static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
+	phys_addr_t pmd_map_end;
+
+	pmd_map_end = (memblock.memory.regions[0].base +
+		       memblock.memory.regions[0].size) & PMD_MASK;
+	memblock_set_current_limit(pmd_map_end);
 
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
@@ -1113,6 +1118,8 @@ static void __init map_lowmem(void)
 
 		create_mapping(&map);
 	}
+
+	memblock_set_current_limit(arm_lowmem_limit);
 }
 
 /*
@@ -1123,8 +1130,6 @@ void __init paging_init(struct machine_desc *mdesc)
 {
 	void *zero_page;
 
-	memblock_set_current_limit(arm_lowmem_limit);
-
 	build_mem_type_table();
 	prepare_page_table();
 	map_lowmem();
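
For completeness, memblock_set_current_limit() only records an upper bound that
memblock_alloc() then honours, which is all the patch relies on. Roughly
paraphrased from mm/memblock.c (again from memory, so treat it as a sketch):

/* The limit set in map_lowmem() above becomes the ceiling for every
 * subsequent memblock_alloc(), including the pte-page allocations done
 * for the section-unaligned tail. */
void __init memblock_set_current_limit(phys_addr_t limit)
{
	memblock.current_limit = limit;
}

phys_addr_t __init memblock_alloc(phys_addr_t size, phys_addr_t align)
{
	/* MEMBLOCK_ALLOC_ACCESSIBLE means "anywhere below current_limit" */
	return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
}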


