 
    Subject: [PATCH 4/5] mm: do not reset cached_hole_size when vma is unmapped
    In the current code, cached_hole_size is reset to the maximum value whenever
    the unmapped vma lies below free_area_cache, so the next search has to start
    again from the base address.

    Actually, we can keep cached_hole_size: if the next requested size is larger
    than cached_hole_size, the search can still start from free_area_cache.
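
    For reference (not part of the patch), a minimal sketch of roughly how these
    two fields steer the bottom-up search decision in arch_get_unmapped_area; the
    struct and the *_SKETCH macro below are simplified stand-ins for illustration,
    not the kernel definitions:

    /* Simplified stand-ins for the relevant mm_struct fields. */
    struct mm_sketch {
            unsigned long free_area_cache;  /* where the last search left off */
            unsigned long cached_hole_size; /* largest hole known below the cache */
    };

    #define TASK_UNMAPPED_BASE_SKETCH 0x40000000UL  /* illustrative value */

    static unsigned long pick_search_start(struct mm_sketch *mm, unsigned long len)
    {
            /*
             * No hole below free_area_cache can hold len, so resume the
             * search at the cache instead of rescanning from the base.
             */
            if (len > mm->cached_hole_size)
                    return mm->free_area_cache;

            /* A smaller request might fit below: rescan and rebuild the estimate. */
            mm->cached_hole_size = 0;
            return TASK_UNMAPPED_BASE_SKETCH;
    }

    With the hunk below, unmapping an area below the cache only moves
    free_area_cache down; it no longer throws the hole-size estimate away.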

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    ---
    mm/mmap.c | 4 +---
    1 files changed, 1 insertions(+), 3 deletions(-)

    diff --git a/mm/mmap.c b/mm/mmap.c
    index 3f758c7..970f572 100644
    --- a/mm/mmap.c
    +++ b/mm/mmap.c
    @@ -1423,10 +1423,8 @@ void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
     	/*
     	 * Is this a new hole at the lowest possible address?
     	 */
    -	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache) {
    +	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache)
     		mm->free_area_cache = addr;
    -		mm->cached_hole_size = ~0UL;
    -	}
     }

     /*
    --
    1.7.7.5

