Subject: [PATCH -V9 05/15] hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for hugetlb
From: "Aneesh Kumar K.V" <>

The i_mmap_mutex lock was added to unmap_single_vma() by commit 502717f4e
("hugetlb: fix linked list corruption in unmap_hugepage_range()"), but we
no longer use page->lru in unmap_hugepage_range(). The lock is also
already taken higher up the call stack in some code paths, so taking it
again here deadlocks, as the call chain (and the userspace sketch after
it) shows:

unmap_mapping_range (i_mmap_mutex)
 -> unmap_mapping_range_tree
    -> unmap_mapping_range_vma
       -> zap_page_range_single
          -> unmap_single_vma
             -> unmap_hugepage_range (i_mmap_mutex)
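
For illustration only (a userspace sketch, not kernel code): the same
pattern on an error-checking pthread mutex, so the recursive acquisition
reports EDEADLK instead of hanging. The function names mirror the kernel
ones but are hypothetical and reduced to just the locking; build with
"gcc demo.c -pthread".

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t i_mmap_mutex;	/* stand-in for mapping->i_mmap_mutex */

/* Analogue of unmap_hugepage_range() before this patch: the callee
 * takes the lock a second time on the same call path. */
static void unmap_hugepage_range_demo(void)
{
	int err = pthread_mutex_lock(&i_mmap_mutex);

	if (err)
		printf("inner lock failed: %s\n", strerror(err)); /* EDEADLK */
	else
		pthread_mutex_unlock(&i_mmap_mutex);
}

/* Analogue of unmap_mapping_range(): the lock is taken higher up. */
static void unmap_mapping_range_demo(void)
{
	pthread_mutex_lock(&i_mmap_mutex);
	unmap_hugepage_range_demo();
	pthread_mutex_unlock(&i_mmap_mutex);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&i_mmap_mutex, &attr);

	unmap_mapping_range_demo();
	return 0;
}

The error-checking mutex type is used only to make the failure
observable; with the default type the inner lock would simply block
forever, which is what the kernel path does.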

As for shared page table support for huge pages: since page table pages
are reference counted, we do not need any lock during huge_pmd_unshare().
We do take i_mmap_mutex in huge_pmd_share() while walking the mapping's
vma_prio_tree (added by 39dde65c9940c97f, "shared page table for hugetlb
page").
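
To see why the reference count alone is enough on the unshare side, here
is a minimal sketch (again userspace, with C11 atomics standing in for
the page table page's reference count; all names are made up for the
example): the atomic decrement itself decides which sharer is last, so
dropping a reference needs no mapping-wide lock.

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for a shared page table page; "refcount" plays the role of
 * the page's reference count. */
struct shared_pt_page {
	atomic_int refcount;
};

/* Analogue of huge_pmd_unshare() dropping its reference: no external
 * lock is held; the atomic fetch-and-sub serializes the decision about
 * who is the last user. */
static void pt_page_put(struct shared_pt_page *p)
{
	if (atomic_fetch_sub(&p->refcount, 1) == 1)
		printf("last reference gone, page table page can be freed\n");
}

int main(void)
{
	struct shared_pt_page p;

	atomic_init(&p.refcount, 2);	/* shared by two mappings */
	pt_page_put(&p);		/* first sharer detaches: no lock */
	pt_page_put(&p);		/* last sharer frees the page */
	return 0;
}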

Signed-off-by: Aneesh Kumar K.V <>
---
 mm/memory.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

    diff --git a/mm/memory.c b/mm/memory.c
    index 545e18a..f6bc04f 100644
    --- a/mm/memory.c
    +++ b/mm/memory.c
@@ -1326,11 +1326,8 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 			 * Since no pte has actually been setup, it is
 			 * safe to do nothing in this case.
 			 */
-			if (vma->vm_file) {
-				mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
+			if (vma->vm_file)
 				__unmap_hugepage_range(tlb, vma, start, end, NULL);
-				mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
-			}
 		} else
 			unmap_page_range(tlb, vma, start, end, details);
 	}
