Subject: Re: [PATCH 2/2] mm: hugetlbfs: Close race during teardown of hugetlbfs shared page tables
Just a nit

On Fri 27-07-12 11:46:05, Mel Gorman wrote:
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index fd1d530..8c6e5a5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2429,6 +2429,25 @@ again:
> tlb_end_vma(tlb, vma);
> }
>

I would welcome a comment here. Something like:
/*
 * Called when the VMA is on the way out and its page tables will be
 * freed by free_pgtables().
 * i_mmap_mutex has to be held when calling this function.
 */
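
To make that locking rule concrete, a call site obeying it would look
roughly like the sketch below (the helper name is made up for
illustration; this is not a hunk from the series):

static void unmap_hugetlb_vma_final(struct mmu_gather *tlb,
				    struct vm_area_struct *vma,
				    unsigned long start, unsigned long end)
{
	struct address_space *mapping;

	/* vm_file may be NULL on the mmap_region() error cleanup path */
	if (!vma->vm_file)
		return;

	mapping = vma->vm_file->f_mapping;

	/*
	 * i_mmap_mutex keeps huge_pmd_share() from picking up a page
	 * table that is about to be freed.
	 */
	mutex_lock(&mapping->i_mmap_mutex);
	__unmap_hugepage_range_final(tlb, vma, start, end, NULL);
	mutex_unlock(&mapping->i_mmap_mutex);
}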

> +void __unmap_hugepage_range_final(struct mmu_gather *tlb,
> +			  struct vm_area_struct *vma, unsigned long start,
> +			  unsigned long end, struct page *ref_page)
> +{
> +	__unmap_hugepage_range(tlb, vma, start, end, ref_page);
> +
> +	/*
> +	 * Clear this flag so that x86's huge_pmd_share page_table_shareable
> +	 * test will fail on a vma being torn down, and not grab a page table
> +	 * on its way out. We're lucky that the flag has such an appropriate
> +	 * name, and can in fact be safely cleared here. We could clear it
> +	 * before the __unmap_hugepage_range above, but all that's necessary
> +	 * is to clear it before releasing the i_mmap_mutex. This works
> +	 * because in the context this is called, the VMA is about to be
> +	 * destroyed and the i_mmap_mutex is held.
> +	 */
> +	vma->vm_flags &= ~VM_MAYSHARE;
> +}
> +
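
For readers who do not have arch/x86/mm/hugetlbpage.c open: clearing
VM_MAYSHARE is sufficient because the sharing code only treats a vma as
a candidate when VM_MAYSHARE is set and the two vmas have identical
flags. A simplified sketch (not the exact x86 code, and the function
name here is made up):

static bool hugetlb_vma_shareable(struct vm_area_struct *vma,
				  struct vm_area_struct *svma,
				  unsigned long addr)
{
	unsigned long base = addr & PUD_MASK;
	unsigned long end = base + PUD_SIZE;

	/* vma must allow sharing and cover a whole PUD-sized range */
	if (!(vma->vm_flags & VM_MAYSHARE) ||
	    vma->vm_start > base || end > vma->vm_end)
		return false;

	/*
	 * ...and the candidate's flags must match exactly, so a vma
	 * with VM_MAYSHARE just cleared no longer passes either test.
	 */
	return vma->vm_flags == svma->vm_flags;
}

So once __unmap_hugepage_range_final() has cleared the flag under
i_mmap_mutex, a racing huge_pmd_share() can no longer pick up this
vma's page tables.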

--
Michal Hocko
SUSE Labs

