    Subject: Re: [PATCH -alternative] mm: hugetlbfs: Close race during teardown of hugetlbfs shared page tables V2 (resend)
    On 07/26/2012 01:42 PM, Rik van Riel wrote:
    > On 07/23/2012 12:04 AM, Hugh Dickins wrote:
    >> Please don't be upset if I say that I don't like either of your patches.
    >> Mainly for obvious reasons - I don't like Mel's because anything with
    >> trylock retries and nested spinlocks worries me before I can even start
    >> to think about it; and I don't like Michal's for the same reason as Mel,
    >> that it spreads more change around in common paths than we would like.
    > I have a naive question.
    > In huge_pmd_share, we protect ourselves by taking
    > the mapping->i_mmap_mutex.
    > Is there any reason we could not take the i_mmap_mutex
    > in the huge_pmd_unshare path?

    I think it is already taken on every path into huge_pmd_unshare().
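    For reference, here is a much-simplified sketch of the sharing side,
    loosely based on the 3.5-era arch/x86/mm/hugetlbpage.c (argument lists
    are abbreviated, and find_shareable_pmd() is a placeholder standing in
    for the open-coded prio tree walk over mapping->i_mmap; treat this as a
    sketch, not the tree's exact code):

	static pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr,
				     pud_t *pud)
	{
		struct vm_area_struct *vma = find_vma(mm, addr);
		struct address_space *mapping = vma->vm_file->f_mapping;
		pte_t *spte, *pte;

		/* the search for a PMD page to share is done under the
		 * per-mapping i_mmap_mutex */
		mutex_lock(&mapping->i_mmap_mutex);
		spte = find_shareable_pmd(mapping, vma, addr);	/* placeholder */
		if (spte) {
			/* install the shared PMD page unless someone else
			 * populated the pud under us */
			spin_lock(&mm->page_table_lock);
			if (pud_none(*pud))
				pud_populate(mm, pud,
					(pmd_t *)((unsigned long)spte & PAGE_MASK));
			else
				put_page(virt_to_page(spte));
			spin_unlock(&mm->page_table_lock);
		}
		pte = (pte_t *)pmd_alloc(mm, pud, addr);
		mutex_unlock(&mapping->i_mmap_mutex);
		return pte;
	}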

    > I see that hugetlb_change_protection already takes that
    > lock. Is there something preventing __unmap_hugepage_range
    > from also taking mapping->i_mmap_mutex?
    > That way the sharing and the unsharing code are
    > protected by the same, per shm segment, lock.
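    For illustration, a minimal sketch of the direction suggested above,
    with mapping->i_mmap_mutex taken around the hugetlb teardown in the
    unmap path (function names and argument lists follow the 3.5-era code
    only loosely; this is a sketch of the idea, not a tested patch):

	/* caller is tearing down a hugetlb VMA backed by a file */
	if (unlikely(is_vm_hugetlb_page(vma)) && vma->vm_file) {
		struct address_space *mapping = vma->vm_file->f_mapping;

		/*
		 * Serialize against huge_pmd_share(), which walks
		 * mapping->i_mmap under the same mutex, so the shared
		 * page tables cannot be unshared and freed while a
		 * concurrent fault is trying to reuse them.
		 */
		mutex_lock(&mapping->i_mmap_mutex);
		__unmap_hugepage_range(vma, start, end, NULL);
		mutex_unlock(&mapping->i_mmap_mutex);
	}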
