Subject: [PATCH] THP: Use explicit memory barrier
__do_huge_pmd_anonymous_page depends on the spinlock taken inside
page_add_new_anon_rmap to make sure that the clear_huge_page writes
become visible after the set_pmd_at() write.

But lru_cache_add_lru uses a pagevec, so it can easily skip taking the
spinlock. That breaks the rule above, and the user may see inconsistent
data.

This patch fixes it by using an explicit barrier rather than depending
on the lru spinlock.

Cc: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/huge_memory.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bfa142e..fad800e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,11 +725,10 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
 		pmd_t entry;
 		entry = mk_huge_pmd(page, vma);
 		/*
-		 * The spinlocking to take the lru_lock inside
-		 * page_add_new_anon_rmap() acts as a full memory
-		 * barrier to be sure clear_huge_page writes become
-		 * visible after the set_pmd_at() write.
+		 * clear_huge_page writes become visible after the
+		 * set_pmd_at() write.
 		 */
+		smp_wmb();
 		page_add_new_anon_rmap(page, vma, haddr);
 		set_pmd_at(mm, haddr, pmd, entry);
 		pgtable_trans_huge_deposit(mm, pgtable);
--
1.8.2

