    Subject: [PATCH 4.9 02/50] mm/huge_memory.c: reorder operations in __split_huge_page_tail()

    4.9-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    commit 605ca5ede7643a01f4c4a15913f9714ac297f8a6 upstream.

    THP split makes a non-atomic change to tail page flags. This is almost
    OK because tail pages are locked and isolated, but it breaks recent
    changes in page locking: the non-atomic operation can clear the
    PG_waiters bit.

    As a result, a concurrent get_page_unless_zero() -> lock_page()
    sequence might block forever, especially if the page is truncated
    later.
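
    To see the hazard concretely, here is a minimal standalone C sketch
    (hypothetical names and layout, not kernel code) of how a flags update
    that is not atomic as a whole can erase a concurrently set PG_waiters
    bit:

    #include <stdatomic.h>

    #define PG_WAITERS (1UL << 0)

    static _Atomic unsigned long tail_flags;

    /* Splitting side: clone the head's flags with a read-modify-write
     * that is not atomic as a whole, as the old code effectively did. */
    static void clone_flags(unsigned long head_flags)
    {
    	unsigned long f = atomic_load(&tail_flags);	/* read */
    	f |= head_flags;				/* modify */
    	/* A waiter may set PG_WAITERS right here... */
    	atomic_store(&tail_flags, f);			/* write: the bit is lost */
    }

    /* lock_page() side: atomically record that a task sleeps on the page. */
    static void mark_waiter(void)
    {
    	atomic_fetch_or(&tail_flags, PG_WAITERS);
    	/* If clone_flags() overwrites this bit, the eventual unlock sees
    	 * no waiters, skips the wake-up, and the locker blocks forever. */
    }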

    The fix is trivial: clone the flags before unfreezing the page
    reference counter.

    This race has existed since commit 62906027091f ("mm: add PageWaiters
    indicating tasks are waiting for a page bit"), while the unsafe
    unfreeze itself was added in commit 8df651c7059e ("thp: cleanup
    split_huge_page()").

    clear_compound_head() must also be called before unfreezing the page
    reference, because a successful get_page_unless_zero() may be followed
    by put_page(), which needs a correct compound_head().
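
    A hypothetical bad interleaving, had the refcount been unfrozen before
    clearing PageTail (illustrative timeline, not taken from the patch):

        CPU 0 (splitting THP)            CPU 1 (speculative reference)
        ---------------------            -----------------------------
        page_ref_unfreeze(tail, ...)
                                         get_page_unless_zero(tail)  /* succeeds */
                                         put_page(tail)
                                           /* compound_head(tail) still points
                                              at head: the reference is dropped
                                              on the wrong page */
        clear_compound_head(tail)        /* too late */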

    Also replace page_ref_inc()/page_ref_add() with page_ref_unfreeze(),
    which is made especially for this purpose and has the semantics of
    smp_store_release().
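
    For reference, page_ref_unfreeze() amounts to roughly the following
    (simplified sketch modeled on include/linux/page_ref.h; exact details
    vary by kernel version):

    static inline void page_ref_unfreeze(struct page *page, int count)
    {
    	VM_BUG_ON_PAGE(page_count(page) != 0, page);
    	VM_BUG_ON(count == 0);

    	/*
    	 * Release semantics: every store issued above (the cloned flags,
    	 * the cleared compound head) is visible before _refcount becomes
    	 * non-zero and get_page_unless_zero() can succeed.
    	 */
    	atomic_set_release(&page->_refcount, count);
    }

    In the hunk below, the unfreeze count is 1 for the caller's pin, plus
    one extra reference held by the page cache for file-backed pages (or
    anonymous pages that sit in the swap cache).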

    Link: http://lkml.kernel.org/r/151844393341.210639.13162088407980624477.stgit@buzz
    Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    mm/huge_memory.c | 36 +++++++++++++++---------------------
    1 file changed, 15 insertions(+), 21 deletions(-)

    diff --git a/mm/huge_memory.c b/mm/huge_memory.c
    index 583ad61cc2f1..c14aec110e90 100644
    --- a/mm/huge_memory.c
    +++ b/mm/huge_memory.c
    @@ -1876,26 +1876,13 @@ static void __split_huge_page_tail(struct page *head, int tail,
     	struct page *page_tail = head + tail;
     
     	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
    -	VM_BUG_ON_PAGE(page_ref_count(page_tail) != 0, page_tail);
     
     	/*
    -	 * tail_page->_refcount is zero and not changing from under us. But
    -	 * get_page_unless_zero() may be running from under us on the
    -	 * tail_page. If we used atomic_set() below instead of atomic_inc() or
    -	 * atomic_add(), we would then run atomic_set() concurrently with
    -	 * get_page_unless_zero(), and atomic_set() is implemented in C not
    -	 * using locked ops. spin_unlock on x86 sometime uses locked ops
    -	 * because of PPro errata 66, 92, so unless somebody can guarantee
    -	 * atomic_set() here would be safe on all archs (and not only on x86),
    -	 * it's safer to use atomic_inc()/atomic_add().
    +	 * Clone page flags before unfreezing refcount.
    +	 *
    +	 * After successful get_page_unless_zero() might follow flags change,
    +	 * for example lock_page() which sets PG_waiters.
     	 */
    -	if (PageAnon(head)) {
    -		page_ref_inc(page_tail);
    -	} else {
    -		/* Additional pin to radix tree */
    -		page_ref_add(page_tail, 2);
    -	}
    -
     	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
     	page_tail->flags |= (head->flags &
     			((1L << PG_referenced) |
    @@ -1907,14 +1894,21 @@ static void __split_huge_page_tail(struct page *head, int tail,
     			 (1L << PG_unevictable) |
     			 (1L << PG_dirty)));
     
    -	/*
    -	 * After clearing PageTail the gup refcount can be released.
    -	 * Page flags also must be visible before we make the page non-compound.
    -	 */
    +	/* Page flags must be visible before we make the page non-compound. */
     	smp_wmb();
     
    +	/*
    +	 * Clear PageTail before unfreezing page refcount.
    +	 *
    +	 * After successful get_page_unless_zero() might follow put_page()
    +	 * which needs correct compound_head().
    +	 */
     	clear_compound_head(page_tail);
     
    +	/* Finally unfreeze refcount. Additional reference from page cache. */
    +	page_ref_unfreeze(page_tail, 1 + (!PageAnon(head) ||
    +					  PageSwapCache(head)));
    +
     	if (page_is_young(head))
     		set_page_young(page_tail);
     	if (page_is_idle(head))
    --
    2.17.1

