Subject: [PATCH 4.19 003/139] mm/huge_memory: fix lockdep complaint on 32-bit i_size_read()
4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

commit 006d3ff27e884f80bd7d306b041afc415f63598f upstream.

Huge tmpfs testing, on a 32-bit kernel with lockdep enabled, showed that
__split_huge_page() was using i_size_read() while holding the irq-safe
lru_lock and page tree lock, but the 32-bit i_size_read() uses an
irq-unsafe seqlock which should not be nested inside them.
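
For readers unfamiliar with the 32-bit path: i_size is a 64-bit loff_t,
which a 32-bit CPU cannot load atomically, so i_size_read() falls back to
a seqcount retry loop.  Below is a minimal, self-contained userspace
sketch of that pattern; the names (sketch_i_size_read, size_seq, size_lo,
size_hi) are illustrative, not the kernel's, and the memory barriers a
real seqlock needs are omitted for brevity.  The point is that the reader
may spin until a concurrent writer finishes, which is why lockdep objects
to nesting it inside irq-safe locks such as the lru_lock.

#include <stdint.h>
#include <stdio.h>

static volatile unsigned int size_seq;	/* even: stable, odd: write in progress */
static volatile uint32_t size_lo, size_hi;	/* the 64-bit size, in halves */

static uint64_t sketch_i_size_read(void)
{
	unsigned int seq;
	uint32_t lo, hi;

	do {
		seq = size_seq;		/* like read_seqcount_begin() */
		lo = size_lo;
		hi = size_hi;
	} while ((seq & 1) || seq != size_seq);	/* retry if a writer intervened */

	return ((uint64_t)hi << 32) | lo;
}

static void sketch_i_size_write(uint64_t size)
{
	size_seq++;			/* count goes odd: readers will retry */
	size_lo = (uint32_t)size;
	size_hi = (uint32_t)(size >> 32);
	size_seq++;			/* count goes even: value is stable again */
}

int main(void)
{
	sketch_i_size_write(3ULL * 4096 + 100);
	printf("i_size = %llu\n", (unsigned long long)sketch_i_size_read());
	return 0;
}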

Instead, read the i_size earlier in split_huge_page_to_list(), and pass
the end offset down to __split_huge_page(): all while holding the head
page lock, which is enough to prevent truncation of that extent before
the page tree lock has been taken.
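
To make the end calculation concrete (the numbers below are illustrative,
not from the patch): DIV_ROUND_UP(i_size, PAGE_SIZE) is the index of the
first page lying wholly beyond EOF, so tail pages from that index onwards
are the ones dropped from the page cache during the split.

/* Illustrative arithmetic only: how the end offset is derived. */
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))	/* as in linux/kernel.h */

int main(void)
{
	unsigned long long i_size = 3 * PAGE_SIZE + 100;	/* three pages + 100 bytes */
	unsigned long long end = DIV_ROUND_UP(i_size, PAGE_SIZE);

	/* prints "end = 4": tail pages with index >= 4 lie beyond EOF */
	printf("end = %llu\n", end);
	return 0;
}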

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261520070.2275@eggly.anvils
Fixes: baa355fd33142 ("thp: file pages support for split_huge_page()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/huge_memory.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c12b441a99f9..15310f14c25e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2410,12 +2410,11 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		unsigned long flags)
+		pgoff_t end, unsigned long flags)
 {
 	struct page *head = compound_head(page);
 	struct zone *zone = page_zone(head);
 	struct lruvec *lruvec;
-	pgoff_t end = -1;
 	int i;
 
 	lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat);
@@ -2423,9 +2422,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
-	if (!PageAnon(page))
-		end = DIV_ROUND_UP(i_size_read(head->mapping->host), PAGE_SIZE);
-
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2597,6 +2593,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	int count, mapcount, extra_pins, ret;
 	bool mlocked;
 	unsigned long flags;
+	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -2619,6 +2616,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			ret = -EBUSY;
 			goto out;
 		}
+		end = -1;
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
@@ -2632,6 +2630,15 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 		anon_vma = NULL;
 		i_mmap_lock_read(mapping);
+
+		/*
+		 *__split_huge_page() may need to trim off pages beyond EOF:
+		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
+		 * which cannot be nested inside the page tree lock. So note
+		 * end now: i_size itself may be changed at any moment, but
+		 * head page lock is good enough to serialize the trimming.
+		 */
+		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 	}
 
 	/*
@@ -2681,7 +2688,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (mapping)
 		__dec_node_page_state(page, NR_SHMEM_THPS);
 	spin_unlock(&pgdata->split_queue_lock);
-	__split_huge_page(page, list, flags);
+	__split_huge_page(page, list, end, flags);
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
 
-- 
2.17.1

