    Subject: Re: [PATCH 3/4] mm/thp: allow drop THP from page cache
    On Wed, Oct 16, 2019 at 12:37:30AM -0700, Song Liu wrote:
    > From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    >
    > Once a THP is added to the page cache, it cannot be dropped via
    > /proc/sys/vm/drop_caches. Fix this issue with proper handling in
    > invalidate_mapping_pages() and __remove_mapping().
    >
    > Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
    > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    > Tested-by: Song Liu <songliubraving@fb.com>
    > Signed-off-by: Song Liu <songliubraving@fb.com>
    > ---
    >  mm/truncate.c | 12 ++++++++++++
    >  mm/vmscan.c   |  3 ++-
    > 2 files changed, 14 insertions(+), 1 deletion(-)
    >
    > diff --git a/mm/vmscan.c b/mm/vmscan.c
    > index c6659bb758a4..1d80a188ad4a 100644
    > --- a/mm/vmscan.c
    > +++ b/mm/vmscan.c
    > @@ -932,7 +932,8 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
    >  	 * Note that if SetPageDirty is always performed via set_page_dirty,
    >  	 * and thus under the i_pages lock, then this ordering is not required.
    >  	 */
    > -	if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
    > +	if (unlikely(PageTransHuge(page)) &&
    > +	    (PageSwapCache(page) || !PageSwapBacked(page)))
    >  		refcount = 1 + HPAGE_PMD_NR;
    >  	else
    >  		refcount = 2;

    Kirill suggests that the patch below would be better for this part of the
    change; the part in truncate.c should remain as it is:

    commit ddcee327f96d57cb9d5310486d21e43892b7a368
    Author: William Kucharski <william.kucharski@oracle.com>
    Date: Fri Sep 20 16:14:51 2019 -0400

    mm: Support removing arbitrary sized pages from mapping

    __remove_mapping() assumes that pages can only be either base pages
    or HPAGE_PMD_SIZE. Further, it assumes that large pages are
    swap-backed. Support all kinds of pages by unconditionally asking how
    many pages this page references.

    Signed-off-by: William Kucharski <william.kucharski@oracle.com>
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

    diff --git a/mm/vmscan.c b/mm/vmscan.c
    index c6659bb758a4..f870da1f4bb7 100644
    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
    @@ -932,10 +932,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
     	 * Note that if SetPageDirty is always performed via set_page_dirty,
     	 * and thus under the i_pages lock, then this ordering is not required.
     	 */
    -	if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
    -		refcount = 1 + HPAGE_PMD_NR;
    -	else
    -		refcount = 2;
    +	refcount = 1 + compound_nr(page);
     	if (!page_ref_freeze(page, refcount))
     		goto cannot_free;
     	/* note: atomic_cmpxchg in page_ref_freeze provides the smp_rmb */
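
    For what it's worth, here is a tiny userspace model of the refcount that
    __remove_mapping() expects page_ref_freeze() to see with the compound_nr()
    approach: one reference held by the caller plus one per base page held by
    the page cache (or swap cache). It is only an illustration, not kernel
    code; compound_nr_model(), expected_refcount() and the HPAGE_PMD_NR value
    of 512 (x86-64 with 4K pages) are assumptions made for the sketch.

    #include <stdbool.h>
    #include <stdio.h>

    #define HPAGE_PMD_NR 512	/* 2 MiB THP built from 4 KiB base pages */

    /* Model of compound_nr(): 1 for a base page, the number of subpages
     * for a compound page (only the PMD-sized THP case is modelled). */
    static unsigned long compound_nr_model(bool is_thp)
    {
    	return is_thp ? HPAGE_PMD_NR : 1;
    }

    /* Refcount the freeze is checked against: one reference from the
     * caller plus one per base page from the page/swap cache. */
    static unsigned long expected_refcount(bool is_thp)
    {
    	return 1 + compound_nr_model(is_thp);
    }

    int main(void)
    {
    	printf("base page: %lu\n", expected_refcount(false)); /* 2, old "else" branch */
    	printf("PMD THP:   %lu\n", expected_refcount(true));  /* 513, old 1 + HPAGE_PMD_NR */
    	return 0;
    }

    Same numbers as the old if/else for the two cases it handled, and the
    compound_nr() form keeps working for any other compound order.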