    Subject: Re: [v2 linux-next PATCH 2/2] mm: khugepaged: don't have to put being freed page back to lru
    On Fri, May 01, 2020 at 04:41:19AM +0800, Yang Shi wrote:
    > When khugepaged has successfully isolated and copied the data from the
    > old page to the collapsed THP, the old page is about to be freed if its
    > last mapcount is gone. Putting the page back on the lru is not very
    > productive in this case: the page might be isolated by vmscan again,
    > but it can't be reclaimed because try_to_unmap() can't unmap it at all.
    >
    > Actually, if khugepaged is the last user of this page, it can be freed
    > directly. So clear the active and unevictable flags, unlock the page,
    > and drop the refcount from the isolation instead of calling
    > putback_lru_page().

    Any reason putback_lru_page() cannot do this internally? I mean, if
    page_count() == 1, free the page.
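
    For illustration, a minimal sketch of that suggestion (this is not the
    actual mm/vmscan.c implementation; the placement of the check and the
    flag clearing are assumptions):

	void putback_lru_page(struct page *page)
	{
		/*
		 * Hypothetical: if the reference taken at isolation is the
		 * last one left, putting the page back on the LRU is wasted
		 * work. Clear the flags that are checked at free time and
		 * let put_page() free the page directly.
		 */
		if (page_count(page) == 1) {
			ClearPageActive(page);
			ClearPageUnevictable(page);
			put_page(page);
			return;
		}
		lru_cache_add(page);	/* back onto the LRU */
		put_page(page);		/* drop ref from isolate */
	}

    (Sketch only: the real putback path also involves LRU batching and
    flag/refcount ordering details that are glossed over here.)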
    >
    > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    > Cc: Hugh Dickins <hughd@google.com>
    > Cc: Andrea Arcangeli <aarcange@redhat.com>
    > Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
    > ---
    > v2: Check the mapcount and skip the lru putback if the last mapcount
    > is gone; call release_pte_page() after page_remove_rmap() so that
    > total_mapcount() sees the mapping already removed.
    >
    > mm/khugepaged.c | 20 ++++++++++++++------
    > 1 file changed, 14 insertions(+), 6 deletions(-)
    >
    > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
    > index 0c8d30b..1fdd677 100644
    > --- a/mm/khugepaged.c
    > +++ b/mm/khugepaged.c
    > @@ -559,10 +559,18 @@ void __khugepaged_exit(struct mm_struct *mm)
    >  static void release_pte_page(struct page *page)
    >  {
    >  	mod_node_page_state(page_pgdat(page),
    > -			NR_ISOLATED_ANON + page_is_file_lru(page),
    > -			-compound_nr(page));
    > -	unlock_page(page);
    > -	putback_lru_page(page);
    > +			NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
    > +
    > +	if (total_mapcount(page)) {
    > +		unlock_page(page);
    > +		putback_lru_page(page);
    > +	} else {
    > +		ClearPageActive(page);
    > +		ClearPageUnevictable(page);
    > +		unlock_page(page);
    > +		/* Drop refcount from isolate */
    > +		put_page(page);
    > +	}
    >  }
    > 
    >  static void release_pte_pages(pte_t *pte, pte_t *_pte,
    > @@ -771,8 +779,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
    >  		} else {
    >  			src_page = pte_page(pteval);
    >  			copy_user_highpage(page, src_page, address, vma);
    > -			if (!PageCompound(src_page))
    > -				release_pte_page(src_page);
    >  			/*
    >  			 * ptl mostly unnecessary, but preempt has to
    >  			 * be disabled to update the per-cpu stats
    > @@ -786,6 +792,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
    >  			pte_clear(vma->vm_mm, address, _pte);
    >  			page_remove_rmap(src_page, false);
    >  			spin_unlock(ptl);
    > +			if (!PageCompound(src_page))
    > +				release_pte_page(src_page);
    >  			free_page_and_swap_cache(src_page);
    >  		}
    >  	}
    > --
    > 1.8.3.1
    >
    >

    --
    Kirill A. Shutemov
