    From: Song Liu <songliubraving@fb.com>
    Subject: Re: [PATCH v12 5/6] khugepaged: enable collapse pmd for pte-mapped THP
    Date: 9 Aug 2019


    > On Aug 9, 2019, at 9:30 AM, Song Liu <songliubraving@fb.com> wrote:
    >
    >
    >
    >> On Aug 9, 2019, at 8:24 AM, Oleg Nesterov <oleg@redhat.com> wrote:
    >>
    >> On 08/08, Song Liu wrote:
    >>>
    >>>> On Aug 8, 2019, at 9:33 AM, Oleg Nesterov <oleg@redhat.com> wrote:
    >>>>
    >>>>> + for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
    >>>>> + pte_t *pte = pte_offset_map(pmd, addr);
    >>>>> + struct page *page;
    >>>>> +
    >>>>> + if (pte_none(*pte))
    >>>>> + continue;
    >>>>> +
    >>>>> + page = vm_normal_page(vma, addr, *pte);
    >>
    >> just noticed... shouldn't you also check pte_present() before
    >> vm_normal_page() ?
    >
    > Good catch! Let me fix this.
    >
    >>
    >>>>> + if (!page || !PageCompound(page))
    >>>>> + return;
    >>>>> +
    >>>>> + if (!hpage) {
    >>>>> + hpage = compound_head(page);
    >>>>
    >>>> OK,
    >>>>
    >>>>> + if (hpage->mapping != vma->vm_file->f_mapping)
    >>>>> + return;
    >>>>
    >>>> is it really possible? May be WARN_ON(hpage->mapping != vm_file->f_mapping)
    >>>> makes more sense ?
    >>>
    >>> I haven't found code paths lead to this,
    >>
    >> Neither me, that is why I asked. I think this should not be possible,
    >> but again this is not my area.
    >>
    >>> but this is technically possible.
    >>> This pmd could contain subpages from different THPs.
    >>
    >> Then please explain how this can happen ?
    >>
    >>> The __replace_page()
    >>> function in uprobes.c creates similar pmd.
    >>
    >> No it doesn't,
    >>
    >>> Current uprobe code won't really create this problem, because
    >>> !PageCompound() check above is sufficient. But it won't be difficult to
    >>> modify uprobe code to break this.
    >>
    >> I bet it will be a) difficult and b) the very idea to do this would be wrong.
    >>
    >>> For this code to be accurate and safe,
    >>> I think both this check and the one below are necessary.
    >>
    >> I didn't suggest to remove these checks.
    >>
    >>> Also, this code
    >>> is not on any critical path, so the overhead should be negligible.
    >>
    >> I do not care about overhead. But I do care about a poor reader like me
    >> who will try to understand this code.
    >>
    >> If you too do not understand how a THP page can have a different mapping
    >> then use VM_WARN or at least add a comment to explain that this is not
    >> supposed to happen!
    >
    > Fair enough. I will add WARN and more comments.
    >
    > Thanks,
    > Song

    To reduce spamming, I attached the updated 5/6 here.

    Thanks,
    Song

    ====================== 8< =============================

    From 3fb735e03b149bf8a90918dd383a3a31b3f9008a Mon Sep 17 00:00:00 2001
    From: Song Liu <songliubraving@fb.com>
    Date: Sun, 28 Jul 2019 03:43:48 -0700
    Subject: [PATCH v13 5/6] khugepaged: enable collapse pmd for pte-mapped THP

    khugepaged needs exclusive mmap_sem to access the page table. When it
    fails to lock mmap_sem, the page will fault in as a pte-mapped THP. As
    the page is already a THP, khugepaged will not handle this pmd again.

    This patch enables khugepaged to retry collapsing the page table.

    struct mm_slot (in khugepaged.c) is extended with an array containing
    the addresses of pte-mapped THPs. We use an array here for simplicity;
    it can easily be replaced with a more advanced data structure when needed.

    In khugepaged_scan_mm_slot(), if the mm contains a pte-mapped THP, we
    try to collapse the page table.

    Since the collapse may happen at a later time, some pages may already
    have faulted in. collapse_pte_mapped_thp() is added to properly handle
    these pages. collapse_pte_mapped_thp() also double checks whether all
    ptes in this pmd map to the same THP. This is necessary because some
    subpage of the THP may be replaced, for example by uprobe. In such
    cases, it is not possible to collapse the pmd.

    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Song Liu <songliubraving@fb.com>
    ---
    include/linux/khugepaged.h | 12 +++
    mm/khugepaged.c | 154 ++++++++++++++++++++++++++++++++++++-
    2 files changed, 165 insertions(+), 1 deletion(-)

    diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
    index 082d1d2a5216..bc45ea1efbf7 100644
    --- a/include/linux/khugepaged.h
    +++ b/include/linux/khugepaged.h
    @@ -15,6 +15,14 @@ extern int __khugepaged_enter(struct mm_struct *mm);
    extern void __khugepaged_exit(struct mm_struct *mm);
    extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
    unsigned long vm_flags);
    +#ifdef CONFIG_SHMEM
    +extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
    +#else
    +static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
    + unsigned long addr)
    +{
    +}
    +#endif

    #define khugepaged_enabled() \
    (transparent_hugepage_flags & \
    @@ -73,6 +81,10 @@ static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
    {
    return 0;
    }
    +static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
    + unsigned long addr)
    +{
    +}
    #endif /* CONFIG_TRANSPARENT_HUGEPAGE */

    #endif /* _LINUX_KHUGEPAGED_H */
    diff --git a/mm/khugepaged.c b/mm/khugepaged.c
    index 40c25ddf29e4..3e722065e909 100644
    --- a/mm/khugepaged.c
    +++ b/mm/khugepaged.c
    @@ -77,6 +77,8 @@ static __read_mostly DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);

    static struct kmem_cache *mm_slot_cache __read_mostly;

    +#define MAX_PTE_MAPPED_THP 8
    +
    /**
    * struct mm_slot - hash lookup from mm to mm_slot
    * @hash: hash collision list
    @@ -87,6 +89,10 @@ struct mm_slot {
    struct hlist_node hash;
    struct list_head mm_node;
    struct mm_struct *mm;
    +
    + /* pte-mapped THP in this mm */
    + int nr_pte_mapped_thp;
    + unsigned long pte_mapped_thp[MAX_PTE_MAPPED_THP];
    };

    /**
    @@ -1254,6 +1260,145 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
    }

    #if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
    +/*
    + * Notify khugepaged that the given addr of the mm is a pte-mapped THP. Then
    + * khugepaged should try to collapse the page table.
    + */
    +static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
    + unsigned long addr)
    +{
    + struct mm_slot *mm_slot;
    +
    + VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
    +
    + spin_lock(&khugepaged_mm_lock);
    + mm_slot = get_mm_slot(mm);
    + if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
    + mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
    + spin_unlock(&khugepaged_mm_lock);
    + return 0;
    +}
    +
    +/**
    + * Try to collapse a pte-mapped THP for mm at address haddr.
    + *
    + * This function checks whether all the PTEs in the PMD are pointing to the
    + * right THP. If so, retract the page table so the THP can refault in
    + * as pmd-mapped.
    + */
    +void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
    +{
    + unsigned long haddr = addr & HPAGE_PMD_MASK;
    + struct vm_area_struct *vma = find_vma(mm, haddr);
    + struct page *hpage = NULL;
    + pmd_t *pmd, _pmd;
    + spinlock_t *ptl;
    + int count = 0;
    + int i;
    +
    + if (!vma || !vma->vm_file ||
    + vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE)
    + return;
    +
    + /*
    + * This vm_flags may not have VM_HUGEPAGE if the page was not
    + * collapsed by this mm. But we can still collapse if the page is
    + * a valid THP. Add extra VM_HUGEPAGE so hugepage_vma_check()
    + * will not fail the vma for missing VM_HUGEPAGE.
    + */
    + if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
    + return;
    +
    + pmd = mm_find_pmd(mm, haddr);
    + if (!pmd)
    + return;
    +
    + /* step 1: check all mapped PTEs are to the right huge page */
    + for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
    + pte_t *pte = pte_offset_map(pmd, addr);
    + struct page *page;
    +
    + /* empty pte, skip */
    + if (pte_none(*pte))
    + continue;
    +
    + /* not present (e.g. swapped out or under migration), abort */
    + if (!pte_present(*pte))
    + return;
    +
    + page = vm_normal_page(vma, addr, *pte);
    +
    + if (!page || !PageCompound(page))
    + return;
    +
    + if (!hpage) {
    + hpage = compound_head(page);
    + /*
    + * The mapping of the THP should not change.
    + *
    + * Note that uprobe may change the page table, but
    + * the new page installed by uprobe will not pass
    + * PageCompound() check.
    + */
    + if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping))
    + return;
    + }
    +
    + /*
    + * Confirm the page maps to the correct subpage.
    + *
    + * Note that uprobe may change the page table, but the new
    + * page installed by uprobe will not pass PageCompound()
    + * check.
    + */
    + if (WARN_ON(hpage + i != page))
    + return;
    + count++;
    + }
    +
    + /* step 2: adjust rmap */
    + for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
    + pte_t *pte = pte_offset_map(pmd, addr);
    + struct page *page;
    +
    + if (pte_none(*pte))
    + continue;
    + page = vm_normal_page(vma, addr, *pte);
    + page_remove_rmap(page, false);
    + }
    +
    + /* step 3: set proper refcount and mm_counters. */
    + if (hpage) {
    + page_ref_sub(hpage, count);
    + add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
    + }
    +
    + /* step 4: collapse pmd */
    + ptl = pmd_lock(vma->vm_mm, pmd);
    + _pmd = pmdp_collapse_flush(vma, haddr, pmd);
    + spin_unlock(ptl);
    + mm_dec_nr_ptes(mm);
    + pte_free(mm, pmd_pgtable(_pmd));
    +}
    +
    +static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
    +{
    + struct mm_struct *mm = mm_slot->mm;
    + int i;
    +
    + if (likely(mm_slot->nr_pte_mapped_thp == 0))
    + return 0;
    +
    + if (!down_write_trylock(&mm->mmap_sem))
    + return -EBUSY;
    +
    + if (unlikely(khugepaged_test_exit(mm)))
    + goto out;
    +
    + for (i = 0; i < mm_slot->nr_pte_mapped_thp; i++)
    + collapse_pte_mapped_thp(mm, mm_slot->pte_mapped_thp[i]);
    +
    +out:
    + mm_slot->nr_pte_mapped_thp = 0;
    + up_write(&mm->mmap_sem);
    + return 0;
    +}
    +
    static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
    {
    struct vm_area_struct *vma;
    @@ -1287,7 +1432,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
    up_write(&vma->vm_mm->mmap_sem);
    mm_dec_nr_ptes(vma->vm_mm);
    pte_free(vma->vm_mm, pmd_pgtable(_pmd));
    - }
    + } else
    + khugepaged_add_pte_mapped_thp(vma->vm_mm, addr);
    }
    i_mmap_unlock_write(mapping);
    }
    @@ -1709,6 +1855,11 @@ static void khugepaged_scan_file(struct mm_struct *mm,
    {
    BUILD_BUG();
    }
    +
    +static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
    +{
    + return 0;
    +}
    #endif

    static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
    @@ -1733,6 +1884,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
    khugepaged_scan.mm_slot = mm_slot;
    }
    spin_unlock(&khugepaged_mm_lock);
    + khugepaged_collapse_pte_mapped_thps(mm_slot);

    mm = mm_slot->mm;
    /*
    --
    2.17.1
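
    For readers following the flow rather than the diff: below is a minimal,
    userspace-only sketch of the bookkeeping the commit message describes, i.e.
    a fixed-size per-mm array of pmd-aligned addresses that gets filled when
    retract_page_tables() cannot take mmap_sem and is drained on the next
    khugepaged scan. The names (mm_slot_model, add_pte_mapped_thp,
    drain_pte_mapped_thps) and the sample addresses are made up to mirror the
    kernel ones; this is an illustration, not kernel code, and it omits all
    locking and the actual pmd collapse.

    /* Userspace model (not kernel code) of the per-mm_slot bookkeeping. */
    #include <stdio.h>

    #define MAX_PTE_MAPPED_THP 8
    #define HPAGE_PMD_SIZE (2UL << 20)              /* assumes 2MB huge pages */
    #define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))

    struct mm_slot_model {
            int nr_pte_mapped_thp;
            unsigned long pte_mapped_thp[MAX_PTE_MAPPED_THP];
    };

    /*
     * Mirrors khugepaged_add_pte_mapped_thp(): record the pmd-aligned address,
     * silently dropping it if the fixed-size array is already full.
     */
    static void add_pte_mapped_thp(struct mm_slot_model *slot, unsigned long addr)
    {
            addr &= HPAGE_PMD_MASK;
            if (slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP)
                    slot->pte_mapped_thp[slot->nr_pte_mapped_thp++] = addr;
    }

    /*
     * Mirrors khugepaged_collapse_pte_mapped_thps(): drain every recorded
     * address and reset the counter, as done on the next khugepaged scan.
     */
    static void drain_pte_mapped_thps(struct mm_slot_model *slot)
    {
            int i;

            for (i = 0; i < slot->nr_pte_mapped_thp; i++)
                    printf("would call collapse_pte_mapped_thp() at 0x%lx\n",
                           slot->pte_mapped_thp[i]);
            slot->nr_pte_mapped_thp = 0;
    }

    int main(void)
    {
            struct mm_slot_model slot = { 0 };

            add_pte_mapped_thp(&slot, 0x7f0000201000UL); /* rounds down to 0x7f0000200000 */
            add_pte_mapped_thp(&slot, 0x7f0000400000UL);
            drain_pte_mapped_thps(&slot);
            return 0;
    }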


