    Subject: Re: [PATCHv4 12/24] thp: PMD splitting without splitting compound page
    "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> writes:

    > Current split_huge_page() combines two operations: splitting PMDs into
    > tables of PTEs and splitting underlying compound page. This patch
    > changes split_huge_pmd() implementation to split the given PMD without
    > splitting other PMDs this page is mapped with or the underlying compound page.
    >
    > In order to do this we have to get rid of tail page refcounting, which
    > uses _mapcount of tail pages. Tail page refcounting is needed to be able
    > to split a THP page at any point: we always know which of the tail pages is
    > pinned (i.e. by get_user_pages()) and can distribute page count
    > correctly.
    >
    > We can avoid this by allowing split_huge_page() to fail if the compound
    > page is pinned. This patch removes all infrastructure for tail page
    > refcounting and makes split_huge_page() always return -EBUSY. All
    > split_huge_page() users already know how to handle its failure. The proper
    > implementation will be added later.
    >
    > Without tail page refcounting, the implementation of split_huge_pmd() is
    > pretty straightforward.
    >
    > The memory cgroup is not yet ready for the new refcounting. Let's disable
    > it at the Kconfig level.
    >
    [...]
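
    For reference, the caller-side pattern the commit message relies on would
    look roughly like the sketch below. Illustrative only:
    try_to_split_for_reclaim() is a made-up helper, not something from this
    series; the point is just that callers already treat a non-zero return
    from split_huge_page() as "could not split, fall back".

	#include <linux/mm.h>
	#include <linux/huge_mm.h>
	#include <linux/page-flags.h>

	/*
	 * Sketch only.  Assumes page is a head page and that the caller
	 * already holds whatever locks split_huge_page() expects; with
	 * this patch split_huge_page() simply returns -EBUSY, so the
	 * helper reports failure and the caller leaves the THP alone.
	 */
	static bool try_to_split_for_reclaim(struct page *page)
	{
		if (!PageTransHuge(page))
			return true;		/* nothing to split */

		if (split_huge_page(page))
			return false;		/* could not split: leave the THP intact */

		return true;			/* now mapped as base pages */
	}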

    >
    >  	spin_lock(ptl);
    >  	if (page)
    > -		put_user_huge_page(page);
    > +		put_page(page);
    >  	if (unlikely(!pmd_same(*pmd, orig_pmd))) {
    >  		spin_unlock(ptl);
    >  		mem_cgroup_cancel_charge(new_page, memcg);
    > @@ -1662,51 +1631,78 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
    >  	put_huge_zero_page();
    >  }
    >
    > -void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
    > -		pmd_t *pmd)
    > +
    > +static void __split_huge_pmd_locked(struct vm_area_struct *vma,
    > +		pmd_t *pmd, unsigned long address)
    >  {
    > -	spinlock_t *ptl;
    > +	unsigned long haddr = address & HPAGE_PMD_MASK;
    >  	struct page *page;
    >  	struct mm_struct *mm = vma->vm_mm;
    > -	unsigned long haddr = address & HPAGE_PMD_MASK;
    > -	unsigned long mmun_start;	/* For mmu_notifiers */
    > -	unsigned long mmun_end;		/* For mmu_notifiers */
    > +	pgtable_t pgtable;
    > +	pmd_t _pmd;
    > +	bool young, write;
    > +	int i;
    >
    > -	BUG_ON(vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE);
    > +	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
    > +	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
    > +
    > +	if (is_huge_zero_pmd(*pmd))
    > +		return __split_huge_zero_page_pmd(vma, haddr, pmd);
    > +
    > +	page = pmd_page(*pmd);
    > +	VM_BUG_ON_PAGE(!page_count(page), page);
    > +	atomic_add(HPAGE_PMD_NR - 1, &page->_count);
    > +
    > +	write = pmd_write(*pmd);
    > +	young = pmd_young(*pmd);
    > +
    > +	/* leave pmd empty until pte is filled */
    > +	pmdp_clear_flush_notify(vma, haddr, pmd);
    > +

    So we now mark the pmd none while we go ahead and split it. But then what
    happens to a parallel fault? We don't hold mmap_sem here, right?
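
    To spell out the window I am asking about (just my reading of the hunk
    above, sketched as a comment, not code from the patch):

	/*
	 * CPU A: __split_huge_pmd_locked()      CPU B: fault on the same range
	 * ---------------------------------     ------------------------------
	 * pmdp_clear_flush_notify()
	 *   -> *pmd is now none                 could it observe pmd_none(*pmd)
	 * fill HPAGE_PMD_NR ptes into _pmd      and try to install a fresh page
	 * pmd_populate(mm, pmd, pgtable)        table or huge page here?
	 */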

    > +	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
    > +	pmd_populate(mm, &_pmd, pgtable);
    > +
    > +	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
    > +		pte_t entry, *pte;
    > +		/*
    > +		 * Note that NUMA hinting access restrictions are not
    > +		 * transferred to avoid any possibility of altering
    > +		 * permissions across VMAs.
    > +		 */
    > +		entry = mk_pte(page + i, vma->vm_page_prot);
    > +		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
    > +		if (!write)
    > +			entry = pte_wrprotect(entry);
    > +		if (!young)
    > +			entry = pte_mkold(entry);
    > +		pte = pte_offset_map(&_pmd, haddr);
    > +		BUG_ON(!pte_none(*pte));
    > +		atomic_inc(&page[i]._mapcount);
    > +		set_pte_at(mm, haddr, pte, entry);
    > +		pte_unmap(pte);
    > +	}
    > +	smp_wmb(); /* make pte visible before pmd */
    > +	pmd_populate(mm, pmd, pgtable);
    > +	atomic_dec(compound_mapcount_ptr(page));
    > +}
    > +

    -aneesh


