Subject: Re: [External] Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
On Tue, Nov 10, 2020 at 2:51 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Nov 08, 2020 at 10:11:01PM +0800, Muchun Song wrote:
> > +static inline int freed_vmemmap_hpage(struct page *page)
> > +{
> > +	return atomic_read(&page->_mapcount) + 1;
> > +}
> > +
> > +static inline int freed_vmemmap_hpage_inc(struct page *page)
> > +{
> > +	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> > +}
> > +
> > +static inline int freed_vmemmap_hpage_dec(struct page *page)
> > +{
> > +	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> > +}
>
> Are these relaxed variants any different than the normal ones on x86_64?
> I got confused following the macros.

A PTE page table can map the struct page structures of 64 HugeTLB
(2 MB) pages. So I use freed_vmemmap_hpage to track how many of those
HugeTLB pages have already had their vmemmap pages freed to the buddy
allocator.

Once the vmemmap pages of a HugeTLB page are freed, we call
freed_vmemmap_hpage_inc; when a HugeTLB page is freed back to the
buddy allocator, we call freed_vmemmap_hpage_dec.

If freed_vmemmap_hpage drops to zero when a HugeTLB page is freed, we
try to merge the PTE table back into a huge PMD (currently only
supported for gigantic pages). For details, see:

[PATCH v3 19/21] mm/hugetlb: Merge pte to huge pmd only for gigantic
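
To make the intended usage concrete, here is a minimal sketch. The two
callers below are hypothetical and only illustrate the counting scheme;
they are not functions from this series:

/* Hypothetical caller: the vmemmap of one HugeTLB page was just freed. */
static void hpage_vmemmap_freed(struct page *pte_page)
{
	/* One more HugeTLB page under this PTE table has freed its vmemmap. */
	freed_vmemmap_hpage_inc(pte_page);
}

/* Hypothetical caller: a HugeTLB page is being freed back to the buddy. */
static bool hpage_vmemmap_restored(struct page *pte_page)
{
	/*
	 * When the count drops back to zero, no HugeTLB page mapped by this
	 * PTE table has its vmemmap freed any more, so we can try to merge
	 * the PTE table back into a huge PMD (gigantic pages only for now).
	 */
	return freed_vmemmap_hpage_dec(pte_page) == 0;
}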

Thanks.

>
> > +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> > +					 unsigned long start,
> > +					 unsigned int nr_free,
> > +					 struct list_head *free_pages)
> > +{
> > +	/* Make the tail pages mapped read-only. */
> > +	pgprot_t pgprot = PAGE_KERNEL_RO;
> > +	pte_t entry = mk_pte(reuse, pgprot);
> > +	unsigned long addr;
> > +	unsigned long end = start + (nr_free << PAGE_SHIFT);
>
> See below.
>
> > +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> > +					 unsigned long addr,
> > +					 struct list_head *free_pages)
> > +{
> > +	unsigned long next;
> > +	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
> > +	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> > +	struct page *reuse = NULL;
> > +
> > +	addr = start;
> > +	do {
> > +		unsigned int nr_pages;
> > +		pte_t *ptep;
> > +
> > +		ptep = pte_offset_kernel(pmd, addr);
> > +		if (!reuse)
> > +			reuse = pte_page(ptep[-1]);
>
> Can we define a proper name for that instead of -1?
>
> e.g: TAIL_PAGE_REUSE or something like that.

OK, will do.

>
> > +
> > +		next = vmemmap_hpage_addr_end(addr, end);
> > +		nr_pages = (next - addr) >> PAGE_SHIFT;
> > +		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
> > +					     free_pages);
>
> Why not pass next instead of nr_pages? I think it makes more sense.
> As a bonus, we can kill the variable.

Good catch. We can pass next instead of nr_pages. Thanks.
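
Something along these lines is what I have in mind for the next
version. This is only a sketch: the loop body below is simplified for
illustration and may not match the final patch exactly.

static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
					 unsigned long start,
					 unsigned long end,
					 struct list_head *free_pages)
{
	/* Remap the tail struct pages to the reuse page, read-only. */
	pgprot_t pgprot = PAGE_KERNEL_RO;
	pte_t entry = mk_pte(reuse, pgprot);
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
		struct page *page = pte_page(*ptep);

		/* Collect the no-longer-needed vmemmap page for freeing. */
		list_add(&page->lru, free_pages);
		set_pte_at(&init_mm, addr, ptep, entry);
	}
}

And the caller in __free_huge_page_pmd_vmemmap() simply becomes:

		next = vmemmap_hpage_addr_end(addr, end);
		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
					     free_pages);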


>
> > +static void split_vmemmap_huge_page(struct hstate *h, struct page *head,
> > +				    pmd_t *pmd)
> > +{
> > +	pgtable_t pgtable;
> > +	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
> > +	unsigned long addr = start;
> > +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> > +
> > +	while (nr-- && (pgtable = vmemmap_pgtable_withdraw(head))) {
>
> Same as in the previous patches, I would scrap "nr" and its use.
>
> > +		VM_BUG_ON(freed_vmemmap_hpage(pgtable));
>
> I guess here we want to check whether we already called free_huge_page_vmemmap
> on this range?
> For this to have happened, the locking would have had to fail, right?

Only the first HugeTLB page needs to split the PMD into a PTE page
table; the other 63 HugeTLB pages covered by the same PMD do not. The
check here makes sure we are the first.
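
Put differently, the assertion spells out this invariant (hypothetical
wrapper, only to make the intent explicit; not part of the patch):

/* Hypothetical wrapper around the quoted VM_BUG_ON, for illustration. */
static inline void assert_vmemmap_pte_table_unused(pgtable_t pgtable)
{
	/*
	 * A preallocated PTE table being installed here must not yet account
	 * any HugeTLB page as having freed its vmemmap: only the first
	 * HugeTLB page under this vmemmap PMD performs the split.
	 */
	VM_BUG_ON(freed_vmemmap_hpage(pgtable));
}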

>
> > +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +	pmd_t *pmd;
> > +	spinlock_t *ptl;
> > +	LIST_HEAD(free_pages);
> > +
> > +	if (!free_vmemmap_pages_per_hpage(h))
> > +		return;
> > +
> > +	pmd = vmemmap_to_pmd(head);
> > +	ptl = vmemmap_pmd_lock(pmd);
> > +	if (vmemmap_pmd_huge(pmd)) {
> > +		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));
>
> I think that checking for free_vmemmap_pages_per_hpage is enough.
> In the end, pgtable_pages_to_prealloc_per_hpage uses free_vmemmap_pages_per_hpage.

Checking free_vmemmap_pages_per_hpage alone is not enough. See the
explanation above.

Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



--
Yours,
Muchun
