Subject: Re: [PATCH] mm: thp: clear PageDoubleMap flag when the last PMD map gone
On Fri, Oct 25, 2019 at 07:32:33PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 25, 2019 at 08:58:22AM -0700, Yang Shi wrote:
> >
> >
> > On 10/25/19 8:36 AM, Kirill A. Shutemov wrote:
> > > On Fri, Oct 25, 2019 at 01:27:46AM +0800, Yang Shi wrote:
> > > > File THP sets the PageDoubleMap flag the first time it gets PTE
> > > > mapped, but the flag is never cleared until the THP is freed. This
> > > > results in an unbalanced state, although it is not a big deal.
> > > >
> > > > Clear the flag when the last compound_mapcount is gone. It should
> > > > also be cleared when all the PTE maps are gone (the page becomes PMD
> > > > mapped only), but that would require checking every subpage's
> > > > _mapcount each time any subpage's rmap is removed, and the overhead
> > > > may not be worth it. Anonymous THP likewise clears the PageDoubleMap
> > > > flag only when the last PMD map is gone.
> > > NAK, sorry.
> > >
> > > The key difference from anon THP is that file THP can be mapped again
> > > with PMD after all PMD (or all) mappings are gone.
> > >
> > > Your patch breaks the case where you map the page with PMD again while
> > > the page is still mapped with PTEs. Who would set PageDoubleMap() in
> > > this case?
> >
> > Aha, yes, you are right. I missed that point. However, I'm wondering
> > whether we could move this up a little bit, like this:
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index d17cbf3..ac046fd 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1230,15 +1230,17 @@ static void page_remove_file_rmap(struct page *page, bool compound)
> >                         if (atomic_add_negative(-1, &page[i]._mapcount))
> >                                 nr++;
> >                 }
> > +
> > +               /* No PTE map anymore */
> > +               if (nr == HPAGE_PMD_NR)
> > +                       ClearPageDoubleMap(compound_head(page));
> > +
> >                 if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
> >                         goto out;
> >                 if (PageSwapBacked(page))
> >                         __dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
> >                 else
> >                         __dec_node_page_state(page, NR_FILE_PMDMAPPED);
> > -
> > -               /* The last PMD map is gone */
> > -               ClearPageDoubleMap(compound_head(page));
> >         } else {
> >                 if (!atomic_add_negative(-1, &page->_mapcount))
> >                         goto out;
> >
> >
> > This should guarantee there is no PTE map anymore, so it should be safe
> > to clear the flag.
>
> At first glance it looks safe, but let me think more about it. I didn't
> expect it to be that easy :P

How do you protect against races? What prevents another thread/process
from mapping the page with PTEs after you've calculated 'nr'?

I don't remember the code that well, but I believe we don't require the
page lock in all cases... Or do we?

--
Kirill A. Shutemov
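
The interleaving being asked about can be modelled outside the kernel.
The sketch below is plain userspace C, not kernel code: the atomic
counter and flag only stand in for the subpage _mapcount accounting and
the PageDoubleMap bit, and the two threads stand in for the PTE-unmap
path (page_remove_file_rmap) and the PTE-map path (page_add_file_rmap),
under the assumption that nothing serializes them. If the "clear" lands
after the concurrent "set", the page ends up PTE mapped with the flag
clear.

/*
 * Userspace model of the race (illustration only, not kernel code).
 * Build with: gcc -pthread -o doublemap-race doublemap-race.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int pte_maps = 1;        /* one PTE mapping exists   */
static atomic_bool double_map = true;  /* "PageDoubleMap" is set   */

/* Roughly analogous to the PTE side of page_remove_file_rmap(). */
static void *unmap_thread(void *arg)
{
	(void)arg;
	/* Drop the last PTE mapping; previous value 1 means "now zero". */
	if (atomic_fetch_sub(&pte_maps, 1) == 1) {
		/* Window: a new PTE mapping may appear right here.   */
		atomic_store(&double_map, false);  /* ClearPageDoubleMap */
	}
	return NULL;
}

/* Roughly analogous to the PTE side of page_add_file_rmap(). */
static void *map_thread(void *arg)
{
	(void)arg;
	atomic_fetch_add(&pte_maps, 1);    /* add a PTE mapping  */
	atomic_store(&double_map, true);   /* SetPageDoubleMap   */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, unmap_thread, NULL);
	pthread_create(&b, NULL, map_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Bad end state: still PTE mapped, but the flag was cleared. */
	if (atomic_load(&pte_maps) > 0 && !atomic_load(&double_map))
		puts("race: PTE-mapped page with DoubleMap cleared");
	else
		puts("no race observed on this run");
	return 0;
}

Whether the bad end state shows up depends on the scheduler, so it may
take many runs (or an artificial delay in the window) to observe; the
point is only that a check-then-clear based on 'nr' is not atomic with
respect to a concurrent map-and-set unless both paths hold a common
lock, which is exactly the page-lock question above.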
