Subject: Re: [PATCH] mm: mempolicy: don't have to split pmd for huge zero page
On Mon, Jun 7, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 07-06-21 15:02:39, Yang Shi wrote:
> > On Mon, Jun 7, 2021 at 11:55 AM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Mon 07-06-21 10:00:01, Yang Shi wrote:
> > > > On Sun, Jun 6, 2021 at 11:21 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Fri 04-06-21 13:35:13, Yang Shi wrote:
> > > > > > When trying to migrate pages to obey mempolicy, the huge zero page is
> > > > > > split then the page table walk at PTE level just skips zero page. So it
> > > > > > seems pointless to split huge zero page, it could be just skipped like
> > > > > > base zero page.
> > > > >
> > > > > My THP knowledge is not the best but this is incorrect AFAICS. Huge zero
> > > > > page is not split. We do split the pmd which is mapping the said page. I
> > > > > suspect you refer to vm_normal_page when talking about a zero page but
> > > > > please be aware that huge zero page is not a normal zero page. It is
> > > > > allocated dynamically (see get_huge_zero_page).
> > > >
> > > > For a normal huge page, yes, split_huge_pmd() just splits the pmd. But
> > > > actually the base zero pfn is inserted into the PTEs when splitting a
> > > > huge zero pmd. Please check out __split_huge_zero_page_pmd().
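
(For reference, a simplified sketch of the relevant loop, abridged from
__split_huge_zero_page_pmd() in mm/huge_memory.c; the pgtable
deposit/withdraw and locking details are omitted:)

	/* each PTE of the former huge mapping points at the base zero pfn */
	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
		pte_t *pte, entry;

		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
		entry = pte_mkspecial(entry);
		pte = pte_offset_map(&_pmd, haddr);
		set_pte_at(mm, haddr, pte, entry);
		pte_unmap(pte);
	}
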
> > >
> > > My bad. I didn't look all the way down there. The naming
> > > suggested that this is purely a page table operation and I suspected
> > > that the ptes just point to offsets within the THP.
> > >
> > > But I am obviously wrong here. Sorry about that.
> > >
> > > > I should make this point clearer in the commit log. Sorry for the confusion.
> > > >
> > > > >
> > > > > So in the end your patch disables mbind of zero pages to a target node
> > > > > and that is a regression.
> > > >
> > > > Do we really migrate the zero page? IIUC the zero page is just skipped by
> > > > the vm_normal_page() check in queue_pages_pte_range(), isn't it?
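
(Roughly, the PTE-level walk in queue_pages_pte_range() in mm/mempolicy.c
looks like the abridged sketch below; the zero page makes vm_normal_page()
return NULL, so it is simply skipped:)

	for (; addr != end; pte++, addr += PAGE_SIZE) {
		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;	/* the zero page ends up here */
		/*
		 * vm_normal_page() filters out zero pages, but there might
		 * still be PageReserved pages to skip.
		 */
		if (PageReserved(page))
			continue;
		...
	}
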
> > >
> > > Yeah, normal zero pages are skipped indeed. I haven't studied why this
> > > is the case yet. It surely sounds a bit suspicious because this is an
> > > explicit request to migrate memory and if the zero page is misplaced it
> > > should be moved. On the other hand this would increase RSS so maybe this is
> > > the point.
> >
> > The zero page is a global shared page; I don't think "misplaced"
> > applies to it. It doesn't make much sense to migrate a shared
> > page. Actually there is a page mapcount check in migrate_page_add() that
> > skips shared normal pages as well.
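
(The mapcount check in question, abridged from migrate_page_add() in
mm/mempolicy.c:)

	static int migrate_page_add(struct page *page, struct list_head *pagelist,
					unsigned long flags)
	{
		struct page *head = compound_head(page);

		/*
		 * Avoid migrating a page that is shared with others.
		 */
		if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
			if (!isolate_lru_page(head))
				list_add_tail(&head->lru, pagelist);
			...
		}

		return 0;
	}
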
>
> I didn't really mean to migrate the zero page itself. What I meant was to
> instantiate a new page when the global one is on a different NUMA node
> than the one bind() requests. This could be done either by having a per-NUMA
> zero page or by simply allocating a new page for the exclusive mapping.

IMHO, isn't that overkill?

>
> > > > > Have you tested the patch?
> > > >
> > > > No, just a build test. I thought this change was straightforward.
> > > >
> > > > >
> > > > > > Set ACTION_CONTINUE to prevent walk_page_range() from splitting the pmd
> > > > > > in this case.
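
(In other words, the patch makes queue_pages_pmd() do something along these
lines for the huge zero page instead of calling __split_huge_pmd():)

	page = pmd_page(*pmd);
	if (is_huge_zero_page(page)) {
		spin_unlock(ptl);
		/* skip the huge zero pmd rather than splitting it */
		walk->action = ACTION_CONTINUE;
		goto out;
	}
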
> > > > >
> > > > > Btw. this changelog is missing a problem statement. I suspect there is
> > > > > no actual problem that it should fix and it is likely driven by reading
> > > > > the code. Right?
> > > >
> > > > The actual problem is that it is pointless to split a huge zero pmd. Yes,
> > > > it is driven by visual inspection.
> > >
> > > Is there any actual workload that cares? This is quite a subtle area so
> > > I would be careful to do changes just because...
> >
> > I'm not sure whether there is a measurable improvement for actual
> > workloads, but I believe this change does eliminate some unnecessary
> > work.
>
> I can see why being consistent here is a good argument. On the other
> hand it would be imho better to look for reasons why zero pages are left
> misplaced before making the code consistent. From a very quick git

Typically the zero page comes from the kernel's .bss section, for
example on x86. I suppose the kernel image itself is always loaded on
node #0.
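
(For illustration only: on several architectures the shared zero page is
just a page-aligned object in .bss, roughly like the sketch below; x86 does
the equivalent in head_64.S assembly. This is not any particular arch's
exact definition:)

	unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
		__page_aligned_bss;

	#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
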

> archeology it seems that vm_normal_page has been used since MPOL_MF_MOVE
> was introduced. At the time (dc9aa5b9d65fd) vm_normal_page didn't skip
> the zero page AFAICS. I do not remember all the details
> about zero page (wrt. pte special) handling though so it might be hidden
> somewhere else.

I did some archeology; the findings are:

The zero page has the PageReserved flag set, so it has been skipped by the
explicit PageReserved check in mempolicy.c since commit f4598c8b3678
("[PATCH] migration: make sure there is no attempt to migrate reserved
pages."). The zero page stopped being used by do_anonymous_page() in
2.6.24 with commit 557ed1fa2620 ("remove ZERO_PAGE"), then was reinstated
by commit a13ea5b759645 ("mm: reinstate ZERO_PAGE"), which also added the
zero page check to vm_normal_page(), so mempolicy hasn't depended on the
PageReserved check to skip the zero page since then.

So the zero page has been skipped by mempolicy.c since 2.6.16.
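
(The zero page check in vm_normal_page() is essentially this in today's
source:)

	/* the zero page is never treated as a "normal" page */
	if (is_zero_pfn(pfn))
		return NULL;
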

>
> In any case the existing code doesn't really work properly. The question
> is whether anybody actually cares, but this is definitely something worth
> looking into IMHO.
>
> > I think the test shown in the previous email gives us some confidence
> > that the change doesn't introduce a regression.
>
> Yes, this is true.
> --
> Michal Hocko
> SUSE Labs
