From: Michal Hocko
Date: Wed, 5 Apr 2017
Subject: Re: [PATCH 0/6] mm: make movable onlining suck less
On Wed 05-04-17 20:15:02, Michal Hocko wrote:
> On Wed 05-04-17 12:32:49, Reza Arbab wrote:
> > On Wed, Apr 05, 2017 at 05:42:59PM +0200, Michal Hocko wrote:
> > >But one thing that is really bugging me is how you could see low pfns in
> > >the previous oops. Please drop the last patch and sprinkle printks down
> > >the remove_memory path to see where this all goes south. I believe there
> > >is something lurking in the initialization code in my patches. Please
> > >also scratch the pfn_valid check in the online_pages diff. It will not
> > >help here.
> >
> > Got it.
> >
> > shrink_pgdat_span: start_pfn=0x10000, end_pfn=0x10100, pgdat_start_pfn=0x0, pgdat_end_pfn=0x20000
> >
> > The problem is that pgdat_start_pfn here should be 0x10000. As you
> > suspected, it never got set. This fixes things for me.
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 623507f..37c1b63 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -884,7 +884,7 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
> > {
> > unsigned long old_end_pfn = pgdat_end_pfn(pgdat);
> >
> > - if (start_pfn < pgdat->node_start_pfn)
> > + if (!pgdat->node_spanned_pages || start_pfn < pgdat->node_start_pfn)
> > pgdat->node_start_pfn = start_pfn;
>
> Dang! You are absolutely right. This explains the issue during
> remove_memory. I still fail to see how this makes any difference for the
> sysfs file registration though. If anything the pgdat will be larger and
> so try_offline_node would also check the unrelated node0, but the code
> will handle that and eventually offline node1 anyway. /me still confused.
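
With your one-liner applied, the whole helper would read roughly as
follows (I am sketching the body from memory here; only the hunk above
is from the actual patch):

static void __meminit resize_pgdat_range(struct pglist_data *pgdat,
		unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long old_end_pfn = pgdat_end_pfn(pgdat);

	/*
	 * An empty node has no meaningful node_start_pfn, so take the
	 * incoming range as-is rather than comparing against a stale
	 * or zero start pfn.
	 */
	if (!pgdat->node_spanned_pages || start_pfn < pgdat->node_start_pfn)
		pgdat->node_start_pfn = start_pfn;

	pgdat->node_spanned_pages = max(start_pfn + nr_pages, old_end_pfn) -
			pgdat->node_start_pfn;
}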

OK, I was staring at the code and I think I finally understand what is
going on here. Looking at arch_add_memory->...->register_mem_sect_under_node
was just misleading. I am still not 100% sure why, but we try to do the
same thing later from register_one_node->link_mem_sections for nodes
which were offline. I should have noticed this path before. And here
is the difference from the previous code: we are past arch_add_memory,
and that path used to do __add_zone, which among other things would
also resize the node boundaries. I am not doing that anymore because I
postpone it to the onlining phase. Jeez, this code is so convoluted my
head spins.
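
To spell out why that matters for the sysfs registration, the
link_mem_sections path walks the node span (condensed from
drivers/base/node.c from memory, so the details may be off):

static int link_mem_sections(int nid)
{
	unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
	unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;
	unsigned long pfn;
	int err = 0;

	/*
	 * Without __add_zone resizing the node, a previously empty
	 * node still has node_spanned_pages == 0 at this point, so
	 * start_pfn == end_pfn and no section is ever linked under
	 * the node in sysfs.
	 */
	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
		unsigned long section_nr = pfn_to_section_nr(pfn);
		struct memory_block *mem_blk;

		if (!present_section_nr(section_nr))
			continue;
		mem_blk = find_memory_block(__nr_to_section(section_nr));
		err = register_mem_sect_under_node(mem_blk, nid);
		if (err)
			break;
	}
	return err;
}

If that is right, the node device shows up but the memoryXX links
underneath it never get created.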

I am not really sure how to fix this. I suspect register_mem_sect_under_node
should just ignore the online state of the node, but I wouldn't be all
that surprised if that check had some subtle reason behind it as well.
An alternative would be to move register_mem_sect_under_node out of
register_new_memory and up the call stack, most probably to
add_memory_resource. There we have the range, can map it to memory
blocks directly, and so will not rely on the node range. I will sleep
on it and hopefully come up with something tomorrow.
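
For concreteness, the add_memory_resource variant might look something
like this (a completely untested sketch; start, size and nid are the
locals of add_memory_resource, and the exact placement after
arch_add_memory is up for discussion):

	unsigned long pfn;

	for (pfn = PFN_DOWN(start); pfn < PFN_DOWN(start + size);
			pfn += PAGES_PER_SECTION) {
		unsigned long section_nr = pfn_to_section_nr(pfn);
		struct memory_block *mem_blk;

		if (!present_section_nr(section_nr))
			continue;
		/*
		 * We know exactly which range was hotplugged, so we can
		 * map it to memory blocks directly instead of relying
		 * on the (not yet resized) node boundaries.
		 */
		mem_blk = find_memory_block(__nr_to_section(section_nr));
		register_mem_sect_under_node(mem_blk, nid);
	}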
--
Michal Hocko
SUSE Labs
