Date: Fri, 3 Apr 2015
From: Nishanth Aravamudan
Subject: Re: [PATCH v2] mm: vmscan: do not throttle based on pfmemalloc reserves if node has no reclaimable pages
On 03.04.2015 [20:24:45 +0200], Michal Hocko wrote:
> On Fri 03-04-15 10:43:57, Nishanth Aravamudan wrote:
> > On 31.03.2015 [11:48:29 +0200], Michal Hocko wrote:
> [...]
> > > I would expect kswapd would be looping endlessly because the zone
> > > wouldn't be balanced obviously. But I would be wrong... because
> > > pgdat_balanced is doing this:
> > > 		/*
> > > 		 * A special case here:
> > > 		 *
> > > 		 * balance_pgdat() skips over all_unreclaimable after
> > > 		 * DEF_PRIORITY. Effectively, it considers them balanced so
> > > 		 * they must be considered balanced here as well!
> > > 		 */
> > > 		if (!zone_reclaimable(zone)) {
> > > 			balanced_pages += zone->managed_pages;
> > > 			continue;
> > > 		}
> > >
> > > and zone_reclaimable is false for you as you didn't have any
> > > zone_reclaimable_pages(). But wakeup_kswapd doesn't do this check so it
> > > would see !zone_balanced() AFAICS (build_zonelists doesn't ignore those
> > > zones right?) and so the kswapd would be woken up easily. So it looks
> > > like a mess.
> >
> > My understanding, and I could easily be wrong, is that kswapd2 (node 2
> > is the exhausted one) spins endlessly, because the reclaim logic sees
> > that we are reclaiming from somewhere but the allocation request for
> > node 2 (which is __GFP_THISNODE for hugepages, not GFP_THISNODE) will
> > never complete, so we just continue to reclaim.
>
> __GFP_THISNODE would be waking up kswapd2 again and again, that is true.
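
(For reference, since everything above hinges on it: if I'm reading the
tree right, zone_reclaimable() is just

	bool zone_reclaimable(struct zone *zone)
	{
		return zone_page_state(zone, NR_PAGES_SCANNED) <
			zone_reclaimable_pages(zone) * 6;
	}

so with zero reclaimable pages it can never return true.)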

Right, one idea I had for this was to give reclaim some knowledge of
__GFP_THISNODE -- that is, it needs to be somewhat targeted in order to
actually help satisfy the current allocation. But it got pretty hairy
fast and I didn't want to break the world :)
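
Very roughly, the kind of check I was imagining (purely illustrative --
none of these names exist in the tree, and it's untested) was something
along the lines of:

	/*
	 * Illustrative sketch only: if the allocation is __GFP_THISNODE,
	 * reclaiming pages on any other node can't help satisfy it, so
	 * only count zones on the requested node as useful targets.
	 */
	static bool reclaim_helps_alloc(struct zone *zone, gfp_t gfp_mask,
					int preferred_nid)
	{
		if (!(gfp_mask & __GFP_THISNODE))
			return true;

		return zone_to_nid(zone) == preferred_nid;
	}

i.e., thread the preferred node down into the reclaim paths and skip
zones that can't possibly satisfy the allocation.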

> I am just wondering whether we will have any __GFP_THISNODE allocations
> for a node without CPUs (numa_node_id() shouldn't return such a node
> AFAICS). Maybe if somebody is bound to Node2 explicitly but I would
> consider this as a misconfiguration.

Right, I'd need to check what happens in our setup if you taskset to
node2 and try to force memory to be local -- I think you'd either be
killed immediately, or the kernel would just reject your binding as
invalid (e.g., I think that's what happens if you try to bind to a
memoryless node).
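
Something like the following (untested, just a quick userspace probe)
would answer that -- it tries to MPOL_BIND to node 2 and reports
whatever the kernel says:

	/* Untested sketch: bind to node 2 only and see what happens. */
	#include <numaif.h>	/* set_mempolicy(), MPOL_BIND; link with -lnuma */
	#include <stdio.h>

	int main(void)
	{
		unsigned long nodemask = 1UL << 2;	/* node 2 */

		/* maxnode is the number of bits the kernel should examine. */
		if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
			perror("set_mempolicy");	/* EINVAL if the node is rejected? */
		else
			puts("binding accepted");
		return 0;
	}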

Keep in mind that although in my config node2 had no CPUs, that's not a
hard and fast requirement. I do believe that in a previous iteration of
this bug, the exhausted node had no free memory but did have CPUs
assigned to it.

-Nish


