Subject: Re: [patch] mm: page_alloc: exclude unreclaimable allocations from zone fairness policy
On Wed, Dec 11, 2013 at 01:09:16PM -0500, Johannes Weiner wrote:
> Dave Hansen noted a regression in a microbenchmark that loops around
> open() and close() on an 8-node NUMA machine and bisected it down to
> 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy"). That
> change forces the slab allocations of the file descriptor to spread
> out to all 8 nodes, causing remote references in the page allocator
> and slab.
>

The original patch was primarily concerned with the fair aging of LRU pages
of zones within a node. This patch uses GFP_MOVABLE_MASK, which includes
__GFP_RECLAIMABLE, meaning any slab created with SLAB_RECLAIM_ACCOUNT still
gets the round-robin treatment. Those pages have a different lifecycle to
LRU pages, and the shrinkers are only node aware, not zone aware.
While I get that this patch probably helps this specific benchmark, was the
use of GFP_MOVABLE_MASK intentional or did you mean to use __GFP_MOVABLE?
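
For reference, this is the mask definition as I remember it from
include/linux/gfp.h, so worth double-checking against your tree:

        /* This mask makes up all the page movable related flags */
        #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)

So a check against GFP_MOVABLE_MASK still matches __GFP_RECLAIMABLE slab
allocations, whereas a check against __GFP_MOVABLE alone would not.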

Looking at the original patch again, I think I made a major mistake when
reviewing it. Consider the effect of the following for NUMA machines:

for_each_zone_zonelist_nodemask(zone, z, zonelist,
					high_zoneidx, nodemask) {
	...
	if (alloc_flags & ALLOC_WMARK_LOW) {
		if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
			continue;
		if (zone_reclaim_mode &&
		    !zone_local(preferred_zone, zone))
			continue;
	}


Enabling zone_reclaim_mode sucks badly for workloads that are not partitioned
to fit within NUMA nodes. Consequently, I expect the common case is that
it's disabled by default due to small NUMA distances or manually disabled.

However, the effect of that block is that we allocate NR_ALLOC_BATCH pages
from local zones and then fall back to batch allocating from remote nodes!
I bet the numa_hit stats in /proc/vmstat have sucked recently. The original
problem was that the page allocator would keep allocating from the highest
zone while kswapd reclaimed from it, causing LRU-aging problems. The problem
is not the same between nodes. How do you feel about dropping the
zone_reclaim_mode check above and only round-robining in batches between
zones on the local node?
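
Roughly what I have in mind, as an untested sketch only -- the batch counter
check stays, but remote zones are always skipped by the fairness pass
regardless of zone_reclaim_mode:

	if (alloc_flags & ALLOC_WMARK_LOW) {
		if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
			continue;
		/*
		 * Only interleave in batches between zones of the
		 * preferred node. Remote nodes are left to the normal
		 * zonelist fallback order.
		 */
		if (!zone_local(preferred_zone, zone))
			continue;
	}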

--
Mel Gorman
SUSE Labs

