Subject: Re: [PATCH 0/3] Removal of lumpy reclaim V2

On 04/11/2012 12:38 PM, Mel Gorman wrote:

> Success rates are completely hosed for 3.4-rc2 which is almost certainly
> due to [fe2c2a10: vmscan: reclaim at order 0 when compaction is enabled]. I
> expected this would happen for kswapd and impair allocation success rates
> (https://lkml.org/lkml/2012/1/25/166) but I did not anticipate this much of
> a difference: 80% less scanning, 37% less reclaim by kswapd.

Also, no gratuitous pageouts of anonymous memory.
That was what really made a difference on a somewhat
heavily loaded desktop + kvm workload.

> In comparison, reclaim/compaction is not aggressive and gives up easily
> which is the intended behaviour. hugetlbfs uses __GFP_REPEAT and would be
> much more aggressive about reclaim/compaction than THP allocations are. The
> stress test above is allocating like neither THP nor hugetlbfs, but is much
> closer to THP.
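
For anyone following along, the difference in persistence comes from
the allocator's retry decision. Roughly paraphrased from
should_alloc_retry() in mm/page_alloc.c (a sketch from memory, not a
verbatim copy; details approximate):

/* Sketch of should_alloc_retry(), not verbatim. */
static int should_alloc_retry(gfp_t gfp_mask, unsigned int order,
			      unsigned long pages_reclaimed)
{
	/* Do not loop if the caller asked us not to */
	if (gfp_mask & __GFP_NORETRY)
		return 0;

	/* Small allocations are retried indefinitely */
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return 1;

	/*
	 * Costly high-order allocations keep retrying only when
	 * __GFP_REPEAT is set and we have not yet reclaimed an
	 * order's worth of pages. hugetlbfs passes __GFP_REPEAT,
	 * THP does not, hence the difference in aggressiveness.
	 */
	if ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1UL << order))
		return 1;

	return 0;
}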

Next step: get rid of __GFP_NO_KSWAPD for THP, first
in the -mm kernel.

> Mainline is now impaired in terms of high order allocation under heavy load
> although I do not know to what degree as I did not test with __GFP_REPEAT.
> Keep this in mind for bugs related to hugepage pool resizing, THP allocation
> and high order atomic allocation failures from network devices.

This might be due to smaller allocations not bumping
the compaction deferral counter when we have deferred
compaction after a higher-order failure.

I wonder if the compaction deferring code is simply
too defer-happy, now that we no longer defer compaction
at orders below the order where compaction failed?
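
To make that concrete, the deferral logic in question looks roughly
like this (paraphrased from the compaction deferral helpers in
include/linux/compaction.h; field and constant names from memory, so
treat it as a sketch rather than the exact code):

static void defer_compaction(struct zone *zone, int order)
{
	zone->compact_considered = 0;
	zone->compact_defer_shift++;

	/* remember the lowest order at which compaction failed */
	if (order < zone->compact_order_failed)
		zone->compact_order_failed = order;

	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
}

static bool compaction_deferred(struct zone *zone, int order)
{
	unsigned long defer_limit = 1UL << zone->compact_defer_shift;

	/*
	 * Requests below the failed order skip deferral entirely,
	 * and note that they do not bump compact_considered either,
	 * so they never help a deferred higher order become
	 * eligible for compaction again.
	 */
	if (order < zone->compact_order_failed)
		return false;

	if (++zone->compact_considered > defer_limit)
		zone->compact_considered = defer_limit;

	return zone->compact_considered < defer_limit;
}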

