 
Date: 2020-02-19
From: Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [PATCH v17 8/9] mm/page_reporting: Add budget limit on how many pages can be reported per pass
On Tue, Feb 11, 2020 at 02:47:19PM -0800, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>
> In order to keep ourselves from reporting pages that are just going to be
> reused again in the case of heavy churn, we can put a limit on how many
> total pages we will process per pass. Doing this will allow the worker
> thread to go idle much more quickly so that we avoid competing with
> other threads that might be allocating or freeing pages.
>
> The logic added here will limit the worker thread to no more than one
> sixteenth of the total free pages in a given area per list. Once that limit
> is reached, it will update the state so that at the end of the pass we will
> reschedule the worker to try again in 2 seconds, when the memory churn has
> hopefully settled down (see the sketch at the end of this message).
>
> Again, this optimization doesn't show much of a benefit in the standard case,
> as the memory churn is minimal. However, with page allocator shuffling
> enabled the gain is quite noticeable. Below are the results with a THP
> enabled version of the will-it-scale page_fault1 test showing the
> improvement in iterations for 16 processes or threads.
>
> Without:
> tasks processes processes_idle threads threads_idle
> 16 8283274.75 0.17 5594261.00 38.15
>
> With:
> tasks processes processes_idle threads threads_idle
> 16 8767010.50 0.21 5791312.75 36.98
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Seems fair. The test case you used would have been pounding on the zone
lock at fairly high frequency, so it represents a worst-case scenario but
not necessarily an unrealistic one.

Acked-by: Mel Gorman <mgorman@techsingularity.net>

--
Mel Gorman
SUSE Labs
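
For reference, a minimal standalone sketch of the budget idea described in
the quoted patch above (this is not the actual kernel code): each pass is
capped at roughly one sixteenth of an area's free pages, counted in
reporting-capacity-sized batches. REPORTING_CAPACITY, report_budget() and
the example numbers are hypothetical placeholders.

#include <stdio.h>

#define REPORTING_CAPACITY 32 /* pages reported per batch (assumed value) */

/* Round-up division, like the kernel's DIV_ROUND_UP(). */
static unsigned long div_round_up(unsigned long n, unsigned long d)
{
	return (n + d - 1) / d;
}

/*
 * Budget for one reporting pass over a free area: no more than about
 * 1/16th of the area's free pages, expressed as a number of batches.
 */
static unsigned long report_budget(unsigned long nr_free)
{
	return div_round_up(nr_free, REPORTING_CAPACITY * 16);
}

int main(void)
{
	unsigned long nr_free = 100000; /* example free-page count */
	unsigned long budget = report_budget(nr_free);

	printf("budget for %lu free pages: %lu batches of %d pages\n",
	       nr_free, budget, REPORTING_CAPACITY);

	/* Consume one unit of budget per reported batch until it runs out. */
	while (budget) {
		/* ... pull up to REPORTING_CAPACITY pages and report them ... */
		budget--;
	}

	/*
	 * At this point a real worker would leave the zone flagged as still
	 * needing reporting and reschedule itself roughly 2 seconds later,
	 * rather than keep contending with allocations under heavy churn.
	 */
	return 0;
}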
