Subject: Re: [PATCH v3 1/3] mm/free_pcppages_bulk: update pcp->count inside free_pcppages_bulk()
On Mon, 26 Feb 2018, Aaron Lu wrote:

> Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> update pcp->count immediately after so it's natural to do it inside
> free_pcppages_bulk().
>
> No functionality or performance change is expected from this patch.
>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Aaron Lu <aaron.lu@intel.com>
> ---
> mm/page_alloc.c | 10 +++-------
> 1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cb416723538f..3154859cccd6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1117,6 +1117,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> int batch_free = 0;
> bool isolated_pageblocks;
>
> + pcp->count -= count;
> spin_lock(&zone->lock);
> isolated_pageblocks = has_isolate_pageblock(zone);
>

Why modify pcp->count before the pages have actually been freed?

I doubt that it matters too much, but /proc/zoneinfo, for example, reads
pcp->count while holding zone->lock, so with this placement it could see a
count that has already been decremented while the pages are still sitting on
the pcp lists. I think the update should be done after the lock is dropped.
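
Something like the following, perhaps (an untested sketch, only to show the
placement; it assumes the function still ends by dropping zone->lock):

	static void free_pcppages_bulk(struct zone *zone, int count,
					struct per_cpu_pages *pcp)
	{
		...
		spin_lock(&zone->lock);
		/* pages move from the pcp lists back to the buddy lists here */
		...
		spin_unlock(&zone->lock);

		/* only now have the pages actually left the pcp lists */
		pcp->count -= count;
	}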

Otherwise, looks good.
