Subject: Re: [PATCH 0/3 v2] mm: Batch page reclamation under shrink_page_list
On Mon, 10 Sep 2012 09:19:20 -0700
Tim Chen <tim.c.chen@linux.intel.com> wrote:

> This is the second version of the patch series. Thanks to Matthew Wilcox
> for many valuable suggestions on improving the patches.
>
> To do page reclamation in the shrink_page_list function, two locks are
> taken on a page-by-page basis. One is the tree lock protecting the
> radix tree of the page's mapping, and the other is the
> mapping->i_mmap_mutex protecting the mapped pages. I try to batch the
> operations on pages sharing the same lock to reduce lock contention.
> The first patch batches the operations protected by the tree lock,
> while the second and third patches batch the operations protected by
> the i_mmap_mutex.
>
> I managed to get a 14% throughput improvement with a workload that
> puts heavy pressure on the page cache by reading many large mmapped
> files simultaneously on an 8-socket Westmere server.
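
To make the locking pattern concrete, here is a rough, userspace-only
illustration of the batching idea described above. This is not the
kernel code: the structs, the pthread mutex standing in for the tree
lock / i_mmap_mutex, and the reclaim helpers are all made up for
illustration.

    /* Illustrative sketch only: take the per-mapping lock once per run
     * of consecutive pages sharing a mapping, instead of once per page. */
    #include <pthread.h>
    #include <stdio.h>

    struct mapping {
        pthread_mutex_t lock;   /* stands in for tree_lock / i_mmap_mutex */
    };

    struct page {
        struct mapping *mapping;
        int id;
    };

    static void reclaim_one_locked(struct page *p)
    {
        printf("reclaiming page %d\n", p->id);  /* placeholder for real work */
    }

    /* Unbatched: lock and unlock once per page. */
    static void reclaim_unbatched(struct page *pages, int n)
    {
        for (int i = 0; i < n; i++) {
            pthread_mutex_lock(&pages[i].mapping->lock);
            reclaim_one_locked(&pages[i]);
            pthread_mutex_unlock(&pages[i].mapping->lock);
        }
    }

    /* Batched: take the lock once per run of consecutive pages that
     * share the same mapping. */
    static void reclaim_batched(struct page *pages, int n)
    {
        int i = 0;

        while (i < n) {
            struct mapping *m = pages[i].mapping;

            pthread_mutex_lock(&m->lock);
            while (i < n && pages[i].mapping == m)
                reclaim_one_locked(&pages[i++]);
            pthread_mutex_unlock(&m->lock);
        }
    }

    int main(void)
    {
        struct mapping m1 = { PTHREAD_MUTEX_INITIALIZER };
        struct mapping m2 = { PTHREAD_MUTEX_INITIALIZER };
        struct page pages[] = {
            { &m1, 0 }, { &m1, 1 }, { &m1, 2 },   /* run sharing mapping m1 */
            { &m2, 3 }, { &m2, 4 },               /* run sharing mapping m2 */
        };

        reclaim_unbatched(pages, 5);
        reclaim_batched(pages, 5);
        return 0;
    }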

That sounds good, although more details on the performance changes
would be appreciated - after all, that's the entire point of the
patchset.

And we shouldn't only test for improvements - we should also test for
degradation. What workloads might be harmed by this change? I'd suggest

- a single process which opens N files and reads one page from each
one, then repeats, so there are no contiguous LRU pages which share
the same ->mapping. Get some page reclaim happening and measure the
impact (a rough sketch of such a test appears after this list).

- The batching means that we now do multiple passes over pageframes
where we used to do things in a single pass. Walking all those new
page lists will be expensive if they are lengthy enough to cause L1
cache evictions.

What would be a test for this? A simple, single-threaded walk
through a file, I guess?
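
A minimal sketch of the first test suggested above, under some
assumptions: the files testfile.0 .. testfile.127 already exist and are
at least 1GB each, and memory is constrained enough (or the files large
enough) that page reclaim actually kicks in. The file names, counts and
offsets are illustrative only. The second test (a single-threaded
sequential walk through one file) is roughly the same program with
NFILES set to 1 and consecutive offsets.

    /* Open N files and read one page from each in turn, then repeat,
     * so no two consecutively-read pages share a ->mapping. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define NFILES  128
    #define NLOOPS  10000

    int main(void)
    {
        char buf[4096], name[64];
        int fd[NFILES];

        /* Assumes testfile.0 .. testfile.(NFILES-1) exist and are large. */
        for (int i = 0; i < NFILES; i++) {
            snprintf(name, sizeof(name), "testfile.%d", i);
            fd[i] = open(name, O_RDONLY);
            if (fd[i] < 0) {
                perror(name);
                exit(1);
            }
        }

        /* Read one page from each file in turn, then repeat. */
        for (long loop = 0; loop < NLOOPS; loop++) {
            off_t off = (loop * 4096L) % (1024L * 1024 * 1024);

            for (int i = 0; i < NFILES; i++) {
                if (pread(fd[i], buf, sizeof(buf), off) < 0) {
                    perror("pread");
                    exit(1);
                }
            }
        }
        return 0;
    }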

Mel's review comments were useful, thanks.

