 
Subject: Re: [PATCH] Reduce vm_stat cacheline contention in __vm_enough_memory
Date: Wed, 12 Oct 2011
On Wed, 12 Oct 2011 11:02:02 -0500
Dimitri Sivanich <sivanich@sgi.com> wrote:

> Tmpfs I/O throughput testing on UV systems has shown writeback contention
> between multiple writer threads (even when each thread writes to a separate
> tmpfs mount point).
>
> A large part of this is caused by cacheline contention reading the vm_stat
> array in the __vm_enough_memory check.
>
> The attached test patch illustrates a possible avenue for improvement in this
> area. By locally caching the values read from vm_stat (and refreshing the
> values after 2 seconds), I was able to improve tmpfs writeback performance from
> ~300 MB/sec to ~700 MB/sec with 120 threads writing data simultaneously to
> files on separate tmpfs mount points (tested on 3.1.0-rc9).
>
> Note that this patch is simply to illustrate the gains that can be made here.
> What I'm looking for is some guidance on an acceptable way to accomplish the
> task of reducing contention in this area, either by caching these values in a
> way similar to the attached patch, or by some other mechanism if this is
> unacceptable.
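
For illustration, a minimal sketch of the caching idea described above,
assuming 3.1-era helpers such as global_page_state() and get_cpu_var();
the vm_stat_cache structure and the function name are hypothetical, not
taken from the posted patch:

#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/vmstat.h>

/* Per-CPU snapshot of the vm_stat values __vm_enough_memory reads. */
struct vm_stat_cache {
        unsigned long free_pages;       /* NR_FREE_PAGES */
        unsigned long file_pages;       /* NR_FILE_PAGES */
        unsigned long shmem_pages;      /* NR_SHMEM */
        unsigned long expires;          /* jiffies when this copy goes stale */
};

static DEFINE_PER_CPU(struct vm_stat_cache, vm_stat_cache);

/* Return a possibly-stale NR_FREE_PAGES, touching the shared vm_stat
 * cachelines at most once per ~2 seconds per CPU. */
static unsigned long cached_free_pages(void)
{
        struct vm_stat_cache *c = &get_cpu_var(vm_stat_cache);
        unsigned long ret;

        if (time_after(jiffies, c->expires)) {
                /* The only reads that hit the shared array. */
                c->free_pages = global_page_state(NR_FREE_PAGES);
                c->file_pages = global_page_state(NR_FILE_PAGES);
                c->shmem_pages = global_page_state(NR_SHMEM);
                c->expires = jiffies + 2 * HZ;
        }
        ret = c->free_pages;
        put_cpu_var(vm_stat_cache);
        return ret;
}

The trade-off is the one the numbers above suggest: readers see values
up to 2 seconds stale, in exchange for not bouncing cachelines between
120 writers.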

Yes, the global vm_stat[] array is a problem - I'm surprised it's hung
around for this long. Altering the sysctl_overcommit_memory mode will
hide the problem, but that's no good.

I think we've discussed switching vm_stat[] to a contention-avoiding
counter scheme. Using <percpu_counter.h> would be the simplest
approach. Per-CPU counters introduce some inaccuracy into the reads,
but hopefully any problems from that will be minor for the global page
counters.
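
To make that concrete, a minimal sketch of the percpu_counter route
using the 3.1-era <linux/percpu_counter.h> API; the counter name and
helper functions below are hypothetical:

#include <linux/percpu_counter.h>

static struct percpu_counter nr_free_pages_pc;  /* hypothetical counter */

static int __init free_pages_counter_init(void)
{
        return percpu_counter_init(&nr_free_pages_pc, 0);
}

/* Updates usually bump only a per-CPU delta; the shared total is
 * written only when the delta exceeds the batch size. */
static void account_pages_freed(long nr)
{
        percpu_counter_add(&nr_free_pages_pc, nr);
}

/* Cheap, slightly stale read of the global total - the inaccuracy is
 * bounded by roughly (batch size * number of online CPUs). */
static unsigned long approx_free_pages(void)
{
        return percpu_counter_read_positive(&nr_free_pages_pc);
}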

otoh, I think we've been round this loop before and I don't recall why
nothing happened.

