Subject: Re: [PATCH v8 0/9] rwsem performance optimizations

* Tim Chen <tim.c.chen@linux.intel.com> wrote:

> For version 8 of the patchset, we included the patch from Waiman to
> streamline wakeup operations and also optimize the MCS lock used in
> rwsem and mutex.

I'd feel a lot easier about this patch series if you also had
performance figures showing how mmap_sem is affected.

These:

> Tim got the following improvement for exim mail server
> workload on 40 core system:
>
> Alex+Tim's patchset: +4.8%
> Alex+Tim+Waiman's patchset: +5.3%

appear to be mostly related to the anon_vma->rwsem. But once that lock is
changed to an rwlock_t, this measurement falls away.

Peter Zijlstra suggested the following testcase:

===============================>
In fact, try something like this from userspace:

n-threads:

pthread_mutex_lock(&mutex);
foo = mmap();
pthread_mutex_unlock(&mutex);

/* work */

pthread_mutex_lock(&mutex);
munmap(foo);
pthread_mutex_unlock(&mutex);

vs

n-threads:

foo = mmap();
/* work */
munmap(foo);

I've had reports that the former was significantly faster than the
latter.
<===============================

This could be put into a standalone testcase, or you could add it as a
new subcommand of 'perf bench', which already has some pthread code; see
for example tools/perf/bench/sched-messaging.c. Adding:

perf bench mm threads

or so would be a natural thing to have.
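
For reference, a minimal sketch of what such a standalone testcase could
look like (this is not the eventual perf bench code; the thread count,
iteration count, mapping size and the 'serialized' command line switch
below are made-up placeholders that would need tuning on a real test box):

/*
 * Sketch: N threads each do ITERS rounds of mmap()/touch/munmap(),
 * either racing freely on mmap_sem or serialized by a process-wide
 * pthread mutex, per the test case quoted above.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define NTHREADS	40
#define ITERS		10000
#define MAPLEN		(128 * 4096)

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int serialize;		/* 0: bare mmap/munmap, 1: user-space serialized */

static void *worker(void *arg)
{
	int i;

	for (i = 0; i < ITERS; i++) {
		void *foo;

		if (serialize)
			pthread_mutex_lock(&mutex);
		foo = mmap(NULL, MAPLEN, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (serialize)
			pthread_mutex_unlock(&mutex);
		if (foo == MAP_FAILED)
			exit(1);

		/* work: touch every page so the mapping is populated */
		memset(foo, 1, MAPLEN);

		if (serialize)
			pthread_mutex_lock(&mutex);
		munmap(foo, MAPLEN);
		if (serialize)
			pthread_mutex_unlock(&mutex);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid[NTHREADS];
	int i;

	serialize = (argc > 1 && !strcmp(argv[1], "serialized"));

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	return 0;
}

Build with 'gcc -O2 -pthread testcase.c -o mmap-bench' and compare
'./mmap-bench' against './mmap-bench serialized', e.g. under perf stat;
the filename and the switch name are of course just placeholders.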

Thanks,

Ingo

