Subject: Re: REGRESSION: Performance regressions from switching anon_vma->lock to mutex
    On Fri, Jun 17, 2011 at 09:46:00AM -0700, Linus Torvalds wrote:
    > On Fri, Jun 17, 2011 at 4:28 AM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
    > >
    > > Something like so? Compiles and runs the benchmark in question.
    >
    > Oh, and can you do this with a commit log and sign-off, and I'll put
    > it in my "anon_vma-locking" branch that I have. I'm not going to
    > actually merge that branch into mainline until I've seen a few more
    > acks or more testing by Tim.
    >
    > But if Tim's numbers hold up (-32% to +15% performance by just the
    > first one, and +15% isn't actually an improvement since tmpfs
    > read-ahead should have gotten us to +66%), I think we have to do this
    > just to avoid the performance regression.

    You could also add the mutex "optimize caching protocol"
    patch I posted earlier to that branch.

    It didn't actually improve Tim's throughput number, but it made the CPU
    consumption of the mutex go down.

    -Andi

    ---
    From 34d4c1e579b3dfbc9a01967185835f5829bd52f0 Mon Sep 17 00:00:00 2001
    From: Andi Kleen <ak@linux.intel.com>
    Date: Tue, 14 Jun 2011 16:27:54 -0700
    Subject: [PATCH] mutex: while spinning read count before attempting cmpxchg

    Under heavy contention it is better to do a plain read of the lock count
    first, before attempting a cmpxchg that has to take exclusive ownership of
    the cache line on the interconnect.

    This gives a few percent improvement for the mutex CPU time
    under heavy contention and likely saves some power too.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>

diff --git a/kernel/mutex.c b/kernel/mutex.c
index d607ed5..1abffa9 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -170,7 +170,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		if (owner && !mutex_spin_on_owner(lock, owner))
 			break;
 
-		if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
+		if (atomic_read(&lock->count) == 1 &&
+		    atomic_cmpxchg(&lock->count, 1, 0) == 1) {
 			lock_acquired(&lock->dep_map, ip);
 			mutex_set_owner(lock);
 			preempt_enable();
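
For readers who want the pattern in isolation, here is a minimal user-space
sketch of the same "read before cmpxchg" (test-and-test-and-set) idea, using
C11 atomics. The names are made up for illustration; this is not the kernel
code itself:

#include <stdatomic.h>
#include <stdbool.h>

/* 1 == unlocked, 0 == locked, mirroring the mutex count convention. */
static atomic_int lock_count = 1;

static bool try_take(void)
{
	/*
	 * Plain load first: while somebody else holds the lock this keeps
	 * the cache line in a shared state, instead of bouncing it around
	 * with failed exclusive cmpxchg attempts.
	 */
	if (atomic_load_explicit(&lock_count, memory_order_relaxed) != 1)
		return false;

	/* Only attempt the expensive read-modify-write when it can succeed. */
	int expected = 1;
	return atomic_compare_exchange_strong(&lock_count, &expected, 0);
}

The win is on the failure path: the relaxed load never needs exclusive
ownership of the cache line, so spinners stop generating interconnect
traffic while the lock is held.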


    --
    ak@linux.intel.com -- Speaking for myself only

