Subject: [PATCH 4.14 28/72] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
    4.14-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    commit b247be3fe89b6aba928bf80f4453d1c4ba8d2063 upstream.

    On x86, atomic_cond_read_relaxed will busy-wait with a cpu_relax() loop,
    so it is desirable to increase the number of times we spin on the qspinlock
    lockword when it is found to be transitioning from pending to locked.

    According to Waiman Long:

    | Ideally, the spinning times should be at least a few times the typical
    | cacheline load time from memory which I think can be down to 100ns or
    | so for each cacheline load with the newest systems or up to several
    | hundreds ns for older systems.

    which in his benchmarking corresponded to 512 iterations.
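
    For context, _Q_PENDING_LOOPS bounds the relaxed-read spin in the generic
    qspinlock slowpath (kernel/locking/qspinlock.c), which waits for the
    pending->locked transition to complete before falling back to queueing.
    The following is a minimal userspace sketch of that bounded-spin pattern,
    not the kernel code itself; names such as wait_for_pending_to_clear() and
    PENDING_VAL are illustrative stand-ins for the kernel's helpers:

    #include <stdatomic.h>
    #include <immintrin.h>  /* _mm_pause(): userspace analogue of cpu_relax() */

    #define _Q_PENDING_LOOPS (1 << 9)   /* 512 iterations, as chosen by this patch */
    #define PENDING_VAL 0x100           /* illustrative stand-in for _Q_PENDING_VAL */

    /*
     * Spin up to _Q_PENDING_LOOPS times while the lock word still reads as
     * "pending, not yet locked", then return the last observed value so the
     * caller can decide whether to queue.
     */
    static unsigned int wait_for_pending_to_clear(_Atomic unsigned int *lock_val)
    {
    	int cnt = _Q_PENDING_LOOPS;
    	unsigned int val;

    	do {
    		val = atomic_load_explicit(lock_val, memory_order_relaxed);
    		if (val != PENDING_VAL)
    			break;          /* pending cleared or owner set LOCKED */
    		_mm_pause();            /* cpu_relax(): yield pipeline resources */
    	} while (--cnt > 0);

    	return val;
    }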

    Suggested-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Waiman Long <longman@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: boqun.feng@gmail.com
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: paulmck@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1524738868-31318-5-git-send-email-will.deacon@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    arch/x86/include/asm/qspinlock.h | 2 ++
    1 file changed, 2 insertions(+)

    diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
    index cf4cdf508ef4..2cb6624acaec 100644
    --- a/arch/x86/include/asm/qspinlock.h
    +++ b/arch/x86/include/asm/qspinlock.h
    @@ -6,6 +6,8 @@
    #include <asm-generic/qspinlock_types.h>
    #include <asm/paravirt.h>

    +#define _Q_PENDING_LOOPS (1 << 9)
    +
    #define queued_spin_unlock queued_spin_unlock
    /**
    * queued_spin_unlock - release a queued spinlock
    --
    2.19.1

