Subject: [PATCH] MCS spinlock: Use smp_cond_load_acquire()
From: Jason Low <jason.low2@hp.com>
Date: 2016-04-12
Hi Peter,

This patch applies on top of the "smp_cond_load_acquire + cmpwait"
series.

---
For qspinlocks on ARM64, we would like to use WFE instead of
purely spinning. Qspinlocks internally have lock contenders
spin on an MCS lock.

Update arch_mcs_spin_lock_contended() such that it uses
the new smp_cond_load_acquire() so that ARM64 can also
override this spin loop with its own implementation using WFE.
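For reference, the generic fallback spins on relaxed loads and only
issues the acquire barrier once the condition is observed. A rough
sketch (the exact definition is the one in your series and may
differ in detail):

#define smp_cond_load_acquire(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		/* relaxed load while waiting */	\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	/* acquire ordering once the condition holds */	\
	smp_acquire__after_ctrl_dep();			\
	VAL;						\
})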

On x86, it can also be cheaper to use this than spinning on
smp_load_acquire().
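
The arm64 override can then replace the busy-wait with a WFE based
wait on the variable's cacheline. A hypothetical sketch of the idea
(__cmpwait_relaxed stands in for the cmpwait primitive from your
series; the real arm64 code may look different):

#define smp_cond_load_acquire(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = smp_load_acquire(__PTR);		\
		if (cond_expr)				\
			break;				\
		/* sleep in WFE until *__PTR is written */ \
		__cmpwait_relaxed(__PTR, VAL);		\
	}						\
	VAL;						\
})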

Signed-off-by: Jason Low <jason.low2@hp.com>
---
kernel/locking/mcs_spinlock.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index c835270..5f21f23 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -22,13 +22,13 @@ struct mcs_spinlock {

#ifndef arch_mcs_spin_lock_contended
/*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired.
*/
#define arch_mcs_spin_lock_contended(l) \
do { \
-	while (!(smp_load_acquire(l))) \
-		cpu_relax_lowlatency(); \
+	smp_cond_load_acquire(l, VAL); \
} while (0)
#endif

--
2.1.4

