Subject: [PATCH 2/2] locking/qspinlock: Limit # of spins in _Q_PENDING_VAL wait loop
A locker in the pending code path spins an unbounded number of times
while waiting for the _Q_PENDING_VAL to _Q_LOCKED_VAL transition. There
is a concern that lock starvation can happen if concurrent lockers are
able to take the lock in the cmpxchg loop without queuing and pass it
around amongst themselves.

To ensure forward progress while still taking advantage of the pending
code path without queuing, the code is now modified to do a limited
number of spins before aborting the effort and falling back to queuing
itself, as sketched below.
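
As a rough, self-contained illustration of the pattern (not the kernel
code itself; the names bounded_wait_pending() and PENDING_SPIN_MAX are
made up for this sketch, and plain C11 atomics stand in for the kernel's
atomic_read()/cpu_relax()):

	#include <stdatomic.h>
	#include <stdbool.h>

	#define PENDING_SPIN_MAX (1 << 9)	/* same default budget as _Q_PENDING_LOOP */

	/*
	 * Spin while *val still holds the transient "pending" value, but
	 * only for a bounded number of iterations.  Returns true if the
	 * value moved on, false if the budget ran out and the caller
	 * should queue instead.
	 */
	static bool bounded_wait_pending(atomic_uint *val, unsigned int pending)
	{
		int cnt = PENDING_SPIN_MAX;

		while (atomic_load_explicit(val, memory_order_relaxed) == pending) {
			if (!--cnt)
				return false;	/* budget exhausted: fall back to queuing */
			/* a cpu_relax()-style pause would go here in real code */
		}
		return true;	/* pending -> locked hand-over completed */
	}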

Ideally, the total spinning time should be at least a few times the
typical cacheline load time from memory, which I think can be as low as
100ns or so per cacheline load on the newest systems, or up to several
hundred ns on older systems.
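
As a rough worked bound (an assumption for illustration, not something
measured here): with the default _Q_PENDING_LOOP of 512, if every failed
poll cost a full cacheline transfer at ~100ns, the pending wait would be
capped at roughly 512 * 100ns ~= 50us before the locker queues; with
several-hundred-ns transfers on older systems the cap is a few times
that, and iterations that hit the local cache are much cheaper still.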

Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/locking/qspinlock.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 634a49b..35367cc 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -82,6 +82,15 @@
#endif

/*
+ * The pending bit spinning loop count.
+ * This parameter can be overridden by an architecture-specific
+ * constant. Default is 512.
+ */
+#ifndef _Q_PENDING_LOOP
+#define _Q_PENDING_LOOP (1 << 9)
+#endif
+
+/*
* Per-CPU queue node structures; we can never have more than 4 nested
* contexts: task, softirq, hardirq, nmi.
*
@@ -311,13 +320,19 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
return;

/*
- * wait for in-progress pending->locked hand-overs
+ * wait for in-progress pending->locked hand-overs with a
+ * limited number of spins.
*
* 0,1,0 -> 0,0,1
*/
if (val == _Q_PENDING_VAL) {
- while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
+ int cnt = _Q_PENDING_LOOP;
+
+ while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL) {
+ if (!--cnt)
+ goto queue;
cpu_relax();
+ }
}

/*
--
1.8.3.1