Date: Tue, 20 Oct 2015 11:17:14 +0800
From: Boqun Feng <>
Subject: Re: [PATCH tip/locking/core v8 1/5] locking/qspinlock: Use _acquire/_release versions of cmpxchg & xchg
Hi Waiman,
On Thu, Oct 15, 2015 at 06:51:03PM -0400, Waiman Long wrote:
> This patch replaces the cmpxchg() and xchg() calls in the native
> qspinlock code with the more relaxed _acquire or _release versions of
> those calls to enable other architectures to adopt queued spinlocks
> with less memory barrier performance overhead.
>
> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
> ---
>  include/asm-generic/qspinlock.h |  9 +++++----
>  kernel/locking/qspinlock.c      | 29 ++++++++++++++++++++++++-----
>  2 files changed, 29 insertions(+), 9 deletions(-)
>
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index e2aadbc..39e1cb2 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -12,8 +12,9 @@
>   * GNU General Public License for more details.
>   *
>   * (C) Copyright 2013-2015 Hewlett-Packard Development Company, L.P.
> + * (C) Copyright 2015 Hewlett-Packard Enterprise Development LP
>   *
> - * Authors: Waiman Long <waiman.long@hp.com>
> + * Authors: Waiman Long <waiman.long@hpe.com>
>   */
>  #ifndef __ASM_GENERIC_QSPINLOCK_H
>  #define __ASM_GENERIC_QSPINLOCK_H
> @@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
>  static __always_inline int queued_spin_trylock(struct qspinlock *lock)
>  {
>  	if (!atomic_read(&lock->val) &&
> -	    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
> +	    (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
>  		return 1;
>  	return 0;
>  }
> @@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
>  {
>  	u32 val;
>
> -	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> +	val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
>  	if (likely(val == 0))
>  		return;
>  	queued_spin_lock_slowpath(lock, val);
> @@ -93,7 +94,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
>  	/*
>  	 * smp_mb__before_atomic() in order to guarantee release semantics
>  	 */
> -	smp_mb__before_atomic_dec();
> +	smp_mb__before_atomic();
>  	atomic_sub(_Q_LOCKED_VAL, &lock->val);
Just curious: you don't use atomic_sub_release() here on purpose?
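To illustrate what I mean, a release-ordered unlock could look like the
sketch below. This is just a sketch, not part of the patch above, and it
assumes the architecture provides atomic_sub_return_release() (only the
value-returning atomics have _acquire/_release variants in the generic
atomic API):

	static __always_inline void queued_spin_unlock(struct qspinlock *lock)
	{
		/*
		 * A single release-ordered read-modify-write op provides
		 * the required release semantics, replacing the
		 * smp_mb__before_atomic() + atomic_sub() pair above.
		 */
		(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
	}

On architectures with native release atomics that would save a full
memory barrier in the unlock path.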
Regards,
Boqun
>  }
>  #endif
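As context for the "less memory barrier performance overhead" point in
the patch description: with the generic wrappers in
include/linux/atomic.h, an architecture that only implements the fully
ordered atomic_cmpxchg() keeps getting exactly that, while one that
implements atomic_cmpxchg_relaxed() gets the acquire form built from the
relaxed op plus a fence. A simplified sketch of that fallback (the
in-tree macros are spelled slightly differently):

	#ifndef atomic_cmpxchg_acquire
	#define atomic_cmpxchg_acquire(...)				\
	({								\
		typeof(atomic_cmpxchg_relaxed(__VA_ARGS__)) __ret =	\
			atomic_cmpxchg_relaxed(__VA_ARGS__);		\
		/* upgrade the relaxed op to acquire ordering */	\
		smp_mb__after_atomic();					\
		__ret;							\
	})
	#endif

So the change is a no-op for architectures without relaxed variants and
only helps where the architecture can express a cheaper acquire.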