Date:	Tue, 25 Jan 2011 12:49:16 +1100
Subject:	Re: [PATCH 3/6] x86/ticketlock: Use C for __ticket_spin_unlock
From:	Nick Piggin <>
On Tue, Jan 25, 2011 at 12:42 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On 01/24/2011 05:13 PM, Nick Piggin wrote:
>> On Tue, Jan 25, 2011 at 10:41 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>>
>>> If we don't need to use a locked inc for unlock, then implement it in C.
>>>
>>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>> ---
>>>  arch/x86/include/asm/spinlock.h |   32 +++++++++++++++++---------------
>>>  1 files changed, 17 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
>>> index f48a6e3..0170ba9 100644
>>> --- a/arch/x86/include/asm/spinlock.h
>>> +++ b/arch/x86/include/asm/spinlock.h
>>> @@ -33,9 +33,21 @@
>>>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>>   * (PPro errata 66, 92)
>>>   */
>>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>>> +{
>>> +	if (sizeof(lock->tickets.head) == sizeof(u8))
>>> +		asm (LOCK_PREFIX "incb %0"
>>> +		     : "+m" (lock->tickets.head) : : "memory");
>>> +	else
>>> +		asm (LOCK_PREFIX "incw %0"
>>> +		     : "+m" (lock->tickets.head) : : "memory");
>>> +
>>> +}
>>>  #else
>>> -# define UNLOCK_LOCK_PREFIX
>>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>>> +{
>>> +	lock->tickets.head++;
>>> +}
>>>  #endif
>>>
>>>  /*
>>> @@ -93,14 +105,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>>>
>>>  	return tmp;
>>>  }
>>> -
>>> -static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>>> -{
>>> -	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
>>> -		     : "+m" (lock->slock)
>>> -		     :
>>> -		     : "memory", "cc");
>>> -}
>>>  #else
>>>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>>  {
>>> @@ -144,15 +148,13 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>>>
>>>  	return tmp;
>>>  }
>>> +#endif
>>>
>>>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>>>  {
>>> -	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
>>> -		     : "+m" (lock->slock)
>>> -		     :
>>> -		     : "memory", "cc");
>>> +	__ticket_unlock_release(lock);
>>> +	barrier();		/* prevent reordering into locked region */
>>>  }
>>> -#endif
>> The barrier is wrong.
>
> In what way?  Do you think it should be on the other side?
Well the other side is where the locked region is :)
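To make the point concrete, here is a minimal sketch of the unlock with the compiler barrier on the other side, i.e. before the releasing store rather than after it; it assumes the same __ticket_unlock_release() helper introduced by the patch and is only an illustration, not the posted code:

static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
{
	barrier();		/* keep the locked region's accesses before the release */
	__ticket_unlock_release(lock);
}

With the barrier after the store, the compiler is still free to move accesses from the critical section down past the store that drops the lock, which is exactly the reordering the barrier was meant to prevent.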
>> What makes me a tiny bit uneasy is that gcc is allowed to implement
>> this any way it wishes. OK there may be a NULL intersection of possible
>> valid assembly which is a buggy unlock... but relying on gcc to implement
>> lock primitives is scary. Does this really help in a way that can't be done
>> with the assembly versions?
>
> We rely on C/gcc for plenty of other subtle ordering things.  Spinlocks
> aren't particularly special in this regard.
Well, probably not orderings, but we do rely on it to do atomic stores to and loads from data types of <= sizeof(long), unfortunately. We also rely on it not to speculatively store into data on the not-taken side of branches, and things like that.
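One way to make that reliance explicit, sketched here as an assumption rather than anything in the posted patch, is to force the releasing store through a volatile access so gcc has to emit a single store of head's declared width; this uses the kernel's existing ACCESS_ONCE() macro from <linux/compiler.h> and otherwise the same types as the patch:

static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
{
	/*
	 * A plain head++ could in principle be compiled as a wider
	 * read-modify-write of the whole ticket word, clobbering a
	 * concurrent locker's update of ->tail.  A volatile store is
	 * pinned to head's declared (byte or word) width.
	 */
	ACCESS_ONCE(lock->tickets.head) = lock->tickets.head + 1;
}

The plain load of head is fine because only the current lock holder ever modifies it; the hazard is confined to the width of the store.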
I guess it's OK, but it makes me cringe a little bit to see unlock just do head++