From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Subject: [PATCH 15/20] x86/ticketlock: prevent compiler reordering into locked region
Date: 3 Nov 2010

Add a barrier() at the end of arch_spin_unlock() to prevent instructions
that follow the locked region from being reordered back into it. In theory
such reordering shouldn't cause any problems, but in practice the system
locks up under load...
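
For reference, barrier() is the kernel's pure compiler barrier: it emits
no instructions, but its "memory" clobber tells GCC that it may not cache
memory values in registers across that point or move memory accesses past
it in either direction. A minimal sketch of the idea (the kernel's actual
definition lives in its GCC compiler header):

	#define barrier() asm volatile("" ::: "memory")

Note that this constrains only the compiler, not the CPU; the subject line
is precise in saying "compiler reordering".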

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/include/asm/spinlock.h | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index d6de5e7..158b330 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -189,6 +189,8 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
+
+	barrier();		/* prevent reordering into locked region */
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
--
1.7.2.3
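
To make the hazard concrete, here is a hypothetical, self-contained sketch;
toy_lock, toy_unlock and toy_example are illustrative names, not the
kernel's. Because the real unlock is __always_inline (see the hunk header
above), the caller's code is compiled in the same scope, so without the
trailing barrier() GCC may schedule memory accesses that follow the unlock
before the store that releases the lock:

/* Hypothetical illustration of the hazard; not kernel code. */
#define barrier() asm volatile("" ::: "memory")

struct toy_lock {
	unsigned char head, tail;	/* ticket lock: head == ticket now served */
};

static inline void toy_unlock(struct toy_lock *lock)
{
	lock->head++;		/* release store: next ticket holder may enter */
	barrier();		/* forbid GCC from hoisting the caller's later
				 * accesses above the store, i.e. back into
				 * the locked region */
}

void toy_example(struct toy_lock *lock, int *stats)
{
	toy_unlock(lock);
	(*stats)++;		/* the barrier keeps this access after the unlock */
}

For plain memory such hoisting is normally harmless, which matches the
commit message's point that the lockups are surprising in theory; with the
paravirt unlock kick in this series, keeping the compiler from moving code
into the locked region is evidently necessary in practice.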

