Date:	Wed, 3 Jan 2007 11:11:30 -0800
From:	Ravikiran G Thirumalai <>
Subject:	Re: [rfc] [patch 1/2] spin_lock_irq: Enable interrupts while spinning -- preparatory patch
On Wed, Jan 03, 2007 at 12:16:35AM -0800, Andrew Morton wrote:
> On Tue, 2 Jan 2007 23:59:23 -0800
> Ravikiran G Thirumalai <kiran@scalex86.org> wrote:
>
> > The following patches do just that.  The first patch is preparatory in nature
> > and the second one changes the x86_64 implementation of spin_lock_irq.
> > Patch passed overnight runs of kernbench and dbench on 4 way x86_64 smp.
>
> The end result of this is, I think, that i386 enables irqs while spinning
> in spin_lock_irqsave() but not while spinning in spin_lock_irq().  And
> x86_64 does the opposite.
No, right now we have this on mainline (non-PREEMPT case):
                        i386                    x86_64
-----------------------------------------------------------------------------
spin_lock_irq           cli when spin           cli when spin
spin_lock_irqsave       spin with intr enabled  spin with intr enabled
The posted patchset changed this to:
                        i386                    x86_64
-----------------------------------------------------------------------------
spin_lock_irq           cli when spin           spin with intr enabled
spin_lock_irqsave       spin with intr enabled  spin with intr enabled
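For reference, here is a rough, self-contained C sketch (not the kernel source; local_irq_enable(), local_irq_disable(), cpu_relax() and X86_EFLAGS_IF are illustrative stand-ins for the privileged sti/cli machinery) of why the spin_lock_irqsave row spins with interrupts enabled on both arches: __raw_spin_lock_flags() re-enables interrupts while waiting only if the caller's saved flags show they were on to begin with.

#include <stdatomic.h>

#define X86_EFLAGS_IF 0x200			/* IF bit in EFLAGS */

typedef struct { atomic_schar slock; } sketch_spinlock_t; /* 1 = unlocked */

static void local_irq_enable(void)  { /* stands in for sti */ }
static void local_irq_disable(void) { /* stands in for cli */ }
static void cpu_relax(void)         { /* stands in for rep;nop */ }

static void sketch_spin_lock_flags(sketch_spinlock_t *lock, unsigned long flags)
{
	/* atomic "decb": an old value of 1 means we took the lock */
	while (atomic_fetch_sub(&lock->slock, 1) <= 0) {
		if (flags & X86_EFLAGS_IF)
			local_irq_enable();	/* irqs were on: spin with irqs on */
		while (atomic_load(&lock->slock) <= 0)
			cpu_relax();		/* wait for unlock (slock = 1) */
		local_irq_disable();		/* go closed again, then retry */
	}
}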
> Odd, isn't it?
Well, we just implemented the x86_64 part.  Here goes the i386 part as well, for spin_lock_irq.
i386: Enable interrupts while spinning for a lock with spin_lock_irq
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Index: linux-2.6.20-rc1/include/asm-i386/spinlock.h
===================================================================
--- linux-2.6.20-rc1.orig/include/asm-i386/spinlock.h	2006-12-28 17:18:32.142775000 -0800
+++ linux-2.6.20-rc1/include/asm-i386/spinlock.h	2007-01-03 10:18:32.243662000 -0800
@@ -82,7 +82,22 @@ static inline void __raw_spin_lock_flags
 		CLI_STI_INPUT_ARGS
 		: "memory" CLI_STI_CLOBBERS);
 }
-# define __raw_spin_lock_irq(lock)	__raw_spin_lock(lock)
+
+static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
+{
+	asm volatile("\n1:\t"
+		     LOCK_PREFIX " ; decb %0\n\t"
+		     "jns 3f\n"
+		     STI_STRING "\n"
+		     "2:\t"
+		     "rep;nop\n\t"
+		     "cmpb $0,%0\n\t"
+		     "jle 2b\n\t"
+		     CLI_STRING "\n"
+		     "jmp 1b\n"
+		     "3:\n\t"
+		     : "+m" (lock->slock) : : "memory");
+}
 #endif
 
 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
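To spell out what the asm does, here is the same kind of illustrative C sketch (again, not the kernel code; the helpers stand in for STI_STRING/CLI_STRING and rep;nop) of the control flow of the new __raw_spin_lock_irq.  Unlike the irqsave path, it can re-enable interrupts unconditionally while waiting, since a spin_lock_irq caller runs with interrupts enabled once the lock is taken anyway.

#include <stdatomic.h>

typedef struct { atomic_schar slock; } sketch_spinlock_t; /* 1 = unlocked */

static void local_irq_enable(void)  { /* stands in for STI_STRING */ }
static void local_irq_disable(void) { /* stands in for CLI_STRING */ }
static void cpu_relax(void)         { /* stands in for rep;nop */ }

static void sketch_spin_lock_irq(sketch_spinlock_t *lock)
{
	for (;;) {
		/* "lock; decb %0": an old value of 1 means we got it ("jns 3f") */
		if (atomic_fetch_sub(&lock->slock, 1) > 0)
			return;

		local_irq_enable();			/* STI: spin with irqs on */
		do
			cpu_relax();			/* "rep;nop" */
		while (atomic_load(&lock->slock) <= 0);	/* "cmpb $0,%0; jle 2b" */
		local_irq_disable();			/* CLI, "jmp 1b": retry */
	}
}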