From: Andi Kleen
Subject: Re: [patch 2/2] x86_64: ticket lock spinlock
Date: Thu, 9 Aug 2007
On Thursday 09 August 2007 03:42:54 Nick Piggin wrote:
> On Wed, Aug 08, 2007 at 12:26:55PM +0200, Andi Kleen wrote:
> >
> > > *
> > > * (the type definitions are in asm/spinlock_types.h)
> > > */
> > >
> > > +#if (NR_CPUS > 256)
> > > +#error spinlock supports a maximum of 256 CPUs
> > > +#endif
> > > +
> > > static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
> > > {
> > > - return *(volatile signed int *)(&(lock)->slock) <= 0;
> > > + int tmp = *(volatile signed int *)(&(lock)->slock);
> >
> > Why is slock not volatile signed int in the first place?
>
> Don't know really. Why does spin_is_locked need it to be volatile?

I suppose in case a caller doesn't have a memory barrier
(they should in theory, but might not). Without any barrier
or volatile, gcc might optimize the load away.
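
A rough sketch of the difference (not the patch's code; the names
example_is_locked() and example_poll() are made up here): the volatile
cast forces a fresh load from memory on every call, whereas a plain
load would let gcc cache the value in a register across a barrier-less
polling loop.

static inline int example_is_locked(int *slock)
{
	int tmp = *(volatile int *)slock;	/* reload from memory each call */

	return tmp != 0;			/* real test depends on the lock encoding */
}

/*
 * Barrier-less caller: with a non-volatile load gcc could hoist the
 * read out of this loop and spin on a stale register value forever.
 */
static void example_poll(int *slock)
{
	while (example_is_locked(slock))
		;	/* spin until the lock is observed free */
}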

The other accesses in spinlocks hopefully all have barriers.

Ok, anyway the patches look good.
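
For anyone skimming the quoted hunk: the NR_CPUS > 256 check comes from
the ticket encoding. As a rough, illustrative sketch (ticket_is_locked()
is made up here, not the patch's actual code): the lock word packs two
8-bit counters, the ticket currently being served and the next ticket
to hand out, and the lock is held whenever they differ. Eight-bit
tickets are what cap the supported CPU count at 256.

static inline int ticket_is_locked(int slock)
{
	int head = slock & 0xff;	/* ticket currently being served */
	int tail = (slock >> 8) & 0xff;	/* next ticket to be handed out */

	return head != tail;		/* held (and possibly contended) */
}

Unlock then just bumps the head byte, and lock atomically increments
the tail to take a ticket, which is what makes the lock FIFO-fair.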

-Andi