Subject: Re: [PATCH -v2 3/4] locking: Introduce smp_mb__after_spinlock().
On Thu, 3 Aug 2017 16:28:20 +0100
Will Deacon <will.deacon@arm.com> wrote:

> On Wed, Aug 02, 2017 at 01:38:40PM +0200, Peter Zijlstra wrote:
> > Since its inception, our understanding of ACQUIRE, esp. as applied to
> > spinlocks, has changed somewhat. Also, I wonder if, with a simple
> > change, we cannot make it provide more.
> >
> > The problem with the comment is that the STORE done by spin_lock isn't
> > itself ordered by the ACQUIRE, and therefore a later LOAD can pass over
> > it and cross with any prior STORE, rendering the default WMB
> > insufficient (pointed out by Alan).
> >
> > Now, this is only really a problem on PowerPC and ARM64, both of
> > which already defined smp_mb__before_spinlock() as smp_mb().
> >
> > At the same time, we can get a much stronger construct if we place
> > that same barrier _inside_ the spin_lock(). In that case we upgrade
> > the RCpc spinlock to an RCsc one. That would make all schedule() calls
> > fully transitive against one another.
> >
> > Cc: Alan Stern <stern@rowland.harvard.edu>
> > Cc: Nicholas Piggin <npiggin@gmail.com>
> > Cc: Ingo Molnar <mingo@kernel.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > Cc: Michael Ellerman <mpe@ellerman.id.au>
> > Cc: Oleg Nesterov <oleg@redhat.com>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > arch/arm64/include/asm/spinlock.h | 2 ++
> > arch/powerpc/include/asm/spinlock.h | 3 +++
> > include/linux/atomic.h | 3 +++
> > include/linux/spinlock.h | 36 ++++++++++++++++++++++++++++++++++++
> > kernel/sched/core.c | 4 ++--
> > 5 files changed, 46 insertions(+), 2 deletions(-)
> >
> > --- a/arch/arm64/include/asm/spinlock.h
> > +++ b/arch/arm64/include/asm/spinlock.h
> > @@ -367,5 +367,7 @@ static inline int arch_read_trylock(arch
> > * smp_mb__before_spinlock() can restore the required ordering.
> > */
> > #define smp_mb__before_spinlock() smp_mb()
> > +/* See include/linux/spinlock.h */
> > +#define smp_mb__after_spinlock() smp_mb()
> >
> > #endif /* __ASM_SPINLOCK_H */
>
> Acked-by: Will Deacon <will.deacon@arm.com>

Yeah this looks good to me. I don't think there would ever be a reason
to use smp_mb__before_spinlock() rather than smp_mb__after_spinlock().
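To illustrate the ordering problem Peter describes above (a STORE before the
lock crossing a LOAD after it), here is a minimal store-buffering sketch in
kernel style. The variables x/y, the lock s, the cpu0()/cpu1() functions and
the r0/r1 result variables are illustrative only; the barrier itself and its
arm64 definition as smp_mb() are from the patch.

	static DEFINE_SPINLOCK(s);
	static int x, y;
	static int r0, r1;

	void cpu0(void)
	{
		WRITE_ONCE(x, 1);		/* STORE before the lock */
		spin_lock(&s);
		smp_mb__after_spinlock();	/* order the STORE above against the LOAD below */
		r0 = READ_ONCE(y);		/* LOAD inside the critical section */
		spin_unlock(&s);
	}

	void cpu1(void)
	{
		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

With the barrier on CPU0 (and the smp_mb() on CPU1), the outcome
r0 == 0 && r1 == 0 is forbidden. With only the ACQUIRE of spin_lock(),
which orders the lock's LOAD but not its STORE, the READ_ONCE(y) may be
satisfied before the WRITE_ONCE(x, 1) becomes visible, so that outcome
remains possible.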
