Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
On Thu, Nov 12, 2015 at 10:59:06AM -0800, Paul E. McKenney wrote:
> On Thu, Nov 12, 2015 at 08:33:02PM +0100, Oleg Nesterov wrote:
> > On 11/12, Peter Zijlstra wrote:
> > >
> > > On Thu, Nov 12, 2015 at 07:38:07PM +0100, Oleg Nesterov wrote:
> > > > It seems that PPC needs to define smp_mb__before_spinlock() as full mb(),
> > > > no?
> > >
> > > It does:
> > >
> > > arch/powerpc/include/asm/barrier.h:#define smp_mb__before_spinlock() smp_mb()
> >
> > Ah, indeed, thanks.
> >
> > And given that it also defines smp_mb__after_unlock_lock() as smp_mb(),
> > I am starting to understand how it can help to avoid the races with
> > spin_unlock_wait() in (for example) do_exit().
> >
> > But as Boqun has already mentioned, this means that mb__after_unlock_lock()
> > has the new meaning which should be documented.
> >
> > Hmm. And 12d560f4 "Privatize smp_mb__after_unlock_lock()" should be reverted
> > then ;)
>
> Surprisingly, this reverts cleanly against today's mainline, please see
> the patch below. Against my -rcu stack, not so much, but so it goes. ;-)

I think we ended up concluding that smp_mb__after_unlock_lock is indeed
required, but I don't think we should just resurrect the old definition,
which doesn't keep UNLOCK -> LOCK distinct from RELEASE -> ACQUIRE. I'm
still working on documenting the different types of transitivity that we
identified in that thread, but it's slow going.

Also, as far as spin_unlock_wait is concerned, it is neither a LOCK nor
an UNLOCK, so this barrier doesn't offer us anything. Sure, it might
work because PPC defines it as smp_mb(), but it doesn't help on arm64
and defining the macro is overkill for us in most places (i.e. RCU).
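
To make that distinction concrete, here is a userspace C11 sketch of the
sort of pattern spin_unlock_wait is used for (the names pattern_lock,
shared_data, writer and observer are made up for illustration; this is
not kernel code):

#include <stdatomic.h>

static atomic_int pattern_lock;		/* 0 == free, 1 == held */
static int shared_data;

static void writer(void)
{
	int expected = 0;

	/* LOCK: spin until we flip the lock word from 0 to 1. */
	while (!atomic_compare_exchange_weak_explicit(&pattern_lock,
			&expected, 1, memory_order_acquire,
			memory_order_relaxed))
		expected = 0;

	shared_data = 42;		/* critical section */

	/* UNLOCK. */
	atomic_store_explicit(&pattern_lock, 0, memory_order_release);
}

static int observer(void)
{
	/* "unlock_wait": spin until no writer currently holds the lock... */
	while (atomic_load_explicit(&pattern_lock, memory_order_acquire))
		;
	/* ...then read, hoping to see that writer's critical section. */
	return shared_data;
}

The observer never performs a LOCK or an UNLOCK of its own, so whatever
ordering it gets has to come from the definition of the wait primitive
and of the lock implementation itself.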

If we decide that the current usage of spin_unlock_wait is valid, then I
would much rather implement a version of it in the arm64 backend that
does something like:

1: ldrex r1, [&lock]
   if r1 indicates that the lock is taken, branch back to 1b
   strex r2, r1, [&lock]
   if the store failed (r2 != 0), branch back to 1b

i.e. we don't just test the lock, but we also write it back atomically
if we discover that it's free. That would then clear the exclusive monitor
on any cores in the process of taking the lock and restore the ordering
that we need.
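
As a rough userspace C11 approximation of that idea (a sketch only; the
sketch_lock_t layout and the names below are invented here, and this is
not the proposed arm64 code):

#include <stdatomic.h>

typedef struct {
	atomic_uint val;		/* 0 == unlocked, non-zero == held */
} sketch_lock_t;

static void sketch_unlock_wait(sketch_lock_t *lock)
{
	unsigned int v;

	for (;;) {
		/* Test the lock word. */
		v = atomic_load_explicit(&lock->val, memory_order_acquire);
		if (v != 0)
			continue;	/* still held: keep spinning */

		/*
		 * Lock observed free: write the same (free) value straight
		 * back with a CAS.  On an LL/SC machine this corresponds to
		 * a load-exclusive/store-exclusive pair, so the store clears
		 * the exclusive monitor of any CPU part-way through taking
		 * the lock and forces it to retry.
		 */
		if (atomic_compare_exchange_weak_explicit(&lock->val, &v, 0,
				memory_order_acq_rel, memory_order_acquire))
			return;
		/* CAS failed: a new owner slipped in; go around again. */
	}
}

The point being that the store-exclusive only succeeds if nobody has
touched the lock word since the load-exclusive, which is what gives us
the ordering that a plain load cannot.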

Will

