Subject: Re: Question on smp_mb__before_spinlock
On Mon, Sep 05, 2016 at 11:10:22AM +0100, Will Deacon wrote:

> > The second issue I wondered about is spinlock transitivity. All except
> > powerpc have RCsc locks, and since Power already does a full mb, would
> > it not make sense to put it _after_ the spin_lock(), which would provide
> > the same guarantee, but also upgrades the section to RCsc.
> >
> > That would make all schedule() calls fully transitive against one
> > another.
>
> It would also match the way in which the arm64 atomic_*_return ops
> are implemented, since full barrier semantics are required there.

Hmm, are you sure? The way I read arch/arm64/include/asm/atomic_ll_sc.h,
what you do there is ll/sc-rel + mb.
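(For context, the reordering proposed above amounts to roughly the below,
using the scheduler's rq->lock purely for illustration; this is a sketch of
the ordering, not an actual patch:

	/* current: barrier before the LOCK */
	smp_mb__before_spinlock();	/* generically just smp_wmb() */
	raw_spin_lock(&rq->lock);

	/* proposed: full barrier after the LOCK; same cost on PPC, which
	 * already pays a full mb here, but it also upgrades the locked
	 * section to RCsc, making schedule() calls transitive against
	 * one another. */
	raw_spin_lock(&rq->lock);
	smp_mb();
)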

> > That is, would something like the below make sense?
>
> Works for me, but I'll do a fix to smp_mb__before_spinlock anyway for
> the stable tree.

Indeed, thanks!

>
> The only slight annoyance is that, on arm64 anyway, a store-release
> appearing in program order before the LOCK operation will be observed
> in order, so if the write of CONDITION=1 in the try_to_wake_up case
> used smp_store_release, we wouldn't need this barrier at all.

Right, but this is because your load-acquire and store-release are much
stronger than Linux's. Not only are they RCsc, they are also globally
ordered irrespective of the variable (iirc).

This wouldn't work for PPC (even if we could find all such prior
stores).
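
Concretely, the waker side under discussion looks roughly like the below
(caller plus try_to_wake_up() flattened for illustration; CONDITION stands
for whatever the waiter tests before sleeping):

	/* as it stands: plain store, then the barrier, then the LOCK */
	CONDITION = 1;
	smp_mb__before_spinlock();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	/* ... p->state is loaded under the lock to decide on the wakeup ... */

	/* the observation above: on arm64 a release store is already
	 * ordered before a later LOCK, so there the barrier would be
	 * redundant: */
	smp_store_release(&CONDITION, 1);
	raw_spin_lock_irqsave(&p->pi_lock, flags);

but with Linux's weaker release semantics that cannot be relied upon
generically.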

OK, I suppose I'll go stare at what we can do about the mm_types.h use and
spin a patch with a Changelog.
