Subject: Re: [PATCH v5 tip/core/locking 5/7] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK
On Mon, Dec 09, 2013 at 05:32:31PM -0800, Josh Triplett wrote:
> On Mon, Dec 09, 2013 at 05:28:01PM -0800, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >
> > Historically, an UNLOCK+LOCK pair executed by one CPU, by one task,
> > or on a given lock variable has implied a full memory barrier. In a
> > recent LKML thread, the wisdom of this historical approach was called
> > into question: http://www.spinics.net/lists/linux-mm/msg65653.html,
> > in part due to the memory-order complexities of low-handoff-overhead
> > queued locks on x86 systems.
> >
> > This patch therefore removes this guarantee from the documentation, and
> > further documents how to restore it via a new smp_mb__after_unlock_lock()
> > primitive.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Oleg Nesterov <oleg@redhat.com>
> > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: Tim Chen <tim.c.chen@linux.intel.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Waiman Long <waiman.long@hp.com>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > Cc: Andi Kleen <andi@firstfloor.org>
> > Cc: Michel Lespinasse <walken@google.com>
> > Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
> > Cc: Rik van Riel <riel@redhat.com>
> > Cc: Peter Hurley <peter@hurleysoftware.com>
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > ---
> > Documentation/memory-barriers.txt | 51 +++++++++++++++++++++++++++++++++------
> > 1 file changed, 44 insertions(+), 7 deletions(-)
> >
> > diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> > index a0763db314ff..efb791d33e5a 100644
> > --- a/Documentation/memory-barriers.txt
> > +++ b/Documentation/memory-barriers.txt
> > @@ -1626,7 +1626,10 @@ for each construct. These operations all imply certain barriers:
> > operation has completed.
> >
> > Memory operations issued before the LOCK may be completed after the LOCK
> > - operation has completed.
> > + operation has completed. An smp_mb__before_spinlock(), combined
> > + with a following LOCK, acts as an smp_wmb(). Note the "w",
> > + this is smp_wmb(), not smp_mb(). The smp_mb__before_spinlock()
> > + primitive is free on many architectures.
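
(To make the new text concrete, here is a minimal sketch of the
pattern it describes; "writer", "data", "flag", and "lock" are
made-up names for illustration, not anything from the patch:)

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(lock);	/* hypothetical lock */
	static int data, flag;		/* hypothetical shared variables */

	void writer(void)
	{
		data = 42;			/* store preceding the LOCK */
		smp_mb__before_spinlock();	/* with the LOCK below, acts as smp_wmb() */
		spin_lock(&lock);
		flag = 1;			/* ordered after the store to "data" */
		spin_unlock(&lock);
	}

Note that this is write ordering only: a reader must still use
smp_rmb() (or acquire "lock") between loading "flag" and "data" to
observe the two stores in order.
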
>
> Gah. That seems highly error-prone; why isn't that
> "smp_wmb__before_spinlock()"?

I must confess that I wondered that myself. I didn't create it; I am
just documenting it.

Might be worth a change, though.
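
For the record, the intended use of the new smp_mb__after_unlock_lock()
primitive looks something like the sketch below; the locks "a" and "b"
and the variables are made up for illustration:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(a);	/* hypothetical locks */
	static DEFINE_SPINLOCK(b);
	static int x, y;		/* hypothetical shared variables */

	int unlock_lock_example(void)
	{
		int r1;

		spin_lock(&a);
		x = 1;			/* store before the UNLOCK */
		spin_unlock(&a);
		spin_lock(&b);
		smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now a full barrier */
		r1 = y;			/* load after the LOCK */
		spin_unlock(&b);
		return r1;
	}

Without the smp_mb__after_unlock_lock(), the store to "x" and the load
from "y" are no longer guaranteed to be ordered as seen by other CPUs.
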

Thanx, Paul


