From: Paul E. McKenney
Date: 2013-12-10 18:41
Subject: Re: [PATCH v5 tip/core/locking 5/7] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK
On Tue, Dec 10, 2013 at 05:44:37PM +0100, Oleg Nesterov wrote:
> On 12/09, Paul E. McKenney wrote:
> >
> > @@ -1626,7 +1626,10 @@ for each construct. These operations all imply certain barriers:
> > operation has completed.
> >
> > Memory operations issued before the LOCK may be completed after the LOCK
> > - operation has completed.
> > + operation has completed. An smp_mb__before_spinlock(), combined
> > + with a following LOCK, acts as an smp_wmb(). Note the "w",
> > + this is smp_wmb(), not smp_mb().
>
> Well, but smp_mb__before_spinlock() + LOCK is not wmb... but it is not
> a full barrier either. It should guarantee that, say,
>
> CONDITION = true; // 1
>
> // try_to_wake_up
> smp_mb__before_spinlock();
> spin_lock(&task->pi_lock);
>
> if (!(p->state & state)) // 2
> return;
>
> can't race with set_current_state() + check(CONDITION); this means
> that 1 and 2 above must not be reordered.
>
> But a LOAD before spin_lock() can leak into the critical section.
>
> Perhaps this should be clarified somehow, or perhaps it should actually
> imply mb (if combined with LOCK).
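
For concreteness, the pairing Oleg describes looks roughly like this
(a sketch only, not the actual kernel code; CONDITION stands for
whatever the sleeper checks, and the waker side is modeled on
try_to_wake_up()):

	/* Waker, try_to_wake_up()-style: */
	CONDITION = true;				/* 1 */
	smp_mb__before_spinlock();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (!(p->state & state))			/* 2 */
		goto out;		/* sleeper not (yet) sleeping */

	/* Sleeper: */
	set_current_state(TASK_UNINTERRUPTIBLE);	/* implies smp_mb() */
	if (!CONDITION)
		schedule();

If 1 and 2 were reordered, the waker could read a stale ->state and
skip the wakeup while the sleeper read a stale CONDITION and blocked,
losing the wakeup. set_current_state() supplies the full barrier on
the sleeper's side; smp_mb__before_spinlock() plus the following LOCK
must supply the ordering on the waker's side.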

If we leave the implementation the same, does the following capture the
constraints?

	Memory operations issued before the LOCK may be completed after
	the LOCK operation has completed.  An smp_mb__before_spinlock(),
	combined with a following LOCK, orders prior loads against
	subsequent stores and prior stores against subsequent stores.
	Note that this is weaker than smp_mb()!  The
	smp_mb__before_spinlock() primitive is free on many architectures.
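
Spelled out (a hypothetical litmus sketch, not part of the patch;
lock, X, Y, A, and B are made-up names), that wording permits and
forbids the following:

	X = 1;			/* prior store */
	r1 = A;			/* prior load */
	smp_mb__before_spinlock();
	spin_lock(&lock);
	Y = 1;			/* ordered after both the store to X
				 * and the load from A */
	r2 = B;			/* NOT ordered: this load may be
				 * satisfied before either prior
				 * access, which is the gap relative
				 * to smp_mb() */
	spin_unlock(&lock);

In particular, the prior LOAD leaking into the critical section that
Oleg points out corresponds to r1 = A being unordered against r2 = B.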

Thanx, Paul


