Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
On Fri, 2013-09-27 at 08:29 -0700, Paul E. McKenney wrote:
> On Wed, Sep 25, 2013 at 03:10:49PM -0700, Tim Chen wrote:
> > We will need the MCS lock code for doing optimistic spinning for rwsem.
> > Extracting the MCS code from mutex.c and putting it into its own file
> > allows us to reuse this code easily for rwsem.
> >
> > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> > Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
> > ---
> >  include/linux/mcslock.h |   58 +++++++++++++++++++++++++++++++++++++++++++++
> >  kernel/mutex.c          |   58 +++++-----------------------------------------
> >  2 files changed, 65 insertions(+), 51 deletions(-)
> > create mode 100644 include/linux/mcslock.h
> >
> > diff --git a/include/linux/mcslock.h b/include/linux/mcslock.h
> > new file mode 100644
> > index 0000000..20fd3f0
> > --- /dev/null
> > +++ b/include/linux/mcslock.h
> > @@ -0,0 +1,58 @@
> > +/*
> > + * MCS lock defines
> > + *
> > + * This file contains the main data structure and API definitions of MCS lock.
> > + */
> > +#ifndef __LINUX_MCSLOCK_H
> > +#define __LINUX_MCSLOCK_H
> > +
> > +struct mcs_spin_node {
> > +	struct mcs_spin_node *next;
> > +	int locked;	/* 1 if lock acquired */
> > +};
> > +
> > +/*
> > + * We don't inline mcs_spin_lock() so that perf can correctly account for the
> > + * time spent in this lock function.
> > + */
> > +static noinline
> > +void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> > +{
> > +	struct mcs_spin_node *prev;
> > +
> > +	/* Init node */
> > +	node->locked = 0;
> > +	node->next = NULL;
> > +
> > +	prev = xchg(lock, node);
> > +	if (likely(prev == NULL)) {
> > +		/* Lock acquired */
> > +		node->locked = 1;
> > +		return;
> > +	}
> > +	ACCESS_ONCE(prev->next) = node;
> > +	smp_wmb();
> > +	/* Wait until the lock holder passes the lock down */
> > +	while (!ACCESS_ONCE(node->locked))
> > +		arch_mutex_cpu_relax();
> > +}
> > +
> > +static void mcs_spin_unlock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> > +{
> > +	struct mcs_spin_node *next = ACCESS_ONCE(node->next);
> > +
> > +	if (likely(!next)) {
> > +		/*
> > +		 * Release the lock by setting it to NULL
> > +		 */
> > +		if (cmpxchg(lock, node, NULL) == node)
> > +			return;
> > +		/* Wait until the next pointer is set */
> > +		while (!(next = ACCESS_ONCE(node->next)))
> > +			arch_mutex_cpu_relax();
> > +	}
> > +	ACCESS_ONCE(next->locked) = 1;
> > +	smp_wmb();
>
> Shouldn't the memory barrier precede the "ACCESS_ONCE(next->locked) = 1;"?
> Maybe in an "else" clause of the prior "if" statement, given that the
> cmpxchg() does it otherwise.
>
> Otherwise, in the case where the "if" condition is false, the critical
> section could bleed out past the unlock.

Yes, I agree with you that the smp_wmb should be moved before the
ACCESS_ONCE store to prevent the critical section from bleeding past the
unlock. Copying Waiman, who is the original author of the MCS code, to
see if he has any comments on things we may have missed.

Tim


