Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
On 09/27/2013 02:09 PM, Tim Chen wrote:
> On Fri, 2013-09-27 at 08:29 -0700, Paul E. McKenney wrote:
>> On Wed, Sep 25, 2013 at 03:10:49PM -0700, Tim Chen wrote:
>>> We will need the MCS lock code for doing optimistic spinning for rwsem.
>>> Extracting the MCS code from mutex.c and putting it into its own file
>>> allows us to reuse this code easily for rwsem.
>>>
>>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>>> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
>>> ---
>>> include/linux/mcslock.h |   58 +++++++++++++++++++++++++++++++++++++++++++++++
>>> kernel/mutex.c          |   58 +++++-----------------------------------------
>>> 2 files changed, 65 insertions(+), 51 deletions(-)
>>> create mode 100644 include/linux/mcslock.h
>>>
>>> diff --git a/include/linux/mcslock.h b/include/linux/mcslock.h
>>> new file mode 100644
>>> index 0000000..20fd3f0
>>> --- /dev/null
>>> +++ b/include/linux/mcslock.h
>>> @@ -0,0 +1,58 @@
>>> +/*
>>> + * MCS lock defines
>>> + *
>>> + * This file contains the main data structure and API definitions of the MCS lock.
>>> + */
>>> +#ifndef __LINUX_MCSLOCK_H
>>> +#define __LINUX_MCSLOCK_H
>>> +
>>> +struct mcs_spin_node {
>>> +        struct mcs_spin_node *next;
>>> +        int locked; /* 1 if lock acquired */
>>> +};
>>> +
>>> +/*
>>> + * We don't inline mcs_spin_lock() so that perf can correctly account for the
>>> + * time spent in this lock function.
>>> + */
>>> +static noinline
>>> +void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
>>> +{
>>> +        struct mcs_spin_node *prev;
>>> +
>>> +        /* Init node */
>>> +        node->locked = 0;
>>> +        node->next = NULL;
>>> +
>>> +        prev = xchg(lock, node);
>>> +        if (likely(prev == NULL)) {
>>> +                /* Lock acquired */
>>> +                node->locked = 1;
>>> +                return;
>>> +        }
>>> +        ACCESS_ONCE(prev->next) = node;
>>> +        smp_wmb();
>>> +        /* Wait until the lock holder passes the lock down */
>>> +        while (!ACCESS_ONCE(node->locked))
>>> +                arch_mutex_cpu_relax();
>>> +}
>>> +
>>> +static void mcs_spin_unlock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
>>> +{
>>> +        struct mcs_spin_node *next = ACCESS_ONCE(node->next);
>>> +
>>> +        if (likely(!next)) {
>>> +                /*
>>> +                 * Release the lock by setting it to NULL
>>> +                 */
>>> +                if (cmpxchg(lock, node, NULL) == node)
>>> +                        return;
>>> +                /* Wait until the next pointer is set */
>>> +                while (!(next = ACCESS_ONCE(node->next)))
>>> +                        arch_mutex_cpu_relax();
>>> +        }
>>> +        ACCESS_ONCE(next->locked) = 1;
>>> +        smp_wmb();
>> Shouldn't the memory barrier precede the "ACCESS_ONCE(next->locked) = 1;"?
>> Maybe in an "else" clause of the prior "if" statement, given that the
>> cmpxchg() does it otherwise.
>>
>> Otherwise, in the case where the "if" condition is false, the critical
>> section could bleed out past the unlock.
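
For concreteness, a minimal sketch of the unlock path with the barrier
hoisted as suggested above (illustrative only, assuming the smp_wmb() is
simply moved ahead of the handoff store; the uncontended path continues
to rely on cmpxchg() implying a full barrier):

        static void mcs_spin_unlock(struct mcs_spin_node **lock,
                                    struct mcs_spin_node *node)
        {
                struct mcs_spin_node *next = ACCESS_ONCE(node->next);

                if (likely(!next)) {
                        /* Release the lock; cmpxchg() implies a full barrier */
                        if (cmpxchg(lock, node, NULL) == node)
                                return;
                        /* Wait until the next pointer is set */
                        while (!(next = ACCESS_ONCE(node->next)))
                                arch_mutex_cpu_relax();
                }
                /*
                 * Barrier first: order the critical section's stores
                 * before the handoff store, so the critical section
                 * cannot bleed out past the unlock.
                 */
                smp_wmb();
                ACCESS_ONCE(next->locked) = 1;
        }
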
> Yes, I agree with you that the smp_wmb should be moved before the
> ACCESS_ONCE store to prevent the critical section from bleeding out
> past the unlock. Copying Waiman, who is the original author of the MCS
> code, to see if he has any comments on things we may have missed.
>
> Tim

As a more general lock/unlock mechanism, I also agree that we should
move smp_wmb() before the ACCESS_ONCE() store. For the mutex case, the
MCS lock is used as a queuing mechanism rather than to guard a critical
section, so the current ordering doesn't really matter there.

Regards,
Longman
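
For readers unfamiliar with the API above, a hypothetical caller might
look like the sketch below (example_lock and do_serialized_work are
made-up names, not part of the patch). Each contender spins on its own
stack-allocated node, which is what avoids the cacheline contention of
everyone spinning on a single lock word:

        static struct mcs_spin_node *example_lock;      /* NULL when unlocked */

        static void do_serialized_work(void)
        {
                struct mcs_spin_node node;      /* per-caller queue node */

                mcs_spin_lock(&example_lock, &node);
                /* critical section: one CPU at a time runs here */
                mcs_spin_unlock(&example_lock, &node);
        }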

