Subject: Re: [PATCH v2] introduce atomic_pointer to fix a race condition in cancelable mcs spinlocks
On Mon, Jun 02, 2014 at 09:50:10AM -0700, Jason Low wrote:
> On Mon, 2014-06-02 at 12:00 -0400, Mikulas Patocka wrote:
> > If you write to some variable with ACCESS_ONCE and use cmpxchg or xchg at
> > the same time, you break it. ACCESS_ONCE doesn't take the hashed spinlock,
> > so, in this case, cmpxchg or xchg isn't really atomic at all.
>
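[ For context: on architectures without a native compare-and-swap, the
  kernel emulates cmpxchg()/xchg() by taking a spinlock picked from a hash
  of the target address. A rough, purely illustrative sketch of that scheme
  follows -- hash_lock_for() is a made-up helper, not a real kernel API: ]

static unsigned long sketch_cmpxchg(volatile unsigned long *ptr,
				    unsigned long old, unsigned long new)
{
	unsigned long flags, prev;
	raw_spinlock_t *lock = hash_lock_for(ptr);	/* hypothetical hash lookup */

	raw_spin_lock_irqsave(lock, flags);
	prev = *ptr;
	if (prev == old)
		*ptr = new;
	raw_spin_unlock_irqrestore(lock, flags);

	/*
	 * A plain ACCESS_ONCE() store to *ptr never takes this hashed lock,
	 * so it can slip in between the read and the write above and be
	 * lost -- the race described above.
	 */
	return prev;
}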
> So if the problem is using ACCESS_ONCE writes with cmpxchg and xchg at
> the same time, would the below change address this problem?

And one could use cmpxchg() or atomic_add_return(..., 0) to read a value
out. Probably at the cost of some performance impact, though.

Thanx, Paul
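[ Purely as an illustration of that read-side idea (not from the original
  exchange): a cmpxchg() with old == new leaves the pointer unchanged but
  returns the current value through the same -- possibly hashed-spinlock --
  path the writers use. The helper name osq_read_next() is invented for
  this sketch: ]

static inline struct optimistic_spin_queue *
osq_read_next(struct optimistic_spin_queue *node)
{
	/* Returns the value cmpxchg() observed in node->next. */
	return cmpxchg(&node->next, NULL, NULL);
}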

> -----
> diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
> index 838dc9e..8396721 100644
> --- a/kernel/locking/mcs_spinlock.c
> +++ b/kernel/locking/mcs_spinlock.c
> @@ -71,7 +71,7 @@ bool osq_lock(struct optimistic_spin_queue **lock)
>  	if (likely(prev == NULL))
>  		return true;
> 
> -	ACCESS_ONCE(prev->next) = node;
> +	xchg(&prev->next, node);
> 
>  	/*
>  	 * Normally @prev is untouchable after the above store; because at that
> @@ -144,7 +144,7 @@ unqueue:
>  	 */
> 
>  	ACCESS_ONCE(next->prev) = prev;
> -	ACCESS_ONCE(prev->next) = next;
> +	xchg(&prev->next, next);
> 
>  	return false;
>  }
>
>


