Subject: Re: [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next
On Wed, Jan 31, 2018 at 12:20:46PM +0000, Will Deacon wrote:
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 294294c71ba4..1ebbc366a31d 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -408,16 +408,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> */
> if (old & _Q_TAIL_MASK) {
> prev = decode_tail(old);
> +
> /*
> - * The above xchg_tail() is also a load of @lock which generates,
> - * through decode_tail(), a pointer.
> - *
> - * The address dependency matches the RELEASE of xchg_tail()
> - * such that the access to @prev must happen after.
> + * We must ensure that the stores to @node are observed before
> + * the write to prev->next. The address dependency on xchg_tail
> + * is not sufficient to ensure this because the read component
> + * of xchg_tail is unordered with respect to the initialisation
> + * of node.
> */
> - smp_read_barrier_depends();

Right, except you're patching old code here; please try again on a tree
that includes commit:

548095dea63f ("locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()")

> -
> - WRITE_ONCE(prev->next, node);
> + smp_store_release(&prev->next, node);
>
> pv_wait_node(node, prev);
> arch_mcs_spin_lock_contended(&node->locked);
> --
> 2.1.4
>
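
To illustrate the ordering problem under discussion, here is a minimal
sketch assuming a simplified MCS-style queue; mcs_node, mcs_enqueue and
the bare tail pointer are illustrative stand-ins, not the kernel's
actual qspinlock code:

struct mcs_node {
	struct mcs_node *next;
	int locked;
};

static void mcs_enqueue(struct mcs_node **tail, struct mcs_node *node)
{
	struct mcs_node *prev;

	/* Initialise @node before publishing it. */
	node->next = NULL;
	node->locked = 0;

	/*
	 * The RELEASE orders the initialisation above before the store
	 * component of the swap, but the read component (the old tail
	 * returned below) is unordered with respect to it.
	 */
	prev = xchg_release(tail, node);

	if (prev) {
		/*
		 * The address dependency on @prev orders this store
		 * after the read above, but not after the initialisation
		 * of @node; hence the release here.
		 */
		smp_store_release(&prev->next, node);
	}
}

The release on the publishing store is what guarantees that a waiter
observing prev->next != NULL also observes a fully-initialised node.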
