Subject: Re: spin_lock implicit/explicit memory barrier
On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
> Hi Benjamin, Hi Michael,
>
> regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to 
> arch_spin_is_locked()"):
>
> For the ipc/sem code, I would like to replace the spin_is_locked() with 
> an smp_load_acquire(), see:
>
> http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367
>
> http://www.ozlabs.org/~akpm/mmots/broken-out/ipc-semc-fix-complex_count-vs-simple-op-race.patch
>
> To my understanding, I must now add an smp_mb(), otherwise it would be 
> broken on PowerPC:
>
> The approach of adding the memory barrier into spin_is_locked() doesn't 
> work here, because the code no longer uses spin_is_locked().
>
> Correct?

Right, otherwise you aren't properly ordered. The current powerpc locks provide
good protection between what's inside vs. what's outside the lock, but not vs.
the lock *value* itself. So if, as you do in the sem code, you use the lock
value as something that is relevant in terms of ordering, you probably need
an explicit full barrier.
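
For illustration, here's a minimal sketch of the pattern (the function and
field names are stand-ins loosely following the mmots patch linked above,
not necessarily what gets merged):

static int sem_lock_fastpath(struct sem_array *sma, struct sem *sem)
{
	spin_lock(&sem->lock);

	/*
	 * spin_lock() is only an ACQUIRE: it orders the critical
	 * section itself, but the store that makes sem->lock visibly
	 * taken is not ordered against the load of complex_mode
	 * below.  Without a full barrier this is the classic
	 * store-buffering case: the simple-op CPU and the complex-op
	 * CPU can each read the old value and both proceed.
	 */
	smp_mb();

	if (!smp_load_acquire(&sma->complex_mode))
		return 0;	/* fast path: per-semaphore lock suffices */

	spin_unlock(&sem->lock);
	return -1;		/* fall back to the global lock */
}

static void complex_mode_enter(struct sem_array *sma, int nsems)
{
	int i;

	/*
	 * Pairs with the smp_mb() above: the store to complex_mode
	 * must be visible before we sample any per-semaphore lock.
	 */
	smp_store_mb(sma->complex_mode, 1);

	for (i = 0; i < nsems; i++)
		spin_unlock_wait(&sma->sems[i].lock);
}

Each CPU stores to one location and then loads the other, so an ACQUIRE
alone can't close the window; both sides need the full barrier.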

Adding Paul McKenney.

Cheers,
Ben.
