    Subject: [tip:core/locking] powerpc: Full barrier for smp_mb__after_unlock_lock()
    Commit-ID:  919fc6e34831d1c2b58bfb5ae261dc3facc9b269
    Gitweb: http://git.kernel.org/tip/919fc6e34831d1c2b58bfb5ae261dc3facc9b269
    Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    AuthorDate: Wed, 11 Dec 2013 13:59:11 -0800
    Committer: Ingo Molnar <mingo@kernel.org>
    CommitDate: Mon, 16 Dec 2013 11:36:18 +0100

    powerpc: Full barrier for smp_mb__after_unlock_lock()

    The powerpc lock acquisition sequence is as follows:

    lwarx; cmpwi; bne; stwcx.; lwsync;

    Lock release is as follows:

    lwsync; stw;
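
    In C-with-inline-asm form, the two fast paths look roughly like
    this. This is a simplified sketch, not the in-tree code: the real
    implementation in arch/powerpc/include/asm/spinlock.h dispenses a
    per-CPU lock token and uses the barrier macros from <asm/synch.h>
    rather than the literal "1" and "lwsync" shown here.

        /* Simplified sketch of the powerpc spinlock fast paths. */
        static inline void sketch_spin_lock(arch_spinlock_t *lock)
        {
        	unsigned long tmp;

        	__asm__ __volatile__(
        "1:	lwarx	%0,0,%1\n"	/* load lock word, set reservation */
        "	cmpwi	0,%0,0\n"	/* already held? */
        "	bne-	1b\n"		/* yes: keep spinning */
        "	stwcx.	%2,0,%1\n"	/* try to claim the lock */
        "	bne-	1b\n"		/* reservation lost: retry */
        "	lwsync\n"		/* acquire barrier: lwsync, not sync */
        	: "=&r" (tmp)
        	: "r" (&lock->slock), "r" (1)
        	: "cr0", "memory");
        }

        static inline void sketch_spin_unlock(arch_spinlock_t *lock)
        {
        	/* release barrier: again lwsync, not sync */
        	__asm__ __volatile__("lwsync" : : : "memory");
        	lock->slock = 0;	/* the stw */
        }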

    If CPU 0 does a store (say, x=1) then a lock release, and CPU 1
    does a lock acquisition then a load (say, r1=y), then there is
    no guarantee of a full memory barrier between the store to 'x'
    and the load from 'y'. To see this, suppose that CPUs 0 and 1
    are hardware threads in the same core that share a store buffer,
    and that CPU 2 is in some other core, and that CPU 2 does the
    following:

    y = 1; sync; r2 = x;

    If 'x' and 'y' are both initially zero, then the lock
    acquisition and release sequences above can result in r1 and r2
    both being equal to zero, which could not happen if unlock+lock
    were a full barrier. The underlying reason is that lwsync orders
    prior loads against later loads and stores, and prior stores
    against later stores, but does not order prior stores against
    later loads; only the heavyweight sync provides that last
    ordering, so CPU 0's store to 'x' can linger in the shared store
    buffer while CPU 1's load from 'y' completes.
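
    Written as a three-thread litmus test in kernel-style C (the
    function names and the lock 'mylock' are illustrative, not part
    of the patch), the scenario is:

        int x, y, r1, r2;		/* all initially zero */
        spinlock_t mylock;		/* hypothetical lock */

        void cpu0(void)			/* shares a store buffer with cpu1 */
        {
        	spin_lock(&mylock);
        	ACCESS_ONCE(x) = 1;
        	spin_unlock(&mylock);	/* lwsync; stw */
        }

        void cpu1(void)
        {
        	spin_lock(&mylock);	/* lwarx; cmpwi; bne; stwcx.; lwsync */
        	r1 = ACCESS_ONCE(y);
        	spin_unlock(&mylock);
        }

        void cpu2(void)			/* some other core */
        {
        	ACCESS_ONCE(y) = 1;
        	smp_mb();		/* sync */
        	r2 = ACCESS_ONCE(x);
        }

        /* The outcome r1 == 0 && r2 == 0 is allowed with the
         * sequences above, but would be forbidden if unlock+lock
         * were a full barrier. */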

    This commit therefore makes powerpc's
    smp_mb__after_unlock_lock() a full barrier.
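
    Callers that need unlock+lock to behave as a full barrier (RCU
    is the motivating user in this series) invoke the primitive
    immediately after the acquisition. A minimal usage sketch, with
    'mylock' hypothetical:

        spin_lock(&mylock);
        smp_mb__after_unlock_lock();	/* now a full barrier on powerpc */
        /*
         * Accesses here are fully ordered against accesses made
         * under any earlier critical section for 'mylock', on
         * any CPU.
         */
        spin_unlock(&mylock);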

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: <linux-arch@vger.kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paulmck@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    ---
    arch/powerpc/include/asm/spinlock.h | 2 ++
    1 file changed, 2 insertions(+)

    diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
    index 5f54a74..f6e78d6 100644
    --- a/arch/powerpc/include/asm/spinlock.h
    +++ b/arch/powerpc/include/asm/spinlock.h
    @@ -28,6 +28,8 @@
     #include <asm/synch.h>
     #include <asm/ppc-opcode.h>
 
    +#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
    +
     #define arch_spin_is_locked(x) ((x)->slock != 0)
 
     #ifdef CONFIG_PPC64
