    Subject: [PATCH 4.9 124/329] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
    4.9-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Andrea Parri <parri.andrea@gmail.com>

    [ Upstream commit cb13b424e986aed68d74cbaec3449ea23c50e167 ]

    Continuing the fight against smp_read_barrier_depends() [1] (or
    rather, against its improper use), add an unconditional memory
    barrier to cmpxchg(). This guarantees that dependency ordering is
    preserved when a dependency is headed by an unsuccessful cmpxchg();
    a sketch of this pattern follows the links below. As it turns out,
    the change could enable further simplification of the LKMM, as
    proposed in [2].

    [1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2

    [2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
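
    For illustration, here is a minimal sketch (not part of the patch)
    of the kind of pattern at stake: an address dependency headed by a
    cmpxchg() that fails. The node/head names and the surrounding
    function are made up for this example; cmpxchg() and READ_ONCE()
    are the stock kernel primitives (from <linux/atomic.h> and
    <linux/compiler.h>).

	/* Hypothetical example, not from the patch. */
	struct node {
		int data;
	};

	struct node *head;

	int read_via_cmpxchg(struct node *expected, struct node *repl)
	{
		/*
		 * On failure, cmpxchg() returns the value it found in
		 * head rather than 'expected'...
		 */
		struct node *p = cmpxchg(&head, expected, repl);

		/*
		 * ...and this address-dependent load must still observe
		 * the initialization of *p.  On Alpha, that requires the
		 * failing cmpxchg() to execute the memory barrier too,
		 * which is what this patch guarantees.
		 */
		return READ_ONCE(p->data);
	}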

    Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Alan Stern <stern@rowland.harvard.edu>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-alpha@vger.kernel.org
    Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    arch/alpha/include/asm/xchg.h | 15 +++++++--------
    1 file changed, 7 insertions(+), 8 deletions(-)

    --- a/arch/alpha/include/asm/xchg.h
    +++ b/arch/alpha/include/asm/xchg.h
    @@ -127,10 +127,9 @@ ____xchg(, volatile void *ptr, unsigned
      * store NEW in MEM.  Return the initial value in MEM.  Success is
      * indicated by comparing RETURN with OLD.
      *
    - * The memory barrier should be placed in SMP only when we actually
    - * make the change. If we don't change anything (so if the returned
    - * prev is equal to old) then we aren't acquiring anything new and
    - * we don't need any memory barrier as far I can tell.
    + * The memory barrier is placed in SMP unconditionally, in order to
    + * guarantee that dependency ordering is preserved when a dependency
    + * is headed by an unsuccessful operation.
      */

     static inline unsigned long
    @@ -149,8 +148,8 @@ ____cmpxchg(_u8, volatile char *m, unsig
     	" or %1,%2,%2\n"
     	" stq_c %2,0(%4)\n"
     	" beq %2,3f\n"
    -		__ASM__MB
     	"2:\n"
    +		__ASM__MB
     	".subsection 2\n"
     	"3: br 1b\n"
     	".previous"
    @@ -176,8 +175,8 @@ ____cmpxchg(_u16, volatile short *m, uns
     	" or %1,%2,%2\n"
     	" stq_c %2,0(%4)\n"
     	" beq %2,3f\n"
    -		__ASM__MB
     	"2:\n"
    +		__ASM__MB
     	".subsection 2\n"
     	"3: br 1b\n"
     	".previous"
    @@ -199,8 +198,8 @@ ____cmpxchg(_u32, volatile int *m, int o
     	" mov %4,%1\n"
     	" stl_c %1,%2\n"
     	" beq %1,3f\n"
    -		__ASM__MB
     	"2:\n"
    +		__ASM__MB
     	".subsection 2\n"
     	"3: br 1b\n"
     	".previous"
    @@ -222,8 +221,8 @@ ____cmpxchg(_u64, volatile long *m, unsi
     	" mov %4,%1\n"
     	" stq_c %1,%2\n"
     	" beq %1,3f\n"
    -		__ASM__MB
     	"2:\n"
    +		__ASM__MB
     	".subsection 2\n"
     	"3: br 1b\n"
     	".previous"
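
    To see why the move matters: the compare-failure branch (the earlier
    "beq ...,2f", just above the hunk context) jumps straight to the
    "2:" label, so with __ASM__MB placed before that label the failure
    path bypassed the barrier entirely. A C-level sketch of the control
    flow after this patch, where load_locked() and store_conditional()
    are hypothetical stand-ins for the ldl_l/stl_c pair (not kernel
    APIs):

	/* Illustrative control-flow sketch, not the real implementation. */
	static int load_locked(volatile int *m);		/* ldl_l */
	static int store_conditional(volatile int *m, int val);	/* stl_c; nonzero on success */

	static inline unsigned long cmpxchg_sketch(volatile int *m, int old, int new)
	{
		int prev, tmp;

		for (;;) {
			prev = load_locked(m);		/* "1:" ldl_l */
			if (prev != old)
				break;			/* beq ...,2f: this path used to skip the barrier */
			tmp = new;
			if (store_conditional(m, tmp))	/* stl_c */
				break;			/* success: fall through to "2:" */
			/* stl_c failed: "3: br 1b", retry */
		}
		smp_mb();	/* __ASM__MB now sits after "2:", so it runs on
				 * both the success and the failure path */
		return prev;
	}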
