    Date: 2017-11-20
    From: Will Deacon
    Subject: Re: [PATCH 11/31] nds32: Atomic operations

    Hi Greentime,

    On Wed, Nov 08, 2017 at 01:54:59PM +0800, Greentime Hu wrote:
    > From: Greentime Hu <greentime@andestech.com>
    >
    > Signed-off-by: Vincent Chen <vincentc@andestech.com>
    > Signed-off-by: Greentime Hu <greentime@andestech.com>
    > ---
    > arch/nds32/include/asm/futex.h    | 116 ++++++++++++++++++++++++
    > arch/nds32/include/asm/spinlock.h | 178 +++++++++++++++++++++++++++++++++++++
    > 2 files changed, 294 insertions(+)
    > create mode 100644 arch/nds32/include/asm/futex.h
    > create mode 100644 arch/nds32/include/asm/spinlock.h

    [...]

    > +static inline int
    > +futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr,
    > +                              u32 oldval, u32 newval)
    > +{
    > +        int ret = 0;
    > +        u32 val, tmp, flags;
    > +
    > +        if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
    > +                return -EFAULT;
    > +
    > +        smp_mb();
    > +        asm volatile ("       movi    $ta, #0\n"
    > +                      "1:     llw     %1, [%6 + $ta]\n"
    > +                      "       sub     %3, %1, %4\n"
    > +                      "       cmovz   %2, %5, %3\n"
    > +                      "       cmovn   %2, %1, %3\n"
    > +                      "2:     scw     %2, [%6 + $ta]\n"
    > +                      "       beqz    %2, 1b\n"
    > +                      "3:\n " __futex_atomic_ex_table("%7")
    > +                      :"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags)
    > +                      :"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT)
    > +                      :"$ta", "memory");
    > +        smp_mb();
    > +
    > +        *uval = val;
    > +        return ret;
    > +}

    I see you rely on asm-generic/barrier.h for your barrier definitions, which
    suggests that you only need to prevent reordering by the compiler because
    you're not SMP. Is that right? If so, using smp_mb() is a little weird.
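
    For reference, with no arch overrides and !CONFIG_SMP, the
    asm-generic fallbacks boil down to something like the sketch below
    (paraphrased; the exact #ifdef layout in
    include/asm-generic/barrier.h differs slightly):

        /* Paraphrased sketch of the asm-generic fallbacks. */
        #ifndef mb
        #define mb()            barrier()       /* no arch instruction */
        #endif

        #ifdef CONFIG_SMP
        #define smp_mb()        mb()
        #else
        #define smp_mb()        barrier()       /* compiler-only on UP */
        #endif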

    What about DMA transactions? I imagine you might need some extra
    instructions for the mandatory barriers there.
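
    That is, I'd expect an arch override along these lines -- note that
    the "msync all" mnemonic is my assumption about the nds32 ISA, not
    something I've checked:

        /*
         * Hypothetical asm/barrier.h override: make the mandatory
         * barriers emit a real ordering instruction (strong enough for
         * DMA), while the smp_*() variants can stay compiler-only on a
         * UP-only architecture. "msync all" is an assumed mnemonic.
         */
        #define mb()    asm volatile ("msync all" ::: "memory")
        #define rmb()   mb()
        #define wmb()   mb()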

    Also:

    > +static inline void arch_spin_lock(arch_spinlock_t * lock)
    > +{
    > +        unsigned long tmp;
    > +
    > +        __asm__ __volatile__("1:\n"
    > +                             "\tllw\t%0, [%1]\n"
    > +                             "\tbnez\t%0, 1b\n"
    > +                             "\tmovi\t%0, #0x1\n"
    > +                             "\tscw\t%0, [%1]\n"
    > +                             "\tbeqz\t%0, 1b\n"
    > +                             :"=&r"(tmp)
    > +                             :"r"(&lock->lock)
    > +                             :"memory");
    > +}

    Here it looks like you're eliding an explicit barrier because you
    already have a "memory" clobber. Can't you do the same for the futex
    code above?
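
    To illustrate what I mean -- this isn't nds32 code, just a minimal
    example (the example() function is purely for demonstration) showing
    that an asm block with a "memory" clobber is already a full compiler
    barrier, which is all a UP-only architecture needs here:

        /* An empty asm with a "memory" clobber is exactly barrier(). */
        #define compiler_barrier()      asm volatile ("" ::: "memory")

        static inline void example(int *data, int *flag)
        {
                *data = 42;
                compiler_barrier();     /* compiler cannot reorder... */
                *flag = 1;              /* ...the flag before the data */
        }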

    Will
