Subject: Re: [RFC][PATCH 18/31] locking,powerpc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}{,_relaxed,_acquire,_release}()
On Sat, Apr 23, 2016 at 12:41:57AM +0800, Boqun Feng wrote:
> > +#define ATOMIC_FETCH_OP_RELAXED(op, asm_op) \
> > +static inline int atomic_fetch_##op##_relaxed(int a, atomic_t *v) \
> > +{ \
> > + int res, t; \
> > + \
> > + __asm__ __volatile__( \
> > +"1: lwarx %0,0,%4 # atomic_fetch_" #op "_relaxed\n" \
> > + #asm_op " %1,%2,%0\n" \
>
> Should be
>
> #asm_op " %1,%3,%0\n"
>
> right? Because %2 is v->counter and %3 is @a.

Indeed, thanks!
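For reference, a minimal sketch of how the corrected macro could look with
%3 (@a) as the source operand of the arithmetic op. The stwcx./retry tail
and the operand constraint list are filled in here by analogy with the
existing powerpc ATOMIC_OP pattern and are an assumption, not part of the
hunk quoted above:

#define ATOMIC_FETCH_OP_RELAXED(op, asm_op)				\
static inline int atomic_fetch_##op##_relaxed(int a, atomic_t *v)	\
{									\
	int res, t;							\
									\
	__asm__ __volatile__(						\
"1:	lwarx	%0,0,%4		# atomic_fetch_" #op "_relaxed\n"	\
	/* %0 = old value, %3 = @a; %2 is the memory operand */	\
	#asm_op " %1,%3,%0\n"						\
"	stwcx.	%1,0,%4\n"						\
"	bne-	1b\n"							\
	: "=&r" (res), "=&r" (t), "+m" (v->counter)			\
	: "r" (a), "r" (&v->counter)					\
	: "cc");							\
									\
	return res;							\
}

With the "+m" (v->counter) output counted, the inputs number from %3, so
%3 is @a and %4 is &v->counter, which is what the lwarx/stwcx. pair uses
as the reservation address; the function returns the old value in res.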
