Subject: Re: [PATCH] atomic: improve atomic_inc_unless_negative/atomic_dec_unless_positive
On Tue, Mar 12, 2013 at 10:15:42AM +0800, Ming Lei wrote:
> On Tue, Mar 12, 2013 at 7:59 AM, Frederic Weisbecker <fweisbec@gmail.com> wrote:
> > 2013/3/9 Ming Lei <tom.leiming@gmail.com>:
> >> Generally, both atomic_inc_unless_negative() and
> >> atomic_dec_unless_positive() need at least two atomic_cmpxchg()
> >> calls to complete the atomic operation. In fact, most of the time
> >> the 1st atomic_cmpxchg() is just used to read the current value of
> >> the atomic variable.
> >>
> >> Considering that memory barriers, bus locking, cache walking, etc.
> >> may be involved in atomic_cmpxchg(), it is much more expensive than
> >> atomic_read(), which is simply:
> >>
> >> (*(volatile int *)&(v)->counter)
> >>
> >> so this patch saves one extra atomic_cmpxchg() for the two helpers
> >> in the common case, and should improve them a bit.
> >>
> >> Cc: Andrew Morton <akpm@linux-foundation.org>
> >> Cc: Shaohua Li <shli@kernel.org>
> >> Cc: Al Viro <viro@zeniv.linux.org.uk>
> >> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> >> ---
> >> include/linux/atomic.h | 28 ++++++++++++++++++----------
> >> 1 file changed, 18 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/include/linux/atomic.h b/include/linux/atomic.h
> >> index 5b08a85..aa951d8 100644
> >> --- a/include/linux/atomic.h
> >> +++ b/include/linux/atomic.h
> >> @@ -63,26 +63,34 @@ static inline int atomic_inc_not_zero_hint(atomic_t *v, int hint)
> >> #ifndef atomic_inc_unless_negative
> >> static inline int atomic_inc_unless_negative(atomic_t *p)
> >> {
> >> -	int v, v1;
> >> -	for (v = 0; v >= 0; v = v1) {
> >> -		v1 = atomic_cmpxchg(p, v, v + 1);
> >> -		if (likely(v1 == v))
> >> +	int v, t;
> >> +
> >> +	v = atomic_read(p);
> >> +	while (1) {
> >> +		if (unlikely(v < 0))
> >> +			return 0;
> >
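(The quoted hunk is cut off above. For readability, here is a sketch of
how the rewritten helper plausibly continues, reconstructed from the
visible lines -- the retry loop after the negative check is inferred,
not the verbatim patch:)

	#ifndef atomic_inc_unless_negative
	static inline int atomic_inc_unless_negative(atomic_t *p)
	{
		int v, t;

		/* Cheap plain read first: no LOCK prefix, no barrier. */
		v = atomic_read(p);
		while (1) {
			if (unlikely(v < 0))
				return 0;
			/* Attempt v -> v + 1; acts as a full barrier. */
			t = atomic_cmpxchg(p, v, v + 1);
			if (likely(t == v))
				return 1;
			/* Lost a race; cmpxchg returned the fresh value. */
			v = t;
		}
	}
	#endif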
> > But atomic_read() lacks the full memory barrier that is needed for
> > proper atomicity here.
> >
> > For example if the initial value of p is -1 and another CPU just did
> > an atomic_inc() that resulted in the new value to be 0, the above
> > atomic_read() might return -1 because there is no guarantee it's
> > seeing the recent update on the remote CPU.
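(To make the concern concrete, here is a sketch of the problematic
interleaving, using the names from the quoted patch; the timeline is in
the comments:)

	/* Initially atomic_set(p, -1); the new helper begins with: */
	v = atomic_read(p);		/* plain load, no barrier, no LOCK */

	/*
	 * Meanwhile another CPU executes atomic_inc(p), taking p from
	 * -1 to 0. Nothing orders that update against our plain load,
	 * so we may still observe v == -1 and give up, whereas the old
	 * code's first atomic_cmpxchg() was a locked read-modify-write
	 * with full-barrier semantics.
	 */
	if (unlikely(v < 0))
		return 0;		/* possibly based on the stale -1 */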
>
> Yes, you are right. Also, it looks like a memory barrier is needed
> around atomic_inc() too.

The atomic_inc() primitive makes no guarantees about ordering (see
Documentation/atomic_ops.txt), so, yes, if the caller needs ordering
around an atomic_inc(), the caller must supply the needed memory
barriers, for example, by using smp_mb__before_atomic_inc() or
smp_mb__after_atomic_inc().
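
(As a minimal sketch of "the caller must supply the needed memory
barriers": the names data, ready_count, compute_something() and
poke_other_cpu() below are made up for illustration.)

	/* Order the stores feeding 'data' before the increment ... */
	data = compute_something();
	smp_mb__before_atomic_inc();
	atomic_inc(&ready_count);

	/* ... or order the increment before subsequent accesses. */
	atomic_inc(&ready_count);
	smp_mb__after_atomic_inc();
	poke_other_cpu();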

> But I have a question: why does a memory barrier guarantee that the
> remote CPU will see the recent update? I understand that a memory
> barrier only orders consecutive memory accesses, but I do not see
> that kind of pattern here. Sorry for a possibly stupid question.

Atomic operations that return a value are required to act as full memory
barriers. This means that code relying on the ordering provided by these
atomic operations must do its own ordering as well, either by using an
explicit memory barrier or by relying on the guarantees of value-returning
atomic operations.

For example:

    CPU 0                                   CPU 1

    X = 1;                                  r1 = Z;
    if (atomic_inc_unless_negative(&Y))     smp_mb();
            do_something();
    Z = 1;                                  r2 = X;

Assuming X and Z are initially zero, if r1==1, we are guaranteed
that r2==1. However, CPU 1 needs its smp_mb() in order to pair with
the barrier implicit in atomic_inc_unless_negative().

Make sense?

Thanx, Paul
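
(For readers who want to experiment, here is a rough user-space C11
analogue of the example above. The seq_cst fences stand in for the full
barriers the kernel primitives imply, and the fetch_add is only a
stand-in for atomic_inc_unless_negative(); it is an illustration, not
the kernel code:)

	#include <stdatomic.h>
	#include <pthread.h>
	#include <stdio.h>

	atomic_int X, Y, Z;	/* statics start at zero */
	int r1, r2;

	static void *cpu0(void *arg)
	{
		atomic_store_explicit(&X, 1, memory_order_relaxed);
		/* Model the full barriers implied by a value-returning
		 * kernel atomic with explicit seq_cst fences. */
		atomic_thread_fence(memory_order_seq_cst);
		if (atomic_fetch_add_explicit(&Y, 1, memory_order_relaxed) >= 0)
			;	/* do_something() */
		atomic_thread_fence(memory_order_seq_cst);
		atomic_store_explicit(&Z, 1, memory_order_relaxed);
		return NULL;
	}

	static void *cpu1(void *arg)
	{
		r1 = atomic_load_explicit(&Z, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);	/* pairs as the smp_mb() */
		r2 = atomic_load_explicit(&X, memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		pthread_t t0, t1;

		pthread_create(&t0, NULL, cpu0, NULL);
		pthread_create(&t1, NULL, cpu1, NULL);
		pthread_join(t0, NULL);
		pthread_join(t1, NULL);

		/* The forbidden outcome is r1 == 1 && r2 == 0. */
		printf("r1=%d r2=%d\n", r1, r2);
		return 0;
	}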


