    Subject: RE: [PATCH] udev enhancements to use kernel event queue

    Paul Mackerras is said to have opined:

    > Patrick Mochel writes:

    > > +static inline int atomic_inc_and_read(atomic_t *v)
    > > +{
    > > +	__asm__ __volatile__(
    > > +		LOCK "incl %0"
    > > +		:"=m" (v->counter)
    > > +		:"m" (v->counter));
    > > +	return v->counter;
    > > +}

    > BZZZT. If another CPU is also doing atomic_inc_and_read you could end
    > up with both calls returning the same value.
    > You can't do atomic_inc_and_read on 386. You can on cpus that have
    > cmpxchg (e.g. later x86). You can also on machines with load-locked
    > and store-conditional instructions (alpha, ppc, probably most other
    > RISCs).
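
    For illustration, here is a minimal sketch (mine, not from the thread or
    the patch) of the cmpxchg approach Paul describes.  The atomic_t typedef
    matches the kernel's; GCC's __sync_val_compare_and_swap builtin stands in
    for the kernel's cmpxchg() macro so the example is self-contained:

    typedef struct { volatile int counter; } atomic_t;

    /* Sketch only: retry until our compare-and-swap wins the race. */
    static inline int atomic_inc_and_read_cmpxchg(atomic_t *v)
    {
    	int old;

    	do {
    		old = v->counter;	/* snapshot the current value */
    	} while (__sync_val_compare_and_swap(&v->counter, old, old + 1) != old);

    	return old + 1;			/* exactly the value we stored */
    }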

    You can also do it with a conditional move instruction, but it's kind of
    ugly. No help on a '386 though.

    There are ways to do it that work on a 386, but they are all basically
    equivalent to (or worse than) acquiring a spinlock, doing the deed, and then
    releasing it.
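
    Sketched in C (again my illustration, using the 2.4/2.5-era spinlock
    initializer; the lock name is hypothetical), that 386-safe fallback
    looks like this:

    static spinlock_t inc_lock = SPIN_LOCK_UNLOCKED;	/* hypothetical lock */

    static inline int atomic_inc_and_read_locked(atomic_t *v)
    {
    	unsigned long flags;
    	int ret;

    	spin_lock_irqsave(&inc_lock, flags);
    	ret = ++v->counter;	/* inc and read are one critical section */
    	spin_unlock_irqrestore(&inc_lock, flags);
    	return ret;
    }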

    You could also do (in pseudo-code):

    top:
    	ret <- v->counter
    	inc ret
    	LOCK incl v->counter
    	cmp v->counter, ret
    	jz end
    	LOCK decl v->counter
    	jmp top
    end:
    	return ret

    This does not strictly guarantee in-order return values, but ordering is
    meaningless without a lock anyway.
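
    Rendered as C (my sketch, with the GCC __sync builtins standing in for
    LOCK incl/decl), the retry loop reads:

    /* Sketch of the pseudo-code above.  The equality test is a plain,
     * unlocked read, which is exactly why in-order results can't be
     * guaranteed: two racing callers may both back off and try again. */
    static inline int atomic_inc_and_read_retry(atomic_t *v)
    {
    	int ret;

    	for (;;) {
    		ret = v->counter + 1;			/* the value we hope to claim */
    		__sync_fetch_and_add(&v->counter, 1);	/* LOCK incl v->counter */
    		if (v->counter == ret)			/* nobody raced us */
    			return ret;
    		__sync_fetch_and_sub(&v->counter, 1);	/* undo and retry */
    	}
    }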

