    Subject: Re: [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure

    On 16/08/2022 23.04, Will Deacon wrote:
    >> diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
    >> index 093cdaefdb37..d8b101c97031 100644
    >> --- a/Documentation/atomic_bitops.txt
    >> +++ b/Documentation/atomic_bitops.txt
    >> @@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
    >>   - RMW operations that have a return value are fully ordered.
    >>
    >>   - RMW operations that are conditional are unordered on FAILURE,
    >> -   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
    >> +   otherwise the above rules apply. In the case of test_and_set_bit_lock(),
    >>     if the bit in memory is unchanged by the operation then it is deemed to have
    >>     failed.
    >
    > The next sentence is:
    >
    > | Except for a successful test_and_set_bit_lock() which has ACQUIRE
    > | semantics and clear_bit_unlock() which has RELEASE semantics.
    >
    > so I think it reads a bit strangely now. How about something like:
    >
    >
    > diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
    > index 093cdaefdb37..3b516729ec81 100644
    > --- a/Documentation/atomic_bitops.txt
    > +++ b/Documentation/atomic_bitops.txt
    > @@ -59,12 +59,15 @@ Like with atomic_t, the rule of thumb is:
    >   - RMW operations that have a return value are fully ordered.
    >
    >   - RMW operations that are conditional are unordered on FAILURE,
    > -   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
    > -   if the bit in memory is unchanged by the operation then it is deemed to have
    > -   failed.
    > +   otherwise the above rules apply. For the purposes of ordering, the
    > +   test_and_{}_bit() operations are treated as unconditional.
    >
    > -Except for a successful test_and_set_bit_lock() which has ACQUIRE semantics and
    > -clear_bit_unlock() which has RELEASE semantics.
    > +Except for:
    > +
    > + - test_and_set_bit_lock() which has ACQUIRE semantics on success and is
    > +   unordered on failure;
    > +
    > + - clear_bit_unlock() which has RELEASE semantics.
    >
    >  Since a platform only has a single means of achieving atomic operations
    >  the same barriers as for atomic_t are used, see atomic_t.txt.

    Makes sense! I'll send a v2 with that in a couple of days if nothing
    else comes up.
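
    As an aside for anyone skimming: the lock-style pairing that wording
    describes is the usual bit-lock pattern. A rough sketch, with made-up
    names (the flag word and helpers are purely illustrative, not code from
    the kernel), of why the failure case there genuinely needs no ordering:

    	static unsigned long flags;		/* made-up flag word */

    	static void my_bit_lock(void)
    	{
    		/* ACQUIRE matters only on success; a failed attempt just
    		 * spins and retries, so it needs no ordering. */
    		while (test_and_set_bit_lock(0, &flags))
    			cpu_relax();
    	}

    	static void my_bit_unlock(void)
    	{
    		/* RELEASE: everything done while holding the lock is
    		 * visible before the bit is seen clear. */
    		clear_bit_unlock(0, &flags);
    	}

    A failed test_and_set_bit_lock() decides nothing, which is why it can
    stay unordered, unlike a failed test_and_set_bit() in a producer/consumer
    pattern.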

    >> diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
    >> index 3096f086b5a3..71ab4ba9c25d 100644
    >> --- a/include/asm-generic/bitops/atomic.h
    >> +++ b/include/asm-generic/bitops/atomic.h
    >> @@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
    >>  	unsigned long mask = BIT_MASK(nr);
    >>
    >>  	p += BIT_WORD(nr);
    >> -	if (READ_ONCE(*p) & mask)
    >> -		return 1;
    >> -
    >>  	old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
    >>  	return !!(old & mask);
    >>  }
    >> @@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
    >>  	unsigned long mask = BIT_MASK(nr);
    >>
    >>  	p += BIT_WORD(nr);
    >> -	if (!(READ_ONCE(*p) & mask))
    >> -		return 0;
    >> -
    >>  	old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
    >>  	return !!(old & mask);
    >
    > I suppose one sad thing about this is that, on arm64, we could reasonably
    > keep the READ_ONCE() path with a DMB LD (R->RW) barrier before the return,
    > but I don't think we can express that in the Linux memory model, so we
    > end up in RmW territory every time.

    You'd need a barrier *before* the READ_ONCE(), since what we're trying
    to prevent is a consumer writing to the data without being able to
    observe the writes that happened prior, while this side only read the
    old bit value. A barrier after the READ_ONCE() doesn't help, as that
    read is the last memory operation this thread performs in the
    problematic sequence.
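
    To make that concrete, here is a rough sketch of the producer/consumer
    pattern that relies on a *failed* test_and_set_bit() still being ordered
    (illustration only; data, pending, kick_consumer() and process() are
    made-up names, not kernel code):

    	extern void kick_consumer(void);	/* hypothetical wakeup    */
    	extern void process(int val);		/* hypothetical handler   */

    	static unsigned long pending;
    	static int data;

    	static void producer(void)
    	{
    		WRITE_ONCE(data, 1);			/* publish the payload      */
    		if (test_and_set_bit(0, &pending))	/* already pending: rely on */
    			return;				/* the consumer seeing data */
    		kick_consumer();
    	}

    	static void consumer(void)
    	{
    		if (test_and_clear_bit(0, &pending))
    			process(READ_ONCE(data));	/* must observe data == 1   */
    	}

    If the failed test_and_set_bit() degrades to a bare READ_ONCE() with no
    barrier before it, the store to data is not ordered before the read of
    the bit, so the consumer can clear the bit and still process stale data.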

    At that point, I'm not sure DMB LD / early read / LSE atomic would be
    any faster than just always doing the LSE atomic?

    - Hector
