From: Dmitry Vyukov <>
Date: Mon, 28 Jan 2019 09:33:58 +0100
Subject: Re: [PATCH] kcov: convert kcov.refcount to refcount_t
On Sun, Jan 27, 2019 at 7:41 PM Reshetova, Elena <elena.reshetova@intel.com> wrote:
>
> > On Mon, Jan 21, 2019 at 11:05:03AM -0500, Alan Stern wrote:
> > > On Mon, 21 Jan 2019, Peter Zijlstra wrote:
> > > >
> > > > Any additional ordering; like the one you have above; are not strictly
> > > > required for the proper functioning of the refcount. Rather, you rely on
> > > > additional ordering and will need to provide this explicitly:
> > > >
> > > >         if (refcount_dec_and_test(&x->rc)) {
> > > >                 /*
> > > >                  * Comment that explains what we order against....
> > > >                  */
> > > >                 smp_mb__after_atomic();
> > > >                 BUG_ON(!x->done*);
> > > >                 free(x);
> > > >         }
> > > >
> > > > Also; these patches explicitly mention that refcount_t is weaker,
> > > > specifically to make people aware of this difference.
> > > >
> > > > A full smp_mb() (or two) would also be much more expensive on a number
> > > > of platforms and in the vast majority of the cases it is not required.
> > >
> > > How about adding smp_rmb() into refcount_dec_and_test()? That would
> > > give acq+rel semantics, which seems to be what people will expect. And
> > > it wouldn't be nearly as expensive as a full smp_mb().
> >
> > Yes, that's a very good suggestion.
> >
> > I suppose we can add smp_acquire__after_ctrl_dep() on the true branch.
> > Then it really does become rel_acq.
> >
> > A wee something like so (I couldn't find an arm64 refcount, even though
> > I have distinct memories of talk about it).
> >
> > This isn't compiled, and obviously needs comment/documentation updates
> > to go along with it.
> >
> > ---
> >  arch/x86/include/asm/refcount.h | 9 ++++++++-
> >  lib/refcount.c                  | 7 ++++++-
> >  2 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
> > index dbaed55c1c24..6f7a1eb345b4 100644
> > --- a/arch/x86/include/asm/refcount.h
> > +++ b/arch/x86/include/asm/refcount.h
> > @@ -74,9 +74,16 @@ bool refcount_sub_and_test(unsigned int i, refcount_t *r)
> >
> >  static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
> >  {
> > -        return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
> > +        bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
> >                                          REFCOUNT_CHECK_LT_ZERO,
> >                                          r->refs.counter, e, "cx");
> > +
> > +        if (ret) {
> > +                smp_acquire__after_ctrl_dep();
> > +                return true;
> > +        }
> > +
> > +        return false;
> >  }
>
> Actually as I started to do this, any reason why the change here only for dec_and_test and not
> for sub_and_test also? Should not the arch-specific logic follow the generic?
I would say these should be exactly the same wrt semantics: dec_and_test is just syntactic sugar for a decrement by 1. If we change dec_and_test, we should change sub_and_test the same way.
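[Editorial note: as an illustration of that point, here is a rough, uncompiled sketch of how the generic lib/refcount.c side could carry the acquire ordering for both helpers. The function bodies are assumed from the generic implementation of that era (saturation at UINT_MAX, a release cmpxchg loop) and are not the exact patch that was merged. The point is only that refcount_dec_and_test() forwards to refcount_sub_and_test(1, r), so placing smp_acquire__after_ctrl_dep() on the true branch of sub_and_test gives both helpers the same rel_acq semantics, and an arch override like the x86 one quoted above would need the same treatment in both entry points.]

  #include <linux/refcount.h>
  #include <linux/atomic.h>       /* atomic_try_cmpxchg_release(), barriers */
  #include <linux/bug.h>          /* WARN_ONCE() */

  /* Sketch of the out-of-line generic helpers; mirrors the era's style. */
  bool refcount_sub_and_test(unsigned int i, refcount_t *r)
  {
          unsigned int new, val = atomic_read(&r->refs);

          do {
                  /* A saturated refcount stays pinned at UINT_MAX. */
                  if (unlikely(val == UINT_MAX))
                          return false;

                  new = val - i;
                  if (new > val) {
                          WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n");
                          return false;
                  }

          } while (!atomic_try_cmpxchg_release(&r->refs, &val, new));

          if (!new) {
                  /*
                   * Pair the release cmpxchg above with an acquire on the
                   * branch that ends up freeing the object, so the final
                   * put observes all prior stores to it (the rel_acq
                   * semantics discussed above).
                   */
                  smp_acquire__after_ctrl_dep();
                  return true;
          }
          return false;
  }

  bool refcount_dec_and_test(refcount_t *r)
  {
          /* Just sugar for a decrement by 1; inherits the same ordering. */
          return refcount_sub_and_test(1, r);
  }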