Date:    Tue, 19 Mar 2019 10:46:57 -0700
From:    Jakub Kicinski <>
Subject: Re: [PATCH] locking/static_key: Fix false positive warnings on concurrent dec/inc
Thanks for looking at the patch!
On Tue, 19 Mar 2019 13:18:56 +0100, Peter Zijlstra wrote:
> On Mon, Mar 18, 2019 at 02:58:14PM -0700, Jakub Kicinski wrote:
> > Even though the atomic_dec_and_mutex_lock() in
> > __static_key_slow_dec_cpuslocked() can never see a negative
> > value in key->enabled the subsequent sanity check is re-reading
> > key->enabled, which may have been set to -1 in the meantime by
> > static_key_slow_inc_cpuslocked().
> 
> A little extra detail might not hurt, or a diagram or something.
Like this:
CPU A                                     CPU B

__static_key_slow_dec_cpuslocked():       static_key_slow_inc_cpuslocked():
                      # enabled = 1
atomic_dec_and_mutex_lock()
                      # enabled = 0
                                          atomic_read() == 0
                                          atomic_set(-1)
                      # enabled = -1
val = atomic_read()
# Oops - val == -1!

?
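In case the diagram alone doesn't convince, here is a crude userspace model of the pattern (illustrative names only, not kernel code; the -1 store is forced so the window is easy to see). The point is simply that the WARN() does a second, independent read of the counter, and another thread's transient marker can land in between:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int enabled = 2;	/* two users of the key */

static void *dec_side(void *arg)
{
	(void)arg;
	/* a decrement that does not reach zero (2 -> 1), mirroring
	 * the atomic_dec_and_mutex_lock() "failure" return */
	atomic_fetch_sub(&enabled, 1);

	/* window: inc_side may store -1 before the check below runs */

	/* the buggy sanity check: a second, independent read */
	int v = atomic_load(&enabled);
	if (v < 0)
		printf("false positive: negative count (%d)\n", v);
	return NULL;
}

static void *inc_side(void *arg)
{
	(void)arg;
	/* stand-in for the inc path marking "first enable in progress" */
	atomic_store(&enabled, -1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, dec_side, NULL);
	pthread_create(&b, NULL, inc_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}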
The test case is TCP's clean_acked_data_enable() / clean_acked_data_disable() as tickled by ktls (net/tls). It should probably use the delayed version in the first place; hopefully I can get to adding a delayed version of static branches and converting it at some point.
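For reference, a conversion to the existing rate-limited API (include/linux/jump_label_ratelimit.h) could look roughly like the sketch below; the key name, init hook and one-second timeout are made up for illustration, and the real clean_acked_data_*() signatures are simplified away:

#include <linux/init.h>
#include <linux/jump_label_ratelimit.h>

static struct static_key_deferred clean_acked_data_key;

static int __init clean_acked_data_init(void)
{
	/* coalesce rapid toggles: defer the disable side by a second */
	jump_label_rate_limit(&clean_acked_data_key, HZ);
	return 0;
}
late_initcall(clean_acked_data_init);

void clean_acked_data_enable(void)
{
	static_key_slow_inc(&clean_acked_data_key.key);
}

void clean_acked_data_disable(void)
{
	/* schedules delayed work instead of patching code immediately */
	static_key_slow_dec_deferred(&clean_acked_data_key);
}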
> > Instead of using -1 as a "enable in progress" constant use
> > -0xffff, this way we can still treat smaller negative values
> > as errors.
> 
> Those offset games always hurt my brain, but see below.
> 
> > Fixes: 4c5ea0a9cd02 ("locking/static_key: Fix concurrent static_key_slow_inc()")
> > Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
> > ---
> >  kernel/jump_label.c | 21 ++++++++++-----------
> >  1 file changed, 10 insertions(+), 11 deletions(-)
> > 
> > diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> > index bad96b476eb6..4a227e70a8f3 100644
> > --- a/kernel/jump_label.c
> > +++ b/kernel/jump_label.c
> > @@ -89,7 +89,7 @@ static void jump_label_update(struct static_key *key);
> >  int static_key_count(struct static_key *key)
> >  {
> >  	/*
> > -	 * -1 means the first static_key_slow_inc() is in progress.
> > +	 * -0xffff means the first static_key_slow_inc() is in progress.
> >  	 * static_key_enabled() must return true, so return 1 here.
> >  	 */
> >  	int n = atomic_read(&key->enabled);
> > @@ -125,7 +125,10 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
> >  
> >  	jump_label_lock();
> >  	if (atomic_read(&key->enabled) == 0) {
> > -		atomic_set(&key->enabled, -1);
> > +		/* Use a large enough negative number so we can still
> > +		 * catch underflow bugs in static_key_slow_dec().
> > +		 */
> 
> Broken comment style.
Ah, sorry, netdev.
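(Context for non-netdev folks: per Documentation/process/coding-style.rst, net/ and drivers/net/ put text on the opening line of a long comment, unlike the rest of the kernel:)

/* The preferred netdev style for long comments starts the text
 * on the first line, like this.
 */

/*
 * The style everywhere else leaves the opening line empty,
 * like this.
 */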
> > +		atomic_set(&key->enabled, -0xffff);
> >  		jump_label_update(key);
> >  		/*
> >  		 * Ensure that if the above cmpxchg loop observes our positive
> > @@ -158,7 +161,7 @@ void static_key_enable_cpuslocked(struct static_key *key)
> >  
> >  	jump_label_lock();
> >  	if (atomic_read(&key->enabled) == 0) {
> > -		atomic_set(&key->enabled, -1);
> > +		atomic_set(&key->enabled, -0xffff);
> >  		jump_label_update(key);
> >  		/*
> >  		 * See static_key_slow_inc().
> > @@ -208,15 +211,11 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
> >  {
> >  	lockdep_assert_cpus_held();
> >  
> > -	/*
> > -	 * The negative count check is valid even when a negative
> > -	 * key->enabled is in use by static_key_slow_inc(); a
> > -	 * __static_key_slow_dec() before the first static_key_slow_inc()
> > -	 * returns is unbalanced, because all other static_key_slow_inc()
> > -	 * instances block while the update is in progress.
> > -	 */
> >  	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
> > -		WARN(atomic_read(&key->enabled) < 0,
> > +		int v;
> > +
> > +		v = atomic_read(&key->enabled);
> > +		WARN(v < 0 && v != -0xffff,
> >  		     "jump label: negative count!\n");
> >  		return;
> >  	}
> 
> Alternatively we could implement atomic_dec_and_mutex_lock_return().
> 
> I think I like that better, something like:
That indeed looks far cleaner, thanks!
Tested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
>  kernel/jump_label.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index bad96b476eb6..a799b1ac6b2f 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
>  					   unsigned long rate_limit,
>  					   struct delayed_work *work)
>  {
> +	int val;
> +
>  	lockdep_assert_cpus_held();
>  
>  	/*
> @@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
>  	 * returns is unbalanced, because all other static_key_slow_inc()
>  	 * instances block while the update is in progress.
>  	 */
> -	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
> -		WARN(atomic_read(&key->enabled) < 0,
> -		     "jump label: negative count!\n");
> +	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
> +	if (val != 1) {
> +		WARN(val < 0, "jump label: negative count!\n");
>  		return;
>  	}
>  
> -	if (rate_limit) {
> -		atomic_inc(&key->enabled);
> -		schedule_delayed_work(work, rate_limit);
> -	} else {
> -		jump_label_update(key);
> +	jump_label_lock();
> +	if (atomic_dec_and_test(&key->enabled)) {
> +		if (rate_limit) {
> +			atomic_inc(&key->enabled);
> +			schedule_delayed_work(work, rate_limit);
> +		} else {
> +			jump_label_update(key);
> +		}
>  	}
>  	jump_label_unlock();
>  }
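For reference, since not everyone has the helper's semantics paged in: atomic_fetch_add_unless(v, a, u) adds a to *v unless the current value is u, and returns the value it observed either way. A sketch of what the generic fallback does (from memory, modulo the exact kernel source):

static inline int atomic_fetch_add_unless_sketch(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		if (c == u)
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;
}

The key property for this fix: the WARN() now tests the value the atomic RMW itself observed rather than a later re-read, and the final 1 -> 0 transition moves under jump_label_lock(), where it is serialized against the inc path's transient atomic_set(-1), so the sanity check can no longer see that marker.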