Date:    Tue, 23 Feb 2021 15:21:18 +0100
From:    Sebastian Andrzej Siewior <>
Subject: Re: [RT v5.11-rt7] WARNING at include/linux/seqlock.h:271 nft_counter_eval
On 2021-02-23 14:53:40 [+0100], Juri Lelli wrote:
>
> So, I'm a bit confused and I'm very likely missing details (still
> digesting the seqprop_ magic), but write_seqcount_begin() has
>
>	if (seqprop_preemptible(s))
>		preempt_disable();
>
> which in this case (no lock associated) is defined to return false,
> while it should return true on RT (or in some occasions)? Or maybe this
> is what you are saying already.
write_seqcount_begin() has seqprop_assert() at the very beginning, which in your case (a plain seqcount_t) ends up in __seqprop_assert(). That is where your warning comes from.
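
For reference, roughly what that path looks like (paraphrased from
include/linux/seqlock.h around v5.11, not verbatim):

/* Paraphrased sketch, not the verbatim kernel code. */

/* Plain seqcount_t: the write side is expected to be non-preemptible. */
static inline void __seqprop_assert(const seqcount_t *s)
{
	lockdep_assert_preemption_disabled();	/* <- the warning */
}

#define write_seqcount_begin(s)					\
do {								\
	seqprop_assert(s);	/* checked before anything else */ \
								\
	if (seqprop_preemptible(s))				\
		preempt_disable();				\
								\
	/* ... increment the sequence, lockdep acquire ... */	\
} while (0)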
> Also, the check for preemption been disabled happens before we can
> actually potentially disable it, no?
That seqprop_preemptible() is true on !RT for the mutex/ww_mutex based seqcounts. On RT it is always false, since there the code instead does a lock()+unlock() of the lock that is associated with the seqcount.
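
Roughly (again paraphrased, not verbatim) the generated seqcount_LOCKNAME_t
helper behind that looks like:

/* Paraphrased sketch of the __seqprop_mutex_*() helpers. */
static __always_inline bool
__seqprop_mutex_preemptible(const seqcount_mutex_t *s)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		return true;	/* writer gets preempt_disable()'d */

	/*
	 * On PREEMPT_RT the sequence read accessor does a
	 * mutex_lock()+mutex_unlock() of s->lock when it sees an odd
	 * (write in progress) sequence, so a preempted writer makes
	 * progress and the write side may stay preemptible.
	 */
	return false;
}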
But back to the original issue: at write_seqcount_begin() time, preemption is disabled on !RT implicitly by local_bh_disable(), therefore no warning. On RT, local_bh_disable() only disables BH on the CPU, so locking-wise it should work (since it is a per-CPU seqcount). Preemption however remains enabled, so we get the warning.
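
To illustrate, roughly what the eval path does (paraphrased from
net/netfilter/nft_counter.c, not verbatim):

/* Paraphrased sketch of the nft_counter update path. */
static void nft_counter_do_eval(struct nft_counter_percpu_priv *priv,
				const struct nft_pktinfo *pkt)
{
	struct nft_counter *this_cpu;
	seqcount_t *myseq;

	local_bh_disable();	/* disables preemption on !RT only */
	this_cpu = this_cpu_ptr(priv->counter);
	myseq = this_cpu_ptr(&nft_counter_seq);

	write_seqcount_begin(myseq);	/* -> __seqprop_assert() warns on RT */
	this_cpu->bytes += pkt->skb->len;
	this_cpu->packets++;
	write_seqcount_end(myseq);

	local_bh_enable();
}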
I have no idea what annotation would be best here. Having a local_bh_disable() type of lock, with a seqcount that is not part of the data structure it protects, is less than ideal. However, if I understand this correctly, then nft_counter_percpu_priv exists once per nft rule, while the seqcount exists once per CPU since it is unlikely that two counters get modified at once on a single CPU :) So there is that.
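
For the record, roughly the data layout in question (paraphrased, not
verbatim):

/* Paraphrased sketch of the nft_counter data layout. */
struct nft_counter {
	s64	bytes;
	s64	packets;
};

struct nft_counter_percpu_priv {
	struct nft_counter __percpu *counter;	/* per rule, per CPU */
};

/* One seqcount per CPU, shared by all counter rules on that CPU. */
static DEFINE_PER_CPU(seqcount_t, nft_counter_seq);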
While looking at it, there is nft_counter_reset() which modifies the values without a seqcount write lock. This might be okay.
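
Roughly what I mean (paraphrased from net/netfilter/nft_counter.c, not
verbatim): the per-CPU values are modified with BH disabled but without
write_seqcount_begin()/end():

/* Paraphrased sketch of the reset path. */
static void nft_counter_reset(struct nft_counter_percpu_priv *priv,
			      struct nft_counter *total)
{
	struct nft_counter *this_cpu;

	local_bh_disable();
	this_cpu = this_cpu_ptr(priv->counter);
	this_cpu->packets -= total->packets;
	this_cpu->bytes -= total->bytes;
	local_bh_enable();
}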
> Thanks for the quick reply!
>
> Best,
> Juri
Sebastian