Subject: Re: [PATCH] rcutorture: Avoid problematic critical section nesting on RT
On Tue, Aug 17, 2021 at 04:40:18PM +0200, Sebastian Andrzej Siewior wrote:
> From: Scott Wood <swood@redhat.com>
>
> rcutorture was generating some nesting scenarios that are not
> reasonable. Constrain the state selection to avoid them.
>
> Example:
>
> 1. rcu_read_lock()
> 2. local_irq_disable()
> 3. rcu_read_unlock()
> 4. local_irq_enable()
>
> If the thread is preempted between steps 1 and 2,
> rcu_read_unlock_special.b.blocked will be set, but it won't be
> acted on in step 3 because IRQs are disabled. Thus, reporting of the
> quiescent state will be delayed beyond the local_irq_enable().
>
> For now, these scenarios will continue to be tested on non-PREEMPT_RT
> kernels, until debug checks are added to ensure that they are not
> happening elsewhere.
>
> Signed-off-by: Scott Wood <swood@redhat.com>
> [valentin.schneider@arm.com: Don't disable BH in atomic context]
> [bigeasy: remove 'preempt_disable(); local_bh_disable(); preempt_enable();
> local_bh_enable();' from the examples because this works on RT now. ]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

This looks close to being ready for mainline, actually.

One comment below.

Thanx, Paul
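
To make the hazard concrete, here is the sequence from the commit message
as a minimal sketch (rcu_read_unlock_special is internal tree-RCU state
and appears only in the comments):

	rcu_read_lock();	/* 1. Enter the RCU reader. */
				/* Preemption at this point sets */
				/* rcu_read_unlock_special.b.blocked. */
	local_irq_disable();	/* 2. */
	rcu_read_unlock();	/* 3. The blocked flag cannot be acted on */
				/*    here because IRQs are disabled. */
	local_irq_enable();	/* 4. Quiescent-state reporting is delayed */
				/*    beyond this point. */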

> ---
> I folded Valentin's bits.
> I removed the unbalanced preempt_disable()/migrate_disable() part from
> the description because it is now supported by the migrate-disable
> implementation. I didn't find it explicitly in the code/patch except as
> part of local_bh_disable().
>
>
> kernel/rcu/rcutorture.c | 94 ++++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 80 insertions(+), 14 deletions(-)
> ---
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -61,10 +61,13 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck
> #define RCUTORTURE_RDR_RBH 0x08 /* ... rcu_read_lock_bh(). */
> #define RCUTORTURE_RDR_SCHED 0x10 /* ... rcu_read_lock_sched(). */
> #define RCUTORTURE_RDR_RCU 0x20 /* ... entering another RCU reader. */
> -#define RCUTORTURE_RDR_NBITS 6 /* Number of bits defined above. */
> +#define RCUTORTURE_RDR_ATOM_BH 0x40 /* ... disabling bh while atomic */
> +#define RCUTORTURE_RDR_ATOM_RBH 0x80 /* ... RBH while atomic */
> +#define RCUTORTURE_RDR_NBITS 8 /* Number of bits defined above. */
> #define RCUTORTURE_MAX_EXTEND \
> (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_IRQ | RCUTORTURE_RDR_PREEMPT | \
> - RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED)
> + RCUTORTURE_RDR_RBH | RCUTORTURE_RDR_SCHED | \
> + RCUTORTURE_RDR_ATOM_BH | RCUTORTURE_RDR_ATOM_RBH)
> #define RCUTORTURE_RDR_MAX_LOOPS 0x7 /* Maximum reader extensions. */
> /* Must be power of two minus one. */
> #define RCUTORTURE_RDR_MAX_SEGS (RCUTORTURE_RDR_MAX_LOOPS + 3)
> @@ -1429,31 +1432,53 @@ static void rcutorture_one_extend(int *r
> WARN_ON_ONCE((idxold >> RCUTORTURE_RDR_SHIFT) > 1);
> rtrsp->rt_readstate = newstate;
>
> - /* First, put new protection in place to avoid critical-section gap. */
> + /*
> + * First, put new protection in place to avoid critical-section gap.
> + * Disable preemption around the ATOM disables to ensure that
> + * in_atomic() is true.
> + */
> if (statesnew & RCUTORTURE_RDR_BH)
> local_bh_disable();
> + if (statesnew & RCUTORTURE_RDR_RBH)
> + rcu_read_lock_bh();
> if (statesnew & RCUTORTURE_RDR_IRQ)
> local_irq_disable();
> if (statesnew & RCUTORTURE_RDR_PREEMPT)
> preempt_disable();
> - if (statesnew & RCUTORTURE_RDR_RBH)
> - rcu_read_lock_bh();
> if (statesnew & RCUTORTURE_RDR_SCHED)
> rcu_read_lock_sched();
> + preempt_disable();
> + if (statesnew & RCUTORTURE_RDR_ATOM_BH)
> + local_bh_disable();
> + if (statesnew & RCUTORTURE_RDR_ATOM_RBH)
> + rcu_read_lock_bh();
> + preempt_enable();
> if (statesnew & RCUTORTURE_RDR_RCU)
> idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
>
> - /* Next, remove old protection, irq first due to bh conflict. */
> + /*
> + * Next, remove old protection, in decreasing order of strength
> + * to avoid unlock paths that aren't safe in the stronger
> + * context. Disable preemption around the ATOM enables in
> + * case the context was only atomic due to IRQ disabling.
> + */
> + preempt_disable();
> if (statesold & RCUTORTURE_RDR_IRQ)
> local_irq_enable();
> - if (statesold & RCUTORTURE_RDR_BH)
> + if (statesold & RCUTORTURE_RDR_ATOM_BH)
> local_bh_enable();
> + if (statesold & RCUTORTURE_RDR_ATOM_RBH)
> + rcu_read_unlock_bh();
> + preempt_enable();

The addition of preempt_enable() here prevents rcutorture from covering
an important part of the mainline RCU state space, namely when an RCU
read-side section ends with just local_irq_enable(). This situation
is a challenge for RCU because it must indirectly detect the end of the
critical section.

Would it work for RT if the preempt_disable() and preempt_enable() were
executed only if either the kernel is built with CONFIG_PREEMPT_RT on the
one hand, or statesold has the RCUTORTURE_RDR_ATOM_BH or
RCUTORTURE_RDR_ATOM_RBH bit set on the other?
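
One possible shape of that suggestion, as a sketch only (the "atomic"
mask below simply collects the two ATOM bits from the patch):

	int atomic = RCUTORTURE_RDR_ATOM_BH | RCUTORTURE_RDR_ATOM_RBH;
	bool guard = IS_ENABLED(CONFIG_PREEMPT_RT) || (statesold & atomic);

	if (guard)
		preempt_disable();
	if (statesold & RCUTORTURE_RDR_IRQ)
		local_irq_enable();
	if (statesold & RCUTORTURE_RDR_ATOM_BH)
		local_bh_enable();
	if (statesold & RCUTORTURE_RDR_ATOM_RBH)
		rcu_read_unlock_bh();
	if (guard)
		preempt_enable();

On non-RT kernels with neither ATOM bit set, a reader could then still
end with a bare local_irq_enable(), keeping that part of the state space
covered.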

> if (statesold & RCUTORTURE_RDR_PREEMPT)
> preempt_enable();
> - if (statesold & RCUTORTURE_RDR_RBH)
> - rcu_read_unlock_bh();
> if (statesold & RCUTORTURE_RDR_SCHED)
> rcu_read_unlock_sched();
> + if (statesold & RCUTORTURE_RDR_BH)
> + local_bh_enable();
> + if (statesold & RCUTORTURE_RDR_RBH)
> + rcu_read_unlock_bh();
> +
> if (statesold & RCUTORTURE_RDR_RCU) {
> bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
>
> @@ -1496,6 +1521,12 @@ rcutorture_extend_mask(int oldmask, stru
> int mask = rcutorture_extend_mask_max();
> unsigned long randmask1 = torture_random(trsp) >> 8;
> unsigned long randmask2 = randmask1 >> 3;
> + unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
> + unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
> + unsigned long nonatomic_bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> + unsigned long atomic_bhs = RCUTORTURE_RDR_ATOM_BH |
> + RCUTORTURE_RDR_ATOM_RBH;
> + unsigned long tmp;
>
> WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
> /* Mostly only one bit (need preemption!), sometimes lots of bits. */
> @@ -1503,11 +1534,46 @@ rcutorture_extend_mask(int oldmask, stru
> mask = mask & randmask2;
> else
> mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
> - /* Can't enable bh w/irq disabled. */
> - if ((mask & RCUTORTURE_RDR_IRQ) &&
> - ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
> - (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
> - mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +
> + /*
> + * Can't enable bh w/irq disabled.
> + */
> + tmp = atomic_bhs | nonatomic_bhs;
> + if (mask & RCUTORTURE_RDR_IRQ)
> + mask |= oldmask & tmp;

This is more straightforward than my original, good!
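
As a worked example (assuming the mainline bit values, where
RCUTORTURE_RDR_BH is 0x01 and RCUTORTURE_RDR_IRQ is 0x02):

	/*
	 * oldmask = RCUTORTURE_RDR_BH           = 0x01
	 * mask    = RCUTORTURE_RDR_IRQ          = 0x02
	 * tmp     = atomic_bhs | nonatomic_bhs  = 0xc9
	 *
	 * Because mask & RCUTORTURE_RDR_IRQ is nonzero:
	 * mask |= oldmask & tmp;   ->   mask = 0x03
	 *
	 * The BH bit is carried forward, so bh stays disabled across the
	 * extended segment rather than being re-enabled with IRQs off.
	 */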

> +
> + /*
> + * Ideally these sequences would be detected in debug builds
> + * (regardless of RT), but until then don't stop testing
> + * them on non-RT.
> + */
> + if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> + /*
> + * Can't disable bh in atomic context if bh was already
> + * disabled by another task on the same CPU. Instead of
> + * attempting to track this, just avoid disabling bh in atomic
> + * context.
> + */
> + mask &= ~atomic_bhs;

At some point, we will need to test disabling bh in atomic context,
correct? Or am I missing something here?

> + /*
> + * Can't release the outermost rcu lock in an irq disabled
> + * section without preemption also being disabled, if irqs
> + * had ever been enabled during this RCU critical section
> + * (could leak a special flag and delay reporting the qs).
> + */
> + if ((oldmask & RCUTORTURE_RDR_RCU) &&
> + (mask & RCUTORTURE_RDR_IRQ) &&
> + !(mask & preempts))
> + mask |= RCUTORTURE_RDR_RCU;
> +
> + /* Can't modify non-atomic bh in atomic context */
> + tmp = nonatomic_bhs;
> + if (oldmask & preempts_irq)
> + mask &= ~tmp;
> + if ((oldmask | mask) & preempts_irq)
> + mask |= oldmask & tmp;
> + }
> +
> return mask ?: RCUTORTURE_RDR_RCU;
> }
>
