Subject: Re: [RFC v2] rcu/tree: Try to invoke_rcu_core() if in_irq() during unlock
On Wed, Aug 21, 2019 at 10:38:41AM -0400, Joel Fernandes wrote:
> On Mon, Aug 19, 2019 at 08:41:43AM -0700, Paul E. McKenney wrote:
> > On Mon, Aug 19, 2019 at 07:33:14AM -0700, Paul E. McKenney wrote:
> > > On Mon, Aug 19, 2019 at 05:57:57AM -0700, Paul E. McKenney wrote:
> > > > On Sun, Aug 18, 2019 at 07:29:27PM -0700, Paul E. McKenney wrote:
> > > > > On Sun, Aug 18, 2019 at 09:46:23PM -0400, Joel Fernandes wrote:
> > > > > > On Sun, Aug 18, 2019 at 09:41:43PM -0400, Joel Fernandes wrote:
> > > > > > > On Sun, Aug 18, 2019 at 06:21:53PM -0700, Paul E. McKenney wrote:
> > > > > > [snip]
> > > > > > > > > > Also, your commit log's point #2 is "in_irq() implies in_interrupt()
> > > > > > > > > > which implies raising softirq will not do any wake ups." This mention
> > > > > > > > > > of softirq seems a bit odd, given that we are going to wake up a rcuc
> > > > > > > > > > kthread. Of course, this did nothing to quell my suspicions. ;-)
> > > > > > > > >
> > > > > > > > > Yes, I should delete this #2 from the changelog since it is not very relevant
> > > > > > > > > (I feel now). My point with #2 was that even if we were to raise a softirq
> > > > > > > > > (which we are not), a scheduler wakeup of ksoftirqd is impossible in this
> > > > > > > > > path anyway since in_irq() implies in_interrupt().
> > > > > > > >
> > > > > > > > Please! Could you also add a first-principles explanation of why
> > > > > > > > the added condition is immune from scheduler deadlocks?
> > > > > > >
> > > > > > > Sure, I can add an example to the changelog; however, I was thinking of this
> > > > > > > example, which you mentioned:
> > > > > > > https://lore.kernel.org/lkml/20190627173831.GW26519@linux.ibm.com/
> > > > > > >
> > > > > > > previous_reader()
> > > > > > > {
> > > > > > > 	rcu_read_lock();
> > > > > > > 	do_something(); /* Preemption happened here. */
> > > > > > > 	local_irq_disable(); /* Cannot be the scheduler! */
> > > > > > > 	do_something_else();
> > > > > > > 	rcu_read_unlock(); /* Must defer QS, task still queued. */
> > > > > > > 	do_some_other_thing();
> > > > > > > 	local_irq_enable();
> > > > > > > }
> > > > > > >
> > > > > > > current_reader() /* QS from previous_reader() is still deferred. */
> > > > > > > {
> > > > > > > 	local_irq_disable(); /* Might be the scheduler. */
> > > > > > > 	do_whatever();
> > > > > > > 	rcu_read_lock();
> > > > > > > 	do_whatever_else();
> > > > > > > 	rcu_read_unlock(); /* Must still defer reporting QS. */
> > > > > > > 	do_whatever_comes_to_mind();
> > > > > > > 	local_irq_enable();
> > > > > > > }
> > > > > > >
> > > > > > > One modification of the example could be that previous_reader() also does:
> > > > > > > previous_reader()
> > > > > > > {
> > > > > > > 	rcu_read_lock();
> > > > > > > 	do_something_that_takes_really_long(); /* Causes need_qs in the
> > > > > > > 						  unlock_special union to be set. */
> > > > > > > 	local_irq_disable(); /* Cannot be the scheduler! */
> > > > > > > 	do_something_else();
> > > > > > > 	rcu_read_unlock(); /* Must defer QS, task still queued. */
> > > > > > > 	do_some_other_thing();
> > > > > > > 	local_irq_enable();
> > > > > > > }
> > > > > >
> > > > > > The point you were making in that thread was that current_reader() ->
> > > > > > rcu_read_unlock() -> rcu_read_unlock_special() would not do any wakeups,
> > > > > > because previous_reader() sets the deferred_qs bit.
> > > > > >
> > > > > > Anyway, I will add all of this into the changelog.
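(For reference, the need_qs and deferred_qs bits mentioned above live in the
per-task rcu_read_unlock_special union; roughly, treating the exact field
layout as approximate for kernels of that era:

	union rcu_special {
		struct {
			u8 blocked;	/* Reader was preempted/blocked in its critical section. */
			u8 need_qs;	/* The grace period is waiting on this reader. */
			u8 exp_hint;	/* An expedited grace period may be waiting. */
			u8 deferred_qs;	/* QS report deferred until irqs/bh/preempt re-enabled. */
		} b;			/* Individual bits. */
		u32 s;			/* The whole set of bits at once. */
	};

It is this deferred_qs bit that lets the later rcu_read_unlock() in
current_reader() skip the wakeup.)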
> > > > >
> > > > > Examples are good, but what makes it so that there are no examples of
> > > > > its being unsafe?
> > > > >
> > > > > And a few questions along the way, some quick quiz, some more serious.
> > > > > Would it be safe if it checked in_interrupt() instead of in_irq()?
> > > > > If not, should the in_interrupt() in the "if" condition preceding the
> > > > > added "else if" be changed to in_irq()? Would it make sense to add an
> > > > > "|| !irqs_were_disabled" do your new "else if" condition? Would the
> > > > > body of the "else if" actually be executed in current mainline?
> > > > >
> > > > > In an attempt to be at least a little constructive, I am doing some
> > > > > testing of this patch overnight, along with a WARN_ON_ONCE() to see if
> > > > > that invoke_rcu_core() is ever reached.
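(To make the shape of the code concrete, the unlock slow path being discussed
looks roughly like the sketch below; this is from memory of the ~v5.3
rcu_read_unlock_special(), with the branch this RFC proposes marked, so treat
it as an approximation rather than the literal patch:

	if (irqs_were_disabled && use_softirq &&
	    (in_interrupt() ||
	     (exp && !t->rcu_read_unlock_special.b.deferred_qs))) {
		/* in_interrupt() means raise_softirq() will not wake
		 * ksoftirqd, so no scheduler locks are acquired here. */
		raise_softirq_irqoff(RCU_SOFTIRQ);
	} else if (in_irq()) {
		/* Branch proposed by this RFC: in a hard-IRQ handler the
		 * interrupted code cannot be holding scheduler locks on
		 * this CPU (they are taken with irqs disabled), so waking
		 * the rcuc kthread via invoke_rcu_core() is argued to be
		 * deadlock-free. */
		invoke_rcu_core();
	} else {
		/* Otherwise just request a reschedule and defer the
		 * quiescent-state report. */
		set_tsk_need_resched(current);
		set_preempt_need_resched();
	}

The quick-quiz questions above are about exactly this shape: whether
in_interrupt() in the first test or in_irq() in the new one is the right
check, and whether "|| !irqs_were_disabled" belongs on the new branch.)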
> > > >
> > > > And that WARN_ON_ONCE() never triggered in two-hour rcutorture runs of
> > > > TREE01, TREE02, TREE03, and TREE09. (These are the TREE variants in
> > > > CFLIST that have CONFIG_PREEMPT=y.)
> > > >
> > > > This of course raises other questions. But first, do you see that code
> > > > executing in your testing?
> > >
> > > Never mind! Idiot here forgot the "--bootargs rcutree.use_softirq"...
> >
> > So this time I ran the test this way:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 8 --duration 10 --configs "TREE01 TREE02 TREE03 TREE09" --bootargs "rcutree.use_softirq=0"
> >
> > Still no splats. Though only 10-minute runs instead of the two-hour runs
> > I did last night. (Got other stuff I need to do, sorry!)
> >
> > My test version of your patch is shown below. Please let me know if I messed
> > something up.
>
> I think you also need to pass rcutorture.irqreader=1?
>
> Otherwise it seems all readers happen in process context AFAICS.

Which is the default setting for that, so that's not the issue.

I think one reason could be that in_irq() is false when the timer callback
executes: the callback runs after the grace period, out of softirq rather
than hard-irq context. The stack is as follows:

[ 20.553361] => rcu_torture_timer_cb
[ 20.553361] => rcu_do_batch
[ 20.553361] => rcu_core
[ 20.553361] => __do_softirq
[ 20.553361] => do_softirq_own_stack
[ 20.553361] => do_softirq.part.16
[ 20.553361] => __local_bh_enable_ip
[ 20.553361] => rcutorture_one_extend
[ 20.553361] => rcu_torture_one_read
[ 20.553361] => rcu_torture_reader
[ 20.553361] => kthread
[ 20.553361] => ret_from_fork

Any reason why we cannot both test for call_rcu() and execute the RCU
callback from the timer hardirq handler?

In fact, I guess on rcutree.use_softirq=0 systems the callback will not even
run in softirq context.
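For reference, the trace reaches rcu_do_batch() via __do_softirq() called
from __local_bh_enable_ip(), so at that point only the softirq bits of
preempt_count() are set: in_interrupt() is true but in_irq() is false.
Roughly (a sketch of the include/linux/preempt.h definitions, worth
double-checking against the actual tree):

	#define in_irq()	(hardirq_count())	/* in a hard-IRQ handler */
	#define in_softirq()	(softirq_count())	/* in softirq, or BH disabled */
	#define in_interrupt()	(irq_count())		/* hard IRQ, softirq, or NMI */

So a reader whose rcu_read_unlock() happens out of this softirq path will
never take an in_irq()-gated branch, which would explain why the
WARN_ON_ONCE() stayed silent.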


thanks,

- Joel
