Subject: Re: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
On Sun, Mar 27, 2016 at 08:40:18AM -0700, Paul E. McKenney wrote:
> On Sun, Mar 27, 2016 at 01:48:55PM +0000, Mathieu Desnoyers wrote:
> > ----- On Mar 26, 2016, at 9:34 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com wrote:
> > > On Sat, Mar 26, 2016 at 10:22:57PM +0000, Mathieu Desnoyers wrote:
> > >> ----- On Mar 26, 2016, at 2:49 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com
> > >> wrote:
> > >> > On Sat, Mar 26, 2016 at 08:28:16AM -0700, Paul E. McKenney wrote:
> > >> >> On Sat, Mar 26, 2016 at 12:29:31PM +0000, Mathieu Desnoyers wrote:

[ . . . ]

> > >> >> > Perhaps we could try with those commits reverted?
> > >> >> >
> > >> >> > commit e3baac47f0e82c4be632f4f97215bb93bf16b342
> > >> >> > Author: Peter Zijlstra <peterz@infradead.org>
> > >> >> > Date: Wed Jun 4 10:31:18 2014 -0700
> > >> >> >
> > >> >> > sched/idle: Optimize try-to-wake-up IPI
> > >> >> >
> > >> >> > commit fd99f91aa007ba255aac44fe6cf21c1db398243a
> > >> >> > Author: Peter Zijlstra <peterz@infradead.org>
> > >> >> > Date: Wed Apr 9 15:35:08 2014 +0200
> > >> >> >
> > >> >> > sched/idle: Avoid spurious wakeup IPIs
> > >> >> >
> > >> >> > They appeared in 3.16.
> > >> >>
> > >> >> At this point, I am up for trying pretty much anything. ;-)
> > >> >>
> > >> >> Will give it a go.
> > >> >
> > >> > And those certainly don't revert cleanly! Would patching the kernel
> > >> > to remove the definition of TIF_POLLING_NRFLAG be useful? Or, more
> > >> > to the point, is there some other course of action that would be more
> > >> > useful? At this point, the test times are measured in weeks...
> > >>
> > >> Indeed, patching the kernel to remove the TIF_POLLING_NRFLAG
> > >> definition would have an effect similar to reverting those two
> > >> commits.
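> > >>
> > >> On x86 that amounts to deleting two lines from
> > >> arch/x86/include/asm/thread_info.h (a sketch; the exact flag
> > >> number and comment text may differ between trees):
> > >>
> > >>   -#define TIF_POLLING_NRFLAG  21  /* idle is polling for TIF_NEED_RESCHED */
> > >>   -#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
> > >>
> > >> The IPI-skip helpers in kernel/sched/core.c are guarded by
> > >> #ifdef TIF_POLLING_NRFLAG, so with the definition gone the
> > >> fallback set_nr_and_not_polling() always requests the IPI.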
> > >>
> > >> Since testing takes a while, we could take a more aggressive
> > >> approach towards reproducing a possible race condition: we
> > >> could re-implement the _TIF_POLLING_NRFLAG vs _TIF_NEED_RESCHED
> > >> dance, along with the ttwu pending lock-list queue, within
> > >> a dummy test module, with custom data structures, and
> > >> stress-test the invariants. We could also create a Promela
> > >> model of these IPI-skip optimisations and try to validate
> > >> progress: whenever a wakeup is requested, a schedule should
> > >> always eventually be performed, even if no further wakeup
> > >> is encountered.
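> > >>
> > >> Concretely, the dance boils down to two atomic helpers. A rough
> > >> user-space model (names and flag values invented for this sketch;
> > >> the real code operates on thread_info flags via fetch_or() in
> > >> kernel/sched/core.c):
> > >>
> > >>   #include <stdatomic.h>
> > >>   #include <stdbool.h>
> > >>
> > >>   #define TIF_NEED_RESCHED   (1u << 0)
> > >>   #define TIF_POLLING_NRFLAG (1u << 1)
> > >>
> > >>   static _Atomic unsigned int ti_flags;
> > >>
> > >>   /* Waker: set NEED_RESCHED; IPI only if the target was not polling. */
> > >>   static bool set_nr_and_not_polling(void)
> > >>   {
> > >>           unsigned int old = atomic_fetch_or(&ti_flags, TIF_NEED_RESCHED);
> > >>           return !(old & TIF_POLLING_NRFLAG);   /* true => send the IPI */
> > >>   }
> > >>
> > >>   /* Idle side: stop polling; report whether a wakeup raced in. */
> > >>   static bool current_clr_polling_and_test(void)
> > >>   {
> > >>           unsigned int old = atomic_fetch_and(&ti_flags, ~TIF_POLLING_NRFLAG);
> > >>           return old & TIF_NEED_RESCHED;        /* true => must reschedule */
> > >>   }
> > >>
> > >> The invariant to stress: whenever set_nr_and_not_polling()
> > >> returns false (IPI skipped), the idle side must later see
> > >> current_clr_polling_and_test() return true, else the wakeup
> > >> is lost.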
> > >>
> > >> Each of the two approaches proposed above might be a significant
> > >> endeavor, and would only validate my specific hunch. So it might
> > >> be a good idea to just let a test run for a few weeks with
> > >> TIF_POLLING_NRFLAG disabled meanwhile.
> > >
> > > This makes a lot of sense. I did some short runs, and nothing broke
> > > too badly. However, I left some diagnostic stuff in that obscured
> > > the outcome. I disabled the diagnostic stuff and am running overnight.
> > > I might need to go further and revert some of my diagnostic patches,
> > > but let's see where it is in the morning.
> >
> > Here is another idea that might help us reproduce this issue faster.
> > If you can afford it, you might want to just throw more similar hardware
> > at the problem. Assuming the problem shows up randomly, but its odds
> > of showing up make it happen only once per week, if we have 100 machines
> > idling in the same way in parallel, we should be able to reproduce it
> > within about 1-2 hours.
> >
> > Of course, if the problem really needs each machine to "degrade" for
> > a week (e.g. memory fragmentation), that would not help. It only helps
> > for races that show up randomly.
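> >
> > (Back of the envelope: one failure per machine-week is one per 168
> > machine-hours, so 100 machines should hit it about every
> > 168 / 100 ~= 1.7 hours.)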
>
> Certain rcutorture tests sometimes hit it within an hour (TREE03).
> Last night's TREE03 ran six hours without incident, which is unusual
> given that I didn't enable any tracepoints, but does not provide any
> significant level of statistical confidence. The set will finish in a
> few hours,
> at which point I will start parallel batches of TREE03 to see what
> comes up.
>
> Feel free to take a look at kernel/rcu/waketorture.c for my (feeble
> thus far) attempt to speed things up. I am thinking that I need to
> push sleeping tasks onto idle CPUs to make it happen more often.
> My current approach to this is to run with CPU utilizations of about
> 40% and using hrtimer with a prime number of microseconds to avoid
> synchronization. That should in theory get me a 40% chance of hitting
> an idle CPU with a wakeup, and a reasonable chance of racing with a
> CPU-hotplug operation. But maybe the wakeup needs to be remote or
> some such, in which case waketorture also needs to move stuff around.
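>
> The prime-period kick looks roughly like this (names invented here;
> the real waketorture code differs):
>
>     #include <linux/hrtimer.h>
>     #include <linux/ktime.h>
>     #include <linux/sched.h>
>
>     #define WAKE_PERIOD_US 7919     /* prime, so periods don't resonate */
>
>     static struct hrtimer wake_timer;
>     static struct task_struct *wake_target;   /* set at module init */
>
>     static enum hrtimer_restart wake_cb(struct hrtimer *t)
>     {
>             wake_up_process(wake_target);     /* hope to catch an idle CPU */
>             hrtimer_forward_now(t, ktime_set(0, WAKE_PERIOD_US * NSEC_PER_USEC));
>             return HRTIMER_RESTART;
>     }
>
>     static void wake_timer_setup(void)
>     {
>             hrtimer_init(&wake_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>             wake_timer.function = wake_cb;
>             hrtimer_start(&wake_timer,
>                           ktime_set(0, WAKE_PERIOD_US * NSEC_PER_USEC),
>                           HRTIMER_MODE_REL);
>     }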
>
> Oh, and the patch I am running with is below. I am running x86, and so
> some other architectures would of course need the corresponding patch
> on that architecture.

And it passed a full set of six-hour runs. Unusual of late, but not
unheard of. Next step is to focus on TREE03 overnight.

Thanx, Paul
