Subject: Re: WARNING: CPU: 1 PID: 15 at kernel/sched/sched.h:804 assert_clock_updated.isra.62.part.63+0x25/0x27
On Mon, 2017-01-30 at 11:59 +0000, Matt Fleming wrote:
> On Sat, 28 Jan, at 08:21:05AM, Mike Galbraith wrote:
> > Running Steven's hotplug stress script in tip.today. Config is
> > NOPREEMPT, tune for maximum build time (enterprise default-ish).
> >
> > [ 75.268049] x86: Booting SMP configuration:
> > [ 75.268052] smpboot: Booting Node 0 Processor 1 APIC 0x2
> > [ 75.279994] smpboot: Booting Node 0 Processor 2 APIC 0x4
> > [ 75.294617] smpboot: Booting Node 0 Processor 4 APIC 0x1
> > [ 75.310698] smpboot: Booting Node 0 Processor 5 APIC 0x3
> > [ 75.359056] smpboot: CPU 3 is now offline
> > [ 75.415505] smpboot: CPU 4 is now offline
> > [ 75.479985] smpboot: CPU 5 is now offline
> > [ 75.550674] ------------[ cut here ]------------
> > [ 75.550678] WARNING: CPU: 1 PID: 15 at kernel/sched/sched.h:804 assert_clock_updated.isra.62.part.63+0x25/0x27
> > [ 75.550679] rq->clock_update_flags < RQCF_ACT_SKIP
>
> The following patch queued in tip/sched/core should fix this issue:

Weeell, I'll have to take your word for it, as tip g35669bb7fd46 grew
an early boot brick problem.

> ---->8----
>
> From 4d25b35ea3729affd37d69c78191ce6f92766e1a Mon Sep 17 00:00:00 2001
> From: Matt Fleming <matt@codeblueprint.co.uk>
> Date: Wed, 26 Oct 2016 16:15:44 +0100
> Subject: [PATCH] sched/fair: Restore previous rq_flags when migrating tasks in hotplug
>
> __migrate_task() can return with a different runqueue locked than the
> one we passed as an argument. So that we can repin the lock in
> migrate_tasks() (and keep the update_rq_clock() bit) we need to
> restore the old rq_flags before repinning.
>
> Note that it wouldn't be correct to change move_queued_task() to repin
> because of the change of runqueue and the fact that having an
> up-to-date clock on the initial rq doesn't mean the new rq has one too.
>
> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>  kernel/sched/core.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7f983e83a353..3b248b03ad8f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5608,7 +5608,7 @@ static void migrate_tasks(struct rq *dead_rq)
>  {
>  	struct rq *rq = dead_rq;
>  	struct task_struct *next, *stop = rq->stop;
> -	struct rq_flags rf;
> +	struct rq_flags rf, old_rf;
>  	int dest_cpu;
>
>  	/*
> @@ -5669,6 +5669,13 @@ static void migrate_tasks(struct rq *dead_rq)
>  			continue;
>  		}
>
> +		/*
> +		 * __migrate_task() may return with a different
> +		 * rq->lock held and a new cookie in 'rf', but we need
> +		 * to preserve rf::clock_update_flags for 'dead_rq'.
> +		 */
> +		old_rf = rf;
> +
>  		/* Find suitable destination for @next, with force if needed. */
>  		dest_cpu = select_fallback_rq(dead_rq->cpu, next);
>
> @@ -5677,6 +5684,7 @@ static void migrate_tasks(struct rq *dead_rq)
>  			raw_spin_unlock(&rq->lock);
>  			rq = dead_rq;
>  			raw_spin_lock(&rq->lock);
> +			rf = old_rf;
>  		}
>  		raw_spin_unlock(&next->pi_lock);
>  	}
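
For reference, the assertion named in the Subject sits in
kernel/sched/sched.h and only exists under CONFIG_SCHED_DEBUG. A sketch of
roughly what it checks in kernels of this vintage, reconstructed from
memory (treat the exact comments and flag values as approximate):

	/* rq->clock_update_flags bits (debug-only clock bookkeeping) */
	#define RQCF_REQ_SKIP	0x01	/* a clock-update skip was requested */
	#define RQCF_ACT_SKIP	0x02	/* the skip is currently in effect */
	#define RQCF_UPDATED	0x04	/* update_rq_clock() ran since pinning */

	static inline void assert_clock_updated(struct rq *rq)
	{
		/*
		 * Fires when neither an update_rq_clock() call nor an
		 * active skip has been seen since the rq was (re)pinned,
		 * i.e. the flags are still below RQCF_ACT_SKIP.
		 */
		SCHED_WARN_ON(rq->clock_update_flags < RQCF_ACT_SKIP);
	}

which is exactly the "rq->clock_update_flags < RQCF_ACT_SKIP" text in the
quoted splat.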

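What makes restoring 'rf' sufficient is that the rq pin/unpin/repin
helpers carry that bookkeeping inside struct rq_flags across a lock drop
and write it back into the runqueue on repin. A simplified sketch of the
pairing, again from memory rather than quoted from this thread (the real
helpers also wrap the debug parts in #ifdef CONFIG_SCHED_DEBUG):

	static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
	{
		rf->cookie = lockdep_pin_lock(&rq->lock);
		/* Fresh pin context: no clock update seen yet. */
		rq->clock_update_flags &= (RQCF_REQ_SKIP | RQCF_ACT_SKIP);
		rf->clock_update_flags = 0;
	}

	static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)
	{
		/* Stash whether the clock got updated while pinned. */
		if (rq->clock_update_flags > RQCF_ACT_SKIP)
			rf->clock_update_flags = RQCF_UPDATED;
		lockdep_unpin_lock(&rq->lock, rf->cookie);
	}

	static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
	{
		lockdep_repin_lock(&rq->lock, rf->cookie);
		/* Put back whatever rq_unpin_lock() stashed in @rf. */
		rq->clock_update_flags |= rf->clock_update_flags;
	}

If migrate_tasks() repins with an 'rf' that __migrate_task() has refilled
for a different runqueue, the RQCF state of 'dead_rq' is thrown away and
the next clock assert can fire; saving and restoring old_rf in the quoted
patch is what avoids that.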