Date: Mon, 3 Jun 2019 10:38:48 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH HACK RFC] cpu: Prevent late-arriving interrupts from disrupting offline
On Sat, Jun 01, 2019 at 06:12:53PM -0700, Paul E. McKenney wrote:
> Scheduling-clock interrupts can arrive late in the CPU-offline process,
> after idle entry and the subsequent call to cpuhp_report_idle_dead().
> Once execution passes the call to rcu_report_dead(), RCU is ignoring
> the CPU, which results in lockdep complaints when the interrupt handler
> uses RCU:
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 448efc06bb2d..3b33d83b793d 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -930,6 +930,7 @@ void cpuhp_report_idle_dead(void)
>  	struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
>  
>  	BUG_ON(st->state != CPUHP_AP_OFFLINE);
> +	local_irq_disable();
>  	rcu_report_dead(smp_processor_id());
>  	st->state = CPUHP_AP_IDLE_DEAD;
>  	udelay(1000);
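[ For readers following along, an annotated sketch of the window Paul's
  patch closes (my annotation, not part of the patch): any interrupt
  that lands between rcu_report_dead() and the CPU actually parking
  itself runs a handler on a CPU that RCU has already stopped watching.

	/*
	 * Timeline on the dying CPU (sketch, not kernel source):
	 *
	 *   do_idle()
	 *     cpuhp_report_idle_dead()
	 *       rcu_report_dead(cpu);   // RCU stops watching this CPU
	 *       // <-- a late scheduling-clock interrupt landing here
	 *       //     runs a handler that uses RCU on a "dead" CPU,
	 *       //     hence the lockdep complaint
	 *     arch_cpu_idle_dead();     // CPU finally parks itself
	 *
	 * Disabling interrupts before rcu_report_dead() closes that
	 * window on the dying CPU itself.
	 */
]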
Urgh... I'd almost suggest we do something like the below.
But then I started looking at the various arch_cpu_idle_dead() implementations and ran into arm's, which calls complete() in a context where generic code has already established that isn't possible (see for example cpuhp_report_idle_dead()).
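For context, the reason complete() is off-limits there: cpuhp_report_idle_dead() in kernel/cpu.c already works around it by delegating the completion to an online CPU via an IPI. Its shape is roughly the below (paraphrased sketch; see kernel/cpu.c for the authoritative version):

void cpuhp_report_idle_dead(void)
{
	struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);

	BUG_ON(st->state != CPUHP_AP_OFFLINE);
	rcu_report_dead(smp_processor_id());
	st->state = CPUHP_AP_IDLE_DEAD;
	/*
	 * We cannot call complete() after rcu_report_dead(), so
	 * delegate it to an online CPU.
	 */
	smp_call_function_single(cpumask_first(cpu_online_mask),
				 cpuhp_complete_idle_dead, st, 0);
}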
And then there's powerpc, which for some obscure reason thinks it needs to enable preemption when dying ?! pseries_cpu_die() actually calls msleep() ?!?!
Sparc64 again thinks it should enable preemption when playing dead.
So clearly this isn't going to work well :/
---
 include/linux/tick.h | 10 ----------
 kernel/sched/idle.c  |  5 +++--
 2 files changed, 3 insertions(+), 12 deletions(-)
diff --git a/include/linux/tick.h b/include/linux/tick.h
index f92a10b5e112..196a0a7bfc4f 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -134,14 +134,6 @@ extern unsigned long tick_nohz_get_idle_calls(void);
 extern unsigned long tick_nohz_get_idle_calls_cpu(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
-
-static inline void tick_nohz_idle_stop_tick_protected(void)
-{
-	local_irq_disable();
-	tick_nohz_idle_stop_tick();
-	local_irq_enable();
-}
-
 #else /* !CONFIG_NO_HZ_COMMON */
 #define tick_nohz_enabled (0)
 static inline int tick_nohz_tick_stopped(void) { return 0; }
@@ -164,8 +156,6 @@ static inline ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)
 }
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
-
-static inline void tick_nohz_idle_stop_tick_protected(void) { }
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 #ifdef CONFIG_NO_HZ_FULL
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 80940939b733..e4bc4aa739b8 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -241,13 +241,14 @@ static void do_idle(void)
 		check_pgt_cache();
 		rmb();
 
+		local_irq_disable();
+
 		if (cpu_is_offline(cpu)) {
-			tick_nohz_idle_stop_tick_protected();
+			tick_nohz_idle_stop_tick();
 			cpuhp_report_idle_dead();
 			arch_cpu_idle_dead();
 		}
 
-		local_irq_disable();
 		arch_cpu_idle_enter();
 
 		/*
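To spell out the contract the above diff would impose: arch_cpu_idle_dead() would be entered with interrupts hard-disabled and, once cpuhp_report_idle_dead() has run, with RCU no longer watching, so an implementation could not use complete(), msleep(), preempt_enable() or anything else that schedules. A conforming implementation (hypothetical sketch, not any in-tree arch) could do little more than:

void arch_cpu_idle_dead(void)
{
	/*
	 * IRQs are hard-disabled and RCU has said goodbye: no sleeping,
	 * no wakeups, no preemption. Just park the CPU until the
	 * platform powers it off.
	 */
	while (1)
		cpu_relax();
}

Which is exactly what the arm, powerpc and sparc64 paths above trip over.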