Subject: Re: [PATCH] sched: Avoid spurious lock dependencies


> On Oct 1, 2019, at 5:18 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> Does the below adequately describe the situation?
>
> ---
> Subject: sched: Avoid spurious lock dependencies
>
> While seemingly harmless, __sched_fork() does hrtimer_init(), which,
> when DEBUG_OBJECTS is enabled, can end up doing allocations.
>
> This then results in the following lock order:
>
>   rq->lock
>     zone->lock.rlock
>       batched_entropy_u64.lock
>
> Which in turn causes deadlocks when we do wakeups while holding that
> batched_entropy lock -- as the random code does.
>
> Solve this by moving __sched_fork() out from under rq->lock. This is
> safe because nothing there relies on rq->lock, as also evident from the
> other __sched_fork() callsite.
>
> Fixes: b7d5dc21072c ("random: add a spinlock_t to struct batched_entropy")
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> kernel/sched/core.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7880f4f64d0e..1832fc0fbec5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6039,10 +6039,11 @@ void init_idle(struct task_struct *idle, int cpu)
>  	struct rq *rq = cpu_rq(cpu);
>  	unsigned long flags;
>
> +	__sched_fork(0, idle);
> +
>  	raw_spin_lock_irqsave(&idle->pi_lock, flags);
>  	raw_spin_lock(&rq->lock);
>
> -	__sched_fork(0, idle);
>  	idle->state = TASK_RUNNING;
>  	idle->se.exec_start = sched_clock();
>  	idle->flags |= PF_IDLE;

It looks like this patch has been forgotten. Do you need to repost it, so that Ingo has a better chance of picking it up?
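
For anyone skimming the thread, the inversion described in the quoted changelog is a classic AB-BA pattern: one path takes rq->lock and then, via the DEBUG_OBJECTS allocation inside hrtimer_init(), the batched_entropy lock, while the random code takes the batched_entropy lock and then needs rq->lock for a wakeup. The sketch below is a user-space illustration of that pattern only, not kernel code; the pthread mutexes and made-up names (rq_lock, entropy_lock, init_idle_path, random_path) stand in for the real locks and paths. A checker such as ThreadSanitizer flags the same ordering violation here that lockdep reports in the kernel.

/*
 * Illustrative AB-BA inversion, modelled on the changelog above.
 * Build: cc -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;      /* stands in for rq->lock */
static pthread_mutex_t entropy_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for batched_entropy_u64.lock */

/* Models init_idle() before the patch: rq_lock first, then -- via the
 * DEBUG_OBJECTS allocation inside hrtimer_init() -- entropy_lock. */
static void *init_idle_path(void *arg)
{
	pthread_mutex_lock(&rq_lock);
	usleep(1000);                        /* widen the race window */
	pthread_mutex_lock(&entropy_lock);
	pthread_mutex_unlock(&entropy_lock);
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

/* Models the random code: entropy_lock held across a wakeup, which needs
 * the runqueue lock. trylock is used so the sketch reports the inversion
 * instead of actually hanging the way the kernel would. */
static void *random_path(void *arg)
{
	pthread_mutex_lock(&entropy_lock);
	usleep(1000);
	if (pthread_mutex_trylock(&rq_lock))
		puts("inversion hit: entropy_lock held, rq_lock unavailable");
	else
		pthread_mutex_unlock(&rq_lock);
	pthread_mutex_unlock(&entropy_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, init_idle_path, NULL);
	pthread_create(&t2, NULL, random_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

The patch removes the first ordering at its root: with __sched_fork() called before rq->lock is taken, the rq->lock -> zone->lock.rlock -> batched_entropy_u64.lock chain is never created, so the random code is free to do wakeups while holding the entropy lock.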