Date:    Wed, 14 May 2008 15:05:45 +0200
From:    Dmitry Adamushko <>
Subject: Re: [BUG] cpu hotplug vs scheduler
2008/5/14 Avi Kivity <avi@qumranet.com>:
> [ ... ]
>
> [4302727.900184] Call Trace:
> [4302727.900184]  [<ffffffff803249de>] spin_bug+0x9e/0xe9
> [4302727.900184]  [<ffffffff80324af4>] _raw_spin_lock+0x41/0x123
> [4302727.900184]  [<ffffffff80439638>] _spin_lock_irqsave+0x2f/0x37
> [4302727.900184]  [<ffffffff8022ef7c>] print_cfs_rq+0xca/0x46a
> [4302727.900184]  [<ffffffff80231f97>] sched_debug_show+0x7a3/0xb8c
> [4302727.900184]  [<ffffffff8023238d>] sysrq_sched_debug_show+0xd/0xf
> [4302727.900184]  [<ffffffff802323ee>] pick_next_task_fair+0x5f/0x86
Err... sorry for the broken patch. The patch below on top of the previous one should address this issue (ugly, but should be ok for debugging). 'tasklist_lock' shouldn't cause a double lock, I guess.
Sorry for the rather 'blind' attempts. If this doesn't fix it, I'll prepare/test/take a closer look at it later today when I'm at home.
TIA,
--- kernel/sched_debug-prev.c	2008-05-14 14:53:28.000000000 +0200
+++ kernel/sched_debug.c	2008-05-14 14:58:12.000000000 +0200
@@ -125,6 +125,7 @@ void print_cfs_rq(struct seq_file *m, in
 	char path[128] = "";
 	struct cgroup *cgroup = NULL;
 	struct task_group *tg = cfs_rq->tg;
+	int was_locked;
 
 	if (tg)
 		cgroup = tg->css.cgroup;
@@ -138,7 +139,11 @@ void print_cfs_rq(struct seq_file *m, in
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "exec_clock",
 			SPLIT_NS(cfs_rq->exec_clock));
 
-	spin_lock_irqsave(&rq->lock, flags);
+	was_locked = spin_is_locked(&rq->lock);
+
+	if (!was_locked)
+		spin_lock_irqsave(&rq->lock, flags);
+
 	if (cfs_rq->rb_leftmost)
 		MIN_vruntime = (__pick_next_entity(cfs_rq))->vruntime;
 	last = __pick_last_entity(cfs_rq);
@@ -146,7 +151,10 @@ void print_cfs_rq(struct seq_file *m, in
 		max_vruntime = last->vruntime;
 	min_vruntime = rq->cfs.min_vruntime;
 	rq0_min_vruntime = per_cpu(runqueues, 0).cfs.min_vruntime;
-	spin_unlock_irqrestore(&rq->lock, flags);
+
+	if (!was_locked)
+		spin_unlock_irqrestore(&rq->lock, flags);
+
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "MIN_vruntime",
 			SPLIT_NS(MIN_vruntime));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "min_vruntime",

---
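To spell out the intent (an illustrative sketch of the locking section of print_cfs_rq() with the patch applied, not the literal post-patch source): the trace above suggests print_cfs_rq() is reached from pick_next_task_fair() via sysrq_sched_debug_show(), i.e. with rq->lock already held, so the lock is only taken and released when nobody holds it yet:

	int was_locked;

	/* ... unchanged SEQ_printf() calls ... */

	/* rq->lock may already be held on the sysrq/debug path above */
	was_locked = spin_is_locked(&rq->lock);

	if (!was_locked)
		spin_lock_irqsave(&rq->lock, flags);

	/* sample MIN_vruntime/max_vruntime/min_vruntime under the lock */
	if (cfs_rq->rb_leftmost)
		MIN_vruntime = (__pick_next_entity(cfs_rq))->vruntime;
	last = __pick_last_entity(cfs_rq);
	if (last)
		max_vruntime = last->vruntime;
	min_vruntime = rq->cfs.min_vruntime;
	rq0_min_vruntime = per_cpu(runqueues, 0).cfs.min_vruntime;

	if (!was_locked)
		spin_unlock_irqrestore(&rq->lock, flags);

(spin_is_locked() only says the lock is held by someone, not necessarily by us, which is part of why this is ugly and debugging-only.)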
--
Best regards,
Dmitry Adamushko