Subject: [PATCH] dynticks: don't unlock spinlock twice
During boot with dynticks enabled, we would sometimes get:

[ 35.271900] Switched to high resolution mode on CPU 0
[ 35.304468] BUG: spinlock already unlocked on CPU#0, swapper/1
[ 35.338099] lock: c06428a0, .magic: dead4ead, .owner: <none>/-1, .owner_cpu: -1
[ 35.373597] [<c04d7cf0>] _raw_spin_unlock+0x28/0x67
[ 35.406647] [<c05ba279>] _spin_unlock+0x5/0x23
[ 35.439369] [<c04255f7>] tick_sched_timer+0x4e/0xa7
[ 35.472388] [<c04255a9>] tick_sched_timer+0x0/0xa7
[ 35.504833] [<c0422528>] hrtimer_run_queues+0x199/0x1ec
[ 35.537617] [<c0416b72>] run_timer_softirq+0x12/0x166
[ 35.570019] [<c04144d9>] __do_softirq+0x40/0x85
[ 35.601542] [<c0405494>] do_softirq+0x53/0xa9
...

This appears to be caused by run_hrtimer_queue() (called by
hrtimer_run_queues()) calling spin_unlock_irq(&cpu_base->lock) before
calling timer->function(timer). The callback function
(tick_sched_timer) expects cpu_base->lock to be held when it is called,
and attempts to unlock it; since the caller has already released the
lock, it ends up unlocked twice, which triggers the BUG above. Since it
doesn't seem like anything within tick_sched_timer really needs to hold
the lock (afaict), the patch below simply removes the lock handling
from tick_sched_timer. Things called by tick_sched_timer may grab
base->lock themselves, but that's fine (and their responsibility). Let
me know if there's some reason why the lock should be held, and I can
rework this.
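
For reference, the caller side does roughly the following (a simplified
sketch of the relevant part of run_hrtimer_queue(), paraphrased from
the behavior described above rather than quoted verbatim from
kernel/hrtimer.c):

	spin_lock_irq(&cpu_base->lock);
	while (/* expired timers remain on this clock base */) {
		struct hrtimer *timer = /* next expired timer */;

		/* the base lock is dropped across the callback... */
		spin_unlock_irq(&cpu_base->lock);
		restart = timer->function(timer); /* e.g. tick_sched_timer */
		spin_lock_irq(&cpu_base->lock);
		/*
		 * ...so a callback that unlocks cpu_base->lock itself
		 * performs the second unlock that trips the BUG above.
		 */
	}
	spin_unlock_irq(&cpu_base->lock);

With the patch below, tick_sched_timer no longer touches the lock at
all, so the lock/unlock pairing stays entirely within the caller.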

Signed-off-by: Andres Salomon <dilinger@debian.org>
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 51556b9..b43bccb 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -422,13 +422,12 @@ static inline void tick_nohz_switch_to_nohz(void) { }
 #ifdef CONFIG_HIGH_RES_TIMERS
 /*
  * We rearm the timer until we get disabled by the idle code
- * Called with interrupts disabled and timer->base->cpu_base->lock held.
+ * Called with interrupts disabled and timer->base->cpu_base->lock *not* held.
  */
 static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
 {
 	struct tick_sched *ts =
 		container_of(timer, struct tick_sched, sched_timer);
-	struct hrtimer_cpu_base *base = timer->base->cpu_base;
 	struct pt_regs *regs = get_irq_regs();
 	ktime_t now = ktime_get();
 
@@ -454,13 +453,12 @@ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
 		}
 		/*
 		 * update_process_times() might take tasklist_lock, hence
-		 * drop the base lock. sched-tick hrtimers are per-CPU and
-		 * never accessible by userspace APIs, so this is safe to do.
+		 * we don't attempt to grab the base lock here.
+		 * sched-tick hrtimers are per-CPU and never accessible by
+		 * userspace APIs, so this is safe to do.
 		 */
-		spin_unlock(&base->lock);
 		update_process_times(user_mode(regs));
 		profile_tick(CPU_PROFILING);
-		spin_lock(&base->lock);
 	}
 
 	/* Do not restart, when we are in the idle loop */