Subject: 2.2.x run_bottom_halves bug
Hello Linus & all,

I've been testing drivers ported from 2.0 to 2.2, and I'm noticing that
bottom halves occasionally get delayed until the next interrupt, which
usually turns out to be the timer going off, adding an unnecessary
10-20ms delay. The case I've hit is a bottom half handler triggering
another bottom half in my driver's packet reassembly code. Looking
closer, the implications are worse: this might be the cause of sporadic
PPP behaviour under 2.2 without unmask_irq (i.e. the serial interrupt
wakes up its bottom half while we're already in run_bottom_halves), or
of lost/delayed timer ticks (this could easily happen if, say, NET_BH
takes a long time and the timer interrupt occurs -- TIMER_BH might not
run until the next timer tick comes in, which is bad). I think the
patch below completely removes the race associated with
triggering/running bottom halves. It's tested on my UP box, but not
under SMP, so folks please test away and possibly include it in the
next release.
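
In case it helps to see the difference outside the kernel, here's a tiny
user-space sketch (not kernel code -- the handler names and the
NET_BH/REASM_BH numbering are made up) that models the active-BH bitmask:
with the old single-pass loop, a bottom half marked from inside another
handler stays pending until the next interrupt, while a loop that drains
the mask picks it up immediately:

/* bh_sim.c -- user-space sketch only; the handler names and BH numbers
 * below are invented for the demo, not taken from the kernel. */
#include <stdio.h>

#define NET_BH          0
#define REASM_BH        1

static unsigned long bh_active;          /* stands in for the BH mask */
static void (*bh_base[32])(void);

static void mark_bh(int nr)                    { bh_active |= 1UL << nr; }
static unsigned long get_active_bhs(void)      { return bh_active; }
static void clear_active_bhs(unsigned long x)  { bh_active &= ~x; }

static void reasm_bh(void) { printf("  reassembly BH ran\n"); }
static void net_bh(void)
{
        printf("  net BH ran, marking reassembly BH\n");
        mark_bh(REASM_BH);         /* a bottom half triggering another */
}

/* old behaviour: one snapshot of the mask per call */
static void run_bhs_once(void)
{
        unsigned long active = get_active_bhs();
        void (**bh)(void) = bh_base;

        clear_active_bhs(active);
        for (; active; active >>= 1, bh++)
                if (active & 1)
                        (*bh)();
}

/* patched behaviour: keep going until nothing is pending */
static void run_bhs_drain(void)
{
        unsigned long active;

        while ((active = get_active_bhs())) {
                void (**bh)(void) = bh_base;

                clear_active_bhs(active);
                for (; active; active >>= 1, bh++)
                        if (active & 1)
                                (*bh)();
        }
}

int main(void)
{
        bh_base[NET_BH] = net_bh;
        bh_base[REASM_BH] = reasm_bh;

        mark_bh(NET_BH);
        printf("single pass:\n");
        run_bhs_once();
        printf("  left pending: %#lx (waits for the next interrupt)\n",
               get_active_bhs());

        bh_active = 0;
        mark_bh(NET_BH);
        printf("drain loop:\n");
        run_bhs_drain();
        printf("  left pending: %#lx\n", get_active_bhs());
        return 0;
}

Built with a plain gcc bh_sim.c, the single pass leaves 0x2 pending (the
reassembly BH waits for the next interrupt); the drain loop leaves nothing.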

-ben


--- kernel/softirq.c.orig Thu May 6 14:02:58 1999
+++ kernel/softirq.c Thu May 6 15:09:29 1999
@@ -26,44 +26,48 @@
 void (*bh_base[32])(void);

 /*
- * This needs to make sure that only one bottom half handler
- * is ever active at a time. We do this without locking by
- * doing an atomic increment on the intr_count, and checking
- * (nonatomically) against 1. Only if it's 1 do we schedule
- * the bottom half.
- *
- * Note that the non-atomicity of the test (as opposed to the
- * actual update) means that the test may fail, and _nobody_
- * runs the handlers if there is a race that makes multiple
- * CPU's get here at the same time. That's ok, we'll run them
- * next time around.
+ * Repeatedly run over the bottom halves until there are no more. Otherwise
+ * if one bottom half triggers another, it might get delayed a long time. -bcrl
  */
 static inline void run_bottom_halves(void)
 {
         unsigned long active;
         void (**bh)(void);

-        active = get_active_bhs();
-        clear_active_bhs(active);
-        bh = bh_base;
-        do {
-                if (active & 1)
-                        (*bh)();
-                bh++;
-                active >>= 1;
-        } while (active);
+        while ((active = get_active_bhs())) {
+                clear_active_bhs(active);
+                bh = bh_base;
+                do {
+                        if (active & 1)
+                                (*bh)();
+                        bh++;
+                        active >>= 1;
+                } while (active);
+        }
 }

 asmlinkage void do_bottom_half(void)
 {
         int cpu = smp_processor_id();

+again:
         if (softirq_trylock(cpu)) {
                 if (hardirq_trylock(cpu)) {
                         __sti();
                         run_bottom_halves();
                         __cli();
                         hardirq_endlock(cpu);
+                        softirq_endlock(cpu);
+
+                        /*
+                         * Avoid the race by checking if any bottom halves
+                         * are active after releasing all locks. This is a
+                         * rare race, but should be inexpensive to check. -bcrl
+                         */
+                        rmb();
+                        if (get_active_bhs())
+                                goto again;
+                        return;
                 }
                 softirq_endlock(cpu);
         }

-
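
For the do_bottom_half() half of the change, here's a minimal user-space
model of the same recheck-after-unlock idea (again not kernel code; all
names are invented). If an "interrupt" marks new work after our snapshot
was taken, the check after dropping the lock picks it up instead of
leaving it for the next tick:

/* recheck_sim.c -- user-space sketch of the recheck-after-unlock pattern;
 * every name here is invented for illustration. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong pending;                        /* stands in for bh_active */
static atomic_flag worker_lock = ATOMIC_FLAG_INIT;  /* stands in for the softirq lock */

static void interrupt_marks_work(unsigned long mask)
{
        /* an interrupt that loses the trylock can only mark work and return */
        atomic_fetch_or(&pending, mask);
}

static void do_pending_work(void)
{
        static bool irq_simulated;
        unsigned long work = atomic_exchange(&pending, 0);

        if (work)
                printf("ran work mask %#lx\n", work);

        /* pretend an interrupt lands here, after our snapshot was taken */
        if (!irq_simulated) {
                irq_simulated = true;
                interrupt_marks_work(1UL << 5);
        }
}

static void do_bottom_half_like(void)
{
again:
        if (!atomic_flag_test_and_set(&worker_lock)) {
                do_pending_work();
                atomic_flag_clear(&worker_lock);

                /* without this recheck, the freshly marked work would sit
                 * around until the next interrupt came along */
                if (atomic_load(&pending))
                        goto again;
        }
}

int main(void)
{
        interrupt_marks_work(1UL << 0);
        do_bottom_half_like();
        return 0;
}

With the recheck in place both work masks get handled in one call; take
the goto out and the second mask (0x20) just sits there, which is the
delayed-until-the-next-tick behaviour described above.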
