Date:	Thu, 6 May 1999 21:42:40 -0400 (EDT)
From:	"Benjamin C.R. LaHaise" <>
Subject:	Re: 2.2.x run_bottom_halves bug

On Fri, 7 May 1999 davem@redhat.com wrote:
> This was under discussion recently, and you simply cannot safely do
> what your patch is trying to do.
Ahhhh, I haven't had time to read the list much lately.
> The consensus is that most of us do agree that we do want to relook at
> the bits again for several cases, but certainly not endlessly.
>
> You have to put some kind of heuristic cork into the scheme or else
> you end up with huge problems.
Okay then, how about the following, which allows each bottom half to run at most once per pass? It's not perfect, but it will catch the case where a runaway bottom half keeps triggering itself. It also addresses my concern about IMMED_BH/SERIAL_BH being able to call netif_rx().
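(To see the heuristic in isolation, here's a minimal userspace C sketch of the at-most-once mask. fake_pending, handlers[], and run_pending() are hypothetical stand-ins for the kernel's bh_active, bh_base[], and the dispatch loop -- not kernel API.)

	/*
	 * Userspace sketch of "each bottom half runs at most once per
	 * pass".  A runaway handler that re-marks itself pending is
	 * still run only once, because its bit in 'left' is gone.
	 */
	#include <stdio.h>

	static unsigned long fake_pending;	/* stand-in for bh_active */
	static void (*handlers[32])(void);	/* stand-in for bh_base[] */

	static void net_bh(void)
	{
		printf("net_bh ran\n");
		fake_pending |= 1UL << 0;	/* runaway: re-marks itself */
	}

	static void run_pending(void)
	{
		unsigned long left = ~0UL;	/* bits still allowed to run */
		unsigned long active;

		while ((active = left & fake_pending)) {
			void (**bh)(void) = handlers;

			left &= ~active;	/* never revisit these this pass */
			fake_pending &= ~active;
			do {
				if (active & 1)
					(*bh)();
				bh++;
				active >>= 1;
			} while (active);
		}
		/* net_bh re-marked itself, but left's bit 0 is clear: no rerun */
	}

	int main(void)
	{
		handlers[0] = net_bh;
		fake_pending = 1UL << 0;
		run_pending();		/* prints "net_bh ran" exactly once */
		return 0;
	}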
-ben
--- softirq.c.orig	Sun Mar 21 10:22:00 1999
+++ softirq.c	Thu May  6 21:30:47 1999
@@ -26,45 +26,50 @@
 void (*bh_base[32])(void);
 
 /*
- * This needs to make sure that only one bottom half handler
- * is ever active at a time. We do this without locking by
- * doing an atomic increment on the intr_count, and checking
- * (nonatomically) against 1. Only if it's 1 do we schedule
- * the bottom half.
- *
- * Note that the non-atomicity of the test (as opposed to the
- * actual update) means that the test may fail, and _nobody_
- * runs the handlers if there is a race that makes multiple
- * CPU's get here at the same time. That's ok, we'll run them
- * next time around.
+ * Repeatedly run over the bottom halves until there are no more, but only run
+ * each bottom half at most once.  If we don't loop and one bottom half
+ * triggers another, the second might get delayed a long time.  If we loop
+ * indefinitely, the system will lock completely under a heavy irq load. -bcrl
  */
-static inline void run_bottom_halves(void)
-{
-	unsigned long active;
-	void (**bh)(void);
-
-	active = get_active_bhs();
-	clear_active_bhs(active);
-	bh = bh_base;
-	do {
-		if (active & 1)
-			(*bh)();
-		bh++;
-		active >>= 1;
-	} while (active);
-}
-
 asmlinkage void do_bottom_half(void)
 {
 	int cpu = smp_processor_id();
+	unsigned long left = ~0UL;
+	unsigned long active;
+	void (**bh)(void);
 
+again:
 	if (softirq_trylock(cpu)) {
 		if (hardirq_trylock(cpu)) {
 			__sti();
-			run_bottom_halves();
+
+			while ((active = left & get_active_bhs())) {
+				left &= ~active;
+				clear_active_bhs(active);
+				bh = bh_base;
+				do {
+					if (active & 1)
+						(*bh)();
+					bh++;
+					active >>= 1;
+				} while (active);
+			}
+
 			__cli();
 			hardirq_endlock(cpu);
+			softirq_endlock(cpu);
+
+			/*
+			 * Avoid the race by checking if any bottom halves
+			 * are active after releasing all locks.  This is a
+			 * rare race, but should be inexpensive to check. -bcrl
+			 */
+			rmb();
+			if (get_active_bhs() & left)
+				goto again;
+			return;
 		}
 		softirq_endlock(cpu);
 	}
 }
+
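(The recheck at the end of the patch is the usual unlock-then-retest pattern: drop the lock, re-read the pending mask, and retry if anything arrived in the window. Here's a minimal C11 userspace sketch of that shape; the names pending, running, and worker are hypothetical stand-ins for the kernel's bh_active mask and softirq lock, not anything from the patch.)

	/*
	 * Sketch of the unlock/recheck/retry pattern.  Work queued
	 * between the last drain and the unlock would otherwise sit
	 * until the next interrupt, because a concurrent caller that
	 * failed its trylock simply returns.
	 */
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_ulong pending;
	static atomic_flag running = ATOMIC_FLAG_INIT;

	static void drain(unsigned long mask)
	{
		printf("draining mask %#lx\n", mask);
	}

	void worker(void)
	{
	again:
		if (!atomic_flag_test_and_set(&running)) {	/* trylock */
			unsigned long mask = atomic_exchange(&pending, 0);

			if (mask)
				drain(mask);
			atomic_flag_clear(&running);		/* unlock */

			/*
			 * Re-check after unlocking: another CPU may have
			 * set bits while we held the flag and then bailed
			 * out of its own trylock.  Without this retest,
			 * that work is stranded until the next interrupt.
			 */
			if (atomic_load(&pending))
				goto again;
		}
	}

	int main(void)
	{
		atomic_store(&pending, 0x5UL);
		worker();	/* drains 0x5; recheck finds nothing, returns */
		return 0;
	}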