Subject: Re: [patch] race-fix for bottom-half-functions

Um, I don't see why this locking is necessary.
You can avoid the locking by changing the meaning of bh_mask to
a *hint*. The bit is always set if the bottom half is enabled, but
is sometimes also set while bh_mask_count is non-zero.
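
For concreteness, a hypothetical sketch of what the enable/disable side
might look like under those hint semantics (not part of the patch under
discussion; I'm assuming the 2.2 declarations of bh_mask and
bh_mask_count[], and ignoring any SMP synchronization disable_bh may
also need):

extern inline void disable_bh(int nr)
{
	atomic_inc(&bh_mask_count[nr]);
	/* no need to clear the bh_mask bit here: it is only a
	   hint, and run_bottom_halves clears stale bits lazily */
}

extern inline void enable_bh(int nr)
{
	if (atomic_dec_and_test(&bh_mask_count[nr]))
		bh_mask |= 1 << nr;	/* re-publish the hint */
}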

If we change run_bottom_halves to test bh_mask_count against
zero, and to clear the mask bit itself when it finds the count
non-zero, then everything will work fine without any spin_lock_irqsave.

To be precise, it looks like:


static inline void run_bottom_halves(void)
{
	unsigned long active, mask = 1;
	void (**bh)(void);
	atomic_t *countp = bh_mask_count;

	active = get_active_bhs();
	bh = bh_base;
	do {
		if (active & mask) {
			if (atomic_read(countp) <= 0) {
				/* enabled: run the bottom half */
				clear_active_bhs(mask);
				(*bh)();
			} else {
				/* disabled: the mask bit was a stale
				   hint, clear it lazily */
				bh_mask &= ~mask;
			}
			active &= ~mask;
		}
		bh++;
		countp++;
		mask <<= 1;
	} while (active);
}
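
(A stale hint bit is thus harmless: the count test catches it, and its
only cost is the one loop iteration spent clearing it.)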

I'm assuming the locking code in do_bottom_half, which I don't
understand, prevents anyone from fiddling with the disable counts while
this is running. (If they could, we'd still have a race between the
get_active_bhs() call and the call to (*bh)().)

If that is possible, some retrying needs to be added to the
non-zero-counter case, in case the counter is zeroed between the
atomic_read() and the clearing of the bh_mask bit, and remove_bh()
needs to be fiddled with considerably.
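
If that retry is wanted, one hypothetical shape for it, replacing the
else branch above (still assuming plain stores to bh_mask are safe
here):

		} else {
			bh_mask &= ~mask;
			/* if the count was zeroed after our test but
			   before the clear above, enable_bh's setting
			   of the bit may have been lost; re-check and
			   restore the hint */
			if (atomic_read(countp) <= 0)
				bh_mask |= mask;
		}
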
--
-Colin

