    Date: Tue, 5 Oct 2004
    From: Hugh Dickins <hugh@veritas.com>
    Subject: Re: [patch] voluntary-preempt-2.6.9-rc3-mm2-T0
    On Tue, 5 Oct 2004, Ingo Molnar wrote:
    > On Tue, 5 Oct 2004, Rui Nuno Capela wrote:
    >
    > i think this is the clearest indication that something is
    > fundamentally wrong - ksoftirqd must never use that much CPU time on an
    > idle system.

    Please would you try the patch below, which I posted yesterday?
    At the time I thought the trylock was hardly used, so it didn't seem urgent.

    I've just now discovered that the standard SMP PREEMPT read_lock
    - as in do_wait's read_lock(&tasklist_lock) for example - uses it
    via one of those dreaded expansions that grep misses:
    if (likely(_raw_##op##_trylock(lock)))
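
    To make the expansion concrete: the lock-building macro pastes the op
    name into _raw_##op##_trylock, so the literal string
    "_raw_read_trylock" never appears at the read_lock() call site and a
    grep for it walks straight past.  Here's a toy userspace sketch of the
    token-pasting pattern (the names and the trivial lock convention are
    invented for illustration, they are not the kernel's):

	#include <stdio.h>

	/* Stand-ins for the arch trylock primitives. */
	static int _raw_read_trylock(int *lock)  { return *lock >= 0; }
	static int _raw_write_trylock(int *lock) { return *lock == 0; }

	/* Token pasting builds the call: "read" -> _raw_read_trylock(). */
	#define BUILD_TRYLOCK(op)				\
	static int op##_trylock_demo(int *lock)			\
	{							\
		return _raw_##op##_trylock(lock);		\
	}

	BUILD_TRYLOCK(read)
	BUILD_TRYLOCK(write)

	int main(void)
	{
		int lock = 0;	/* toy convention: 0 means unlocked */

		printf("read:  %d\n", read_trylock_demo(&lock));
		printf("write: %d\n", write_trylock_demo(&lock));
		return 0;
	}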

    I've been suffering the occasional leftover zombie from multiple
    kernel builds precisely since the preempt-smp.patch went in. I'd been
    hunting it unsuccessfully in spare moments; yesterday I noticed this
    bug, and today I realized it's probably what I've been hunting. I'm
    about to start my own tests again, so I can't be sure until tomorrow.

    Hugh

    The i386 and x86_64 _raw_read_trylocks in preempt-smp.patch are too
    successful: atomic_read() returns a signed integer, so once a writer
    holds the lock the decremented count is negative, yet still compares
    below RW_LOCK_BIAS and the trylock claims success.
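
    A toy userspace sketch of that arithmetic (plain C, not kernel code),
    using the i386 rwlock convention in which an unlocked lock holds
    RW_LOCK_BIAS, each reader subtracts 1 and a writer subtracts
    RW_LOCK_BIAS, so a write-held lock sits at 0:

	#include <stdio.h>

	#define RW_LOCK_BIAS 0x01000000

	/* Old test: after atomic_dec a write-held lock reads -1, and the
	 * signed compare -1 < RW_LOCK_BIAS still reports success. */
	static int old_read_trylock(int *count)
	{
		(*count)--;			/* atomic_dec(count) */
		if (*count < RW_LOCK_BIAS)
			return 1;
		(*count)++;			/* atomic_inc(count) */
		return 0;
	}

	/* Fixed test: succeed only while the count stays non-negative. */
	static int new_read_trylock(int *count)
	{
		(*count)--;
		if (*count >= 0)
			return 1;
		(*count)++;
		return 0;
	}

	int main(void)
	{
		int write_held = 0;	/* RW_LOCK_BIAS - RW_LOCK_BIAS */

		printf("old: %d (wrongly succeeds)\n",
		       old_read_trylock(&write_held));
		write_held = 0;
		printf("new: %d (correctly fails)\n",
		       new_read_trylock(&write_held));
		return 0;
	}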

    Signed-off-by: Hugh Dickins <hugh@veritas.com>

    --- 2.6.9-rc3-mm2/include/asm-i386/spinlock.h	2004-10-04 12:00:14.000000000 +0100
    +++ linux/include/asm-i386/spinlock.h	2004-10-04 18:50:32.752864600 +0100
    @@ -235,7 +235,7 @@ static inline int _raw_read_trylock(rwlo
     {
     	atomic_t *count = (atomic_t *)lock;
     	atomic_dec(count);
    -	if (atomic_read(count) < RW_LOCK_BIAS)
    +	if (atomic_read(count) >= 0)
     		return 1;
     	atomic_inc(count);
     	return 0;
    --- 2.6.9-rc3-mm2/include/asm-x86_64/spinlock.h	2004-10-04 12:00:15.000000000 +0100
    +++ linux/include/asm-x86_64/spinlock.h	2004-10-04 18:50:32.752864600 +0100
    @@ -236,7 +236,7 @@ static inline int _raw_read_trylock(rwlo
     {
     	atomic_t *count = (atomic_t *)lock;
     	atomic_dec(count);
    -	if (atomic_read(count) < RW_LOCK_BIAS)
    +	if (atomic_read(count) >= 0)
     		return 1;
     	atomic_inc(count);
     	return 0;
