    Subject: Re: [PATCH v3] softlockup: remove hung_task_check_count
    On Fri, Jan 23, 2009 at 05:55:14PM -0800, Mandeep Singh Baines wrote:
    > Frédéric Weisbecker (fweisbec@gmail.com) wrote:
    > > 2009/1/23 Ingo Molnar <mingo@elte.hu>:
    > > >
    > > > not sure i like the whole idea of removing the max iterations check. In
    > > > theory if there's a _ton_ of tasks, we could spend a lot of time looping
    > > > there. So it always looked prudent to limit it somewhat.
    > > >
    > >
    > > Which means we can lose several of them. Would it hurt to iterate as
    > > much as possible along the task list,
    > > keeping some care about writer starvation and latency?
    > > BTW I thought about the slow work framework, but I can't recall
    > > it.... But this thread already has a low priority.
    > >
    > > Would it be interesting to provide a way for rwlocks to know if there
    > > is a writer waiting for the lock?
    >
    > Would be cool if that API existed. You could release the CPU and/or lock as
    > soon as either was contended for. You'd have the benefits of fine-grained
    > locking without the overhead of locking and unlocking multiple times.
    >
    > Currently, there is no bit that can tell you there is a writer waiting. You'd
    > probably need to change the write_lock() implementation at a minimum. Maybe
    > if the first writer left the RW_LOCK_BIAS bit clear and then waited for the
    > readers to leave instead of re-trying? That would actually make write_lock()
    > more efficient for the 1-writer case since you'd only need to spin doing
    > a read in the failure case instead of an atomic_dec and atomic_inc.
    >


    This is already what is done in the slow path (in x86):

    /* rdi: pointer to rwlock_t */
    ENTRY(__write_lock_failed)
    	CFI_STARTPROC
    	LOCK_PREFIX
    	addl	$RW_LOCK_BIAS,(%rdi)	/* undo the failed fast-path subtraction */
    1:	rep
    	nop				/* rep; nop == pause, i.e. cpu_relax() */
    	cmpl	$RW_LOCK_BIAS,(%rdi)	/* plain read: wait until no readers/writer hold the lock */
    	jne	1b
    	LOCK_PREFIX
    	subl	$RW_LOCK_BIAS,(%rdi)	/* try to take the write lock again */
    	jnz	__write_lock_failed	/* lost the race, start over */
    	ret
    	CFI_ENDPROC
    	END(__write_lock_failed)

    It spins watching the lock value, and only when neither readers nor a writer
    own the lock does it retry its atomic_sub (and then the atomic_add again in the
    failure case).
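
    In rough C, the same logic reads something like this (purely an illustrative
    sketch with made-up names; the real implementation is the assembly above):

    /* Sketch only: C rendering of the __write_lock_failed slow path. */
    static void write_lock_slowpath(atomic_t *count)
    {
    	for (;;) {
    		/* give back the bias the failed fast path subtracted */
    		atomic_add(RW_LOCK_BIAS, count);

    		/* spin with plain reads until the lock looks free */
    		while (atomic_read(count) != RW_LOCK_BIAS)
    			cpu_relax();

    		/* try the write acquisition again */
    		if (atomic_sub_return(RW_LOCK_BIAS, count) == 0)
    			return;
    	}
    }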

    And if an implementation of writers_waiting_for_lock() is needed, I guess this
    is the perfect place: one atomic_add on a "waiters_count" on entry and an
    atomic_sub on it on exit.

    Since this is the slow path, I guess that wouldn't really impact performance...
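
    A minimal sketch of that idea (the polled_rwlock_t wrapper, the
    "waiters_count" field and writers_waiting_for_lock() are hypothetical names,
    and this version counts waiters at the C level rather than inside
    __write_lock_failed itself):

    #include <linux/spinlock.h>	/* rwlock_t, write_lock(), write_trylock() */
    #include <asm/atomic.h>		/* atomic_t helpers */

    /* Hypothetical sketch: expose a "writer is waiting" hint to readers. */
    typedef struct {
    	rwlock_t	lock;
    	atomic_t	waiters_count;	/* writers currently in the slow path */
    } polled_rwlock_t;

    static inline void polled_write_lock(polled_rwlock_t *l)
    {
    	if (write_trylock(&l->lock))
    		return;
    	atomic_inc(&l->waiters_count);	/* entering the slow path */
    	write_lock(&l->lock);
    	atomic_dec(&l->waiters_count);	/* got the lock, leaving the slow path */
    }

    static inline int writers_waiting_for_lock(polled_rwlock_t *l)
    {
    	return atomic_read(&l->waiters_count) != 0;
    }

    A reader like the hung-task check could then drop and re-acquire the lock
    whenever writers_waiting_for_lock() returns true, instead of bailing out after
    a fixed hung_task_check_count.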

