    Subject: Re: [patch 05/15] Generic Mutex Subsystem, mutex-core.patch

    * Steven Rostedt <rostedt@goodmis.org> wrote:

    > How expensive is the xchg? __mutex_lock_common is called even when
    > the task is going to wake up right away. Maybe it would be more
    > efficient to add something like:
    >
    > if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
    >         debug_set_owner(lock, ti __IP__);
    >         debug_unlock_irqrestore(&debug_lock, *flags, ti);
    >         return 1;
    > }
    >
    > This way we save the overhead of grabbing another spinlock, adding
    > the task to the wait_list and changing its state.

    in the first pass we definitely need to add ourselves to the list
    first - hence we have to grab the lock. Even after the schedule(), we
    have to xchg the count to -1, not 0. This is crucial to the 'don't
    drop the ball' property of the one-waiter-in-flight logic - we must
    not lose the -1 'there are more waiters pending' state. Plus, we have
    to grab the lock because we remove ourselves from the wait-list after
    the schedule(). So I'm not sure your suggested optimization is
    possible.
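
    To make the ordering concrete, here is a minimal user-space sketch
    of the slowpath shape described above (C11 atomics, hypothetical
    names, the wait_lock/wait_list handling elided into comments - an
    illustration of the argument, not the actual mutex-core.patch code):

        /*
         * count:  1 = unlocked
         *         0 = locked, no waiters
         *        -1 = locked, waiters may be pending
         */
        #include <stdatomic.h>

        struct sketch_mutex {
                atomic_int count;
                /* spinlock_t wait_lock; struct list_head wait_list; -- elided */
        };

        static void sketch_lock_slowpath(struct sketch_mutex *lock)
        {
                /* spin_lock(&lock->wait_lock);
                   list_add_tail(&waiter.list, &lock->wait_list); */

                for (;;) {
                        /*
                         * Unconditionally leave -1 behind: even if we
                         * take the lock here (old value was 1), other
                         * tasks may still be queued on the wait_list,
                         * so the unlock path must see 'waiters
                         * pending'.  A cmpxchg(1, 0) would instead mark
                         * the lock as 'no waiters' while waiters may
                         * still be queued - dropping the ball.
                         */
                        if (atomic_exchange(&lock->count, -1) == 1)
                                break;  /* lock acquired */
                        /* set TASK_UNINTERRUPTIBLE, drop wait_lock,
                           schedule(), re-take wait_lock -- elided */
                }

                /* list_del(&waiter.list);
                   if (list_empty(&lock->wait_list))
                           atomic_store(&lock->count, 0);
                   spin_unlock(&lock->wait_lock); */
        }

    The final relax-back-to-0 step (taken only once the wait_list has
    drained) is what lets the unlock fastpath become cheap again after
    the contention is over.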

    Ingo
