    Subject: Re: [PATCH -v8][RFC] mutex: implement adaptive spinning
    On Mon, 2009-01-12 at 18:13 +0200, Avi Kivity wrote:

    > One thing that worries me here is that the spinners will spin on a
    > memory location in struct mutex, which means that the cacheline holding
    > the mutex (which is likely to be under write activity from the owner)
    > will be continuously shared by the spinners, slowing the owner down when
    > it needs to unshare it. One way out of this is to spin on a location in
    > struct mutex_waiter, and have the mutex owner touch it when it schedules
    > out.

    Yeah, that is what pure MCS locks do -- however I don't think it's a
    feasible strategy for this spin/sleep hybrid.

    > So:
    > - each task_struct has an array of currently owned mutexes, appended to
    > by mutex_lock()

    That's not going to fly, I think. Lockdep does this, but it's very
    expensive and has some issues. We're currently at a limit of 48
    tracked locks per task, and still some code paths manage to exceed
    that.
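    For reference, this is roughly the per-task state lockdep keeps
    today (a trimmed sketch; the container struct is illustrative, in
    mainline these fields sit directly in struct task_struct under
    CONFIG_LOCKDEP):

        #define MAX_LOCK_DEPTH 48UL

        struct held_lock {                      /* trimmed to the idea */
            unsigned long acquire_ip;           /* where it was taken */
            struct lockdep_map *instance;       /* which lock it is */
        };

        struct lockdep_task_state {             /* illustrative name */
            int lockdep_depth;                  /* live entries */
            struct held_lock held_locks[MAX_LOCK_DEPTH];
        };

    Every acquire/release has to push and pop an entry here, which is
    where the cost comes from.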

    > - mutex waiters spin on mutex_waiter.wait, which they initialize to zero
    > - when switching out of a task, walk the mutex list, and for each mutex,
    > bump each waiter's wait variable, and clear the owner array

    Which is O(n) in the number of mutexes held (times the waiters on
    each), right in the context-switch path.
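
    Spelled out, the schedule-out hook in that scheme would look
    something like this (all names hypothetical, purely to make the
    cost visible):

        #define MAX_OWNED 48                /* cf. the lockdep limit */

        struct mutex_waiter {
            struct mutex_waiter *next;
            int wait;                       /* per-waiter spin location */
        };

        struct mutex {
            struct mutex_waiter *waiters;
        };

        struct task {
            struct mutex *owned[MAX_OWNED]; /* appended by mutex_lock() */
            int nr_owned;
        };

        /* Runs on every context switch of a lock-holding task. */
        static void owner_sched_out(struct task *tsk)
        {
            struct mutex_waiter *w;
            int i;

            /* O(mutexes held), times a walk over each one's waiters. */
            for (i = 0; i < tsk->nr_owned; i++)
                for (w = tsk->owned[i]->waiters; w; w = w->next)
                    w->wait = 1;            /* tell the spinner to sleep */

            tsk->nr_owned = 0;              /* clear the owner array */
        }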

    > - when unlocking a mutex, bump the nearest waiter's wait variable, and
    > remove from the owner array
    >
    > Something similar might be done to spinlocks to reduce cacheline
    > contention from spinners and the owner.

    Spinlocks can use 'pure' MCS locks.
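
    Roughly, with memory barriers hand-waved away (a sketch only, not
    proposed code -- each waiter spins on its own node, so the shared
    tail word is touched exactly once per acquire and once per
    release):

        struct mcs_node {
            struct mcs_node *next;
            int locked;                     /* private spin location */
        };

        static void mcs_lock(struct mcs_node **tail, struct mcs_node *me)
        {
            struct mcs_node *prev;

            me->next = NULL;
            me->locked = 0;

            prev = xchg(tail, me);          /* enqueue at the tail */
            if (!prev)
                return;                     /* queue was empty, ours */

            prev->next = me;
            while (!me->locked)             /* spin on our own line */
                cpu_relax();
        }

        static void mcs_unlock(struct mcs_node **tail, struct mcs_node *me)
        {
            if (!me->next) {
                /* No successor visible; try to empty the queue. */
                if (cmpxchg(tail, me, NULL) == me)
                    return;
                while (!me->next)           /* successor mid-enqueue */
                    cpu_relax();
            }
            me->next->locked = 1;           /* poke only his line */
        }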


