    Subject: Re: [PATCH 2/8] sched: __wake_up_locked() exported

    On Wed, Apr 07, 2010 at 07:11:05PM +0200, Michał Nazarewicz wrote:
    > >On Wed, Apr 07, 2010 at 03:41:29PM +0200, Michal Nazarewicz wrote:
    > >>The __wake_up_locked() function has been exported in case modules need it.
    >
    > On Wed, 07 Apr 2010 17:29:22 +0200, Greg KH <greg@kroah.com> wrote:
    > >What module needs it?
    >
    > FunctionFS (f_fs) uses it (thus FunctionFS Gadget (g_ffs) uses it).
    >
    > >Why is it needed?
    >
    > FunctionFS uses the wait_queue_head_t's spinlock to protect its own
    > data structure (a queue).
    >
    > In an earlier version of the code there was a separate spinlock for
    > the queue alone, so when waiting for an event the following had to
    > be used:
    >
    > #v+
    > spin_lock(&queue.lock);
    > while (queue.empty) {
    >         spin_unlock(&queue.lock);
    >         wait_event(queue.wait_queue, !queue.empty);
    >         spin_lock(&queue.lock);
    > }
    > ...
    > spin_unlock(&queue.lock);
    > #v-
    >
    > I disliked this code very much and at first hoped that there was some
    > "wait_event_holding_lock()" macro which would encapsulate the loop
    > shown above (similar to user-space condition variables; see
    > pthread_cond_wait(3) <http://linux.die.net/man/3/pthread_cond_wait>).
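    >
    > Something along these lines, say (a sketch only -- no such macro
    > exists in the kernel; the name and the extra lock argument are
    > invented):
    >
    > #v+
    > #define wait_event_holding_lock(wq, condition, lock)    \
    >         do {                                            \
    >                 while (!(condition)) {                  \
    >                         spin_unlock(lock);              \
    >                         wait_event(wq, condition);      \
    >                         spin_lock(lock);                \
    >                 }                                       \
    >         } while (0)
    > #v-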
    >
    > What makes matters worse is that wait_event() calls prepare_to_wait(),
    > which takes the wait_queue_head_t's spinlock, so in the end we unlock
    > one spinlock just to lock another in a situation where a single
    > spinlock would suffice.
    >
    > In searching for a better solution I stumbled across fs/timerfd.c,
    > which uses the wait_queue_head_t's spinlock to protect its own
    > structures:
    >
    > * http://lxr.linux.no/#linux+v2.6.33/fs/timerfd.c#L39
    > * http://lxr.linux.no/#linux+v2.6.33/fs/timerfd.c#L106
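    >
    > The pattern there is roughly the following (paraphrased from the
    > code behind the links above, not quoted verbatim):
    >
    > #v+
    > spin_lock_irqsave(&ctx->wqh.lock, flags);
    > ctx->expired = 1;          /* state protected by wqh.lock */
    > ctx->ticks++;
    > wake_up_locked(&ctx->wqh); /* expands to __wake_up_locked() */
    > spin_unlock_irqrestore(&ctx->wqh.lock, flags);
    > #v-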
    >
    > So, in the end, I decided to use the same approach in FunctionFS:
    > by reusing the wait_queue_head_t's spinlock I removed one (unneeded)
    > spinlock and decreased the number of spin_lock and spin_unlock
    > operations, i.e. to something like this (simplified code):
    >
    > #v+
    > spin_lock(&queue.wait_queue.lock);
    > my_wait_event_locked(&queue.wait_queue, !queue.empty);
    > ...
    > spin_unlock(&queue.wait_queue.lock);
    > #v-
    >
    > where my_wait_event_locked() is a function that does what wait_event()
    > does but assumes the wait_queue_head_t's lock is held when entering
    > and leaves it held when exiting.
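    >
    > A rough sketch of the idea (written as a macro here, since the
    > condition is an expression; the actual FunctionFS code differs in
    > details):
    >
    > #v+
    > /* Assumes wq->lock is held on entry; returns with it held. */
    > #define my_wait_event_locked(wq, condition)                      \
    >         do {                                                     \
    >                 DEFINE_WAIT(__wait);                             \
    >                                                                  \
    >                 while (!(condition)) {                           \
    >                         /* re-add; autoremove took us off */     \
    >                         if (list_empty(&__wait.task_list))       \
    >                                 __add_wait_queue_tail(wq,        \
    >                                                       &__wait);  \
    >                         set_current_state(TASK_UNINTERRUPTIBLE); \
    >                         spin_unlock(&(wq)->lock);                \
    >                         schedule();                              \
    >                         spin_lock(&(wq)->lock);                  \
    >                 }                                                \
    >                 if (!list_empty(&__wait.task_list))              \
    >                         __remove_wait_queue(wq, &__wait);        \
    >                 __set_current_state(TASK_RUNNING);               \
    >         } while (0)
    > #v-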
    >
    > >Are you sure that you really need it?
    >
    > I could live without it, but I strongly believe the code is somewhat
    > cleaner and better optimised when __wake_up_locked() is used.

    Ok, thanks for the detailed description, that makes more sense.

    Perhaps you should put this into the patch description next time :)

    Oh, and maybe we should move this type of functionality into the core
    kernel so that the two users don't have to open-code it separately? If
    there are two users already, odds are someone else will want to do the
    same thing in the near future.

    thanks,

    greg k-h
