Date: 2008-06-14
From: Oleg Nesterov
Subject: Re: [PATCH 1/2] workqueues: implement flush_work()
On 06/14, Jarek Poplawski wrote:
>
> On Fri, Jun 13, 2008 at 06:28:01PM +0400, Oleg Nesterov wrote:
> > (on top of [PATCH] workqueues: insert_work: use "list_head *" instead of "int tail"
> > http://marc.info/?l=linux-kernel&m=121328944230175)
> >
> > Most users of flush_workqueue() can be changed to use cancel_work_sync(),
> > but sometimes we really need to wait for completion, and cancelling is not
> > an option. schedule_on_each_cpu() is a good example.
> >
> > Add the new helper, flush_work(work), which waits for the completion of the
> > specific work_struct.
>
> This all looks right, and better than the current flush_, but... the main
> problem is that in probably 90% of cases cancel_ + running the work
> function directly (if it was cancelled) should be both more efficient and
> safer wrt locking (which you convinced me of, BTW).

Yes, in most cases cancel_ is enough. And it is safer; note the limitations
of flush_work(). Basically, flush_work(work) should only be used when this
work_struct can be queued at most once at a time.
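For illustration, a made-up driver-style sketch (my_dev and my_dev_quiesce
are hypothetical names, not from the patch):

	struct my_dev {
		struct work_struct work;	/* queued at most once */
	};

	static void my_dev_quiesce(struct my_dev *dev)
	{
		/*
		 * We need the handler to have actually run, not merely
		 * to be off the queue, so cancel_work_sync() won't do:
		 */
		flush_work(&dev->work);
	}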

> Another question is whether schedule_on_each_cpu() is really such a good
> example here: it seems those "xxx && yyy" examples could be faster,
> but I've lost track of that earlier thread.

schedule_on_each_cpu() can't use cancel_ + ->func(); the code must be
executed on the remote CPU.
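Roughly, a simplified sketch of schedule_on_each_cpu() on top of
flush_work(); queue_on_cpu() below is just a placeholder for the internal
per-CPU queueing, not a real helper:

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu;
		struct work_struct *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			INIT_WORK(work, func);
			queue_on_cpu(cpu, work);	/* placeholder */
		}
		/*
		 * ->func() must run on each remote CPU, so cancelling
		 * is not an option; we can only wait for completion.
		 */
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));
		put_online_cpus();

		free_percpu(works);
		return 0;
	}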

And note that flush_work() doesn't iterate over all CPUs. This is the
reason why it is limited, but it also means it is faster than a
flush_work_sync() == flush_work() + wait_on_work() would be.
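To make the single-CPU point concrete, this is the shape of flush_work(),
condensed from the patch (might_sleep() and the workqueue-unload rechecks
are elided):

	int flush_work(struct work_struct *work)
	{
		struct cpu_workqueue_struct *cwq = get_wq_data(work);
		struct wq_barrier barr;
		int active = 0;

		if (!cwq)
			return 0;

		/* only the CPU this work was queued on is touched */
		spin_lock_irq(&cwq->lock);
		if (!list_empty(&work->entry)) {
			/* still pending: put the barrier right after it */
			insert_wq_barrier(cwq, &barr, work->entry.next);
			active = 1;
		} else if (cwq->current_work == work) {
			/* running now: put the barrier at the list head */
			insert_wq_barrier(cwq, &barr, cwq->worklist.next);
			active = 1;
		}
		spin_unlock_irq(&cwq->lock);

		if (active)
			wait_for_completion(&barr.done);
		return active;
	}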

> BTW, flush_work() probably needs a lockdep annotation similar to
> flush_workqueue().

Yes, I know... but I'd prefer to send another patch; I'm a bit paranoid
when it comes to copy-and-pasting code.
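The annotation should presumably mirror the acquire/release pair in
flush_workqueue(), something like this (untested):

	/* in flush_work(), once we know cwq != NULL */
	lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
	lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);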

> Otherwise this all looks OK to me.

Thanks for the review!

Oleg.


