Subject: Re: [PATCH 0/3] Convert libata pio task to slow-work
On Thu, Aug 27 2009, Tejun Heo wrote:
> Hello, Jens.
>
> Jens Axboe wrote:
> >> It would be nice if merging of this series and the lazy work can be
> >> held a bit but there's no harm in merging either. If the concurrency
> >> managed workqueue turns out to be a good idea, we can replace it then.
> >
> > It can wait, what you describe above sounds really cool and would
> > hopefully allow us to get rid of all workqueues (provided it scales well
> > and doesn't fall down on cache line contention with many different
> > instances pounding on it).
>
> Almost all operations are per-cpu so cache lines shouldn't bounce too
> much. The only part I worry about is the part which checks whether a
> work is currently executing on the current cpu which currently is
> implemented as a hash table. The hash table is only 16 pointers long
> and will be mostly empty so hopefully it doesn't add any significant
> overhead.
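
Just to make sure I follow the busy check: I imagine it looks roughly
like the sketch below. The names, types and the hash function here are
guesses on my part; only the small, mostly empty 16-pointer hash keyed
by the work pointer is from your description.

#define BUSY_HASH_SIZE	16	/* 16 buckets, mostly empty */

struct busy_entry {
	struct work_struct *work;	/* work currently executing */
	struct busy_entry *next;	/* collision chain */
};

struct cpu_queue {
	struct busy_entry *busy_hash[BUSY_HASH_SIZE];
	/* ... rest of the per-cpu queue state ... */
};

/* cheap pointer hash; works are pointer-aligned, so drop the low bits */
static unsigned int busy_hash_bucket(struct work_struct *work)
{
	return ((unsigned long)work >> 5) & (BUSY_HASH_SIZE - 1);
}

/* is @work already executing on the cpu that owns @cq? */
static int work_is_busy(struct cpu_queue *cq, struct work_struct *work)
{
	struct busy_entry *e;

	for (e = cq->busy_hash[busy_hash_bucket(work)]; e; e = e->next)
		if (e->work == work)
			return 1;
	return 0;
}

If that's the shape of it, a lookup is just a walk of an almost always
empty chain, so I agree the overhead should be in the noise.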

OK, we'll let time and experimentation be the judge.

> > Care to post it? I know you don't think it's perfect yet, but it would
> > make a lot more sense to throw effort into this rather than waste time
> > on partial solutions.
>
> I have the code printed out, full of red markings from proofreading,
> and the flush implementation is mostly broken. Please give me a
> couple of days. I'll post a rough unsplit version which at least
> compiles with the planned changes applied by the end of the week. :-)

Alright, fair enough.

One question - do the 'exposed' workqueues (the ones that drivers
allocate/create) sitting in front of the global cpu queue allow more
than one thread per cpu, or is that property retained for the global cpu
queue (where it is a necessity)?

--
Jens Axboe


