Subject: Re: [PATCHSET] workqueue: concurrency managed workqueue, take#6
On Mon, Jun 28, 2010 at 11:03:48PM +0200, Tejun Heo wrote:
> B. General documentation of Concurrency Managed Workqueue (cmwq)
> ================================================================


It would be nice to get this into Documentation/workqueue-design.txt,
as the design is complicated enough to deserve its own file :)



> == B-4. Concurrency managed shared worker pool
>
> For any worker pool, managing the concurrency level (how many workers
> are executing simultaneously) is an important issue. cmwq tries to
> keep the concurrency at a minimal but sufficient level.
>
> Concurrency management is implemented by hooking into the scheduler.
> The gcwq is notified whenever a busy worker wakes up or sleeps and
> keeps track of the level of concurrency. Generally, works aren't
> supposed to be cpu cycle hogs and maintaining just enough concurrency
> to prevent work processing from stalling is optimal. As long as
> there are one or more workers running on the cpu, no new worker is
> scheduled; but when the last running worker blocks, the gcwq
> immediately schedules a new worker so that the cpu doesn't sit idle
> while there are pending works.
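
Just to check my understanding of the hook, the bookkeeping is roughly
the following? (Plain C model of the idea, not the actual patchset
code; pool, nr_running, work_pending and wake_idle_worker are made-up
names, and locking/per-cpu details are elided.)

	#include <stdbool.h>

	struct pool {
		int nr_running;		/* busy workers currently on this cpu */
		bool work_pending;	/* queued works nobody has picked up yet */
	};

	/* stand-in for kicking one of the cached idle kthreads */
	static void wake_idle_worker(struct pool *p)
	{
		(void)p;		/* would wake_up_process() an idle worker */
	}

	/* scheduler tells us a busy worker is about to sleep */
	static void worker_sleeping(struct pool *p)
	{
		/* last runnable worker leaving while works are still queued? */
		if (--p->nr_running == 0 && p->work_pending)
			wake_idle_worker(p);	/* don't let the cpu go idle */
	}

	/* scheduler tells us a previously sleeping busy worker runs again */
	static void worker_waking_up(struct pool *p)
	{
		p->nr_running++;
	}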
>
> This allows using a minimal number of workers without losing
> execution bandwidth. Keeping idle workers around costs nothing other
> than the memory space for the kthreads, so cmwq holds onto idle ones
> for a while before killing them.
>
> As multiple execution contexts are available for each wq, deadlocks
> around execution contexts are much harder to create. The default wq,
> system_wq, has a maximum concurrency level of 256 and, unless there
> is a scenario which can result in a dependency loop involving more
> than 254 workers, it won't deadlock.
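
For other readers, the textbook dependency loop here is a work that
waits on another work queued on the same wq; something like this
made-up pair (work_a/work_b are illustrative, the workqueue calls are
the usual ones), which deadlocks with a single execution context but
completes once a second context is available:

	#include <linux/workqueue.h>

	static void work_b_fn(struct work_struct *work)
	{
		/* does whatever work_a is waiting for */
	}
	static DECLARE_WORK(work_b, work_b_fn);

	static void work_a_fn(struct work_struct *work)
	{
		schedule_work(&work_b);
		/*
		 * With one execution context, work_a occupies the only
		 * worker while waiting here, so work_b can never run.
		 * With two or more contexts, work_b runs and this returns.
		 */
		flush_work(&work_b);
	}
	static DECLARE_WORK(work_a, work_a_fn);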



Why this arbitrary limitation?

Thanks.


