Subject: Screwing with the concurrency limit
First off, wild applause for cmwq. The limitations of the old workqueues
were a major irritation; I think your new implementation is fabulous.

However, when merging bcache with mainline, I ran into a bit of a thorny
issue. Bcache relies heavily on workqueues: updates to the cache's btree
have to be done after every relevant IO completes. Additionally, btree
insertions can involve sleeping on IO while the root of the tree isn't
write locked - so we'd like not to block other work items from
completing if we don't have to.

So, one might expect the way to get the best performance would be:
alloc_workqueue("bcache", WQ_HIGHPRI|WQ_MEM_RECLAIM, 0)
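
For concreteness, here's roughly what that setup would look like - only a
sketch with made-up names (bcache_wq, btree_write_work, struct btree_write),
not the actual bcache code:

	#include <linux/kernel.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *bcache_wq;

	struct btree_write {
		struct work_struct	work;
		/* ... btree update state ... */
	};

	/* Work function: applies one btree update, possibly sleeping on IO. */
	static void btree_write_work(struct work_struct *w)
	{
		struct btree_write *bw = container_of(w, struct btree_write, work);

		pr_debug("bcache: applying btree update %p\n", bw);
		/* ... walk the btree and insert; this path may sleep ... */
	}

	static int bcache_wq_init(void)
	{
		/* max_active == 0 asks for the default concurrency limit. */
		bcache_wq = alloc_workqueue("bcache",
					    WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
		return bcache_wq ? 0 : -ENOMEM;
	}

	/* Called from an IO completion path: */
	static void bcache_queue_btree_write(struct btree_write *bw)
	{
		INIT_WORK(&bw->work, btree_write_work);
		queue_work(bcache_wq, &bw->work);
	}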

Trouble is, sometimes we do write lock the root of the btree, blocking
everything else from getting anything done. The end result is:
root@moria:~# ps ax|grep kworker|wc -l
1550

(running dbench in a VM with disks in tmpfs). Performance is fine (I
think; I haven't tried to benchmark rigorously), but that's annoying.

I think the best way I can express it is that bcache normally wants a
concurrency limit of 1, except when we're blocking and we aren't write
locking the root of the btree.
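
(For comparison, a hard cap of one is just the max_active argument - again
only a sketch, not actual bcache code:

	bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 1);

but if I understand max_active correctly, that would also serialize the
insertions that merely sleep on IO, which is exactly what we want to avoid.)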

So, do you think there might be some sane way of doing this with cmwq?
Some way to say "Don't count the work item I'm in right now against the
workqueue's concurrency limit anymore". If such a thing could be done, I
think it'd be the perfect solution (and I'll owe you a case of your
choice of beer :)

