Subject: Re: [PATCH 39/40] gfs2: use workqueue instead of slow-work
From: Steven Whitehouse <swhiteho@redhat.com>
Date: Mon, 18 Jan 2010
Hi,

On Mon, 2010-01-18 at 20:24 +0900, Tejun Heo wrote:
> On 01/18/2010 06:45 PM, Steven Whitehouse wrote:
> > Hi,
> >
> > On Mon, 2010-01-18 at 09:57 +0900, Tejun Heo wrote:
> >> Workqueue can now handle high concurrency. Use system_long_wq instead
> >> of slow-work.
> >>
> >> Signed-off-by: Tejun Heo <tj@kernel.org>
> >> Cc: Steven Whitehouse <swhiteho@redhat.com>
> >
> > Acked-by: Steven Whitehouse <swhiteho@redhat.com> on two conditions:
> >
> > i) That scheduling work on this new workqueue will not require any
> > GFP_KERNEL allocations (even hidden ones such as starting new threads)
> > before the work runs. This is required since the recovery code must not
> > call into the fs until after its recovered.
>
> Oh, if that's the case, it needs its own wq with a rescuer. I thought
> the recovery path wasn't invoked during allocation. slow-work didn't
> guarantee such thing either. Anyways, changing that is pretty easy.
>
> Thanks.
>

Hmm, I thought I'd checked slow work pretty carefully before I decided
to use it :( Looking at it though, it's pretty unlikely that it would
cause a problem. We can be 100% safe simply by increasing the number of
slow work threads to one per mounted gfs2 fs (assuming no other slow
work users).

Even then, slow work starts new threads by scheduling a slow work item,
so it looks like the recovery item would run before the slow work that
starts a new thread. That makes it much less likely to cause a problem
than if the new thread had to be started before the slow work item was
executed. We haven't seen a problem during testing so far.
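
If a dedicated workqueue with a rescuer is the way to go, I assume the
gfs2 side would end up looking roughly like the sketch below (untested,
not against your tree; the names are placeholders and the rescuer flag
may be spelled differently in the cmwq series):

#include <linux/init.h>
#include <linux/workqueue.h>

/*
 * Placeholder for the existing journal recovery handler; the real
 * recovery work function in gfs2 would go here.
 */
static void recovery_fn(struct work_struct *work)
{
}

static DECLARE_WORK(recovery_work, recovery_fn);

/*
 * Dedicated queue with a rescuer thread, so queued recovery work can
 * always make forward progress without any GFP_KERNEL allocation
 * (such as forking a new worker) on the queueing path.
 */
static struct workqueue_struct *gfs_recovery_wq;

static int __init gfs2_recovery_wq_init(void)
{
	gfs_recovery_wq = alloc_workqueue("gfs2_recovery", WQ_MEM_RECLAIM, 0);
	if (!gfs_recovery_wq)
		return -ENOMEM;

	/* Queueing itself never allocates; the rescuer guarantees the
	 * work runs even when memory is tight. */
	queue_work(gfs_recovery_wq, &recovery_work);
	return 0;
}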

Anyway, if it's easy to solve that problem in the new code, that's all
good :-) Thanks for pointing out this issue,

Steve.



