Subject: Re: [PATCH] bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue
On Wed, 2018-05-23 at 10:56 -0700, Tejun Heo wrote:

> The events leading to the lockup are...
>
> 1. A lot of cgwb_release_workfn() is queued at the same time and all
> system_wq kworkers are assigned to execute them.
>
> 2. They all end up calling synchronize_rcu_expedited(). One of them
> wins and tries to perform the expedited synchronization.
>
> 3. However, that involves queueing rcu_exp_work to system_wq and
> waiting for it. Because #1 is holding all available kworkers on
> system_wq, rcu_exp_work can't be executed. cgwb_release_workfn()
> is waiting for synchronize_rcu_expedited() which in turn is
> waiting for cgwb_release_workfn() to free up some of the kworkers.
>
> We shouldn't be scheduling hundreds of cgwb_release_workfn() at the
> same time. There's nothing to be gained from that. This patch
> updates the cgwb release path to use a dedicated percpu workqueue
> with @max_active of 1.
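
(If I read the description right, the change presumably boils down
to something like the sketch below — my own reconstruction, not a
quote from the patch, so the exact names and init placement may
differ:)

  #include <linux/workqueue.h>
  #include <linux/percpu-refcount.h>
  #include <linux/backing-dev-defs.h>	/* struct bdi_writeback */

  static struct workqueue_struct *cgwb_release_wq;

  static int __init cgwb_init(void)
  {
  	/* bound (percpu) workqueue with @max_active of 1: at most one
  	 * release work in flight per CPU, so system_wq kworkers stay
  	 * available to run rcu_exp_work */
  	cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
  	if (!cgwb_release_wq)
  		return -ENOMEM;
  	return 0;
  }
  subsys_initcall(cgwb_init);

  static void cgwb_release(struct percpu_ref *refcnt)
  {
  	struct bdi_writeback *wb = container_of(refcnt,
  					struct bdi_writeback, refcnt);

  	/* previously queued to system_wq via schedule_work() */
  	queue_work(cgwb_release_wq, &wb->release_work);
  }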

Dumb question. Does setting max_active to 1 mean
that every cgwb_release_workfn() ends up forcing
another RCU grace period on the whole system, while
today you might have a bunch of them waiting on the
same RCU grace period advance?

Would it be faster to have some number (up to 16?)
push RCU once, at the same time, instead of having
each of them push RCU into the next grace period
one after another?
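
Concretely, the alternative I have in mind would be
something like this (hypothetical, just to illustrate
the question):

  /* hypothetical, for illustration only: allow up to 16 release
   * works in flight so their synchronize_rcu_expedited() calls can
   * batch onto shared grace periods, while still leaving most
   * system_wq kworkers free to run rcu_exp_work */
  cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 16);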

I may be overlooking something fundamental here,
but I thought I'd at least ask the question, just
in case :)

--
All Rights Reversed.