From: Vincent Guittot <>
Date: Thu, 4 May 2017 21:02:39 +0200
Subject: Re: [PATCH 2/2] sched/fair: Always propagate runnable_load_avg
Hi Tejun,
On 4 May 2017 at 19:43, Tejun Heo <tj@kernel.org> wrote:
> Hello,
>
> On Thu, May 04, 2017 at 10:19:46AM +0200, Vincent Guittot wrote:
>> > schbench inside a cgroup and have some base load, it is actually
>> > expected to show worse latency. You need to give higher weight to the
>> > cgroup matching the number of active threads (to be accurate, scaled
>> > by duty cycle, but it shouldn't matter too much in practice).
>>
>> I don't have to change any cgroup weight with mainline to get
>> good numbers, which means that the base load, which is quite close
>> to null, is probably not the problem.
>
> So, while that *could* be the case, it could also be the baseline
> incorrectly favoring the nested cfs_rqs over other tasks because of
> the nested runnables being inflated with blocked load avgs. I think
> it'd be a good idea to test with matching weights to put things on
> even ground.
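For concreteness, here is a minimal sketch (not part of the original exchange) of what giving the cgroup a matching weight could look like on a cgroup v1 hierarchy: the default weight of 1024 is multiplied by the number of active schbench threads, optionally scaled by their duty cycle. The cgroup path, thread count and duty cycle below are illustrative assumptions.

/*
 * Illustrative only: scale the schbench cgroup's cpu.shares to match
 * the number of active worker threads, as suggested above. The path
 * and the example values are assumptions, not taken from the thread.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/cpu/schbench/cpu.shares"; /* assumed cgroup */
	unsigned int nr_threads = 16;   /* active schbench threads (example) */
	double duty_cycle = 1.0;        /* fraction of time each thread is runnable */
	unsigned int shares = (unsigned int)(1024 * nr_threads * duty_cycle);
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%u\n", shares);
	fclose(f);
	printf("set %s to %u\n", path, shares);
	return 0;
}

With the default weight, the nested group competes at its level like a single task no matter how many threads it contains; bumping the weight this way is what "put things on even ground" refers to above.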
In the trace I have uploaded, you will see that the regressions happen
while there are no other runnable threads around, so it's not a matter
of background activity disturbing schbench.
Thanks,
Vincent
>
> Thanks.
>
> --
> tejun