Subject: Re: [PATCH v2 4/5] sched/pelt: Add a new runnable average signal
From: Valentin Schneider <>
Date: Tue, 18 Feb 2020 21:19:16 +0000
On 14/02/2020 15:27, Vincent Guittot wrote:
> Now that runnable_load_avg has been removed, we can replace it with a new
> signal that will highlight the runnable pressure on a cfs_rq. This signal
> tracks the waiting time of tasks on the rq and can help to better define
> the state of rqs.
>
> Currently, only util_avg is used to define the state of a rq:
> A rq with more than around 80% utilization and more than one task is
> considered overloaded.
>
> But the util_avg signal of a rq can become temporarily low after a task
> has migrated onto another rq, which can bias the classification of the rq.
>
> When tasks compete for the same rq, their runnable average signal will be
> higher than util_avg, as it will include the waiting time, and we can use
> this signal to better classify cfs_rqs.
>
> The new runnable_avg will track the runnable time of a task, which simply
> adds the waiting time to the running time. The runnable_avg of a cfs_rq
> will be the /Sum of the se's runnable_avg, and the runnable_avg of a group
> entity will follow the one of the cfs_rq, similarly to util_avg.
>
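To make the util vs runnable distinction concrete, here's a toy userspace
model of my own (not kernel code; the decay factor only approximates PELT's
y^32 = 0.5 per-1024us-period decay): a task that runs half the time but
waits on the rq the other half ends up with util around half of SCALE, while
its runnable sits near SCALE.

	#include <stdio.h>

	#define SCALE 1024
	/* PELT decays by y per 1024us period, y^32 = 0.5 -> y ~= 0.97857 */
	static double decay(double v) { return v * 0.97857206; }

	int main(void)
	{
		double util_sum = 0.0, runnable_sum = 0.0;
		int period;

		/*
		 * Two always-runnable tasks sharing one CPU: each one runs
		 * every other period, but is runnable (running + waiting)
		 * in all of them.
		 */
		for (period = 0; period < 1000; period++) {
			util_sum     = decay(util_sum)     + (period % 2 ? 0 : SCALE);
			runnable_sum = decay(runnable_sum) + SCALE;
		}

		/* Normalize the sums to the [0, SCALE] range of the *_avg signals */
		printf("util ~%.0f, runnable ~%.0f\n",
		       util_sum * (1 - 0.97857206),
		       runnable_sum * (1 - 0.97857206));
		return 0;
	}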
I did a bit of playing around with tracepoints and it seems to be behaving fine. For instance, if I spawn 12 always-runnable tasks (sysbench --test=cpu) on my Juno (6 CPUs), I get a system-wide runnable value (\Sum cpu_runnable()) of about 12K. I've only eyeballed them, but migration of the signal values seems fine too.
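(Sanity check on that figure, assuming each always-runnable task's runnable_avg saturates near SCHED_CAPACITY_SCALE: 12 * 1024 = 12288, i.e. the ~12K I'm seeing.)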
I have a slight worry that the rq-wide runnable signal might be too easy to inflate, since we aggregate for *all* runnable tasks, and that may not play well with your group_is_overloaded() change (despite having the imbalance_pct on the "right" side).
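For reference, this is the shape of the check I have in mind; it's
paraphrased from my reading of patch 5, so treat the exact form as an
assumption rather than the verbatim code:

	/*
	 * With imbalance_pct > 100, runnable must exceed capacity by the
	 * imbalance margin before the group counts as overloaded -- that's
	 * what I mean by the pct being on the "right" side.
	 */
	static inline int runnable_overloaded(unsigned long group_capacity,
					      unsigned long group_runnable,
					      unsigned int imbalance_pct)
	{
		return group_capacity * imbalance_pct < group_runnable * 100;
	}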
In any case I'll need to convince myself of it with some messing around, and this concerns patch 5 more than patch 4. So FWIW for this one:
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
I also have one (two) more nit(s) below.
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> @@ -227,14 +231,14 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
>  	 * Step 1: accumulate *_sum since last_update_time. If we haven't
>  	 * crossed period boundaries, finish.
>  	 */
> -	if (!accumulate_sum(delta, sa, load, running))
> +	if (!accumulate_sum(delta, sa, load, runnable, running))
>  		return 0;
>
>  	return 1;
>  }
>
>  static __always_inline void
> -___update_load_avg(struct sched_avg *sa, unsigned long load)
> +___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runnable)
>  {
>  	u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib;
>
> @@ -242,6 +246,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
>  	 * Step 2: update *_avg.
>  	 */
>  	sa->load_avg = div_u64(load * sa->load_sum, divider);
> +	sa->runnable_avg =	div_u64(runnable * sa->runnable_sum, divider);
                        ^^^^^^                 ^^^^^^^^
                          a)                      b)

a) That's a tab
b) The value being passed is always 1, do we really need to expose it as a parameter?
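To illustrate nit b), Step 2 for the new signal boils down to the
following (a userspace sketch of the math, not the kernel function):

	#include <stdint.h>

	#define LOAD_AVG_MAX 47742 /* maximum possible *_sum, from pelt.c */

	static uint64_t runnable_avg(uint64_t runnable_sum, uint32_t period_contrib)
	{
		uint32_t divider = LOAD_AVG_MAX - 1024 + period_contrib;

		/* "runnable" is 1 at every call site, so the multiply is a no-op */
		return (1 * runnable_sum) / divider;
	}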