Date: Tue, 30 Jan 2024 18:38:22 +0100
From: Vincent Guittot <>
Subject: Re: [PATCH v2 8/8] sched/pelt: Introduce PELT multiplier
On Friday 08 Dec 2023 at 00:23:42 (+0000), Qais Yousef wrote:
> From: Vincent Donnefort <vincent.donnefort@arm.com>
>
> The new sched_pelt_multiplier boot param allows a user to set a clock
> multiplier to x2 or x4 (x1 being the default). This clock multiplier
> artificially speeds up PELT ramp up/down, similarly to using a faster
> half-life than the default 32ms.
>
>  - x1: 32ms half-life
>  - x2: 16ms half-life
>  - x4:  8ms half-life
>
> Internally, a new clock is created: rq->clock_task_mult. It sits in the
> clock hierarchy between rq->clock_task and rq->clock_pelt.
>
> The param is set as read only and can only be changed at boot time via
>
>	kernel.sched_pelt_multiplier=[1, 2, 4]
>
> PELT has a big impact on the overall system response and reactiveness to
> change. A smaller PELT HF means it'll require less time to reach the
> maximum performance point of the system when the system becomes fully
> busy; and equally a shorter time to go back to the lowest performance
> point when the system goes back to idle.
>
> This faster reaction impacts both dvfs response and migration time
> between clusters in HMP systems.
>
> Smaller PELT values are expected to give better performance at the cost
> of more power. Underpowered systems can particularly benefit from
> smaller values. Powerful systems can still benefit from smaller values
> if they want to be tuned more towards perf and power is not their major
> concern.
>
> This combined with response_time_ms from schedutil should give the user
> and sysadmin a deterministic way to control the triangular trade-off of
> power, perf and thermals for their system. The default response_time_ms
> will halve as the PELT HF halves.
>
> Update approximate_{util_avg, runtime}() to take into account the PELT
> HALFLIFE multiplier.
>
> Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> [Converted from sysctl to boot param and updated commit message]
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
> ---
>  kernel/sched/core.c  |  2 +-
>  kernel/sched/pelt.c  | 52 ++++++++++++++++++++++++++++++++++++++++++--
>  kernel/sched/pelt.h  | 42 +++++++++++++++++++++++++++++++----
>  kernel/sched/sched.h |  1 +
>  4 files changed, 90 insertions(+), 7 deletions(-)
>
..
> +__read_mostly unsigned int sched_pelt_lshift;
> +static unsigned int sched_pelt_multiplier = 1;
> +
> +static int set_sched_pelt_multiplier(const char *val, const struct kernel_param *kp)
> +{
> +	int ret;
> +
> +	ret = param_set_int(val, kp);
> +	if (ret)
> +		goto error;
> +
> +	switch (sched_pelt_multiplier) {
> +	case 1:
> +		fallthrough;
> +	case 2:
> +		fallthrough;
> +	case 4:
> +		WRITE_ONCE(sched_pelt_lshift,
> +			   sched_pelt_multiplier >> 1);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		goto error;
> +	}
> +
> +	return 0;
> +
> +error:
> +	sched_pelt_multiplier = 1;
> +	return ret;
> +}
> +
> +static const struct kernel_param_ops sched_pelt_multiplier_ops = {
> +	.set = set_sched_pelt_multiplier,
> +	.get = param_get_int,
> +};
> +
> +#ifdef MODULE_PARAM_PREFIX
> +#undef MODULE_PARAM_PREFIX
> +#endif
> +/* XXX: should we use sched as prefix? */
> +#define MODULE_PARAM_PREFIX "kernel."
> +module_param_cb(sched_pelt_multiplier, &sched_pelt_multiplier_ops, &sched_pelt_multiplier, 0444);
> +MODULE_PARM_DESC(sched_pelt_multiplier, "PELT HALFLIFE helps control the responsiveness of the system.");
> +MODULE_PARM_DESC(sched_pelt_multiplier, "Accepted values: 1 32ms PELT HALFLIFE - roughly 200ms to go from 0 to max performance point (default).");
> +MODULE_PARM_DESC(sched_pelt_multiplier, "                 2 16ms PELT HALFLIFE - roughly 100ms to go from 0 to max performance point.");
> +MODULE_PARM_DESC(sched_pelt_multiplier, "                 4  8ms PELT HALFLIFE - roughly 50ms to go from 0 to max performance point.");
> +
>  /*
>   * Approximate the new util_avg value assuming an entity has continued to run
>   * for @delta us.
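Side note for readers following along: the multiplier -> lshift mapping above
and the resulting effective half-life can be illustrated with a minimal
user-space sketch. This is an illustration only, not the kernel code; the
"~6 half-lives" ramp estimate is just an approximation of the "roughly 200ms"
figure quoted in the parameter description for the default 32ms half-life.

/*
 * Illustration only: mirror the multiplier -> lshift mapping used in
 * set_sched_pelt_multiplier() above and print the effective half-life.
 * Assumes ~6 half-lives to get close to the max performance point.
 */
#include <stdio.h>

#define PELT_HALFLIFE_MS	32	/* default PELT half-life */

int main(void)
{
	unsigned int multipliers[] = { 1, 2, 4 };

	for (unsigned int i = 0; i < 3; i++) {
		unsigned int mult = multipliers[i];
		unsigned int lshift = mult >> 1;		/* 1 -> 0, 2 -> 1, 4 -> 2 */
		unsigned int halflife = PELT_HALFLIFE_MS >> lshift;

		printf("multiplier=%u lshift=%u half-life=%ums ramp~=%ums\n",
		       mult, lshift, halflife, 6 * halflife);
	}

	return 0;
}

This prints roughly 192ms/96ms/48ms, which lines up with the 200ms/100ms/50ms
figures in the MODULE_PARM_DESC text.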
..
> +
>  static inline void
> -update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> +update_rq_clock_task_mult(struct rq *rq, s64 delta) { }
>
>  static inline void
>  update_idle_rq_clock_pelt(struct rq *rq) { }
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index bbece0eb053a..a7c89c623250 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1029,6 +1029,7 @@ struct rq {
>  	u64			clock;
>  	/* Ensure that all clocks are in the same cache line */
>  	u64			clock_task ____cacheline_aligned;
> +	u64			clock_task_mult;
I'm not sure that we want yet another clock, and it doesn't apply to irq_avg either.
What about the below? It is simpler and I think it covers all cases:
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index f951c44f1d52..5cdd147b7abe 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -180,6 +180,7 @@ static __always_inline int
 ___update_load_sum(u64 now, struct sched_avg *sa,
 		  unsigned long load, unsigned long runnable, int running)
 {
+	int time_shift;
 	u64 delta;
 	delta = now - sa->last_update_time;
@@ -195,12 +196,17 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
 	/*
 	 * Use 1024ns as the unit of measurement since it's a reasonable
 	 * approximation of 1us and fast to compute.
+	 * On top of this, we can change the half-life period from the default
+	 * 32ms to a shorter value. This is equivalent to left shifting the
+	 * time.
+	 * Merge both right and left shifts in one single right shift.
 	 */
-	delta >>= 10;
+	time_shift = 10 - sched_pelt_lshift;
+	delta >>= time_shift;
 	if (!delta)
 		return 0;
-	sa->last_update_time += delta << 10;
+	sa->last_update_time += delta << time_shift;
 	/*
 	 * running is a subset of runnable (weight) so running can't be set if
>  	u64			clock_pelt;
>  	unsigned long		lost_idle_time;
>  	u64			clock_pelt_idle;
> --
> 2.34.1
>
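For what it's worth, the shift arithmetic the proposal relies on (folding the
ns -> ~1us right shift and the half-life left shift into a single right shift)
can be sanity-checked in isolation. A small user-space sketch, illustration
only, assuming lshift stays small enough that delta << lshift does not
overflow a u64:

/*
 * Illustration only: check that (delta << lshift) >> 10 equals
 * delta >> (10 - lshift) for unsigned arithmetic, i.e. that merging
 * the two shifts, as done in the proposed ___update_load_sum() change,
 * matches applying them separately.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t delta = 123456789ULL;	/* arbitrary time delta in ns */

	for (unsigned int lshift = 0; lshift <= 2; lshift++) {
		uint64_t separate = (delta << lshift) >> 10;	/* speed up time, then scale to ~us */
		uint64_t merged = delta >> (10 - lshift);	/* single shift, as in the proposal */

		assert(separate == merged);
		printf("lshift=%u scaled_delta=%llu\n",
		       lshift, (unsigned long long)merged);
	}

	return 0;
}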