From: Vincent Guittot
Date: Fri, 15 Nov 2019
Subject: Re: [PATCH v4] sched/freq: move call to cpufreq_update_util
On Fri, 15 Nov 2019 at 16:12, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Nov 15, 2019 at 02:37:27PM +0100, Vincent Guittot wrote:
> > On Fri, 15 Nov 2019 at 14:25, Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > Should not all 3 have their windows aligned and thus always return the
> > > exact same value?
> >
> > rt and dl yes but not irq
> >
> > But having aligned windows doesn't mean that they will all decay.
> > One can have been updated just before (during a dequeue, for example),
> > or at least less than 1ms before.
>
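(For readers following the PELT details: decay happens in full 1024us
periods, which is why an update less than ~1ms earlier crosses no period
boundary and reports no decay. Below is a minimal sketch of that check,
loosely modeled on accumulate_sum() in kernel/sched/pelt.c; the names and
units here are simplified for illustration and are not kernel code:)

	#include <stdint.h>

	/*
	 * Illustrative sketch only: PELT reports "decayed" once at
	 * least one full 1024us period has elapsed since the last
	 * update. An update during a dequeue less than ~1ms earlier
	 * means the next update crosses no period boundary and reports
	 * no decay, even when the windows are aligned.
	 */
	static uint64_t pelt_periods_elapsed(uint64_t now_us,
					     uint64_t last_update_us,
					     uint32_t period_contrib_us)
	{
		uint64_t delta_us = now_us - last_update_us;

		return (delta_us + period_contrib_us) / 1024;
	}
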
> Now, the thing is, if that update happened in sched/rt, then it wouldn't
> have called cpufreq anyway. And once we're idle longer than a period,
> they'll all decay at once.
>
> Except indeed that IRQ stuff, which runs out of sync.
>
> That is, I'm just not convinced it matters much if we keep rq->cfs
> on the list forever (like UP). Because we'll only stop calling
> update_blocked_averages() when everything hits 0, and up until that
> point, we'll get one update per period from rq->cfs.
>
> For good measure we can force an update when @done; at that point we
> know everything is 0.
>
> How is something like this?
>
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 545bcb90b4de..a99ac2aa4a23 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3508,9 +3508,6 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>  	cfs_rq->load_last_update_time_copy = sa->last_update_time;
>  #endif
> 
> -	if (decayed)
> -		cfs_rq_util_change(cfs_rq, 0);
> -
>  	return decayed;
>  }
>
> @@ -3620,8 +3617,12 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  		attach_entity_load_avg(cfs_rq, se, SCHED_CPUFREQ_MIGRATION);
>  		update_tg_load_avg(cfs_rq, 0);
> 
> -	} else if (decayed && (flags & UPDATE_TG))
> -		update_tg_load_avg(cfs_rq, 0);
> +	} else if (decayed) {
> +		cfs_rq_util_change(cfs_rq, 0);
> +
> +		if (flags & UPDATE_TG)
> +			update_tg_load_avg(cfs_rq, 0);
> +	}
>  }
>
>  #ifndef CONFIG_64BIT
> @@ -7453,7 +7454,7 @@ static void update_blocked_averages(int cpu)
>  	struct cfs_rq *cfs_rq, *pos;
>  	const struct sched_class *curr_class;
>  	struct rq_flags rf;
> -	bool done = true;
> +	bool done = true, decayed = false;
> 
>  	rq_lock_irqsave(rq, &rf);
>  	update_rq_clock(rq);
> @@ -7476,10 +7477,14 @@ static void update_blocked_averages(int cpu)
>  	 * list_add_leaf_cfs_rq() for details.
>  	 */
>  	for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
> +		bool last = cfs_rq == &rq->cfs;
>  		struct sched_entity *se;
> 
> -		if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq))
> +		if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
>  			update_tg_load_avg(cfs_rq, 0);
> +			if (last)

Using this "last" flag makes the code more readable.

> +				decayed = true;
> +		}
> 
>  		/* Propagate pending load changes to the parent, if any: */
>  		se = cfs_rq->tg->se[cpu];
> @@ -7490,7 +7495,7 @@ static void update_blocked_averages(int cpu)
>  		 * There can be a lot of idle CPU cgroups. Don't let fully
>  		 * decayed cfs_rqs linger on the list.
>  		 */
> -		if (cfs_rq_is_decayed(cfs_rq))
> +		if (!last && cfs_rq_is_decayed(cfs_rq))
>  			list_del_leaf_cfs_rq(cfs_rq);

Keeping the root cfs_rq in the list will not change anything now that
cfs_rq_util_change() has moved into update_load_avg():
cfs_rq_util_change() will no longer be called from here anyway.
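
(To make the resulting paths concrete, here is a rough sketch of who
triggers a frequency update with this patch applied; simplified, not
verbatim kernel code:)

	/*
	 * Sketch of the two call paths after this patch:
	 *
	 * enqueue/dequeue/tick:
	 *   update_load_avg()
	 *     update_cfs_rq_load_avg()        // returns decayed
	 *     if (decayed)
	 *       cfs_rq_util_change()          // -> cpufreq_update_util()
	 *
	 * decay of blocked load:
	 *   update_blocked_averages()
	 *     update_cfs_rq_load_avg()        // no cpufreq call from here
	 *     ...
	 *     if (decayed || done)
	 *       cpufreq_update_util(rq, 0);   // single call at the end
	 */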

>
>  		/* Don't need periodic decay once load/util_avg are null */
> @@ -7498,6 +7503,9 @@ static void update_blocked_averages(int cpu)
>  			done = false;
>  	}
>
> +	if (decayed || done)

I'm not sure I get why you want to call cpufreq when done is true,
which means that everything has reached 0.
Why do you prefer to use done instead of ORing the decay of rt, dl,
irq and cfs? (A sketch of what I mean is at the end of this message.)

> +		cpufreq_update_util(rq, 0);
> +
>  	update_blocked_load_status(rq, !done);
>  	rq_unlock_irqrestore(rq, &rf);
>  }
> @@ -7555,6 +7563,7 @@ static inline void update_blocked_averages(int cpu)
>  	struct cfs_rq *cfs_rq = &rq->cfs;
>  	const struct sched_class *curr_class;
>  	struct rq_flags rf;
> +	bool done, decayed;
> 
>  	rq_lock_irqsave(rq, &rf);
>  	update_rq_clock(rq);
> @@ -7568,9 +7577,13 @@ static inline void update_blocked_averages(int cpu)
>  	update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
>  	update_irq_load_avg(rq, 0);
> 
> -	update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
> +	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
> +	done = !(cfs_rq_has_blocked(cfs_rq) || others_have_blocked(rq));
> 
> -	update_blocked_load_status(rq, cfs_rq_has_blocked(cfs_rq) || others_have_blocked(rq));
> +	if (decayed || done)
> +		cpufreq_update_util(rq, 0);
> +
> +	update_blocked_load_status(rq, !done);
>  	rq_unlock_irqrestore(rq, &rf);
>  }
>
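
(For reference, here is roughly what I mean by ORing the decays,
sketched against the !CONFIG_FAIR_GROUP_SCHED version of
update_blocked_averages() quoted above; an illustrative fragment only,
with declarations and locking as in that function:)

	/* did any class cross a PELT period and actually decay? */
	decayed  = update_rt_rq_load_avg(rq_clock_pelt(rq), rq,
					 curr_class == &rt_sched_class);
	decayed |= update_dl_rq_load_avg(rq_clock_pelt(rq), rq,
					 curr_class == &dl_sched_class);
	decayed |= update_irq_load_avg(rq, 0);
	decayed |= update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);

	/* one cpufreq update whenever any of them decayed */
	if (decayed)
		cpufreq_update_util(rq, 0);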
