    From: Vincent Guittot <vincent.guittot@linaro.org>
    Date: Mon, 18 Jun 2018
    Subject: Re: [PATCH v6 04/11] cpufreq/schedutil: use rt utilization tracking
    On Mon, 18 Jun 2018 at 11:00, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
    >
    > On 06/08/2018 02:09 PM, Vincent Guittot wrote:
    > > Take into account rt utilization when selecting an OPP for cfs tasks in order
    > > to reflect the utilization of the CPU.
    >
    > The rt utilization signal is only tracked per-cpu, not per-entity. So it
    > is not aware of PELT migrations (attach/detach).
    >
    > IMHO, this patch deserves some explanation of why the temporary
    > inflation/deflation of the OPP-driving utilization signal when an
    > rt task migrates off/on (there is no detach/attach for the rt signal)
    > doesn't harm performance or energy consumption.
    >
    > There was some talk (mainly on #sched irc) about ... (1) preempted cfs
    > tasks (with reduced demand signals, since utilization only accrues
    > while running) using this remaining rt utilization of an rt task which
    > migrated off and ... (2) going to max when an rt task runs ... but a
    > summary of all of that in this patch would really help to understand.

    Ok. I will add more comments in the next version. I will just wait a
    bit to allow time for more feedback before sending a new release.
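
    For context: the rt utilization consumed by this patch is the per-rq PELT
    signal that the earlier patches in this series add, so it is never detached
    when an rt task migrates away; it only decays in place. A minimal sketch of
    the helper, assuming it simply reads the rq-level average (field names as in
    the version that was eventually merged):

        static inline unsigned long cpu_util_rt(struct rq *rq)
        {
        	/* Per-cpu rt PELT average: decays in place, never detached
        	 * when an rt task migrates to another CPU. */
        	return READ_ONCE(rq->avg_rt.util_avg);
        }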

    >
    > > Cc: Ingo Molnar <mingo@redhat.com>
    > > Cc: Peter Zijlstra <peterz@infradead.org>
    > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
    > > ---
    > > kernel/sched/cpufreq_schedutil.c | 9 ++++++++-
    > > 1 file changed, 8 insertions(+), 1 deletion(-)
    > >
    > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
    > > index 28592b6..32f97fb 100644
    > > --- a/kernel/sched/cpufreq_schedutil.c
    > > +++ b/kernel/sched/cpufreq_schedutil.c
    > > @@ -56,6 +56,7 @@ struct sugov_cpu {
    > >  	/* The fields below are only needed when sharing a policy: */
    > >  	unsigned long		util_cfs;
    > >  	unsigned long		util_dl;
    > > +	unsigned long		util_rt;
    > >  	unsigned long		max;
    > >
    > >  	/* The field below is for single-CPU policies only: */
    > > @@ -178,15 +179,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
    > >  	sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
    > >  	sg_cpu->util_cfs = cpu_util_cfs(rq);
    > >  	sg_cpu->util_dl = cpu_util_dl(rq);
    > > +	sg_cpu->util_rt = cpu_util_rt(rq);
    > >  }
    > >
    > >  static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
    > >  {
    > >  	struct rq *rq = cpu_rq(sg_cpu->cpu);
    > > +	unsigned long util;
    > >
    > >  	if (rq->rt.rt_nr_running)
    > >  		return sg_cpu->max;
    > >
    > > +	util = sg_cpu->util_dl;
    > > +	util += sg_cpu->util_cfs;
    > > +	util += sg_cpu->util_rt;
    > > +
    > >  	/*
    > >  	 * Utilization required by DEADLINE must always be granted while, for
    > >  	 * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
    > > @@ -197,7 +204,7 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
    > >  	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
    > >  	 * ready for such an interface. So, we only do the latter for now.
    > >  	 */
    > > -	return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
    > > +	return min(sg_cpu->max, util);
    > >  }
    > >
    > >  static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
    > >
    >
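
    Putting the hunks together, the aggregation path after this patch reads as
    below. The original DEADLINE/FAIR comment block is elided and short comments
    are added here to summarize the two behaviours under discussion: jumping to
    max while an rt task is runnable, and otherwise adding the residual per-cpu
    rt utilization.

        static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
        {
        	struct rq *rq = cpu_rq(sg_cpu->cpu);
        	unsigned long util;

        	/* An rt task is runnable on this CPU: request the maximum OPP. */
        	if (rq->rt.rt_nr_running)
        		return sg_cpu->max;

        	/*
        	 * Otherwise sum the per-class signals. util_rt is the per-cpu rt
        	 * PELT average, so it still carries the (decaying) contribution
        	 * of an rt task that has migrated away.
        	 */
        	util = sg_cpu->util_dl;
        	util += sg_cpu->util_cfs;
        	util += sg_cpu->util_rt;

        	return min(sg_cpu->max, util);
        }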
