Subject: Re: [PATCH 09/11] sched: use pelt for scale_rt_capacity()
On Mon, 2018-07-16 at 00:15 +0200, Ingo Molnar wrote:
> * Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> > The utilization of the CPU by rt, dl and interrupts is now tracked with
> > PELT, so we can use these metrics instead of rt_avg to evaluate the
> > remaining capacity available for the cfs class.
> >
> > scale_rt_capacity() behavior has been changed and now returns the remaining
> > capacity available for cfs instead of a scaling factor, because rt, dl and
> > interrupts now provide absolute utilization values.
> >
> > The same formula as schedutil is used:
> > irq util_avg + (1 - irq util_avg / max capacity) * /Sum rq util_avg
> > but the implementation is different because it doesn't return the same value
> > and doesn't benefit from the same optimization
[]
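
To make the formula concrete with made-up numbers: for max capacity = 1024,
irq util_avg = 128 and /Sum rq util_avg = 512, that gives

	128 + (1 - 128/1024) * 512 = 128 + 0.875 * 512 = 128 + 448 = 576

i.e. a combined utilization of 576 out of 1024.
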
> I have applied the delta fix below for simplicity, but what we really want is a
> cleanup of that function to eliminate the #ifdefs. One solution would be to factor
> out the 'irq' utilization value into a helper inline, and double-check that, if the
> configs are off, the compiler does the right thing and eliminates this identity
> transformation for the irq == 0 case:
[]
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
[]
> @@ -7550,7 +7550,10 @@ static unsigned long scale_rt_capacity(int cpu)
>  {
>  	struct rq *rq = cpu_rq(cpu);
>  	unsigned long max = arch_scale_cpu_capacity(NULL, cpu);
> -	unsigned long used, irq, free;
> +	unsigned long used, free;
> +#if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING)
> +	unsigned long irq;
> +#endif
>
>  #if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING)

Perhaps combine these two #if defined blocks into
a single block
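
Something like this quick sketch (not compile-tested; whatever else the
second block holds is elided as ...):

#if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING)
	unsigned long irq = READ_ONCE(rq->avg_irq.util_avg);
	...
#endif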

>  	irq = READ_ONCE(rq->avg_irq.util_avg);
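
And on Ingo's point about a helper inline: maybe something along these
lines (the helper name is illustrative only, not from the patch):

static inline unsigned long rq_irq_util(struct rq *rq)
{
#if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING)
	/* The irq PELT signal is only maintained with these configs. */
	return READ_ONCE(rq->avg_irq.util_avg);
#else
	return 0;
#endif
}

Then scale_rt_capacity() can read irq = rq_irq_util(rq) unconditionally,
and with both configs off the compiler sees a constant 0 and should
eliminate the irq == 0 identity transformation Ingo mentions.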
