Subject: Re: [PATCH 5/6] sched/fair: Get rid of scaling utilization by capacity_orig
On Tue, Sep 08, 2015 at 02:52:05PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 08, 2015 at 02:26:06PM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 08, 2015 at 09:22:05AM +0200, Vincent Guittot wrote:
> > > No, but
> > > sa->util_avg = (sa->util_sum << SCHED_CAPACITY_SHIFT) / LOAD_AVG_MAX;
> > > will fix the unit issue.
> >
> > Tricky that, LOAD_AVG_MAX very much relies on the unit being 1<<10.

I don't get why LOAD_AVG_MAX relies on the util_avg shift being
1<<10; isn't it just the sum of the geometric series, and hence the
upper bound of util_sum?
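
For reference, a quick userspace sketch of that bound (y^32 = 0.5 and
the 1024 unit are the PELT values; the kernel's LOAD_AVG_MAX of 47742
comes out slightly lower than this floating-point version because the
kernel decays with truncating integer arithmetic):

#include <stdio.h>
#include <math.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* PELT decay: y^32 = 0.5 */
	double sum = 0.0;
	int n;

	/*
	 * Geometric series 1024 * (1 + y + y^2 + ...), cut off after
	 * 345 periods like the kernel's LOAD_AVG_MAX_N.
	 */
	for (n = 0; n < 345; n++)
		sum += 1024.0 * pow(y, n);

	printf("%.0f\n", sum);	/* ~47761, vs LOAD_AVG_MAX == 47742 */
	return 0;
}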

> > And where load_sum already gets a factor 1024 from the weight
> > multiplication, util_sum does not get such a factor, and all the scaling
> > we do on it loses bits.
> >
> > So at the moment we go compute the util_avg value, we need to inflate
> > util_sum with an extra factor 1024 in order to make it work.

Agreed. Inflating util_sum instead of util_avg, as you do below, makes
more sense. The load_sum/util_sum asymmetry is somewhat confusing.
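
To illustrate with made-up numbers (a task runnable half the time, so
util_sum converged at LOAD_AVG_MAX/2):

#include <stdio.h>

#define LOAD_AVG_MAX		47742
#define SCHED_CAPACITY_SHIFT	10

int main(void)
{
	unsigned int util_sum = LOAD_AVG_MAX / 2;

	/* without the 1024 factor, everything below 100% collapses to 0 */
	printf("%u\n", util_sum / LOAD_AVG_MAX);		/* 0 */

	/* inflating util_sum first lands util_avg in [0..1024] */
	printf("%u\n", (util_sum << SCHED_CAPACITY_SHIFT) / LOAD_AVG_MAX);	/* 512 */
	return 0;
}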

> > And seeing that we do the shift up on sa->util_sum without consideration
> > of overflow, would it not make sense to add that factor before the
> > scaling and into the addition?

I don't think util_sum can overflow, as it is bounded by LOAD_AVG_MAX,
unless you shift it a lot, like << 20. The << SCHED_LOAD_SHIFT in the
existing code is wrong, I think. Looking at the initialization,
util_avg = scale_load_down(SCHED_LOAD_SCALE) is not using high-resolution
load.
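
Rough numbers, assuming util_sum peaks at LOAD_AVG_MAX (47742 < 2^16):

#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX	47742	/* upper bound of util_sum, < 2^16 */

int main(void)
{
	uint32_t util_sum = LOAD_AVG_MAX;

	/* << 10 stays below 2^26, comfortably inside a u32 */
	printf("%u\n", util_sum << 10);

	/* << 20 needs ~2^36 bits and wraps in a u32 */
	printf("%u\n", (uint32_t)((uint64_t)util_sum << 20));
	return 0;
}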

> > Now, given all that, units are a complete mess here, and I'd not mind
> > something like:
> >
> > #if (SCHED_LOAD_SHIFT - SCHED_LOAD_RESOLUTION) != SCHED_CAPACITY_SHIFT
> > #error "something useful"
> > #endif
> >
> > somewhere near here.

Yes. As I see it, it all falls apart completely if that isn't true.

>
> Something like the below...
>
> Another thing to ponder; the downside of scaled_delta_w is that it's
> fairly likely that delta is small and you lose all bits, whereas the
> weight is likely to be large and could lose a few bits without issue.

That issue applies both to load and util.

>
> That is, in fixed point scaling like this, you want to start with the
> biggest numbers, not the smallest, otherwise you lose too much.
>
> The flip side is of course that now you can share a multiplication.

But if we apply the scaling to the weight instead of the time, we would
only have to apply it once, and not three times as it is now? So maybe
we can end up with almost the same number of multiplications.

We might be losing bits for low-priority tasks running on cpus at a low
frequency, though.
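
A contrived example of how bad it can get (weight 15 is the nice +19
entry from prio_to_weight[], the other numbers are made up):

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10

int main(void)
{
	unsigned long weight = 15;	/* prio_to_weight[39], nice +19 */
	unsigned long scale = 512;	/* cpu at half max capacity */
	unsigned long delta = 47742;	/* accumulated time contribution */

	/* scaling the small weight first truncates 7.5 down to 7 ... */
	unsigned long scale_weight = ((weight * scale) >> SCHED_CAPACITY_SHIFT) * delta;

	/* ... scaling after the full multiplication keeps that half bit */
	unsigned long scale_late = (weight * scale * delta) >> SCHED_CAPACITY_SHIFT;

	printf("%lu vs %lu\n", scale_weight, scale_late);	/* 334194 vs 358065 */
	return 0;
}

That is roughly a 7% error from a single truncation, before the per-period
accumulation gets a chance to repeat it.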

>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -682,7 +682,7 @@ void init_entity_runnable_average(struct
> sa->load_avg = scale_load_down(se->load.weight);
> sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
> sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
> - sa->util_sum = LOAD_AVG_MAX;
> + sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
> /* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
> }
>
> @@ -2515,6 +2515,10 @@ static u32 __compute_runnable_contrib(u6
> return contrib + runnable_avg_yN_sum[n];
> }
>
> +#if (SCHED_LOAD_SHIFT - SCHED_LOAD_RESOLUTION) != 10 || SCHED_CAPACITY_SHIFT != 10
> +#error "load tracking assumes 2^10 as unit"
> +#endif

As mentioned above. Does it have to be 10?

