Subject: Re: [PATCH] sched: Avoid side-effect of tickless idle on update_cpu_load
From: Venkatesh Pallipadi
Date: 2010-05-13

On Wed, May 12, 2010 at 3:54 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, 2010-05-07 at 18:48 -0700, Venkatesh Pallipadi wrote:
>>
>>  /*
>> + * Load degrade calculations below are approximated on a 128 point scale.
>> + * degrade_zero_ticks is the number of ticks after which old_load at any
>> + * particular idx is approximated to be zero.
>> + * degrade_factor is a precomputed table, a row for each load idx.
>> + * Each column corresponds to degradation factor for a power of two ticks,
>> + * based on 128 point scale.
>> + * Example:
>> + * row 2, col 3 (=12) says that the degradation at load idx 2 after
>> + * 8 ticks is 12/128 (which is an approximation of 3^8/4^8).
>> + */
>
> This comment utterly forgets to explain why. Does the degradation factor
> correspond with the decay otherwise used? Maybe explicitly mention that
> function and clarify the whole cpu_load math.

OK. Will try to explain this better with a patch refresh.
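
For reference (and for the refreshed comment): each degrade_factor[idx][j]
entry is just the per-index decay factor (scale - 1) / scale, with
scale = 2^idx, raised to 2^j ticks on the 128 point scale and truncated.
A throwaway user-space sketch (not part of the patch) that reconstructs the
table values; it skips row 0, which is unused since cpu_load[0] is never
decayed:

#include <stdio.h>
#include <math.h>

#define CPU_LOAD_IDX_MAX        5
#define DEGRADE_SHIFT           7

int main(void)
{
        int idx, j;

        for (idx = 1; idx < CPU_LOAD_IDX_MAX; idx++) {
                /* per-tick decay for this index: 1/2, 3/4, 7/8, 15/16 */
                double decay = ((1 << idx) - 1.0) / (1 << idx);

                for (j = 0; j <= DEGRADE_SHIFT; j++) {
                        /* factor for 2^j ticks; tiny epsilon guards against
                         * FP rounding right at integer boundaries */
                        int f = (int)(128.0 * pow(decay, 1 << j) + 1e-9);
                        printf("%4d,", f);
                }
                printf("\n");
        }
        return 0;
}

degrade_zero_ticks[] then just picks a round tick count per index beyond
which the decayed load is treated as zero outright.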

>> +#define DEGRADE_SHIFT                7
>> +static const unsigned char
>> +             degrade_zero_ticks[CPU_LOAD_IDX_MAX] = {0, 8, 32, 64, 128};
>> +static const unsigned char
>> +             degrade_factor[CPU_LOAD_IDX_MAX][DEGRADE_SHIFT + 1] = {
>> +                                     {0, 0, 0, 0, 0, 0, 0, 0},
>> +                                     {64, 32, 8, 0, 0, 0, 0, 0},
>> +                                     {96, 72, 40, 12, 1, 0, 0},
>> +                                     {112, 98, 75, 43, 15, 1, 0},
>> +                                     {120, 112, 98, 76, 45, 16, 2} };
>> +
>> +/*
>> + * Update cpu_load for any backlog'd ticks. The backlog would be when
>> + * CPU is idle and so we just decay the old load without adding any new load.
>> + */
>> +static unsigned long update_backlog(unsigned long load,
>> +                        unsigned long missed_updates, int idx)
>> +{
>> +     int j = 0;
>> +
>> +     if (missed_updates >= degrade_zero_ticks[idx])
>> +             return 0;
>> +
>> +     if (idx == 1)
>> +             return load >> missed_updates;
>> +
>> +     while (missed_updates) {
>> +             if (missed_updates % 2)
>> +                     load =(load * degrade_factor[idx][j]) >> DEGRADE_SHIFT;
>> +
>> +             missed_updates >>= 1;
>> +             j++;
>> +     }
>> +     return load;
>> +}
>> +
>> +/*
>>   * Update rq->cpu_load[] statistics. This function is usually called every
>> - * scheduler tick (TICK_NSEC).
>> + * scheduler tick (TICK_NSEC). With tickless idle this will not be called
>> + * every tick. We fix it up based on jiffies.
>>   */
>>  static void update_cpu_load(struct rq *this_rq)
>>  {
>>       unsigned long this_load = this_rq->load.weight;
>> +     unsigned long curr_jiffies = jiffies;
>> +     unsigned long pending_updates, missed_updates;
>>       int i, scale;
>>
>>       this_rq->nr_load_updates++;
>>
>> +     if (curr_jiffies == this_rq->last_load_update_tick)
>> +             return;
>
> Under which conditions can this happen? Going idle right after having
> had the tick?

Yes. That can happen when the CPU goes idle right after its own tick and the
idle load balancer CPU then gets its tick and updates this CPU's load within
the same jiffy.

>
>> +     pending_updates = curr_jiffies - this_rq->last_load_update_tick;
>> +     this_rq->last_load_update_tick = curr_jiffies;
>> +     missed_updates = pending_updates - 1;
>> +
>>       /* Update our load: */
>> -     for (i = 0, scale = 1; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
>> +     this_rq->cpu_load[0] = this_load; /* Fasttrack for idx 0 */
>
> Why is this special case worth it?

I don't think it is really visible from a performance point of view.
But I did not like seeing
(old_load * (scale - 1) + new_load) >> i
evaluated for scale = 1 and i = 0. That is a subtraction, a multiplication,
a shift and a jump (the loop iteration), all of which can be avoided with one
additional line of code without making the code any harder to read.
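
Spelled out as a tiny standalone helper (hypothetical name, just to show the
degenerate case):

/* The 2^i-point decaying average per the quoted formula. */
static unsigned long cpu_load_avg(unsigned long old_load,
                                  unsigned long new_load, int i)
{
        unsigned long scale = 1UL << i;

        /* For i == 0, scale == 1: the whole expression is just new_load. */
        return (old_load * (scale - 1) + new_load) >> i;
}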

>
>> +     for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
>>               unsigned long old_load, new_load;
>>
>>               /* scale is effectively 1 << i now, and >> i divides by scale */
>>
>>               old_load = this_rq->cpu_load[i];
>> +             if (missed_updates)
>> +                     old_load = update_backlog(old_load, missed_updates, i);
>
> Would it make sense to stuff that conditional in update_backlog() and
> have a clearer flow? Maybe rename update_backlog() to decay_load() or
> such?

Yes. Makes sense. Will do and resend the patch.
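
Just to sketch the shape I have in mind for the refresh (not the final patch),
with the conditional folded in and the rename you suggest:

/*
 * Decay load for the given number of missed ticks, using the precomputed
 * power-of-two factors.  The bit-walk multiplies in one factor per set bit
 * of missed_updates: e.g. missed_updates = 13 (binary 1101) picks up the
 * factors for 1, 4 and 8 ticks.
 */
static unsigned long decay_load(unsigned long load,
                        unsigned long missed_updates, int idx)
{
        int j = 0;

        if (!missed_updates)
                return load;

        if (missed_updates >= degrade_zero_ticks[idx])
                return 0;

        if (idx == 1)
                return load >> missed_updates;

        while (missed_updates) {
                if (missed_updates % 2)
                        load = (load * degrade_factor[idx][j]) >> DEGRADE_SHIFT;

                missed_updates >>= 1;
                j++;
        }
        return load;
}

so that the caller can do an unconditional

        old_load = decay_load(this_rq->cpu_load[i], missed_updates, i);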

Thanks,
Venki