Date: 2013-05-02
Subject: Re: [PATCH v4 6/6] sched: consider runnable load average in effective_load
> @@ -3120,6 +3124,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	struct task_group *tg;
>  	unsigned long weight;
>  	int balanced;
> +	int runnable_avg;
>
>  	idx = sd->wake_idx;
>  	this_cpu = smp_processor_id();
> @@ -3135,13 +3140,19 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	if (sync) {
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
> +		runnable_avg = current->se.avg.runnable_avg_sum * NICE_0_LOAD
> +				/ (current->se.avg.runnable_avg_period + 1);
>
> -		this_load += effective_load(tg, this_cpu, -weight, -weight);
> -		load += effective_load(tg, prev_cpu, 0, -weight);
> +		this_load += effective_load(tg, this_cpu, -weight, -weight)
> +				* runnable_avg >> NICE_0_SHIFT;
> +		load += effective_load(tg, prev_cpu, 0, -weight)
> +				* runnable_avg >> NICE_0_SHIFT;
>  	}


I'm fairly sure this is wrong, but I haven't bothered to put pencil to paper to verify it.

I think you'll need to put the runnable average load in as the weight itself,
rather than scaling effective_load()'s result after the fact, and also make
sure effective_load() uses the right sums internally.

