Subject: Re: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
On 12/20/2013 07:19 PM, Morten Rasmussen wrote:
>> @@ -4132,10 +4137,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>>
>>  		for_each_cpu(i, sched_group_cpus(group)) {
>>  			/* Bias balancing toward cpus of our domain */
>> -			if (local_group)
>> +			if (i == this_cpu)
> What is the motivation for changing the local_group load calculation?
> Now all cpus in the local group, except this_cpu, will contribute
> more to this_load, as their contribution is determined using
> target_load() instead.
>
> If I'm not mistaken, that will lead to more frequent load balancing as
> the local_group bias has been reduced. That is the opposite of your
> intentions based on your comment in target_load().

Good catch. I will reconsider this. :)
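
To make the effect concrete, here is a minimal userspace sketch (not
kernel code; made-up loads and an assumed imbalance_pct of 125) of how
this_load changes for a two-cpu local group under the quoted change:

#include <stdio.h>

/* made-up per-cpu loads for a local group {0, 1}, with this_cpu == 0 */
static unsigned long raw_load[2] = { 400, 400 };

static unsigned long source_load(int cpu)
{
	return raw_load[cpu];
}

/* biased upward by imbalance_pct, as in the quoted hunk */
static unsigned long target_load(int cpu, int imbalance_pct)
{
	return raw_load[cpu] * imbalance_pct / 100;
}

int main(void)
{
	int this_cpu = 0, i;
	unsigned long old_load = 0, new_load = 0;

	for (i = 0; i < 2; i++) {
		/* old code: local_group -> every cpu uses source_load() */
		old_load += source_load(i);
		/* new code: only this_cpu uses source_load() */
		new_load += (i == this_cpu) ?
			source_load(i) : target_load(i, 125);
	}

	/* prints old=800 new=900: the local group now looks heavier,
	 * so the bias toward keeping the task local shrinks */
	printf("old=%lu new=%lu\n", old_load, new_load);
	return 0;
}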
>
>>  				load = source_load(i);
>>  			else
>> -				load = target_load(i);
>> +				load = target_load(i, sd->imbalance_pct);
> You scale by sd->imbalance_pct instead of 100+(sd->imbalance_pct-100)/2
> that you removed above. sd->imbalance_pct may have been arbitrarily
> chosen in the past, but changing it may affect behavior.
>
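
For reference, a quick arithmetic sketch (assuming imbalance_pct == 125,
a common default for cpu-level domains) of how much bias the removed
scaling provided compared to using sd->imbalance_pct directly:

#include <stdio.h>

int main(void)
{
	int imbalance_pct = 125;	/* assumed sd->imbalance_pct */

	/* removed scaling: 100 + (125 - 100) / 2 == 112, ~12% bias */
	int old_bias = 100 + (imbalance_pct - 100) / 2;

	/* new scaling: imbalance_pct used directly == 125, ~25% bias */
	int new_bias = imbalance_pct;

	printf("old=%d%% new=%d%%\n", old_bias, new_bias);
	return 0;
}

So with the default value the effective bias roughly doubles, which
supports the point that the change in scaling is not behavior-neutral.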


--
Thanks
Alex

