Subject: Re: [patch] sched: fix SMT scheduler regression in find_busiest_queue()
From: Peter Zijlstra <peterz@infradead.org>
Date: Mon, 15 Feb 2010
On Mon, 2010-02-15 at 18:05 +0530, Vaidyanathan Srinivasan wrote:
> * Peter Zijlstra <peterz@infradead.org> [2010-02-14 11:11:58]:
>
> > On Sun, 2010-02-14 at 02:06 +0530, Vaidyanathan Srinivasan wrote:
> > > > > > @@ -4119,12 +4119,23 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
> > > > > >  			continue;
> > > > > >  
> > > > > >  		rq = cpu_rq(i);
> > > > > > -		wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
> > > > > > -		wl /= power;
> > > > > > +		wl = weighted_cpuload(i);
> > > > > >  
> > > > > > +		/*
> > > > > > +		 * When comparing with imbalance, use weighted_cpuload()
> > > > > > +		 * which is not scaled with the cpu power.
> > > > > > +		 */
> > > > > >  		if (capacity && rq->nr_running == 1 && wl > imbalance)
> > > > > >  			continue;
> > > > > >  
> > > > > > +		/*
> > > > > > +		 * For the load comparisons with the other cpu's, consider
> > > > > > +		 * the weighted_cpuload() scaled with the cpu power, so that
> > > > > > +		 * the load can be moved away from the cpu that is potentially
> > > > > > +		 * running at a lower capacity.
> > > > > > +		 */
> > > > > > +		wl = (wl * SCHED_LOAD_SCALE) / power;
> > > > > > +
> > > > > >  		if (wl > max_load) {
> > > > > >  			max_load = wl;
> > > > > >  			busiest = rq;
> > > > > > 
> > > > > > 
> > >
> > > In addition to the above fix, for sched_smt_powersavings to work, the
> > > group capacity of the core (MC level) should be made 2 in
> > > update_sg_lb_stats() by changing the DIV_ROUND_CLOSEST() to
> > > DIV_ROUND_UP():
> > >
> > > 	sgs->group_capacity =
> > > 		DIV_ROUND_UP(group->cpu_power, SCHED_LOAD_SCALE);
> > >
> > > Ideally we can change this to DIV_ROUND_UP() and let the
> > > SD_PREFER_SIBLING flag force capacity to 1. Need to see if there are
> > > any side effects of setting SD_PREFER_SIBLING at the SIBLING level
> > > sched domain based on the sched_smt_powersavings flag.
> >
> > OK, so while I think that Suresh's patch can make sense (I haven't had
> > time to think it through), the above really sounds wrong. Things
> > should not rely on the cpu_power value like that.
>
> Hi Peter,
>
> The reason rounding is a problem is that threads have fractional
> cpu_power, and we lose some power in DIV_ROUND_CLOSEST(). At the MC
> level a group has 2*589 = 1178, and group_capacity will always be 1 if
> DIV_ROUND_CLOSEST() is used, irrespective of the SD_PREFER_SIBLING
> flag.
>
> We are reducing group capacity here to 1 even though we have 2 sibling
> threads in the group. In the sched_smt_powersavings > 0 case, the
> group_capacity should be 2 to allow task consolidation to this group
> while leaving other groups completely idle.
>
> DIV_ROUND_UP(group->cpu_power, SCHED_LOAD_SCALE) will ensure any spare
> capacity is rounded up and counted.
>
> Whereas, if SD_PREFER_SIBLING is set, this in update_sd_lb_stats():
>
> 	if (prefer_sibling)
> 		sgs.group_capacity = min(sgs.group_capacity, 1UL);
>
> will ensure the group_capacity is 1 and allows spreading of tasks.
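For concreteness, the rounding difference described above is easy to
check in isolation; a stand-alone sketch, where the DIV_ROUND_* macros
are simplified, unsigned-only versions of the kernel ones:

	#include <stdio.h>

	#define SCHED_LOAD_SCALE	1024UL
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
	#define DIV_ROUND_CLOSEST(x, d)	(((x) + ((d) / 2)) / (d))

	int main(void)
	{
		/* two SMT siblings at cpu_power 589 each, as quoted above */
		unsigned long cpu_power = 2 * 589;

		/* current code: rounds 1178/1024 to 1 */
		printf("DIV_ROUND_CLOSEST: %lu\n",
		       DIV_ROUND_CLOSEST(cpu_power, SCHED_LOAD_SCALE));

		/* proposed change: rounds 1178/1024 up to 2 */
		printf("DIV_ROUND_UP:      %lu\n",
		       DIV_ROUND_UP(cpu_power, SCHED_LOAD_SCALE));

		return 0;
	}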

We should be weakening this link between cpu_power and capacity, not
strengthening it. What I think you want is to use
cpumask_weight(sched_group_cpus(group)) or something like that as the
capacity.
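In code, that would be something like the following (an untested
sketch against the update_sg_lb_stats() context quoted above):

	/* sketch: capacity = number of cpus in the group, independent
	 * of cpu_power */
	sgs->group_capacity = cpumask_weight(sched_group_cpus(group));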

The point of cpu_power is that it reflects the actual capacity for
work; especially with today's asymmetric cpus, where one socket can run
at a different frequency than another, we need to make sure this stays
so.

So no, that DIV_ROUND_UP() is utterly broken: there are many ways for
the combined cpu_power of multiple threads/cpus to come out below the
number of cpus, and rounding up would hide that.

Furthermore, for powersavings it makes sense to make the capacity a
function of an overload argument/tunable, so that you can specify the
threshold of packing.
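Roughly along these lines (illustrative only; 'overload' is a made-up
tunable, not an existing knob):

	/*
	 * Hypothetical: start from one task per cpu in the group, and
	 * let a powersavings tunable allow packing 'overload' extra
	 * tasks before the group is considered full.
	 */
	unsigned long capacity = cpumask_weight(sched_group_cpus(group));

	if (sched_smt_powersavings)
		capacity += overload;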

So really, cpu_power is a normalization factor for distributing load
equally across cpus that have asymmetric work capacity; if you need any
placement constraints beyond that, do _NOT_ touch cpu_power.
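To make the normalization concrete, a worked example using the scaling
from the patch at the top of this mail (the raw load figure is
invented):

	unsigned long power = 589;	/* SMT sibling, as quoted above */
	unsigned long wl = 1024;	/* raw weighted_cpuload(i) */

	/* 1024 * 1024 / 589 ~= 1780: the same raw load looks heavier
	 * on the low-power sibling, so load is moved away from it */
	wl = (wl * SCHED_LOAD_SCALE) / power;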


