Subject: Re: [patch] sched: fix SMT scheduler regression in find_busiest_queue()
On Sun, 2010-02-14 at 02:06 +0530, Vaidyanathan Srinivasan wrote:
> > > > @@ -4119,12 +4119,23 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
> > > >                          continue;
> > > >
> > > >                  rq = cpu_rq(i);
> > > > -                wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
> > > > -                wl /= power;
> > > > +                wl = weighted_cpuload(i);
> > > >
> > > > +                /*
> > > > +                 * When comparing with imbalance, use weighted_cpuload()
> > > > +                 * which is not scaled with the cpu power.
> > > > +                 */
> > > >                  if (capacity && rq->nr_running == 1 && wl > imbalance)
> > > >                          continue;
> > > >
> > > > +                /*
> > > > +                 * For the load comparisons with the other cpu's, consider
> > > > +                 * the weighted_cpuload() scaled with the cpu power, so that
> > > > +                 * the load can be moved away from the cpu that is potentially
> > > > +                 * running at a lower capacity.
> > > > +                 */
> > > > +                wl = (wl * SCHED_LOAD_SCALE) / power;
> > > > +
> > > >                  if (wl > max_load) {
> > > >                          max_load = wl;
> > > >                          busiest = rq;
> > > >
> > > >
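For concreteness, here is a rough stand-alone sketch of the arithmetic behind the
reordering above. The numbers are illustrative assumptions rather than values from
the thread: an SMT sibling whose cpu_power is about 589 (the default smt_gain of
1178 shared between two threads), one nice-0 task of weight 1024 on it, and an
imbalance of 1024.

        #include <stdio.h>

        #define SCHED_LOAD_SCALE        1024UL

        int main(void)
        {
                unsigned long power = 589;      /* assumed SMT sibling cpu_power */
                unsigned long wl = 1024;        /* weighted_cpuload(): one nice-0 task */
                unsigned long imbalance = 1024; /* hypothetical imbalance */

                /* old order: scale by cpu power first, then compare with imbalance */
                unsigned long scaled = wl * SCHED_LOAD_SCALE / power;   /* ~1780 */
                printf("scaled wl %lu > imbalance %lu -> queue skipped\n",
                       scaled, imbalance);

                /* patched order: raw wl against imbalance; scaled wl only for max_load */
                printf("raw wl %lu > imbalance %lu? %s\n", wl, imbalance,
                       wl > imbalance ? "yes, skipped" : "no, still considered");
                return 0;
        }

With the old ordering the single-task sibling looks heavier than the imbalance and
gets skipped; with the patch the raw weighted_cpuload() is compared against the
imbalance, and the power-scaled value is only used to pick the busiest runqueue.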
>
> In addition to the above fix, for sched_smt_powersavings to work, the
> group capacity of the core (MC level) should be made 2 in
> update_sg_lb_stats() by changing the DIV_ROUND_CLOSEST to
> DIV_ROUND_UP():
>
>         sgs->group_capacity =
>                 DIV_ROUND_UP(group->cpu_power, SCHED_LOAD_SCALE);
>
> Ideally we can change this to DIV_ROUND_UP and let the SD_PREFER_SIBLING
> flag force the capacity to 1.  We need to see if there are any side
> effects of setting SD_PREFER_SIBLING at the SIBLING-level sched domain
> based on the sched_smt_powersavings flag.

OK, so while I think that Suresh's patch can make sense (I haven't had time
to think it through), the above really sounds wrong. Things should not
rely on the cpu_power value like that.
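
For reference, the arithmetic behind the capacity-of-2 suggestion (and the
cpu_power dependence objected to here): with the default smt_gain of 1178 split
across two SMT siblings, the core's combined cpu_power at the MC level is about
1178, so DIV_ROUND_CLOSEST against SCHED_LOAD_SCALE gives a group_capacity of 1
while DIV_ROUND_UP gives 2. A minimal stand-alone sketch, assuming those default
values (this is not kernel code):

        #include <stdio.h>

        #define SCHED_LOAD_SCALE        1024UL
        #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))
        #define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

        int main(void)
        {
                unsigned long smt_gain = 1178;             /* assumed default */
                unsigned long cpu_power = smt_gain / 2;    /* per sibling: ~589 */
                unsigned long group_power = 2 * cpu_power; /* core at MC level */

                printf("DIV_ROUND_CLOSEST: group_capacity = %lu\n",
                       DIV_ROUND_CLOSEST(group_power, SCHED_LOAD_SCALE)); /* 1 */
                printf("DIV_ROUND_UP:      group_capacity = %lu\n",
                       DIV_ROUND_UP(group_power, SCHED_LOAD_SCALE));      /* 2 */
                return 0;
        }

A capacity of 2 is what would let sched_smt_powersavings consolidate two tasks
onto one core, which is exactly the behaviour that ends up depending on the
cpu_power value.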



