Date:	Fri, 20 Feb 2015 12:27:43 +0100
From:	Peter Zijlstra <>
Subject:	Re: [PATCH RESEND v9 10/10] sched: move cfs task on a CPU with higher capacity
On Thu, Jan 15, 2015 at 11:09:30AM +0100, Vincent Guittot wrote:
> As a sidenote, this will note generate more spurious ilb because we already
s/note/not/
> trig an ilb if there is more than 1 busy cpu. If this cpu is the only one that
> has a task, we will trig the ilb once for migrating the task.
> +static inline bool nohz_kick_needed(struct rq *rq)
>  {
>  	unsigned long now = jiffies;
>  	struct sched_domain *sd;
>  	struct sched_group_capacity *sgc;
>  	int nr_busy, cpu = rq->cpu;
> +	bool kick = false;
>
>  	if (unlikely(rq->idle_balance))
> +		return false;
>
>  	/*
>  	 * We may be recently in ticked or tickless idle mode. At the first
> @@ -7472,38 +7498,44 @@ static inline int nohz_kick_needed(struct rq *rq)
>  	 * balancing.
>  	 */
>  	if (likely(!atomic_read(&nohz.nr_cpus)))
> +		return false;
>
>  	if (time_before(now, nohz.next_balance))
> +		return false;
>
>  	if (rq->nr_running >= 2)
> +		return true;
So this,
>  	rcu_read_lock();
>  	sd = rcu_dereference(per_cpu(sd_busy, cpu));
>  	if (sd) {
>  		sgc = sd->groups->sgc;
>  		nr_busy = atomic_read(&sgc->nr_busy_cpus);
>
> +		if (nr_busy > 1) {
> +			kick = true;
> +			goto unlock;
> +		}
> +
>  	}
>
> +	sd = rcu_dereference(rq->sd);
> +	if (sd) {
> +		if ((rq->cfs.h_nr_running >= 1) &&
> +				check_cpu_capacity(rq, sd)) {
> +			kick = true;
> +			goto unlock;
> +		}
> +	}
vs this: how would we ever get here?
If h_nr_running > 1, must then not nr_running > 1 as well?
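(The relation in play, for reference: cfs.h_nr_running counts only the CFS
tasks on the runqueue, a subset of what nr_running counts across all
scheduling classes, so h_nr_running can never exceed nr_running. A minimal
stand-alone sketch of that implication; the struct and field names below are
hypothetical stand-ins, not the real struct rq:

	#include <assert.h>

	/* Hypothetical stand-in for the two struct rq counters in question. */
	struct rq_counts {
		unsigned int nr_running;	/* runnable tasks, all classes */
		unsigned int cfs_h_nr_running;	/* runnable CFS tasks only */
	};

	/*
	 * CFS tasks are a subset of all runnable tasks, so h_nr_running > 1
	 * forces nr_running > 1, which the "nr_running >= 2" test above
	 * already returns true for.
	 */
	static void check(const struct rq_counts *rq)
	{
		assert(rq->cfs_h_nr_running <= rq->nr_running);
		if (rq->cfs_h_nr_running > 1)
			assert(rq->nr_running > 1);
	}

	int main(void)
	{
		struct rq_counts rq = { .nr_running = 2, .cfs_h_nr_running = 2 };
		check(&rq);
		return 0;
	}
)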
>
> +	sd = rcu_dereference(per_cpu(sd_asym, cpu));
>  	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
>  				  sched_domain_span(sd)) < cpu))
> +		kick = true;
For consistency's sake I would've added a goto unlock here as well.
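i.e. something like this (a sketch of the suggested shape only, reindented
from the hunk above):

	sd = rcu_dereference(per_cpu(sd_asym, cpu));
	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
				  sched_domain_span(sd)) < cpu)) {
		kick = true;
		goto unlock;	/* mirror the two branches above */
	}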
> +unlock:
>  	rcu_read_unlock();
> +	return kick;
>  }
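(For reference, check_cpu_capacity() earlier in this series compares the
CPU's current capacity against its original capacity scaled by the domain's
imbalance_pct; roughly, quoting the shape from memory rather than verbatim:

	static inline int
	check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
	{
		return ((rq->cpu_capacity * sd->imbalance_pct) <
					(rq->cpu_capacity_orig * 100));
	}

so the new branch kicks an ilb when a CFS task sits on a CPU whose capacity
is significantly reduced, e.g. by rt or irq pressure.)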