Date: Fri, 15 Dec 2023 16:14:34 +0100
From: Peter Zijlstra <>
Subject: Re: [PATCH 1/2] sched/fair: take into account scheduling domain in select_idle_smt()
On Thu, Dec 14, 2023 at 06:55:50PM +0100, Keisuke Nishimura wrote:
> When picking out a CPU on a task wakeup, select_idle_smt() has to take
> into account the scheduling domain of @target. This is because cpusets
> and isolcpus can remove CPUs from the domain to isolate them from other
> SMT siblings.
>
> This fix checks if the candidate CPU is in the target scheduling domain.
>
> The commit df3cb4ea1fb6 ("sched/fair: Fix wrong cpu selecting from isolated
> domain") originally proposed this fix by adding the check of the scheduling
> domain in the loop. However, the commit 3e6efe87cd5c ("sched/fair: Remove
> redundant check in select_idle_smt()") accidentally removed the check.
> This commit brings the check back with the tiny optimization of computing
> the intersection of the task's CPU mask and the sched domain mask up front.
>
> Fixes: 3e6efe87cd5c ("sched/fair: Remove redundant check in select_idle_smt()")
Simply reverting that patch would be simpler, no? That cpumask_and() is likely more expensive than anything else the function does.
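For reference, the revert being suggested here would restore the per-iteration domain-span test that df3cb4ea1fb6 added; a rough sketch against the current loop, untested and only to illustrate the shape:

	static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
	{
		int cpu;

		for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
			if (cpu == target)
				continue;
			/*
			 * Test each candidate against the LLC domain span
			 * directly, instead of precomputing an intersection
			 * mask up front.
			 */
			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
				continue;
			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
				return cpu;
		}

		return -1;
	}

cpu_smt_mask() spans at most a few siblings, so the extra cpumask_test_cpu() per candidate stays cheap, while cpumask_and() has to walk the full-width masks once no matter how few siblings there are.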
And I'm probably already in holiday mode, but I don't immediately understand the problem: if you're doing cpusets, then the affinity in p->cpus_ptr should never cross your set, so how can it go wrong?
Is this some isolcpus idiocy? (I so hate that option)
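To make the failure mode concrete: isolcpus removes CPUs from the scheduler's domain spans but leaves them in a task's affinity mask, so the two masks can disagree. Below is a minimal userspace model of the mask arithmetic (plain C, not kernel code; it assumes a hypothetical 4-CPU box where CPUs 2 and 3 are SMT siblings and the box was booted with isolcpus=3):

	#include <stdio.h>

	int main(void)
	{
		unsigned int smt_mask = 0xc;	/* cpu_smt_mask(2): CPUs 2 and 3 */
		unsigned int cpus_ptr = 0xf;	/* task affinity: all of CPUs 0-3 */
		unsigned int sd_span  = 0x7;	/* LLC span: isolcpus=3 removed CPU 3 */

		/*
		 * Old loop condition: candidate must be in smt_mask and
		 * cpus_ptr. CPU 3 passes even though it is isolated.
		 */
		printf("without sd check: 0x%x\n", smt_mask & cpus_ptr);

		/*
		 * Patched condition: also require membership in the domain
		 * span, which filters the isolated sibling out.
		 */
		printf("with sd check:    0x%x\n", smt_mask & cpus_ptr & sd_span);
		return 0;
	}

The intersection (or the per-CPU test in the pre-3e6efe87cd5c code) keeps the pick inside the LLC domain that @target belongs to.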
> Signed-off-by: Keisuke Nishimura <keisuke.nishimura@inria.fr>
> Signed-off-by: Julia Lawall <julia.lawall@inria.fr>
> ---
>  kernel/sched/fair.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bcd0f230e21f..71306b48cf68 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7284,11 +7284,18 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
>  /*
>   * Scan the local SMT mask for idle CPUs.
>   */
> -static int select_idle_smt(struct task_struct *p, int target)
> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	int cpu;
> +	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> +
> +	/*
> +	 * Check if a candidate cpu is in the LLC scheduling domain where target exists.
> +	 * Due to isolcpus and cpusets, there is no guarantee that it holds.
> +	 */
> +	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
>
> -	for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
> +	for_each_cpu_and(cpu, cpu_smt_mask(target), cpus) {
>  		if (cpu == target)
>  			continue;
>  		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> @@ -7314,7 +7321,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  	return __select_idle_cpu(core, p);
>  }
>
> -static inline int select_idle_smt(struct task_struct *p, int target)
> +static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	return -1;
>  }
> @@ -7564,7 +7571,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	has_idle_core = test_idle_cores(target);
>
>  	if (!has_idle_core && cpus_share_cache(prev, target)) {
> -		i = select_idle_smt(p, prev);
> +		i = select_idle_smt(p, sd, prev);
>  		if ((unsigned int)i < nr_cpumask_bits)
>  			return i;
>  	}
> --
> 2.34.1