From: Valentin Schneider <>
Subject: Re: [Patch v3 2/6] sched/topology: Record number of cores in sched group
Date: Mon, 10 Jul 2023 21:33:47 +0100
On 07/07/23 15:57, Tim Chen wrote:
> From: Tim C Chen <tim.c.chen@linux.intel.com>
>
> When balancing sibling domains that have different number of cores,
> tasks in respective sibling domain should be proportional to the number
> of cores in each domain. In preparation of implementing such a policy,
> record the number of tasks in a scheduling group.
>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 10 +++++++++-
>  2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d0eb36350d2..5f7f36e45b87 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1860,6 +1860,7 @@ struct sched_group {
>  	atomic_t		ref;
>
>  	unsigned int		group_weight;
> +	unsigned int		cores;
>  	struct sched_group_capacity *sgc;
>  	int			asym_prefer_cpu;	/* CPU of highest priority in group */
>  	int			flags;
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 6d5628fcebcf..6b099dbdfb39 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
>  static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
>  {
>  	struct sched_group *sg = sd->groups;
> +	struct cpumask *mask = sched_domains_tmpmask2;
>
>  	WARN_ON(!sg);
>
>  	do {
> -		int cpu, max_cpu = -1;
> +		int cpu, cores = 0, max_cpu = -1;
>
>  		sg->group_weight = cpumask_weight(sched_group_span(sg));
>
> +		cpumask_copy(mask, sched_group_span(sg));
> +		for_each_cpu(cpu, mask) {
> +			cores++;
> +			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
> +		}
This rekindled my desire for an SMT core cpumask/iterator. I played around with a global mask but that's a headache: what if we end up with a core whose SMT threads are split across two exclusive cpusets?
I ended up necro'ing a patch from Peter [1], but didn't get anywhere nice (the LLC shared storage caused me issues).
All that to say, I couldn't think of a nicer way :(
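For illustration only, this is roughly the shape of helper I was wishing for. It's a hypothetical, untested sketch (for_each_smt_core() is not an existing kernel API), and note it still needs a scratch mask threaded through, which is exactly the part I couldn't make nice:

/*
 * Hypothetical helper, not an existing API: visit one representative CPU
 * per SMT core in @mask, using @scratch as working storage. Each iteration
 * removes the visited core's SMT siblings from @scratch before picking the
 * next CPU.
 */
#define for_each_smt_core(cpu, scratch, mask)				\
	for (cpumask_copy((scratch), (mask)),				\
	     (cpu) = cpumask_first((scratch));				\
	     (cpu) < nr_cpu_ids;					\
	     cpumask_andnot((scratch), (scratch), cpu_smt_mask(cpu)),	\
	     (cpu) = cpumask_first((scratch)))

With that, the counting in the hunk above would boil down to something like:

	cores = 0;
	for_each_smt_core(cpu, mask, sched_group_span(sg))
		cores++;

but it only hides the scratch mask behind a macro rather than getting rid of it.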
[1]: https://lore.kernel.org/all/20180530143106.082002139@infradead.org/#t