Subject: Re: [PATCH v5 2/6] sched/topology: Introduce sched_group::flags
On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calderon@linux.intel.com> wrote:
>
> There exist situations in which the load balancer needs to know the
> properties of the CPUs in a scheduling group. When using asymmetric
> packing, for instance, the load balancer needs to know not only the
> state of dst_cpu but also of its SMT siblings, if any.
>
> Use the flags of the child scheduling domains to initialize scheduling
> group flags. This will reflect the properties of the CPUs in the
> group.
>
> A subsequent changeset will make use of these new flags. No functional
> changes are introduced.
>
> Cc: Aubrey Li <aubrey.li@intel.com>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Quentin Perret <qperret@google.com>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> Reviewed-by: Len Brown <len.brown@intel.com>
> Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
> Changes since v4:
> * None
>
> Changes since v3:
> * Clear the flags of the scheduling groups of a domain if its child is
> destroyed.
> * Minor rewording of the commit message.
>
> Changes since v2:
> * Introduced this patch.
>
> Changes since v1:
> * N/A
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 21 ++++++++++++++++++---
>  2 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d3e5793e117..86ab33ce529d 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1809,6 +1809,7 @@ struct sched_group {
>  	unsigned int		group_weight;
>  	struct sched_group_capacity *sgc;
>  	int			asym_prefer_cpu;	/* CPU of highest priority in group */
> +	int			flags;
>
>  	/*
>  	 * The CPUs this group covers.
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 4e8698e62f07..c56faae461d9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -716,8 +716,20 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
>  		tmp = sd;
>  		sd = sd->parent;
>  		destroy_sched_domain(tmp);
> -		if (sd)
> +		if (sd) {
> +			struct sched_group *sg = sd->groups;
> +
> +			/*
> +			 * sched groups hold the flags of the child sched
> +			 * domain for convenience. Clear such flags since
> +			 * the child is being destroyed.
> +			 */
> +			do {
> +				sg->flags = 0;
> +			} while (sg != sd->groups);
> +
>  			sd->child = NULL;
> +		}
>  	}
>
>  	for (tmp = sd; tmp; tmp = tmp->parent)
> @@ -916,10 +928,12 @@ build_group_from_child_sched_domain(struct sched_domain *sd, int cpu)
>  		return NULL;
>
>  	sg_span = sched_group_span(sg);
> -	if (sd->child)
> +	if (sd->child) {
>  		cpumask_copy(sg_span, sched_domain_span(sd->child));
> -	else
> +		sg->flags = sd->child->flags;
> +	} else {
>  		cpumask_copy(sg_span, sched_domain_span(sd));
> +	}
>
>  	atomic_inc(&sg->ref);
>  	return sg;
> @@ -1169,6 +1183,7 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
>  	if (child) {
>  		cpumask_copy(sched_group_span(sg), sched_domain_span(child));
>  		cpumask_copy(group_balance_mask(sg), sched_group_span(sg));
> +		sg->flags = child->flags;
>  	} else {
>  		cpumask_set_cpu(cpu, sched_group_span(sg));
>  		cpumask_set_cpu(cpu, group_balance_mask(sg));
> --
> 2.17.1
>
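The commit message notes that a later patch in the series will consume these
flags during load balancing, for example to tell whether the CPUs in dst_cpu's
group are SMT siblings. The short, self-contained sketch below only illustrates
that kind of check and is not code from the series: struct sched_group is
trimmed to the two fields of interest, the SD_SHARE_CPUCAPACITY value is a
stand-in for the kernel's flag, and group_has_smt_siblings() is a hypothetical
helper.

#include <stdio.h>

/* Stand-in value; in the kernel, SD_SHARE_CPUCAPACITY marks SMT domains. */
#define SD_SHARE_CPUCAPACITY	0x0001

/* Trimmed stand-in for the kernel's struct sched_group. */
struct sched_group {
	int		flags;		/* child domain flags, as introduced above */
	unsigned int	group_weight;	/* number of CPUs covered by the group */
};

/*
 * Hypothetical helper: a group whose child domain was an SMT domain
 * (SD_SHARE_CPUCAPACITY set) and which covers more than one CPU is a
 * group of SMT siblings.
 */
static int group_has_smt_siblings(const struct sched_group *sg)
{
	return (sg->flags & SD_SHARE_CPUCAPACITY) && sg->group_weight > 1;
}

int main(void)
{
	struct sched_group smt  = { .flags = SD_SHARE_CPUCAPACITY, .group_weight = 2 };
	struct sched_group core = { .flags = 0, .group_weight = 1 };

	printf("smt group has SMT siblings:  %d\n", group_has_smt_siblings(&smt));
	printf("core group has SMT siblings: %d\n", group_has_smt_siblings(&core));

	return 0;
}

Whether a single-CPU group with the flag set should still count as SMT is up to
the real load-balancing code; the group_weight check above only keeps the toy
predicate concrete.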
