From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Subject: [PATCH 2/4] sched/fair: Introduce arch_sched_asym_prefer_early()
Date: Tue, 6 Apr 2021
Introduce arch_sched_asym_prefer_early() so that architectures with SMT
can delay the decision to label a candidate busiest group as
group_asym_packing.

When using asymmetric packing, high-priority idle CPUs pull tasks from
scheduling groups with low-priority CPUs. The decision to use asymmetric
packing for load balancing is made after collecting the statistics of a
candidate busiest group. However, this decision needs to consider the
state of the SMT siblings of dst_cpu.
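
For illustration only (not part of this patch), a minimal sketch of how an
architecture with SMT might override the new weak hook. It assumes
sched_smt_active() and sched_asym_prefer() are visible at the definition
site; the policy shown is hypothetical:

bool arch_sched_asym_prefer_early(int a, int b)
{
	/*
	 * Hypothetical SMT-aware policy: without SMT, the plain
	 * priority comparison is sufficient, so decide right away.
	 */
	if (!sched_smt_active())
		return sched_asym_prefer(a, b);

	/*
	 * With SMT, keep the group as a candidate for now: whether
	 * dst_cpu should pull tasks depends on the state of its SMT
	 * siblings, which is only known once the group statistics
	 * have been collected.
	 */
	return true;
}

The weak default below keeps today's behavior, while an SMT-aware
architecture can defer the final check as sketched above.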

Cc: Aubrey Li <aubrey.li@intel.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Quentin Perret <qperret@google.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Reviewed-by: Len Brown <len.brown@intel.com>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 11 ++++++++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778b7c91..663b98959305 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -57,6 +57,7 @@ static inline int cpu_numa_flags(void)
 #endif
 
 extern int arch_asym_cpu_priority(int cpu);
+extern bool arch_sched_asym_prefer_early(int a, int b);
 
 struct sched_domain_attr {
 	int relax_domain_level;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4ef3fa0d5e8d..e74da853b046 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -106,6 +106,15 @@ int __weak arch_asym_cpu_priority(int cpu)
 	return -cpu;
 }
 
+/*
+ * For asym packing, early check if CPUs with higher priority should be
+ * preferred. On some architectures, more data is needed to make a decision.
+ */
+bool __weak arch_sched_asym_prefer_early(int a, int b)
+{
+	return sched_asym_prefer(a, b);
+}
+
 /*
  * The margin used when comparing utilization with CPU capacity.
  *
@@ -8458,7 +8467,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
 	    env->idle != CPU_NOT_IDLE &&
 	    sgs->sum_h_nr_running &&
-	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+	    arch_sched_asym_prefer_early(env->dst_cpu, group->asym_prefer_cpu)) {
 		sgs->group_asym_packing = 1;
 	}
 
-- 
2.17.1