Subject: [RFC 2/2] sched/fair: Remove group imbalance from calculate_imbalance()
    The group_imbalance path in calculate_imbalance() made sense when it was
    added back in 2007 with commit 908a7c1b9b80 ("sched: fix improper load
    balance across sched domain") because busiest->load_per_task factored into
    the amount of imbalance that was calculated. That is not the case today.

The group_imbalance path can only affect the outcome of
calculate_imbalance() when the average load of the domain is less than the
original busiest->load_per_task. In that case, busiest->load_per_task is
overwritten with the scheduling domain load average, so it no longer
represents the actual load that can be moved.

At the final comparison between env->imbalance and busiest->load_per_task,
the imbalance may be larger than the new busiest->load_per_task, causing
the check to fail under the assumption that there is a task that could be
migrated to satisfy the imbalance. However, env->imbalance may still be
smaller than the original busiest->load_per_task, so it is unlikely that
there is a task that can be migrated to satisfy the imbalance. As a
result, calculate_imbalance() does not run fix_small_imbalance() when we
would expect it to. In the worst case, this can result in idle CPUs.
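
To make the failure mode concrete, here is a minimal standalone sketch of
the final comparison in calculate_imbalance(). This is not kernel code:
the numbers are made up, and min_ul() is a stand-in helper, but the check
mirrors the "if (env->imbalance < busiest->load_per_task)" test that
gates fix_small_imbalance().

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Hypothetical values, chosen so that avg_load < load_per_task. */
	unsigned long load_per_task = 1024;	/* original busiest->load_per_task */
	unsigned long avg_load = 600;		/* sds->avg_load for the domain */
	unsigned long imbalance = 800;		/* env->imbalance */

	/* The group_imbalanced clamp this patch removes: */
	load_per_task = min_ul(load_per_task, avg_load);	/* now 600 */

	/*
	 * fix_small_imbalance() is meant to run when the imbalance is
	 * smaller than the average load of a single task, i.e. when
	 * moving a whole task would overshoot.
	 */
	if (imbalance < load_per_task)
		printf("fix_small_imbalance() runs\n");
	else
		printf("assume %lu of load is movable via whole tasks\n",
		       imbalance);

	return 0;
}

With the clamp, 800 < 600 is false, so fix_small_imbalance() is skipped
even though the imbalance (800) is below the original per-task load
(1024), meaning the average task is too big to move without overshooting.
Without the clamp, 800 < 1024 holds and fix_small_imbalance() runs as
expected.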

    Since the group imbalance path in calculate_imbalance() is at best a NOP
    but otherwise harmful, remove it.

    Signed-off-by: Austin Christ <austinwc@codeaurora.org>
    Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
    Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
    ---
    kernel/sched/fair.c | 9 ---------
    1 file changed, 9 deletions(-)

    diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    index 8f783ba..3283561 100644
    --- a/kernel/sched/fair.c
    +++ b/kernel/sched/fair.c
@@ -7760,15 +7760,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (busiest->group_type == group_imbalanced) {
-		/*
-		 * In the group_imb case we cannot rely on group-wide averages
-		 * to ensure cpu-load equilibrium, look at wider averages. XXX
-		 */
-		busiest->load_per_task =
-			min(busiest->load_per_task, sds->avg_load);
-	}
-
 	/*
 	 * Avg load of busiest sg can be less and avg load of local sg can
 	 * be greater than avg load across all sgs of sd because avg load
    --
    Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
    Qualcomm Technologies, Inc. is a member of the
    Code Aurora Forum, a Linux Foundation Collaborative Project.