Subject: Re: [PATCH V6] sched/fair: Remove group imbalance from calculate_imbalance()
From: Dietmar Eggemann <>
Date: Tue, 18 Jul 2017 20:48:53 +0100
Hi Jeffrey,
On 13/07/17 20:55, Jeffrey Hugo wrote:
> The group_imbalance path in calculate_imbalance() made sense when it was
> added back in 2007 with commit 908a7c1b9b80 ("sched: fix improper load
> balance across sched domain") because busiest->load_per_task factored into
> the amount of imbalance that was calculated. Beginning with commit
> dd5feea14a7d ("sched: Fix SCHED_MC regression caused by change in sched
> cpu_power"), busiest->load_per_task is not a factor in the imbalance
> calculation, thus the group_imbalance path no longer makes sense.
You're referring here to the use of 'sds->max_load - sds->busiest_load_per_task' in the calculation of max_pull which got replaced by load_above_capacity with dd5feea14a7d?
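Just to make sure we are talking about the same thing, here is a toy user-space model of that change (the numbers are made up and the formulas simplified from memory, so don't read it as the actual fair.c code):

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* made-up example loads, scaled like NICE_0_LOAD = 1024 */
	unsigned long max_load = 1536;			/* busiest group avg load */
	unsigned long avg_load = 768;			/* sched domain avg load */
	unsigned long busiest_load_per_task = 1024;	/* busiest group load per task */
	unsigned long load_above_capacity = 256;	/* load beyond group capacity */

	/* pre-dd5feea14a7d shape: busiest_load_per_task bounds how much we pull */
	printf("old max_pull = %lu\n",
	       min_ul(max_load - avg_load, max_load - busiest_load_per_task));

	/* post-dd5feea14a7d shape: load_above_capacity bounds it instead, so
	 * busiest_load_per_task no longer enters the imbalance calculation */
	printf("new max_pull = %lu\n",
	       min_ul(max_load - avg_load, load_above_capacity));

	return 0;
}

So after dd5feea14a7d only load_above_capacity bounds the pull, which is what the commit message argues.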
I still wonder if the original code (908a7c1b9b80)
  if (group_imb)
          busiest_load_per_task = min(busiest_load_per_task, avg_load);
had something to do with the following:
  if (max_load <= busiest_load_per_task)
          goto out_balanced;
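My guess at the rationale (toy numbers again, plain user-space model only, so treat the exact values as an assumption): with affinity forcing both tasks onto one cpu of a 2-cpu group, max_load equals busiest_load_per_task and without the clamp we would bail out via out_balanced:

#include <stdio.h>

int main(void)
{
	/* 2-cpu busiest group, both (nice-0, 1024 weight) tasks stuck on one cpu */
	unsigned long max_load = 1024;			/* 2048 group load / 2 cpus */
	unsigned long busiest_load_per_task = 1024;	/* 2048 group load / 2 tasks */
	unsigned long sd_avg_load = 512;		/* 2048 load over 4 cpus in the domain */

	/* without the group_imb clamp: 1024 <= 1024, looks balanced */
	if (max_load <= busiest_load_per_task)
		printf("out_balanced, nothing moved\n");

	/* with the clamp: load_per_task becomes 512, 1024 > 512, keep balancing */
	if (busiest_load_per_task > sd_avg_load)
		busiest_load_per_task = sd_avg_load;
	if (max_load > busiest_load_per_task)
		printf("keep going, load_per_task clamped to %lu\n",
		       busiest_load_per_task);

	return 0;
}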
> The group_imbalance path can only affect the outcome of
> calculate_imbalance() when the average load of the domain is less than the
> original busiest->load_per_task. In this case, busiest->load_per_task is
> overwritten with the scheduling domain load average. Thus
> busiest->load_per_task no longer represents actual load that can be moved.
> 
> At the final comparison between env->imbalance and busiest->load_per_task,
> imbalance may be larger than the new busiest->load_per_task causing the
> check to fail under the assumption that there is a task that could be
> migrated to satisfy the imbalance. However env->imbalance may still be
> smaller than the original busiest->load_per_task, thus it is unlikely that
> there is a task that can be migrated to satisfy the imbalance.
> Calculate_imbalance() would not choose to run fix_small_imbalance() when we
> expect it should. In the worst case, this can result in idle cpus.
> 
> Since the group imbalance path in calculate_imbalance() is at best a NOP
> but otherwise harmful, remove it.
>
IIRC the topology you had in mind was MC + DIE level with n (n > 2) DIE level sched groups.
Running the testcase 'taskset 0x05 '2 always running tasks'' (both tasks starting on cpu0) on your machine shows the issue: with your previous patch [1] "sched/fair: Fix load_balance() affinity redo path" we now propagate 'group imbalance' from MC level to DIE level, and since you have n > 2, busiest->load_per_task is lowered in this group_imbalanced related if condition all the time, so env->imbalance stays too small to let one of these tasks migrate to cpu2.
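To put rough numbers on that (made-up values and a plain user-space model, not the kernel code): with e.g. 4 DIE-level groups of 4 cpus each and only those two tasks running, sds->avg_load ends up far below the original busiest->load_per_task, so the clamp shrinks load_per_task and the final fix_small_imbalance() check no longer fires:

#include <stdio.h>

int main(void)
{
	/* 4 DIE groups x 4 cpus, two 1024-weight tasks stuck on cpu0 */
	unsigned long load_per_task = 1024;	/* original busiest->load_per_task */
	unsigned long sds_avg_load = 128;	/* 2048 load spread over 16 cpus */
	unsigned long env_imbalance = 300;	/* assume a small computed imbalance */

	/* the clamp this patch removes */
	if (load_per_task > sds_avg_load)
		load_per_task = sds_avg_load;

	/* final check in calculate_imbalance(): 300 < 128 is false, so
	 * fix_small_imbalance() is skipped */
	if (env_imbalance < load_per_task)
		printf("fix_small_imbalance()\n");
	else
		printf("keep imbalance = %lu\n", env_imbalance);

	return 0;
}

IIRC detach_tasks() then refuses to move a 1024-weight task for such a small imbalance, which matches the idle cpu2 you see.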
Tried to test it on an Intel i5-3320M (2 cores x 2 HT) with rt-app (2 always running cfs tasks with affinity 0x05 for 2*x ms and one rt task affine to 0x04 for x ms):
# cat /proc/schedstat | grep ^domain | awk '{ print $1" "$2}'
domain0 03
domain1 0f
domain0 03
domain1 0f
domain0 0c
domain1 0f
domain0 0c
domain1 0f
but here the prefer_sibling handling (group overloaded) eclipses 'group imbalance' the moment one of the cfs tasks can go to cpu2, so the if condition you got rid of is a NOP.
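For completeness, that is because of the group_type priority, roughly like this (paraphrased from memory, so the names and exact conditions are an assumption on my side): once the prefer_sibling handling marks the group as having no capacity it classifies as group_overloaded, and the group_imbalanced branch in calculate_imbalance() is never reached:

#include <stdio.h>

enum group_type { group_other, group_imbalanced, group_overloaded };

struct sg_lb_stats {
	int group_no_capacity;	/* set by the prefer_sibling handling */
	int group_imb;		/* set when affinity skews the group */
};

static enum group_type group_classify(const struct sg_lb_stats *sgs)
{
	/* overloaded wins over imbalanced, so the (now removed) group_imbalanced
	 * branch in calculate_imbalance() is not taken in this case */
	if (sgs->group_no_capacity)
		return group_overloaded;
	if (sgs->group_imb)
		return group_imbalanced;
	return group_other;
}

int main(void)
{
	struct sg_lb_stats sgs = { .group_no_capacity = 1, .group_imb = 1 };

	printf("group_type = %d (group_overloaded = %d)\n",
	       group_classify(&sgs), group_overloaded);
	return 0;
}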
I wonder if it is fair to say that your fix (together with your first patch [1]) helps multi-cluster systems (especially with n > 2) without SMT for these specific, cpu-affinity-restricted test cases.
> Co-authored-by: Austin Christ <austinwc@codeaurora.org>
> Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
> Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
> ---
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> 
> [v6]
> -Added additional history clarification to commit text
> 
>  kernel/sched/fair.c | 9 ---------
>  1 file changed, 9 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 84255ab..3600713 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7760,15 +7760,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	local = &sds->local_stat;
>  	busiest = &sds->busiest_stat;
>  
> -	if (busiest->group_type == group_imbalanced) {
> -		/*
> -		 * In the group_imb case we cannot rely on group-wide averages
> -		 * to ensure cpu-load equilibrium, look at wider averages. XXX
> -		 */
> -		busiest->load_per_task =
> -			min(busiest->load_per_task, sds->avg_load);
> -	}
> -
>  	/*
>  	 * Avg load of busiest sg can be less and avg load of local sg can
>  	 * be greater than avg load across all sgs of sd because avg load
>