From: Srikar Dronamraju <>
Subject: [PATCH] sched/fair: Enable group_asym_packing in find_idlest_group
Date: Wed, 18 Oct 2023 21:20:35 +0530
Current scheduler code doesn't handle SD_ASYM_PACKING in the find_idlest_cpu path. On some architectures, such as PowerPC, the cache is shared at the core level, so moving threads across cores can result in cache misses.
While asym_packing can be enabled above the SMT level, enabling asym_packing across cores could result in poorer performance due to cache misses. However, if the initial task placement via find_idlest_cpu takes asym_packing into consideration, the scheduler can avoid asym_packing migrations later. This results in fewer migrations, better packing, and better overall performance.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cb225921bbca..7164f79a3d13 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9931,11 +9931,13 @@ static int idle_cpu_without(int cpu, struct task_struct *p)
  * @group: sched_group whose statistics are to be updated.
  * @sgs: variable to hold the statistics for this group.
  * @p: The task for which we look for the idlest group/CPU.
+ * @this_cpu: current cpu
  */
 static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 					  struct sched_group *group,
 					  struct sg_lb_stats *sgs,
-					  struct task_struct *p)
+					  struct task_struct *p,
+					  int this_cpu)
 {
 	int i, nr_running;
 
@@ -9972,6 +9974,11 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 
 	}
 
+	if (sd->flags & SD_ASYM_PACKING && sgs->sum_h_nr_running &&
+	    sched_asym_prefer(group->asym_prefer_cpu, this_cpu)) {
+		sgs->group_asym_packing = 1;
+	}
+
 	sgs->group_capacity = group->sgc->capacity;
 
 	sgs->group_weight = group->group_weight;
@@ -10012,8 +10019,17 @@ static bool update_pick_idlest(struct sched_group *idlest,
 			return false;
 		break;
 
-	case group_imbalanced:
 	case group_asym_packing:
+		if (sched_asym_prefer(group->asym_prefer_cpu, idlest->asym_prefer_cpu)) {
+			int busy_cpus = idlest_sgs->group_weight - idlest_sgs->idle_cpus;
+
+			busy_cpus -= (sgs->group_weight - sgs->idle_cpus);
+			if (busy_cpus >= 0)
+				return true;
+		}
+		return false;
+
+	case group_imbalanced:
 	case group_smt_balance:
 		/* Those types are not used in the slow wakeup path */
 		return false;
@@ -10080,7 +10096,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 			sgs = &tmp_sgs;
 		}
 
-		update_sg_wakeup_stats(sd, group, sgs, p);
+		update_sg_wakeup_stats(sd, group, sgs, p, this_cpu);
 
 		if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
 			idlest = group;
@@ -10112,6 +10128,17 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 	if (local_sgs.group_type > idlest_sgs.group_type)
 		return idlest;
 
+	if (idlest_sgs.group_type == group_asym_packing) {
+		if (sched_asym_prefer(idlest->asym_prefer_cpu, local->asym_prefer_cpu)) {
+			int busy_cpus = local_sgs.group_weight - local_sgs.idle_cpus;
+
+			busy_cpus -= (idlest_sgs.group_weight - idlest_sgs.idle_cpus);
+			if (busy_cpus >= 0)
+				return idlest;
+		}
+		return NULL;
+	}
+
 	switch (local_sgs.group_type) {
 	case group_overloaded:
 	case group_fully_busy:
-- 
2.31.1
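
[Editor's note] For readers unfamiliar with the asym_packing plumbing, below is a standalone user-space sketch of the comparison the patch performs in update_pick_idlest(): a candidate group is preferred over the current idlest only if its asym_prefer_cpu has higher priority and the candidate is no busier. In the kernel, sched_asym_prefer() compares arch_asym_cpu_priority() of the two CPUs; here the struct and the cpu_priority() stub are hypothetical stand-ins, not kernel code.

/* Minimal sketch, not kernel code; names other than sched_asym_prefer()
 * are illustrative.
 */
#include <stdio.h>
#include <stdbool.h>

struct grp_stats {
	int group_weight;	/* number of CPUs in the group */
	int idle_cpus;		/* currently idle CPUs in the group */
	int asym_prefer_cpu;	/* highest-priority CPU in the group */
};

/* Stand-in for arch_asym_cpu_priority(); assume lower CPU id means
 * higher priority, as a simple example policy.
 */
static int cpu_priority(int cpu)
{
	return -cpu;
}

static bool sched_asym_prefer(int a, int b)
{
	return cpu_priority(a) > cpu_priority(b);
}

/* Mirror of the patch's busy_cpus check: pick @cand over @idlest only
 * when its preferred CPU wins on priority AND it is no busier.
 */
static bool pick_for_asym_packing(const struct grp_stats *cand,
				  const struct grp_stats *idlest)
{
	if (sched_asym_prefer(cand->asym_prefer_cpu, idlest->asym_prefer_cpu)) {
		int busy_cpus = idlest->group_weight - idlest->idle_cpus;

		busy_cpus -= (cand->group_weight - cand->idle_cpus);
		if (busy_cpus >= 0)
			return true;
	}
	return false;
}

int main(void)
{
	struct grp_stats cand   = { .group_weight = 4, .idle_cpus = 3, .asym_prefer_cpu = 0 };
	struct grp_stats idlest = { .group_weight = 4, .idle_cpus = 3, .asym_prefer_cpu = 4 };

	/* Equally busy, but cand's preferred CPU has higher priority,
	 * so cand wins.
	 */
	printf("pick cand: %d\n", pick_for_asym_packing(&cand, &idlest));
	return 0;
}

The busy_cpus guard is what keeps the packing preference from overriding idleness entirely: a higher-priority group that is busier than the current choice is still rejected, so the function only breaks ties (or better) in favor of the preferred CPUs.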