Subject: [PATCH] sched/fair: Fix task utilization accountability in cpu_util_next()
Date: 22 Feb 2021
From: Vincent Donnefort <vincent.donnefort@arm.com>

Currently, cpu_util_next() estimates the CPU utilization as follows:

  max(cpu_util + task_util,
      cpu_util_est + task_util_est)

This is an issue when comparing CPUs, as the task contribution can be
either:

(1) task_util_est, on a mostly idle CPU, where cpu_util is close to 0
and task_util_est > cpu_util.
(2) task_util, on a mostly busy CPU, where cpu_util > task_util_est.

This gives an unfair advantage to some CPUs when comparing energy deltas
during task wake-up placement: at that point task_util has decayed over
the sleep time and is therefore always smaller than task_util_est. The
energy delta is hence likely to be bigger on a mostly idle CPU (1) than
on a mostly busy CPU (2).
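To illustrate with made-up numbers (not taken from any trace), say
task_util = 100 and task_util_est = 200:

  mostly idle CPU (1):  cpu_util = 50,  cpu_util_est = 60
    without the task: max(50, 60)              = 60
    with the task:    max(50 + 100, 60 + 200)  = 260  -> delta = 200

  mostly busy CPU (2):  cpu_util = 400, cpu_util_est = 250
    without the task: max(400, 250)              = 400
    with the task:    max(400 + 100, 250 + 200)  = 500  -> delta = 100

The idle CPU gets charged task_util_est, the busy one only task_util.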

Moreover, this issue is not sporadic: by starving idle CPUs of tasks, the
bias keeps their cpu_util < task_util_est (1), while the other CPUs
maintain cpu_util > task_util_est (2).

The new approach uses, when UTIL_EST is enabled, task_util_est() as the
task contribution, which ensures that all CPUs account for the same value:

  max(cpu_util + max(task_util, task_util_est),
      cpu_util_est + max(task_util, task_util_est))
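
With the same made-up numbers as above, both CPUs are now charged
max(task_util, task_util_est) = 200:

  mostly idle CPU (1):  max(50 + 200, 60 + 200)   = 260  -> delta = 200
  mostly busy CPU (2):  max(400 + 200, 250 + 200) = 600  -> delta = 200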

This patch doesn't modify the !UTIL_EST behaviour.

Also, replace sub_positive() with lsub_positive(), which saves one
explicit load-store.
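
For readers not familiar with the two helpers, a rough sketch of the
distinction (hypothetical *_sketch() functions, not the exact
typeof()-based macros from fair.c):

static inline void sub_positive_sketch(unsigned long *ptr, unsigned long val)
{
	/* Subtract and clamp at zero, with an explicit load and store. */
	unsigned long old = READ_ONCE(*ptr);

	WRITE_ONCE(*ptr, old > val ? old - val : 0);
}

static inline void lsub_positive_sketch(unsigned long *var, unsigned long val)
{
	/* Same clamp-at-zero subtraction, but for a local variable. */
	*var -= min(*var, val);
}

Since cpu_util_next() only updates a local 'util' variable, the explicit
load-store pair buys nothing there.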

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fb9f10d4312b..dd143aafaf97 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6516,32 +6516,42 @@ static unsigned long cpu_util_without(int cpu, struct task_struct *p)
static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
{
struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
- unsigned long util_est, util = READ_ONCE(cfs_rq->avg.util_avg);
+ unsigned long util = READ_ONCE(cfs_rq->avg.util_avg);

/*
- * If @p migrates from @cpu to another, remove its contribution. Or,
- * if @p migrates from another CPU to @cpu, add its contribution. In
- * the other cases, @cpu is not impacted by the migration, so the
- * util_avg should already be correct.
+ * UTIL_EST case: hide the task_util contribution from util.
+ * During wake-up, the task isn't enqueued yet and doesn't
+ * appear in the util_est of any CPU. No contribution has
+ * therefore to be removed from util_est.
+ *
+ * If @p migrates to this CPU, add its contribution to util and
+ * util_est.
*/
- if (task_cpu(p) == cpu && dst_cpu != cpu)
- sub_positive(&util, task_util(p));
- else if (task_cpu(p) != cpu && dst_cpu == cpu)
- util += task_util(p);
-
if (sched_feat(UTIL_EST)) {
+ unsigned long util_est;
+
util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);

- /*
- * During wake-up, the task isn't enqueued yet and doesn't
- * appear in the cfs_rq->avg.util_est.enqueued of any rq,
- * so just add it (if needed) to "simulate" what will be
- * cpu_util() after the task has been enqueued.
- */
- if (dst_cpu == cpu)
- util_est += _task_util_est(p);
+ if (task_cpu(p) == cpu)
+ lsub_positive(&util, task_util(p));
+
+ if (dst_cpu == cpu) {
+ unsigned long task_util = task_util_est(p);
+
+ util += task_util;
+ util_est += task_util;
+ }

util = max(util, util_est);
+ /*
+ * !UTIL_EST case: If @p migrates from @cpu to another, remove its
+ * contribution. If @p migrates to @cpu, add it.
+ */
+ } else {
+ if (task_cpu(p) == cpu && dst_cpu != cpu)
+ lsub_positive(&util, task_util(p));
+ else if (task_cpu(p) != cpu && dst_cpu == cpu)
+ util += task_util(p);
}

return min(util, arch_scale_cpu_capacity(cpu));
--
2.25.1