Subject: [PATCH 04/10] sched/fair: Prefer idle CPU to cache affinity
The current order of preference for picking an LLC when waking a
wake-affine task is:

1. Between the waker CPU and the previous CPU, prefer the LLC of the CPU
that is idle.

2. Between the waker CPU and the previous CPU, prefer the LLC of the CPU
that is less loaded.

Currently, when both the waker CPU and the previous CPU are busy but
only one of their LLCs has an idle CPU, the scheduler may end up picking
the LLC with no idle CPUs. To mitigate this, add a method by which the
scheduler compares the idle CPUs in the waker's and the previous CPU's
LLCs and picks the more suitable one.

The new method looks at the idle-core to find an idle LLC. If neither
LLC has an idle core, it compares the ratio of busy CPUs to the total
number of CPUs in each LLC. On a sync wakeup the waker's LLC is checked
first (the waker is expected to sleep soon); otherwise the previous
CPU's LLC is checked first to retain cache affinity. The method is only
useful when comparing two different LLCs; if the previous CPU and the
waking CPU are in the same LLC, it does not apply. For now the new
method is disabled by default.
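
As an illustration of the busy-ratio comparison: to avoid integer
division, the patch cross-multiplies, i.e. it compares
pnr_busy/pllc_size against tnr_busy/tllc_size by looking at the sign of
pnr_busy * tllc_size - tnr_busy * pllc_size. Below is a minimal
standalone C sketch of just this comparison (the CPU counts are made-up
example values, not taken from the patch):

/*
 * Userspace sketch of the busy-ratio comparison only; in the kernel
 * the inputs come from sds->nr_busy_cpus and per_cpu(sd_llc_size, cpu).
 */
#include <stdio.h>

/*
 * Returns 1 if the waker's LLC is the idler one, 0 if the previous
 * CPU's LLC is, and -1 if there is no clear winner.
 */
static int idler_llc(int pnr_busy, int pllc_size, int tnr_busy, int tllc_size)
{
	/* Both LLCs fully busy: no winner, fall back to other heuristics. */
	if (pnr_busy == pllc_size && tnr_busy == tllc_size)
		return -1;

	/* Compare pnr_busy/pllc_size vs tnr_busy/tllc_size by cross-multiplying. */
	int diff = pnr_busy * tllc_size - tnr_busy * pllc_size;

	if (diff > 0)
		return 1;	/* previous LLC proportionally busier */
	if (diff < 0)
		return 0;	/* waker's LLC proportionally busier */
	return -1;		/* equally loaded */
}

int main(void)
{
	/*
	 * prev LLC: 6 of 8 CPUs busy (75%); waker LLC: 2 of 4 busy (50%).
	 * 6*4 - 2*8 = 8 > 0, so the waker's LLC is the idler one.
	 */
	printf("%d\n", idler_llc(6, 8, 2, 4));	/* prints 1 */
	return 0;
}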

Cc: LKML <linux-kernel@vger.kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Cc: Parth Shah <parth@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
Based on a similar earlier posting:
http://lore.kernel.org/lkml/20210226164029.122432-1-srikar@linux.vnet.ibm.com/t/#u

Changes since that posting (some review comments are handled in the next patch):
- Make WA_WAKER default (suggested by Rik): done in next patch
- Make WA_WAKER check more conservative (suggested by Rik / Peter)
- Rename WA_WAKER to WA_IDLER_LLC (suggested by Vincent)
- s/pllc_size/tllc_size when checking the all-busy case (pointed out by Dietmar)
- Add rcu_read_lock and check the shared domains for validity
- Add idle-core support
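
The feature defaults to off; on a kernel built with CONFIG_SCHED_DEBUG
it can be toggled at runtime for testing with
	echo WA_IDLER_LLC > /sys/kernel/debug/sched_features
and disabled again by writing NO_WA_IDLER_LLC to the same file.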

 kernel/sched/fair.c     | 64 +++++++++++++++++++++++++++++++++++++++++
 kernel/sched/features.h |  1 +
 2 files changed, 65 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09c33cca0349..943621367a96 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5869,6 +5869,67 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
 }
 
+static int wake_affine_idler_llc(struct task_struct *p, int this_cpu, int prev_cpu, int sync)
+{
+#ifdef CONFIG_NO_HZ_COMMON
+	int pnr_busy, pllc_size, tnr_busy, tllc_size;
+#endif
+	struct sched_domain_shared *tsds, *psds;
+	int diff;
+
+	tsds = rcu_dereference(per_cpu(sd_llc_shared, this_cpu));
+	psds = rcu_dereference(per_cpu(sd_llc_shared, prev_cpu));
+	if (!tsds || !psds)
+		return nr_cpumask_bits;
+
+	if (sync) {
+		if (available_idle_cpu(this_cpu) || sched_idle_cpu(this_cpu))
+			return this_cpu;
+		if (tsds->idle_core != -1) {
+			if (cpumask_test_cpu(tsds->idle_core, p->cpus_ptr))
+				return tsds->idle_core;
+			return this_cpu;
+		}
+	}
+
+	if (available_idle_cpu(prev_cpu) || sched_idle_cpu(prev_cpu))
+		return prev_cpu;
+	if (psds->idle_core != -1) {
+		if (cpumask_test_cpu(psds->idle_core, p->cpus_ptr))
+			return psds->idle_core;
+		return prev_cpu;
+	}
+
+	if (!sync) {
+		if (available_idle_cpu(this_cpu) || sched_idle_cpu(this_cpu))
+			return this_cpu;
+		if (tsds->idle_core != -1) {
+			if (cpumask_test_cpu(tsds->idle_core, p->cpus_ptr))
+				return tsds->idle_core;
+			return this_cpu;
+		}
+	}
+
+#ifdef CONFIG_NO_HZ_COMMON
+	tnr_busy = atomic_read(&tsds->nr_busy_cpus);
+	pnr_busy = atomic_read(&psds->nr_busy_cpus);
+
+	tllc_size = per_cpu(sd_llc_size, this_cpu);
+	pllc_size = per_cpu(sd_llc_size, prev_cpu);
+
+	if (pnr_busy == pllc_size && tnr_busy == tllc_size)
+		return nr_cpumask_bits;
+
+	diff = pnr_busy * tllc_size - tnr_busy * pllc_size;
+	if (diff > 0)
+		return this_cpu;
+	if (diff < 0)
+		return prev_cpu;
+#endif /* CONFIG_NO_HZ_COMMON */
+
+	return nr_cpumask_bits;
+}
+
 static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 		       int this_cpu, int prev_cpu, int sync)
 {
@@ -5877,6 +5938,9 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	if (sched_feat(WA_IDLE))
 		target = wake_affine_idle(this_cpu, prev_cpu, sync);
 
+	if (sched_feat(WA_IDLER_LLC) && target == nr_cpumask_bits)
+		target = wake_affine_idler_llc(p, this_cpu, prev_cpu, sync);
+
 	if (sched_feat(WA_WEIGHT) && target == nr_cpumask_bits)
 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
 
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 1bc2b158fc51..c77349a47e01 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -83,6 +83,7 @@ SCHED_FEAT(ATTACH_AGE_LOAD, true)
 
 SCHED_FEAT(WA_IDLE, true)
 SCHED_FEAT(WA_WEIGHT, true)
+SCHED_FEAT(WA_IDLER_LLC, false)
 SCHED_FEAT(WA_BIAS, true)
 
 /*
--
2.18.2