Subject: [PATCH 8/8] sched/deadline: Return the best satisfying affinity and dl in cpudl_find
cpudl_find() is used to find a cpu having the latest dl. The function
should return the cpu with the latest dl among those satisfying the
task's affinity and dl constraints, but the current code gives up
immediately and returns failure as soon as the test fails for the
maximum cpu, the only cpu it checks.

For example:

cpu 0 is running a task (dl: 10).
cpu 1 is running a task (dl: 9).
cpu 2 is running a task (dl: 8).
cpu 3 is running a task (dl: 2).

where cpu 3 wants to push a task (affinity is 1 2 3 and dl is 1).

In this case, the task should be migrated from cpu 3 to cpu 1, where it
preempts cpu 1's task. However, the current code just returns failure
because the affinity test fails for the maximum cpu, that is, cpu 0.
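
The selection logic in this example can be sketched in plain user-space
C. This is only an illustration, not kernel code: the curr_dl/allowed
arrays, the linear scan, and the plain "<" comparison are stand-ins for
the cpudl heap, cpus_allowed and dl_time_before().

/* Illustrative sketch of the scenario above; not kernel code. */
#include <stdio.h>

#define NR_CPUS 4

int main(void)
{
	unsigned long long curr_dl[NR_CPUS] = { 10, 9, 8, 2 }; /* dl running on cpu 0..3 */
	int allowed[NR_CPUS]                = { 0, 1, 1, 1 };  /* task's affinity: 1 2 3 */
	unsigned long long task_dl = 1;
	int cpu, best = -1;

	/* Current behaviour: only the cpu with the latest dl (cpu 0) is tested. */
	if (allowed[0] && task_dl < curr_dl[0])
		best = 0;
	printf("max-cpu-only check: best_cpu = %d\n", best);  /* -1, affinity fails */

	/* Desired behaviour: the latest dl among the cpus the task may run on. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (allowed[cpu] && task_dl < curr_dl[cpu] &&
		    (best == -1 || curr_dl[cpu] > curr_dl[best]))
			best = cpu;
	printf("affinity-aware scan: best_cpu = %d\n", best); /* 1 */

	return 0;
}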

This patch keeps searching for the best cpu among those satisfying the
task's affinity and dl constraints, until it either succeeds or runs
out of candidates to examine.
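
For illustration, the pop-and-restore idea can be sketched in
user-space C. The names below (struct item, maximum_cpu(), slow_find())
are hypothetical stand-ins for struct cpudl_item, cpudl_maximum_cpu()
and cpudl_slow_find(); the max-heap is modelled as a toy array with a
linear scan, and locking is omitted. It is a sketch of the approach,
not the implementation.

/*
 * Simplified sketch of the slow path's pop-and-restore idea.
 * restore_cpu/restore_dl form a chain of the elements popped while
 * searching, so they can be pushed back afterwards.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 4

struct item {
	bool valid;                     /* still in the "heap" */
	unsigned long long dl;
	int restore_cpu;                /* previously popped cpu, or -1 */
	unsigned long long restore_dl;
};

static struct item elements[NR_CPUS];

/* Stand-in for cpudl_maximum_cpu(): cpu with the latest dl, or -1. */
static int maximum_cpu(void)
{
	int cpu, max = -1;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (elements[cpu].valid &&
		    (max == -1 || elements[cpu].dl > elements[max].dl))
			max = cpu;
	return max;
}

static int slow_find(const int *allowed, unsigned long long task_dl)
{
	int prev_cpu = -1, max_cpu = maximum_cpu();

	/* Pop maxima until one satisfies both affinity and dl. */
	while (max_cpu != -1) {
		if (allowed[max_cpu] && task_dl < elements[max_cpu].dl)
			break;

		elements[max_cpu].restore_cpu = prev_cpu;
		elements[max_cpu].restore_dl = elements[max_cpu].dl;
		prev_cpu = max_cpu;
		elements[max_cpu].valid = false;        /* "__cpudl_clear()" */
		max_cpu = maximum_cpu();
	}

	/* Put the popped elements back so the structure is unchanged. */
	while (prev_cpu != -1) {
		elements[prev_cpu].valid = true;        /* "__cpudl_set()" */
		elements[prev_cpu].dl = elements[prev_cpu].restore_dl;
		prev_cpu = elements[prev_cpu].restore_cpu;
	}

	return max_cpu;
}

int main(void)
{
	const unsigned long long dls[NR_CPUS] = { 10, 9, 8, 2 };
	const int allowed[NR_CPUS] = { 0, 1, 1, 1 };
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		elements[cpu] = (struct item){ .valid = true, .dl = dls[cpu] };

	printf("slow_find() -> cpu %d\n", slow_find(allowed, 1)); /* 1 */
	return 0;
}

Running it on the example above prints cpu 1: cpu 0 is popped because
of the affinity miss, cpu 1 satisfies both tests, and the restore loop
pushes cpu 0 back so the structure ends up unchanged.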

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
kernel/sched/cpudeadline.c | 38 ++++++++++++++++++++++++++++++++++++++
kernel/sched/cpudeadline.h | 9 +++++++++
2 files changed, 47 insertions(+)

diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 453159a..9172646 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -174,6 +174,42 @@ static void __cpudl_set(struct cpudl *cp, int cpu, u64 dl)
 	}
 }
 
+static int cpudl_slow_find(struct cpudl *cp, struct task_struct *p)
+{
+	const struct sched_dl_entity *dl_se = &p->dl;
+	unsigned long flags;
+	int prev_cpu = -1;
+	int max_cpu;
+	u64 max_dl;
+
+	raw_spin_lock_irqsave(&cp->lock, flags);
+	max_cpu = cpudl_maximum_cpu(cp);
+	max_dl = cpudl_maximum_dl(cp);
+
+	while (max_cpu != -1) {
+		if (cpumask_test_cpu(max_cpu, &p->cpus_allowed) &&
+		    dl_time_before(dl_se->deadline, max_dl))
+			break;
+
+		/* Pick up the next. */
+		cp->elements[max_cpu].restore_cpu = prev_cpu;
+		cp->elements[max_cpu].restore_dl = max_dl;
+		prev_cpu = max_cpu;
+		__cpudl_clear(cp, max_cpu);
+		max_cpu = cpudl_maximum_cpu(cp);
+		max_dl = cpudl_maximum_dl(cp);
+	}
+
+	/* Restore the heap tree */
+	while (prev_cpu != -1) {
+		__cpudl_set(cp, prev_cpu, cp->elements[prev_cpu].restore_dl);
+		prev_cpu = cp->elements[prev_cpu].restore_cpu;
+	}
+
+	raw_spin_unlock_irqrestore(&cp->lock, flags);
+	return max_cpu;
+}
+
 static int cpudl_fast_find(struct cpudl *cp, struct task_struct *p)
 {
 	const struct sched_dl_entity *dl_se = &p->dl;
@@ -213,6 +249,8 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
 		goto out;
 	} else {
 		best_cpu = cpudl_fast_find(cp, p);
+		if (best_cpu == -1)
+			best_cpu = cpudl_slow_find(cp, p);
 		if (best_cpu != -1 && later_mask)
 			cpumask_set_cpu(best_cpu, later_mask);
 	}
diff --git a/kernel/sched/cpudeadline.h b/kernel/sched/cpudeadline.h
index f7da8c5..736ff89 100644
--- a/kernel/sched/cpudeadline.h
+++ b/kernel/sched/cpudeadline.h
@@ -10,6 +10,15 @@ struct cpudl_item {
 	u64 dl;
 	int cpu;
 	int idx;
+	/*
+	 * cpudl_slow_find() needs to pop elements one by one from
+	 * the heap until it eventually finds a suitable cpu,
+	 * considering the task's affinity. After that, we need to
+	 * restore the heap to its original state, using the
+	 * following restore_cpu and restore_dl.
+	 */
+	int restore_cpu;
+	u64 restore_dl;
 };
 
 struct cpudl {
--
1.9.1