Date: Wed, 20 Jul 2011
From: Stephan Bärwolf <stephan.baerwolf@tu-ilmenau.de>
Subject: sched: fix/optimise some issues
After reviewing the kernel's process scheduler I found three issues.
I am not 100% sure, but I think they are worth patching (see attached
patches 0001 to 0003). These patches are against 3.0-rc7.


I also implemented 128-bit vruntime support:
Mainly on systems with many tasks and (for example) deep cgroups
(or an increased NICE_0_LOAD/SCHED_LOAD_SCALE as in commit
c8b281161dfa4bb5d5be63fb036ce19347b88c63), a weighted timeslice
(unsigned long) can become very large (on x86_64) and, when added
every tick, consume a large part of the u64 vruntime.
This might lead to mis-scheduling because of overflows.

The patches (as single files or as one combined blockpatch file) in
the bz2 files mainly introduce code (and a Kconfig option) to switch
to a 128-bit vruntime on x86_64 (of course with a little overhead) or
to limit a virtual timeslice.
These patches also tidy up the code around vruntimes by abstracting
it into separate files and types, which makes further coding easier
and simplifies debugging.
For example, after patching, vruntimes are stored in a
sched_vruntime_t type instead of a plain u64.
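
Since the bz2 attachments are not inlined below, here is a minimal
userspace sketch of the idea behind such a 128-bit type (hypothetical:
the names and layout below are illustrative only, the actual
sched_vruntime_t lives in the attached patches and may differ):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 128-bit vruntime as two 64-bit halves, so that adding
 * even a very large weighted timeslice per tick cannot wrap it. */
typedef struct {
	uint64_t lo;
	uint64_t hi;
} sched_vruntime_t;

static void vruntime_add(sched_vruntime_t *v, uint64_t delta)
{
	uint64_t old = v->lo;

	v->lo += delta;
	if (v->lo < old)	/* carry out of the low half */
		v->hi++;
}

static int vruntime_before(const sched_vruntime_t *a,
			   const sched_vruntime_t *b)
{
	return a->hi != b->hi ? a->hi < b->hi : a->lo < b->lo;
}

int main(void)
{
	sched_vruntime_t a = { .lo = UINT64_MAX - 1, .hi = 0 };
	sched_vruntime_t b = a;

	vruntime_add(&b, 10);	/* would overflow a plain u64 */
	printf("a before b: %d (b.hi=%llu)\n",
	       vruntime_before(&a, &b), (unsigned long long)b.hi);
	return 0;
}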
Please see for yourself (and excuse the direct export from my local git)...

The bz2 patches apply against 2.6.39.3.

Best regards, Stephan

--
Dipl.-Inf. Stephan Bärwolf
Ilmenau University of Technology, Integrated Communication Systems Group
Phone: +49 (0)3677 69 4130
Email: stephan.baerwolf@tu-ilmenau.de,
Web: http://www.tu-ilmenau.de/iks

[two application/x-bzip attachments not shown]
From c9b7e910de032ce0851ec297575d42bc8f07a4a3 Mon Sep 17 00:00:00 2001
From: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
Date: Wed, 20 Jul 2011 15:06:17 +0200
Subject: [PATCH 3/3] sched: fix incorrect use of "ideal_runtime"'s time unit

In "check_preempt_tick()" (kernel/sched_fair.c:1093) a ulong
called "ideal_runtime" stores a timeslice of the current task
(scheduling entity). This time complies real cpu-time.

At the end of the same function (nr_running > 1) this (real) time
is compared with a virtual-runtime-delta. Obviously the timeunits
(real vs. virtual) didn't fit.

Using "wakeup_preempt_entity()" instead should fix this in a even
more general way.

Signed-off-by: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
---
kernel/sched_fair.c | 16 +++++++---------
1 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 66dc9f7..61d002d 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1083,6 +1083,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
update_cfs_shares(cfs_rq);
}

+static int
+wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
+
/*
* Preempt the current task with a newly woken task if needed:
*/
@@ -1115,14 +1118,11 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
return;

if (cfs_rq->nr_running > 1) {
+ /* check whether curr has finally overtaken the remaining leftmost */
struct sched_entity *se = __pick_first_entity(cfs_rq);
- s64 delta = curr->vruntime - se->vruntime;
-
- if (delta < 0)
- return;
-
- if (delta > ideal_runtime)
- resched_task(rq_of(cfs_rq)->curr);
+
+ if (wakeup_preempt_entity(curr, se) > 0)
+ resched_task(rq_of(cfs_rq)->curr);
}
}

@@ -1156,8 +1156,6 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
se->prev_sum_exec_runtime = se->sum_exec_runtime;
}

-static int
-wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);

/*
* Pick the next process, keeping these things in mind, in this order:
--
1.7.3.4
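
To illustrate the unit mismatch fixed above: in CFS, vruntime advances
at a rate scaled by the inverse of the task's load weight, roughly
delta_exec * NICE_0_LOAD / weight, so real time and virtual time only
coincide for nice-0 weight. A standalone sketch (illustration only;
approx_delta_fair is a hypothetical simplification of the kernel's
calc_delta_fair(), and NICE_0_LOAD is assumed to be 1024 as in 2.6.39):

#include <stdint.h>
#include <stdio.h>

#define NICE_0_LOAD 1024UL	/* assumed weight of a nice-0 task */

/* Roughly how CFS converts real runtime into vruntime: heavier tasks
 * accumulate vruntime more slowly than wall-clock time. */
static uint64_t approx_delta_fair(uint64_t delta_exec_ns,
				  unsigned long weight)
{
	return delta_exec_ns * NICE_0_LOAD / weight;
}

int main(void)
{
	uint64_t ideal_runtime = 6000000;	/* 6 ms of real CPU time */

	/* For a task with twice the nice-0 weight, 6 ms of real time
	 * amount to only 3 ms of vruntime, so "delta > ideal_runtime"
	 * fires too late for heavy tasks and too early for light ones. */
	printf("vruntime delta: %llu ns\n",
	       (unsigned long long)approx_delta_fair(ideal_runtime,
						     2 * NICE_0_LOAD));
	return 0;
}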
From df6bc28340e3cc2b4ebe132492971e8a4164fe11 Mon Sep 17 00:00:00 2001
From: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
Date: Wed, 20 Jul 2011 14:46:59 +0200
Subject: [PATCH 2/3] sched: replace use of "entity_key()"

"entity_key()" is only used in "__enqueue_entity()" and
its only purpose is to subtract the group's min_vruntime from a
task's vruntime.
Before this patch, the rbtree enqueue decision is made by
comparing two tasks in the style:

"if (entity_key(cfs_rq, se) < entity_key(cfs_rq, entry))"

which would be

"if (se->vruntime-cfs_rq->min_vruntime < entry->vruntime-cfs_rq->min_vruntime)"

or (after cancelling cfs_rq->min_vruntime on both sides)

"if (se->vruntime < entry->vruntime)"

which is

"if (entity_before(se, entry))"

So we do not need "entity_key()".
If "entity_before()" is inline we will also save one subtraction (only one,
because "entity_key(cfs_rq, se)" was cached in "key")

Signed-off-by: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
---
kernel/sched_fair.c | 8 +-------
1 files changed, 1 insertions(+), 7 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index e092e72..66dc9f7 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -334,11 +334,6 @@ static inline int entity_before(struct sched_entity *a,
return (s64)(a->vruntime - b->vruntime) < 0;
}

-static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
-{
- return se->vruntime - cfs_rq->min_vruntime;
-}
-
static void update_min_vruntime(struct cfs_rq *cfs_rq)
{
u64 vruntime = cfs_rq->min_vruntime;
@@ -372,7 +367,6 @@ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
struct rb_node *parent = NULL;
struct sched_entity *entry;
- s64 key = entity_key(cfs_rq, se);
int leftmost = 1;

/*
@@ -385,7 +379,7 @@ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
* We dont care about collisions. Nodes with
* the same key stay together.
*/
- if (key < entity_key(cfs_rq, entry)) {
+ if (entity_before(se, entry)) {
link = &parent->rb_left;
} else {
link = &parent->rb_right;
--
1.7.3.4
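
The cancellation argument above can be checked quickly, including near
u64 wraparound, with a standalone demo (illustration only, not part of
the patch):

#include <stdint.h>
#include <stdio.h>

/* The old comparison, with min_vruntime subtracted on both sides... */
static int by_key(uint64_t a, uint64_t b, uint64_t min)
{
	return (int64_t)(a - min) < (int64_t)(b - min);
}

/* ...and entity_before(), where the subtraction has cancelled out. */
static int before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}

int main(void)
{
	uint64_t min = UINT64_MAX - 100;  /* min_vruntime about to wrap */
	uint64_t a = min + 10;            /* not yet wrapped */
	uint64_t b = min + 200;           /* already wrapped past zero */

	/* Both report "a before b"; they agree as long as entities on
	 * one runqueue stay within 2^63 of each other, which holds in
	 * practice. */
	printf("by_key=%d before=%d\n", by_key(a, b, min), before(a, b));
	return 0;
}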
From ccd1e7d300c1f939da745e1c0d50d13fc3ccec7b Mon Sep 17 00:00:00 2001
From: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
Date: Wed, 20 Jul 2011 14:37:56 +0200
Subject: [PATCH 1/3] sched: check WAKEUP_PREEMPT feature before preempting anything

The function "check_preempt_wakeup" (kernel/sched_fair.c:1885)
will preempt idle-task (for non-idle task), even if WAKEUP_PREEMPT
is not featured (because of a too late checking the feature).
This patches moves the checking of WAKEUP_PREEMT in front of
idle-preemtion.

Signed-off-by: Stephan Baerwolf <stephan.baerwolf@tu-ilmenau.de>
---
kernel/sched_fair.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 433491c..e092e72 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1905,6 +1905,9 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
if (test_tsk_need_resched(curr))
return;

+ if (!sched_feat(WAKEUP_PREEMPT))
+ return;
+
/* Idle tasks are by definition preempted by non-idle tasks. */
if (unlikely(curr->policy == SCHED_IDLE) &&
likely(p->policy != SCHED_IDLE))
@@ -1918,9 +1921,6 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
return;


- if (!sched_feat(WAKEUP_PREEMPT))
- return;
-
update_curr(cfs_rq);
find_matching_se(&se, &pse);
BUG_ON(!pse);
--
1.7.3.4