From: Venkatesh Pallipadi <venki@google.com>
Subject: [PATCH] sched: Buggy comparison in check_preempt_tick
Date: 24 Dec 2010
A preempt comparison line in check_preempt_tick has two bugs:
* It compares signed and unsigned quantities, which breaks when the signed
quantity happens to be negative (demonstrated below).
* It compares runtime and vruntime, which breaks when there are niced tasks
(see the sketch after the trace).

The bug was initially found by linsched [1]. The change here fixes both
problems.

On x86-64, the signed/unsigned compare results in tasks running _longer_
than their expected time slice: a spurious resched_task() gets signalled
after 4 ticks (on the tick after the preceding sysctl_sched_min_granularity
check), and the currently running task gets picked again and runs for
another ideal_runtime interval.

With 2 busy loops on a single CPU, trace_printks inside this buggy check
(where it triggers resched_task) and in pick_next_task show this:

[001] 510.524336: pick_next_task_fair: loop (5939)
[001] 510.536326: pick_next_task_fair: loop (5883)
[001] 510.540319: task_tick_fair: delta -4897059, ideal_runtime 11994146
[001] 510.540321: pick_next_task_fair: loop (5883)
[001] 510.544306: task_tick_fair: delta -906540, ideal_runtime 11994146
[001] 510.544309: pick_next_task_fair: loop (5883)
[001] 510.556306: pick_next_task_fair: loop (5939)
[001] 510.560301: task_tick_fair: delta -7105824, ideal_runtime 11994146
[001] 510.560304: pick_next_task_fair: loop (5939)
[001] 510.564298: task_tick_fair: delta -3105461, ideal_runtime 11994146
[001] 510.564300: pick_next_task_fair: loop (5939)
[001] 510.576288: pick_next_task_fair: loop (5883)
[001] 510.580282: task_tick_fair: delta -4897210, ideal_runtime 11994146
[001] 510.580285: pick_next_task_fair: loop (5883)
[001] 510.584278: task_tick_fair: delta -897348, ideal_runtime 11994146
[001] 510.584281: pick_next_task_fair: loop (5883)
[001] 510.596269: pick_next_task_fair: loop (5939)

That is a 20 ms slice for each task, with some redundant resched_tasks;
with the fix, ~12 ms slices are expected (on a 16-CPU system).
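The vruntime side of the bug is a units mismatch: delta is a difference of
vruntimes, which are wall-clock runtimes scaled by task weight, while
ideal_runtime is plain wall-clock time; the two only coincide for nice-0
tasks. A rough userspace model of that scaling (assuming the simplified
formula delta_exec * NICE_0_LOAD / weight; the kernel's calc_delta_fair()
uses fixed-point inverse weights, and the helper below is only
illustrative):

#include <stdio.h>

#define NICE_0_LOAD	1024	/* load weight of a nice-0 task */

/*
 * Illustrative stand-in for calc_delta_fair(): scale a wall-clock
 * delta into vruntime units by the entity's load weight.
 */
static unsigned long long to_vruntime(unsigned long long delta_exec,
				      unsigned long weight)
{
	return delta_exec * NICE_0_LOAD / weight;
}

int main(void)
{
	unsigned long long ideal_runtime = 11994146;	/* ns, from the trace */

	/* nice 0 (weight 1024): vruntime advances at the wall-clock rate */
	printf("nice 0: %llu ns of vruntime\n",
	       to_vruntime(ideal_runtime, 1024));

	/* nice 5 (weight 335): vruntime advances roughly 3x faster */
	printf("nice 5: %llu ns of vruntime\n",
	       to_vruntime(ideal_runtime, 335));

	return 0;
}

For a niced task the vruntime delta thus crosses an un-scaled ideal_runtime
well before the task has received its wall-clock slice, which is why the
patch converts ideal_runtime into vruntime units with calc_delta_fair()
before the compare.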

[1] - http://lwn.net/Articles/409680/

Signed-off-by: Venkatesh Pallipadi <venki@google.com>
---
kernel/sched_fair.c | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 00ebd76..fc5ffbd 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -871,8 +871,10 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (cfs_rq->nr_running > 1) {
 		struct sched_entity *se = __pick_next_entity(cfs_rq);
 		s64 delta = curr->vruntime - se->vruntime;
+		unsigned long ideal_vruntime;
 
-		if (delta > ideal_runtime)
+		ideal_vruntime = calc_delta_fair(ideal_runtime, curr);
+		if (delta > (s64)ideal_vruntime)
 			resched_task(rq_of(cfs_rq)->curr);
 	}
 }
--
1.7.3.1

