From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH 5/5] sched: limit sched_slice if it is more than sysctl_sched_latency
Date: 2013-03-28

sched_slice() computes the ideal runtime slice. If there are many tasks
on the cfs_rq, the period for this cfs_rq is stretched so that each task
is guaranteed a time slice of at least sysctl_sched_min_granularity. Each
task then gets a portion of this period proportional to its load weight.
If one task has a much larger load weight than the others, its portion of
the period can exceed sysctl_sched_latency by a wide margin.

For example, imagine one task with nice -20 and 9 tasks with nice 0 on
one cfs_rq. In this case, the load weight sum for this cfs_rq is
88761 + 9 * 1024 = 97977. So the slice for the task with nice -20 is
sysctl_sched_min_granularity * 10 * (88761 / 97977), that is,
approximately sysctl_sched_min_granularity * 9. This can grow even larger
if there are more tasks with nice 0.
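
For reference, here is a minimal user-space sketch of the arithmetic above
(not kernel code). The 0.75 ms granularity and 6 ms latency values are
assumed defaults used only for illustration; the weights 88761 and 1024
come from the kernel's nice-to-weight table.

#include <stdio.h>

int main(void)
{
	/* Assumed default tunables, in nanoseconds, for illustration only. */
	unsigned long long min_granularity = 750000;	/* 0.75 ms */
	unsigned long long latency = 6000000;		/* 6 ms */

	/* Load weights for nice -20 and nice 0 from the kernel's table. */
	unsigned long long w_nice_m20 = 88761, w_nice_0 = 1024;
	unsigned long long nr_tasks = 10;

	unsigned long long load_sum = w_nice_m20 + 9 * w_nice_0;	/* 97977 */

	/* 10 tasks exceed sched_nr_latency, so the period is stretched. */
	unsigned long long period = nr_tasks * min_granularity;	/* 7.5 ms */

	/* The nice -20 task's share of the stretched period. */
	unsigned long long slice = period * w_nice_m20 / load_sum;

	printf("slice = %llu ns, sysctl_sched_latency = %llu ns\n",
	       slice, latency);

	/* The clamp this patch adds. */
	if (slice > latency)
		slice = latency;
	printf("clamped slice = %llu ns\n", slice);
	return 0;
}

With these assumed defaults the unclamped slice comes out to roughly
6.8 ms, already above the 6 ms latency target.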

So we should limit the slice to avoid this weird situation.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e232421..6ceffbc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -645,6 +645,9 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
}
slice = calc_delta_mine(slice, se->load.weight, load);

+ if (unlikely(slice > sysctl_sched_latency))
+ slice = sysctl_sched_latency;
+
return slice;
}

--
1.7.9.5

