Subject: [PATCH] sched: Precise load checking in get_rr_interval_fair
From: Charles Wang <muming.wq@taobao.com>

A positive load weight on rq->cfs does not imply a positive load weight
on se's cfs_rq. When the load of se's cfs_rq is 0, the slice computed by
sched_slice() is not meaningful.

Check the load of se's cfs_rq instead of rq->cfs, and update the
comment accordingly.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Zhu Yanhai <gaoyang.zyh@taobao.com>
Signed-off-by: Charles Wang <muming.wq@taobao.com>
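
For readers less familiar with group scheduling, here is a minimal,
self-contained sketch of why the two checks differ. The struct names
(toy_cfs_rq, toy_se) are hypothetical stand-ins, not the kernel's real
definitions: with CONFIG_FAIR_GROUP_SCHED each task group has its own
cfs_rq per CPU, so the root rq->cfs can carry weight contributed by
sibling groups even while the queue the task actually sits on is empty.

/*
 * Hypothetical model of the cfs_rq hierarchy, only to illustrate
 * the check being changed. Compile and run as a normal C program.
 */
#include <stdio.h>

struct toy_cfs_rq {
	unsigned long load_weight;	/* sum of queued entity weights */
};

struct toy_se {
	struct toy_cfs_rq *cfs_rq;	/* the queue this entity runs on */
};

int main(void)
{
	struct toy_cfs_rq root  = { .load_weight = 1024 };	/* rq->cfs: busy with other groups */
	struct toy_cfs_rq group = { .load_weight = 0 };		/* the task's own group queue: idle */
	struct toy_se se = { .cfs_rq = &group };

	/* Old check: the root queue's weight is positive, so a slice
	 * would be computed even though se's own queue is empty. */
	printf("rq->cfs check:    %s\n",
	       root.load_weight ? "compute slice (misleading here)" : "slice = 0");

	/* New check: look at the queue the entity actually sits on. */
	printf("se->cfs_rq check: %s\n",
	       se.cfs_rq->load_weight ? "compute slice" : "slice = 0 (correct)");

	return 0;
}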

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 539760e..5d58ac9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6086,14 +6086,15 @@ void unregister_fair_sched_group(struct task_group *tg, int cpu) { }
 static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task)
 {
 	struct sched_entity *se = &task->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	unsigned int rr_interval = 0;
 
 	/*
 	 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
-	 * idle runqueue:
+	 * idle cfs_rq:
 	 */
-	if (rq->cfs.load.weight)
-		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se));
+	if (cfs_rq->load.weight)
+		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq, se));
 
 	return rr_interval;
 }
--
1.7.9.5

