From: Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH] sched: update blocked load of idle cpus
Date: 2015-06-24
The load and the util of idle cpus must be updated periodically in order to
decay the blocked part.

If CONFIG_FAIR_GROUP_SCHED is not set, the load and util of idle cpus
are not decayed and stay at the values set before becoming idle.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
Hi Yuyang,

While testing your patchset without CONFIG_FAIR_GROUP_SCHED, I noticed
that the load of idle cpus sometimes stays at a high value even though
they have not been used for a while, because we are not decaying the
blocked load. Furthermore, the periodic load balance was not pulling
tasks onto some idle cpus because their load stayed high.
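
For reference, under PELT the blocked contribution is expected to halve
roughly every 32ms (the per-period decay factor y is chosen so that
y^32 = 1/2, with one period being about 1ms). A minimal user-space
sketch of that geometric decay, only to illustrate the expected numbers;
this is not kernel code:

#include <stdio.h>
#include <math.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* y^32 == 0.5 */
	double load = 1024.0;	/* load_avg when the cpu went idle */

	/* without a periodic update, this decay simply never runs */
	for (int ms = 0; ms <= 128; ms += 32)
		printf("after %3d ms: %6.1f\n", ms, load * pow(y, ms));

	return 0;
}

So after 128ms the blocked load should be down to ~6% of its initial
value, whereas with the empty !CONFIG_FAIR_GROUP_SCHED stub it keeps
its last value forever.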

This patch fixes the issue.
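
For context, the decay now happens through the existing periodic balance
path; roughly, as I read fair.c (call sites approximate):

	run_rebalance_domains()
	  rebalance_domains(rq, idle)
	    update_blocked_averages(cpu)	/* no longer a nop here */
	    ...
	    load_balance(...)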

Regards,
Vincent

kernel/sched/fair.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c5f18d9..665cc4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5864,6 +5864,17 @@ static unsigned long task_h_load(struct task_struct *p)
#else
static inline void update_blocked_averages(int cpu)
{
+	struct rq *rq = cpu_rq(cpu);
+	struct cfs_rq *cfs_rq = &rq->cfs;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rq->lock, flags);
+	update_rq_clock(rq);
+
+	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
+
}

static unsigned long task_h_load(struct task_struct *p)
--
1.9.1

