From: Morten Rasmussen <morten.rasmussen@arm.com>
Subject: [PATCH] sched: Update task group load contributions during active load-balancing
Task group load contributions are not updated when tasks belonging to
task groups are migrated by active load-balancing. If no other task
belonging to the same task group is already queued at the destination
cpu, the group sched_entity will be enqueued with load_avg_contrib=0.
Hence, weighted_cpuload() won't reflect the newly added load.
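
For reference, weighted_cpuload() in kernels of this vintage is
essentially a read of the root cfs_rq runnable load, which is the sum
of the enqueued entities' load_avg_contrib (rough sketch, not verbatim
kernel code):

    static unsigned long weighted_cpuload(const int cpu)
    {
            /* sum of load_avg_contrib of all enqueued entities */
            return cpu_rq(cpu)->cfs.runnable_load_avg;
    }

so a group entity enqueued with load_avg_contrib=0 adds nothing to it.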

The load may remain invisible until the next tick, when the sched_entity
load_avg_contrib and task group contributions are reevaluated.

The enqueue loop

  for_each_entity(se) {
      enqueue_entity(cfs_rq, se)
          ...
          enqueue_entity_load_avg(cfs_rq, se)
              ...
              update_entity_load_avg(se)
                  ...
                  __update_entity_load_avg_contrib(se)
                  ...
              ...
              update_cfs_rq_blocked_load(cfs_rq)
                  ...
                  __update_cfs_rq_tg_load_contrib(cfs_rq)
                  ...
  }

currently skips __update_entity_load_avg_contrib() and
__update_cfs_rq_tg_load_contrib() for group entities during active
load-balance migrations. The former updates the sched_entity
load_avg_contrib, and the latter updates the task group contribution,
which the former needs. Both must be called to ensure that the load
doesn't temporarily disappear.
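
The dependency looks roughly like this (paraphrased sketch of the
fair.c helpers, with the correction for partially-runnable groups in
__update_group_entity_contrib() omitted):

    static long __update_entity_load_avg_contrib(struct sched_entity *se)
    {
            long old_contrib = se->avg.load_avg_contrib;

            if (entity_is_task(se)) {
                    __update_task_entity_contrib(se);
            } else {
                    /* group se: contrib is derived from the task group load */
                    __update_tg_runnable_avg(&se->avg, group_cfs_rq(se));
                    __update_group_entity_contrib(se);
            }

            return se->avg.load_avg_contrib - old_contrib;
    }

    static inline void __update_group_entity_contrib(struct sched_entity *se)
    {
            struct cfs_rq *cfs_rq = group_cfs_rq(se);
            struct task_group *tg = cfs_rq->tg;
            u64 contrib;

            /* this cfs_rq's share of the task group load */
            contrib = cfs_rq->tg_load_contrib * tg->shares;
            se->avg.load_avg_contrib = div_u64(contrib,
                            atomic_long_read(&tg->load_avg) + 1);
            /* (scaling by tg runnable_avg omitted) */
    }

If __update_cfs_rq_tg_load_contrib() is skipped, cfs_rq->tg_load_contrib
and tg->load_avg do not yet reflect the migrated load, so the group
entity's contribution is computed from stale data and comes out too low.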

cc: Paul Turner <pjt@google.com>
cc: Ben Segall <bsegall@google.com>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index be9e97b..2b6e2eb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2521,7 +2521,8 @@ static inline void update_entity_load_avg(struct sched_entity *se,
else
now = cfs_rq_clock_task(group_cfs_rq(se));

- if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq))
+ if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq) &&
+ entity_is_task(se))
return;

contrib_delta = __update_entity_load_avg_contrib(se);
@@ -2609,6 +2610,10 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
/* we force update consideration on load-balancer moves */
update_cfs_rq_blocked_load(cfs_rq, !wakeup);
+
+ /* We force update group contributions on load-balancer moves */
+ if (wakeup && !entity_is_task(se))
+ __update_cfs_rq_tg_load_contrib(cfs_rq, 0);
}

/*
--
1.7.9.5