Subject: [patch] Re: problem with "sched: revert back to per-rq vruntime"?
From: Mike Galbraith <efault@gmx.de>
Date: 2009-01-01
It would perhaps be prettier to have the load already in place at call
time, but methinks the enqueue/dequeue accounting logic is nice as is,
so complete the handling of the unlikely not-yet-enqueued case in an
unlikely() block.
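To see why it matters, here's a minimal userspace sketch of the
arithmetic (scale_delta() is a made-up stand-in for calc_delta_mine(),
and 1024 stands in for NICE_0_LOAD): when se's weight is missing from
the queue load, the scaled delta comes out too large.

#include <stdio.h>

/* stand-in for calc_delta_mine(): delta * weight / total_load */
static unsigned long scale_delta(unsigned long delta, unsigned long weight,
				 unsigned long total_load)
{
	return (unsigned long long)delta * weight / total_load;
}

int main(void)
{
	unsigned long delta = 4000000UL;	/* 4ms slice, in ns */
	unsigned long nice0 = 1024UL;		/* NICE_0_LOAD */

	/* one nice-0 task on the queue, se itself not yet enqueued */
	printf("se's weight missing:  %lu\n",
	       scale_delta(delta, nice0, nice0));	/* 4000000 */
	printf("se's weight included: %lu\n",
	       scale_delta(delta, nice0, 2 * nice0));	/* 2000000 */
	return 0;
}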

Impact: bug fixlet.

Account for tasks which have not yet been enqueued in calc_delta_weight().

Signed-off-by: Mike Galbraith <efault@gmx.de>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 5ad4440..4685f28 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -392,8 +392,16 @@ static inline unsigned long
 calc_delta_weight(unsigned long delta, struct sched_entity *se)
 {
 	for_each_sched_entity(se) {
-		delta = calc_delta_mine(delta,
-				se->load.weight, &cfs_rq_of(se)->load);
+		struct load_weight *load = &cfs_rq_of(se)->load;
+		struct load_weight tmp;
+
+		if (unlikely(!se->on_rq)) {
+			/* se's weight is not yet in the queue load; add it */
+			tmp.weight = load->weight + se->load.weight;
+			tmp.inv_weight = 0;	/* force recomputation */
+			load = &tmp;
+		}
+		delta = calc_delta_mine(delta, se->load.weight, load);
 	}
 
 	return delta;
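For reference, the caller that hands calc_delta_weight() a
not-yet-enqueued se is sched_slice(); from memory of the sched_fair.c
of that time it reads roughly like this (a sketch, not verbatim):

static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	unsigned long nr_running = cfs_rq->nr_running;

	/* the "nice as is" accounting: count se even before enqueue */
	if (unlikely(!se->on_rq))
		nr_running++;

	return calc_delta_weight(__sched_period(nr_running), se);
}

The period already accounts for the extra task; the patch above makes
the weight scaling agree with it.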