Date: 2013-07-26
Subject: Re: PROBLEM: Persistent unfair sharing of a processor by auto groups in 3.11-rc2 (has twice regressed)


OK, so I have the below; however, on a second look, Paul, shouldn't that
update_cfs_shares() call be in entity_tick(), right after the call to
update_cfs_rq_blocked_load()? Placing it in update_cfs_rq_blocked_load()
means it's now called twice on the enqueue/dequeue paths, through:

{en,de}queue_entity()
  {en,de}queue_entity_load_avg()
    update_cfs_rq_blocked_load()
      update_cfs_shares()
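
For reference, the alternative placement would look roughly like the
sketch below; this is against the 3.11-era entity_tick(), with the
surrounding lines reconstructed from memory, so the exact context may
differ:

static void
entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
{
	/*
	 * Update run-time statistics of the 'current'.
	 */
	update_curr(cfs_rq);

	/*
	 * Ensure that the runnable average is periodically updated,
	 * and do the per-tick share update here instead of inside
	 * update_cfs_rq_blocked_load().
	 */
	update_entity_load_avg(curr, 1);
	update_cfs_rq_blocked_load(cfs_rq, 1);
	update_cfs_shares(cfs_rq);

	/* ... rest of entity_tick() unchanged ... */
}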



---
Subject: sched: Ensure update_cfs_shares() is called for parents of continuously-running tasks
From: Max Hailperin <max@gustavus.edu>

We typically update a task_group's shares within the dequeue/enqueue
path. However, continuously running tasks sharing a CPU are not subject
to these updates as they are only put/picked, never fully
dequeued/enqueued. Unfortunately, when we reverted f269ae046 (in
17bc14b7), we lost the augmenting periodic update that was supposed to
account for this, resulting in a potential loss of fairness.

To fix this, re-introduce the explicit update in
update_cfs_rq_blocked_load() [called via entity_tick()].

Cc: stable@kernel.org
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/fair.c | 1 +
1 file changed, 1 insertion(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1531,6 +1531,7 @@ static void update_cfs_rq_blocked_load(s
 	}
 
 	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
+	update_cfs_shares(cfs_rq);
 }
 
 static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
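
For completeness, a reproducer for the unfairness Max reported might
look like the hypothetical sketch below. It is not taken from the
original report: it assumes CONFIG_SCHED_AUTOGROUP (so that setsid()
places the caller in a fresh autogroup) and pins everything to CPU 0.
With fair group scheduling, the lone spinner should hold roughly 50% of
the CPU and the pair roughly 25% each; with stale shares, the initial
unfair split persists.

/* Hypothetical sketch; build with: gcc -o autogroup-unfair autogroup-unfair.c */
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

static void spin_on_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);
	for (;;)
		;	/* burn CPU; watch the split with top(1) */
}

int main(void)
{
	if (fork() == 0) {		/* autogroup A: one spinner */
		setsid();
		spin_on_cpu0();
	}
	if (fork() == 0) {		/* autogroup B: two spinners */
		setsid();
		if (fork() == 0)
			spin_on_cpu0();
		spin_on_cpu0();
	}
	pause();			/* keep the parent alive */
	return 0;
}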
