Date: 15 Dec 2010
From: Paul Turner <pjt@google.com>
Subject: Re: [patch 2/2] sched: charge unaccounted run-time on entity re-weight
Hum -- forgot to refresh the mbx file; here's a slightly cleaner version (we
can still charge unaccounted time against our queuing cfs_rq).

- Paul

-----

sched: move periodic share updates to entity_tick()

Long-running entities that do not block (dequeue) require periodic updates to
maintain accurate share values.  (Note: group entities with several threads
are quite likely to be non-blocking in many circumstances.)

By virtue of being long-running, however, we will see entity ticks (otherwise
the required update occurs in dequeue/put and we are done).  Thus we can move
the detection (and associated work) for these updates into the periodic path.

This restores the 'atomicity' of update_curr() with respect to accounting.

Signed-off-by: Paul Turner <pjt@google.com>

---
kernel/sched_fair.c | 21 +++++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)

Index: tip3/kernel/sched_fair.c
===================================================================
--- tip3.orig/kernel/sched_fair.c
+++ tip3/kernel/sched_fair.c
@@ -564,11 +564,8 @@ __update_curr(struct cfs_rq *cfs_rq, str
 
 #if defined CONFIG_SMP && defined CONFIG_FAIR_GROUP_SCHED
 	cfs_rq->load_unacc_exec_time += delta_exec;
-	if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
-		update_cfs_load(cfs_rq, 0);
-		update_cfs_shares(cfs_rq, 0);
-	}
 #endif
 }
 
 static void update_curr(struct cfs_rq *cfs_rq)
@@ -809,6 +806,15 @@ static void update_cfs_shares(struct cfs
 
 	reweight_entity(cfs_rq_of(se), se, shares);
 }
+
+static void update_cfs_rq_shares_tick(struct cfs_rq *cfs_rq)
+{
+	/* rate limit updates by the averaging window */
+	if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
+		update_cfs_load(cfs_rq, 0);
+		update_cfs_shares(cfs_rq, 0);
+	}
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
 {
@@ -1133,6 +1139,13 @@ entity_tick(struct cfs_rq *cfs_rq, struc
 	 */
 	update_curr(cfs_rq);
 
+#if defined CONFIG_SMP && defined CONFIG_FAIR_GROUP_SCHED
+	/*
+	 * Update share accounting for long-running entities.
+	 */
+	update_cfs_rq_shares_tick(cfs_rq);
+#endif
+
 #ifdef CONFIG_SCHED_HRTICK
 	/*
 	 * queued ticks are scheduled to match the slice, so don't bother