Subject: [tip:sched/core] sched: Track group sched_entity usage contributions
Commit-ID:  21f4486630b0bd1b6dbcc04f61836987fa54278f
Gitweb: http://git.kernel.org/tip/21f4486630b0bd1b6dbcc04f61836987fa54278f
Author: Morten Rasmussen <morten.rasmussen@arm.com>
AuthorDate: Fri, 27 Feb 2015 16:54:05 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 27 Mar 2015 09:35:58 +0100

sched: Track group sched_entity usage contributions

Add usage contribution tracking for group entities. Unlike
se->avg.load_avg_contrib, se->avg.utilization_avg_contrib for group
entities is the sum of se->avg.utilization_avg_contrib for all entities on the
group runqueue.

It is _not_ influenced in any way by the task group h_load. Hence it
represents the actual cpu usage of the group, not its intended load
contribution, which may differ significantly from the utilization on
lightly utilized systems.
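
As a rough numerical illustration (a toy userspace model, not kernel
code; the structs and values below are hypothetical, only the field
semantics follow the patch): a group whose runqueue holds two tasks,
each averaging about 25% of a CPU, contributes about 50% of
SCHED_LOAD_SCALE as usage, independent of the group's weight or shares:

	#include <stdio.h>

	#define SCHED_LOAD_SCALE 1024	/* 100% of one CPU */

	int main(void)
	{
		/* Two tasks on a group runqueue, each using ~25% of a CPU. */
		unsigned long task_contrib[] = { 256, 256 };
		unsigned long group_contrib = 0;
		int i;

		/*
		 * A group entity's utilization_avg_contrib is the plain sum
		 * of the contributions on its runqueue; no h_load or shares
		 * scaling is applied.
		 */
		for (i = 0; i < 2; i++)
			group_contrib += task_contrib[i];

		printf("group usage: %lu/%d (~%lu%% of one CPU)\n",
		       group_contrib, SCHED_LOAD_SCALE,
		       group_contrib * 100 / SCHED_LOAD_SCALE);
		return 0;
	}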

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Morten.Rasmussen@arm.com
Cc: dietmar.eggemann@arm.com
Cc: efault@gmx.de
Cc: kamalesh@linux.vnet.ibm.com
Cc: linaro-kernel@lists.linaro.org
Cc: nicolas.pitre@linaro.org
Cc: preeti@linux.vnet.ibm.com
Cc: riel@redhat.com
Link: http://lkml.kernel.org/r/1425052454-25797-3-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/debug.c | 2 ++
 kernel/sched/fair.c  | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 578ff83..a245c1f 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -94,8 +94,10 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 	P(se->load.weight);
 #ifdef CONFIG_SMP
 	P(se->avg.runnable_avg_sum);
+	P(se->avg.running_avg_sum);
 	P(se->avg.avg_period);
 	P(se->avg.load_avg_contrib);
+	P(se->avg.utilization_avg_contrib);
 	P(se->avg.decay_count);
 #endif
 #undef PN
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 414408dd..d94a865 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2747,6 +2747,9 @@ static long __update_entity_utilization_avg_contrib(struct sched_entity *se)

 	if (entity_is_task(se))
 		__update_task_entity_utilization(se);
+	else
+		se->avg.utilization_avg_contrib =
+				group_cfs_rq(se)->utilization_load_avg;
 
 	return se->avg.utilization_avg_contrib - old_contrib;
 }
