From: KOSAKI Motohiro <>
Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
Date: Thu, 30 Apr 2009 15:11:15 +0900 (JST)
Changelog since v1:
 - use percpu_counter_sum() instead of percpu_counter_read()
-------------------------------------
Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
cpuacct_update_stats() is called on every tick update, and it uses a percpu_counter to avoid performance degradation.

On archs which define VIRT_CPU_ACCOUNTING, every tick results in >1000 units of cputime updates. Since this is far greater than percpu_counter_batch, we end up taking the spinlock on every tick.

This patch changes the batch rule: each cpu can now cache up to jiffies_to_cputime(percpu_counter_batch) of cputime in its per-cpu counter before folding it into the shared counter. This means the patch causes no behavior change when VIRT_CPU_ACCOUNTING=n (where jiffies and cputime are the same unit).
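To illustrate the batching behaviour described above, here is a minimal userspace sketch modeled on __percpu_counter_add() in lib/percpu_counter.c. The pc_t/pc_add names and all numeric values are made up for this example; only the fold-on-overflow logic mirrors the kernel:

#include <stdio.h>

#define NR_CPUS 4

typedef struct {
	long count;          /* shared count; protected by a spinlock in the kernel */
	long local[NR_CPUS]; /* per-cpu deltas */
} pc_t;

/* Fold the per-cpu delta into the shared count only once it
 * overflows the batch; otherwise stay on the lock-free fast path.
 * This mirrors __percpu_counter_add(). */
static void pc_add(pc_t *pc, int cpu, long delta, long batch)
{
	long new = pc->local[cpu] + delta;

	if (new >= batch || new <= -batch) {
		pc->count += new;   /* spin_lock()/spin_unlock() in the kernel */
		pc->local[cpu] = 0;
		printf("cpu%d: fold %ld (lock taken)\n", cpu, new);
	} else {
		pc->local[cpu] = new;
	}
}

int main(void)
{
	pc_t pc = { 0 };
	long tick = 1000;  /* >1000 cputime units per tick w/ VIRT_CPU_ACCOUNTING */
	int i;

	/* before the patch: the per-tick delta always exceeds the
	 * default batch, so every tick takes the lock */
	for (i = 0; i < 3; i++)
		pc_add(&pc, 0, tick, 32);

	/* after the patch: batch scaled to percpu_counter_batch ticks
	 * worth of cputime, so ticks stay on the fast path */
	for (i = 0; i < 3; i++)
		pc_add(&pc, 0, tick, 32 * tick);

	printf("shared count = %ld\n", pc.count);
	return 0;
}

Running this, the first loop folds (locks) on all three ticks while the second loop never leaves the fast path, which is exactly the change in lock traffic the patch is after.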
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Balaji Rao <balajirrao@gmail.com>
Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 kernel/sched.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
Index: b/kernel/sched.c
===================================================================
--- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
+++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
@@ -10221,6 +10221,7 @@ struct cpuacct {
 };

 struct cgroup_subsys cpuacct_subsys;
+static s32 cpuacct_batch;

 /* return cpu accounting group corresponding to this container */
 static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
@@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
 	if (!ca->cpuusage)
 		goto out_free_ca;

+	if (!cpuacct_batch)
+		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
+
 	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
 		if (percpu_counter_init(&ca->cpustat[i], 0))
 			goto out_free_counters;
@@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
 	int i;

 	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
-		s64 val = percpu_counter_read(&ca->cpustat[i]);
+		s64 val = percpu_counter_sum(&ca->cpustat[i]);
 		val = cputime64_to_clock_t(val);
 		cb->fill(cb, cpuacct_stat_desc[i], val);
 	}
@@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct
 	ca = task_ca(tsk);

 	do {
-		percpu_counter_add(&ca->cpustat[idx], val);
+		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
 		ca = ca->parent;
 	} while (ca);
 	rcu_read_unlock();
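A note on the v1 -> v2 change: once the batch is enlarged, the approximate percpu_counter_read() can lag the true total by up to nr_cpus * cpuacct_batch, which is presumably why the read side in cpuacct_stats_show() switches to the exact (but lock-taking) percpu_counter_sum(). A back-of-the-envelope sketch with assumed values (none of these numbers come from the patch):

#include <stdio.h>

int main(void)
{
	long nr_cpus = 64;
	long percpu_counter_batch = 32;  /* kernel default is >= 32, scaling with nr_cpus */
	long cputime_per_jiffy = 10000;  /* assumed jiffies_to_cputime(1) for the arch */
	long cpuacct_batch = percpu_counter_batch * cputime_per_jiffy;

	/*
	 * Up to cpuacct_batch of cputime can sit uncommitted in each
	 * per-cpu cache, so percpu_counter_read() may under-report by:
	 */
	printf("max percpu_counter_read() error: %ld cputime units\n",
	       nr_cpus * cpuacct_batch);
	return 0;
}

percpu_counter_sum() folds every per-cpu delta in under the lock, so cpuacct.stat stays exact despite the much larger batch; the cost is acceptable because it is paid only on reads of the stat file, not on every tick.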