Subject: [PATCH] fix granularity of task_u/stime(), v2
Originally, task_s/utime() were designed to return clock_t, but they were
later changed to return cputime_t by the following commit:

commit efe567fc8281661524ffa75477a7c4ca9b466c63
Author: Christian Borntraeger <borntraeger@de.ibm.com>
Date: Thu Aug 23 15:18:02 2007 +0200

That commit only changed the type of the return value, not the
implementation. As a result, the granularity of task_s/utime() is still
that of clock_t, not that of cputime_t.

So using task_s/utime() in __exit_signal() causes the values accumulated
into the signal struct to be rounded and coarse-grained.

This patch removes the casts to clock_t in task_u/stime() so that
cputime_t granularity is kept throughout the calculation.
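
For illustration only (not part of the patch): a minimal userspace sketch,
assuming USER_HZ=100, a jiffies-based cputime_t with HZ=1000, and a made-up
runtime value, showing how rounding through clock_t coarsens the result
compared to converting nanoseconds to cputime_t directly:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL
#define USER_HZ		100ULL	/* clock_t ticks per second (typical) */
#define HZ		1000ULL	/* jiffies per second; cputime_t unit here */

int main(void)
{
	/* 1.234567890 seconds of CPU time as measured by CFS, in ns */
	uint64_t sum_exec_runtime = 1234567890ULL;

	/* old path: ns -> clock_t -> cputime_t, loses everything below 10ms */
	uint64_t ticks   = sum_exec_runtime * USER_HZ / NSEC_PER_SEC;	/* 123  */
	uint64_t coarse  = ticks * HZ / USER_HZ;			/* 1230 */

	/* new path: ns -> cputime_t directly, keeps 1ms granularity */
	uint64_t precise = sum_exec_runtime * HZ / NSEC_PER_SEC;	/* 1234 */

	printf("via clock_t: %llu jiffies, direct: %llu jiffies\n",
	       (unsigned long long)coarse, (unsigned long long)precise);
	return 0;
}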

v2:
Use div_u64() to avoid the error "undefined reference to `__udivdi3`"
on some 32-bit systems.
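
Background on the v2 note: on 32-bit, dividing a u64 with a plain "/" makes
gcc emit a call to libgcc's __udivdi3, which the kernel does not link
against; div_u64() from <linux/math64.h> avoids that. Below is a rough
userspace sketch of the fallback macro, with div_u64() re-implemented
locally and msecs_to_cputime() reduced to the identity for simplicity (on
jiffies-based configs it maps to msecs_to_jiffies()):

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_MSEC	1000000U

/* userspace stand-in for the kernel's div_u64(); in the kernel this
 * avoids emitting a __udivdi3 call when dividing a u64 on 32-bit */
static inline uint64_t div_u64(uint64_t dividend, uint32_t divisor)
{
	return dividend / divisor;
}

/* simplified: treat 1 cputime unit as 1 ms for this sketch */
#define msecs_to_cputime(ms)	(ms)
#define nsecs_to_cputime(ns)	msecs_to_cputime(div_u64((ns), NSEC_PER_MSEC))

int main(void)
{
	uint64_t sum_exec_runtime = 1234567890ULL;	/* 1.23456789 s in ns */

	printf("%llu cputime units\n",
	       (unsigned long long)nsecs_to_cputime(sum_exec_runtime));
	return 0;
}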

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
---
kernel/sched.c | 22 +++++++++++++---------
1 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 3c11ae0..1f8d028 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5172,41 +5172,45 @@ cputime_t task_stime(struct task_struct *p)
return p->stime;
}
#else
+
+#ifndef nsecs_to_cputime
+# define nsecs_to_cputime(__nsecs) \
+ msecs_to_cputime(div_u64((__nsecs), NSEC_PER_MSEC))
+#endif
+
cputime_t task_utime(struct task_struct *p)
{
- clock_t utime = cputime_to_clock_t(p->utime),
- total = utime + cputime_to_clock_t(p->stime);
+ cputime_t utime = p->utime, total = utime + p->stime;
u64 temp;

/*
* Use CFS's precise accounting:
*/
- temp = (u64)nsec_to_clock_t(p->se.sum_exec_runtime);
+ temp = (u64)nsecs_to_cputime(p->se.sum_exec_runtime);

if (total) {
temp *= utime;
do_div(temp, total);
}
- utime = (clock_t)temp;
+ utime = (cputime_t)temp;

- p->prev_utime = max(p->prev_utime, clock_t_to_cputime(utime));
+ p->prev_utime = max(p->prev_utime, utime);
return p->prev_utime;
}

cputime_t task_stime(struct task_struct *p)
{
- clock_t stime;
+ cputime_t stime;

/*
* Use CFS's precise accounting. (we subtract utime from
* the total, to make sure the total observed by userspace
* grows monotonically - apps rely on that):
*/
- stime = nsec_to_clock_t(p->se.sum_exec_runtime) -
- cputime_to_clock_t(task_utime(p));
+ stime = nsecs_to_cputime(p->se.sum_exec_runtime) - task_utime(p);

if (stime >= 0)
- p->prev_stime = max(p->prev_stime, clock_t_to_cputime(stime));
+ p->prev_stime = max(p->prev_stime, stime);

return p->prev_stime;
}
--
1.6.5.2
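
For reference only, not part of the submission: a userspace sketch of the
proportional split the patched task_utime() performs, assuming cputime_t is
a plain 64-bit jiffies count, do_div() is replaced by ordinary division,
and the sample numbers are made up:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t cputime_t;	/* stand-in; the real type is config-dependent */

/*
 * Split the precise CFS runtime in proportion to the tick-sampled
 * utime/stime, staying in cputime_t units throughout, as the patched
 * task_utime() does: scaled = runtime * utime / (utime + stime).
 */
static cputime_t scale_utime(cputime_t utime, cputime_t stime,
			     cputime_t sum_exec_cputime)
{
	cputime_t total = utime + stime;
	uint64_t temp = sum_exec_cputime;

	if (total) {
		temp *= utime;
		temp /= total;		/* the kernel uses do_div() here */
	}
	return (cputime_t)temp;
}

int main(void)
{
	/* ticks say 300 user + 100 system jiffies; CFS measured 420 */
	cputime_t u = scale_utime(300, 100, 420);

	printf("scaled utime: %llu of 420 jiffies\n", (unsigned long long)u);
	return 0;
}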


