Subject: Re: [PATCH 05/16] sched: SCHED_DEADLINE policy implementation.
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Mon, 23 Apr 2012
On Mon, 2012-04-23 at 15:37 +0200, Juri Lelli wrote:
>
> This is what I got for that snippet:
>
> ffffffff81062826 <enqueue_task_dl>:
> [...]
> ffffffff81062885: 49 03 44 24 20 add 0x20(%r12),%rax
> ffffffff8106288a: 49 8b 54 24 28 mov 0x28(%r12),%rdx
> ffffffff8106288f: 49 01 54 24 38 add %rdx,0x38(%r12)
> ffffffff81062894: 49 89 44 24 30 mov %rax,0x30(%r12)
> ffffffff81062899: 49 8b 44 24 30 mov 0x30(%r12),%rax
> ffffffff8106289e: 48 85 c0 test %rax,%rax
> ffffffff810628a1: 7e e2 jle ffffffff81062885 <enqueue_task_dl+0x5f>
>
> So it seems we are fine in this case, right?

Yep.

> Is it better anyway to enforce this gcc behaviour, just to be
> on the safe side?

Dunno, the 'fix' is somewhat hideous (although we could make it suck
less); we've only ever bothered with it when it caused problems, so I
guess we'll just wait and see until it breaks.


---
Subject: kernel,sched,time: Clean up gcc work-arounds
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Mon Apr 23 15:55:48 CEST 2012

We've grown various copies of a particular gcc work-around; consolidate
them into one and add a more thorough comment.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/compiler.h |   12 ++++++++++++
 include/linux/math64.h   |    4 +---
 kernel/sched/core.c      |    8 ++------
 kernel/sched/fair.c      |    8 ++------
 kernel/time.c            |   11 ++++-------
 5 files changed, 21 insertions(+), 22 deletions(-)

--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -310,4 +310,16 @@ void ftrace_likely_update(struct ftrace_
  */
 #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
 
+/*
+ * Avoid gcc loop optimization by clobbering a variable, forcing a reload
+ * and invalidating the optimization.
+ *
+ * The optimization in question transforms various loops into division/modulo
+ * operations; this is a problem when either the resulting operation requires
+ * libgcc functions that are not implemented (u64 division, for example) or
+ * the loop is known to run for only a few iterations, in which case the
+ * division is in fact more expensive.
+ */
+#define __gcc_dont_optimize_loop(var) asm("" : "+rm" (var))
+
 #endif /* __LINUX_COMPILER_H */
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -105,9 +105,7 @@ __iter_div_u64_rem(u64 dividend, u32 div
 	u32 ret = 0;
 
 	while (dividend >= divisor) {
-		/* The following asm() prevents the compiler from
-		   optimising this loop into a modulo operation. */
-		asm("" : "+rm"(dividend));
+		__gcc_dont_optimize_loop(dividend);
 
 		dividend -= divisor;
 		ret++;
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -628,12 +628,8 @@ void sched_avg_update(struct rq *rq)
 	s64 period = sched_avg_period();
 
 	while ((s64)(rq->clock - rq->age_stamp) > period) {
-		/*
-		 * Inline assembly required to prevent the compiler
-		 * optimising this loop into a divmod call.
-		 * See __iter_div_u64_rem() for another example of this.
-		 */
-		asm("" : "+rm" (rq->age_stamp));
+		__gcc_dont_optimize_loop(rq->age_stamp);
+
 		rq->age_stamp += period;
 		rq->rt_avg /= 2;
 	}
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -853,12 +853,8 @@ static void update_cfs_load(struct cfs_r
 		update_cfs_rq_load_contribution(cfs_rq, global_update);
 
 	while (cfs_rq->load_period > period) {
-		/*
-		 * Inline assembly required to prevent the compiler
-		 * optimising this loop into a divmod call.
-		 * See __iter_div_u64_rem() for another example of this.
-		 */
-		asm("" : "+rm" (cfs_rq->load_period));
+		__gcc_dont_optimize_loop(cfs_rq->load_period);
+
 		cfs_rq->load_period /= 2;
 		cfs_rq->load_avg /= 2;
 	}
--- a/kernel/time.c
+++ b/kernel/time.c
@@ -349,17 +349,14 @@ EXPORT_SYMBOL(mktime);
 void set_normalized_timespec(struct timespec *ts, time_t sec, s64 nsec)
 {
 	while (nsec >= NSEC_PER_SEC) {
-		/*
-		 * The following asm() prevents the compiler from
-		 * optimising this loop into a modulo operation. See
-		 * also __iter_div_u64_rem() in include/linux/time.h
-		 */
-		asm("" : "+rm"(nsec));
+		__gcc_dont_optimize_loop(nsec);
+
 		nsec -= NSEC_PER_SEC;
 		++sec;
 	}
 	while (nsec < 0) {
-		asm("" : "+rm"(nsec));
+		__gcc_dont_optimize_loop(nsec);
+
 		nsec += NSEC_PER_SEC;
 		--sec;
 	}
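
And should anyone want to double-check their gcc, a user-space test
along these lines (again just a sketch; it duplicates the macro so it
builds outside the tree) compiled with -O2 should keep the loop as a
subtract-and-branch, like in Juri's dump above -- objdump -d will tell:

#include <stdint.h>

#define NSEC_PER_SEC	1000000000LL

/*
 * Same trick as the kernel macro above: the "+rm" constraint makes
 * gcc assume 'var' may have changed behind its back, forcing a
 * reload and killing the loop-to-divmod optimization.
 */
#define __gcc_dont_optimize_loop(var) asm("" : "+rm" (var))

int64_t normalize(int64_t nsec, long *sec)
{
	while (nsec >= NSEC_PER_SEC) {
		__gcc_dont_optimize_loop(nsec);

		nsec -= NSEC_PER_SEC;
		(*sec)++;
	}
	return nsec;
}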
