Date: Sat, 8 Nov 2008
From: Ingo Molnar <mingo@elte.hu>
Subject: Re: [git pull] scheduler updates

* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Sat, 8 Nov 2008, Ingo Molnar wrote:
> >
> > So that's why my change moves it from the __native_read_tsc() over to
> > _only_ the vget_cycles().
>
> Ahh. I was looking at native_read_tscp(). Which has no barriers. But then
> we don't actually save the actual TSC, we only end up using the "p" part,
> so we don't care..
>
> Anyway, even for the vget_cycles(), is there really any reason to
> have _two_ barriers? Also, I still think it would be a hell of a lot
> more readable and logical to put the barriers in the _caller_, so
> that people actually see what the barriers are there for.
>
> When they are hidden, they make no sense. The helper function just
> has two insane barriers without explanation, and the caller doesn't
> know that the code is serialized wrt something random.

ok, fully agreed, i've queued up the cleanup for that, see it below.
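
( for reference, the barriers in question are rdtsc_barrier(), which
  lives in arch/x86/include/asm/system.h and looks roughly like the
  below - quoted from memory, so the exact details are approximate: )

/*
 * Stop RDTSC speculation. This is needed when you need to use RDTSC
 * (or get_cycles or vread that possibly accesses the TSC) in a
 * defined code region.
 */
static inline void rdtsc_barrier(void)
{
	alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
	alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}

( i.e. it gets patched into an MFENCE or an LFENCE at boot, depending
  on which fence the CPU needs to stop RDTSC from being speculated
  across it. )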

sidenote: i still kept the get_cycles() versus vget_cycles()
distinction, to preserve the explicit marker that vget_cycles() is
used in user-space code. We periodically forgot about that in the
past. But otherwise, the two inline functions are now identical.
(except for the asymmetry of their inlining, and the comment about
the cpu_has_tsc check accessing boot_cpu_data)
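
( to illustrate the end result, this is roughly what the two helpers
  in tsc.h look like with the patch below applied - reconstructed for
  illustration, not copied verbatim: )

static inline cycles_t get_cycles(void)
{
	unsigned long long ret = 0;

#ifndef CONFIG_X86_TSC
	if (!cpu_has_tsc)
		return 0;
#endif
	ret = (cycles_t)__native_read_tsc();

	return ret;
}

static __always_inline cycles_t vget_cycles(void)
{
	/*
	 * We only do VDSOs on TSC capable CPUs, so this shouldn't
	 * access boot_cpu_data (which is not VDSO safe):
	 */
#ifndef CONFIG_X86_TSC
	if (!cpu_has_tsc)
		return 0;
#endif
	return (cycles_t)__native_read_tsc();
}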

Ingo

--------------->
From cb9e35dce94a1b9c59d46224e8a94377d673e204 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@elte.hu>
Date: Sat, 8 Nov 2008 20:27:00 +0100
Subject: [PATCH] x86: clean up rdtsc_barrier() use

Impact: cleanup

Move rdtsc_barrier() use to vsyscall_64.c where it's relied on,
and point out its role in the context of its use.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/include/asm/tsc.h    |    6 +-----
 arch/x86/kernel/vsyscall_64.c |    9 +++++++++
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 9cd83a8..700aeb8 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -44,11 +44,7 @@ static __always_inline cycles_t vget_cycles(void)
 	if (!cpu_has_tsc)
 		return 0;
 #endif
-	rdtsc_barrier();
-	cycles = (cycles_t)__native_read_tsc();
-	rdtsc_barrier();
-
-	return cycles;
+	return (cycles_t)__native_read_tsc();
 }
 
 extern void tsc_init(void);
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index 0b8b669..ebf2f12 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -128,7 +128,16 @@ static __always_inline void do_vgettimeofday(struct timeval * tv)
 			gettimeofday(tv,NULL);
 			return;
 		}
+
+		/*
+		 * Surround the RDTSC by barriers, to make sure it's not
+		 * speculated to outside the seqlock critical section and
+		 * does not cause time warps:
+		 */
+		rdtsc_barrier();
 		now = vread();
+		rdtsc_barrier();
+
 		base = __vsyscall_gtod_data.clock.cycle_last;
 		mask = __vsyscall_gtod_data.clock.mask;
 		mult = __vsyscall_gtod_data.clock.mult;
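
( for completeness: the hunk above sits in the middle of the vsyscall
  seqlock read loop. Schematically - paraphrased, not quoted verbatim -
  the surrounding code looks like this, and the barriers pin the TSC
  read inside that retry region: )

	do {
		seq = read_seqbegin(&__vsyscall_gtod_data.lock);

		vread = __vsyscall_gtod_data.clock.vread;
		if (unlikely(!(__vsyscall_gtod_data.sysctl_enabled && vread))) {
			gettimeofday(tv, NULL);
			return;
		}

		/* the new barriers keep the TSC read inside this region: */
		rdtsc_barrier();
		now = vread();
		rdtsc_barrier();

		base = __vsyscall_gtod_data.clock.cycle_last;
		mask = __vsyscall_gtod_data.clock.mask;
		mult = __vsyscall_gtod_data.clock.mult;
		shift = __vsyscall_gtod_data.clock.shift;

		tv->tv_sec = __vsyscall_gtod_data.wall_time_sec;
		nsec = __vsyscall_gtod_data.wall_time_nsec;
	} while (read_seqretry(&__vsyscall_gtod_data.lock, seq));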

