Date: Wed, 14 Mar 2018
From: Thomas Gleixner
Subject: Re: [PATCH v4.16-rc5 2/3] x86/vdso: on Intel, VDSO should handle CLOCK_MONOTONIC_RAW
On Wed, 14 Mar 2018, jason.vas.dias@gmail.com wrote:

Again: Read and comply with Documentation/process/ and fix the complaints
of checkpatch.pl.

> diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
> index fbc7371..2c46675 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -184,10 +184,9 @@ notrace static u64 vread_tsc(void)
>
> notrace static u64 vread_tsc_raw(void)
> {
> -	u64 tsc
> +	u64 tsc = (gtod->has_rdtscp ? rdtscp((void*)0) : rdtsc_ordered())
>  	  , last = gtod->raw_cycle_last;

Aside from the totally broken coding style, including the use of (void *)0:

Did you ever benchmark rdtscp() against rdtsc_ordered()?

If so, then the results want to be documented in the changelog; this
change only makes sense if rdtscp() is actually faster.

Please document how you measured that so others can actually run the same
tests and make their own judgement.
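
A minimal userspace sketch of one way to run that comparison follows;
this is an illustration, not a prescribed methodology. It uses
LFENCE;RDTSC as a stand-in for what rdtsc_ordered() expands to on
Intel, and the iteration count is arbitrary:

/*
 * bench_tsc.c - compare the cost of RDTSCP against LFENCE;RDTSC.
 * Build: gcc -O2 bench_tsc.c -o bench_tsc
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc(), __rdtscp() */

#define ITERS	10000000UL

static inline uint64_t rdtsc_lfence(void)
{
	/* the ordering sequence rdtsc_ordered() uses on Intel */
	__asm__ __volatile__("lfence" ::: "memory");
	return __rdtsc();
}

int main(void)
{
	uint64_t t0, t1, sink = 0;
	unsigned int aux;
	unsigned long i;

	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		sink += __rdtscp(&aux);
	t1 = __rdtsc();
	printf("rdtscp:       %.1f cycles/read\n", (double)(t1 - t0) / ITERS);

	t0 = __rdtsc();
	for (i = 0; i < ITERS; i++)
		sink += rdtsc_lfence();
	t1 = __rdtsc();
	printf("lfence;rdtsc: %.1f cycles/read\n", (double)(t1 - t0) / ITERS);

	/* consume sink so the loops are not optimized away */
	return (int)(sink & 1);
}

Pinning the task to one CPU and disabling frequency scaling makes the
cycle counts comparable across runs.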

If it turned out that rdtscp() is faster, which I doubt, then a
runtime conditional is the wrong way to use it. The code wants to be
patched at boot time, which avoids the conditional entirely.
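
For illustration only, a minimal sketch of that boot-time patching
using the kernel's existing ALTERNATIVE() mechanism; the function name
is hypothetical, and it assumes the vDSO text is covered by the
alternatives patching pass, which would need its own plumbing:

/* sketch: boot-time instruction selection instead of a per-call test */
#include <linux/types.h>
#include <asm/alternative.h>	/* ALTERNATIVE() */
#include <asm/cpufeatures.h>	/* X86_FEATURE_RDTSCP */

static inline u64 vread_tsc_raw_patched(void)
{
	u32 lo, hi;

	/*
	 * Default is the ordered LFENCE;RDTSC sequence that
	 * rdtsc_ordered() uses on Intel. On CPUs with RDTSCP, the
	 * alternatives code rewrites this site once at boot to a
	 * single RDTSCP (self-ordering, clobbers ECX with TSC_AUX),
	 * so the hot path carries no flag test at all.
	 */
	asm volatile(ALTERNATIVE("lfence; rdtsc",
				 "rdtscp",
				 X86_FEATURE_RDTSCP)
		     : "=a" (lo), "=d" (hi)
		     :
		     : "ecx", "memory");

	return ((u64)hi << 32) | lo;
}

With that in place, the gtod->has_rdtscp flag and the ternary in
vread_tsc_raw() become unnecessary.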

Thanks,

tglx
