Subject: Re: [RESEND PATCH] x86/vdso: Handle clock_gettime(CLOCK_TAI) in vDSO
Minor nit: if it's not literally a resend, don't call it "RESEND" in
$SUBJECT. Call it v2, please.

Also, I added LKML and relevant maintainers to cc. John and Stephen:
this is a purely x86 patch, but it digs into the core timekeeping
structures a bit.

On Fri, Aug 17, 2018 at 5:12 AM, Matt Rickard <matt@softrans.com.au> wrote:
> Process clock_gettime(CLOCK_TAI) in vDSO. This makes the call about as fast as
> CLOCK_REALTIME instead of taking about four times as long.

I'm conceptually okay with this, but the bug encountered last time
around makes me suspect that GCC is generating genuinely horrible
code. Can you benchmark CLOCK_MONOTONIC before and after to make sure
there isn't a big regression? Please do this benchmark with
CONFIG_RETPOLINE=y.
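
Something like this (rough sketch, untested; the iteration count and
the use of CLOCK_MONOTONIC to time itself are arbitrary choices) is
the kind of measurement I mean, built and run against the old and new
vDSO:

/* Micro-benchmark sketch: average cost of one clock_gettime() call. */
#include <stdio.h>
#include <time.h>

int main(void)
{
        struct timespec ts, start, end;
        const long iters = 10 * 1000 * 1000;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++)
                clock_gettime(CLOCK_MONOTONIC, &ts);
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* Total elapsed nanoseconds divided by the iteration count. */
        double total_ns = (end.tv_sec - start.tv_sec) * 1e9
                        + (end.tv_nsec - start.tv_nsec);
        printf("%.1f ns per call\n", total_ns / iters);
        return 0;
}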

If there is a regression, then the code will need some reasonable
restructuring to fix it. Or perhaps -fno-jump-tables.
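
To make the jump-table point concrete, here is a hypothetical sketch
(not a merged fix) of the sort of restructuring I mean: direct
compares for the hot clockids give GCC no switch to lower into a jump
table, and hence no retpolined indirect branch:

/* Hypothetical restructuring sketch; coarse clocks elided for brevity. */
notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
{
        if (clock == CLOCK_REALTIME) {
                if (do_realtime(ts) == VCLOCK_NONE)
                        goto fallback;
        } else if (clock == CLOCK_MONOTONIC) {
                if (do_monotonic(ts) == VCLOCK_NONE)
                        goto fallback;
        } else if (clock == CLOCK_TAI) {
                if (do_tai(ts) == VCLOCK_NONE)
                        goto fallback;
        } else {
                goto fallback;
        }
        return 0;

fallback:
        return vdso_fallback_gettime(clock, ts);
}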

--Andy

> Signed-off-by: Matt Rickard <matt@softrans.com.au>
> ---
> arch/x86/entry/vdso/vclock_gettime.c    | 25 +++++++++++++++++++++++++
> arch/x86/entry/vsyscall/vsyscall_gtod.c |  2 ++
> arch/x86/include/asm/vgtod.h            |  1 +
> 3 files changed, 28 insertions(+)
>
> diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
> index f19856d95c60..91ed1bb2a3bb 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -246,6 +246,27 @@ notrace static int __always_inline do_monotonic(struct timespec *ts)
>          return mode;
>  }
>
> +notrace static int __always_inline do_tai(struct timespec *ts)
> +{
> +        unsigned long seq;
> +        u64 ns;
> +        int mode;
> +
> +        do {
> +                seq = gtod_read_begin(gtod);
> +                mode = gtod->vclock_mode;
> +                ts->tv_sec = gtod->tai_time_sec;
> +                ns = gtod->wall_time_snsec;
> +                ns += vgetsns(&mode);
> +                ns >>= gtod->shift;
> +        } while (unlikely(gtod_read_retry(gtod, seq)));
> +
> +        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
> +        ts->tv_nsec = ns;
> +
> +        return mode;
> +}
> +
>  notrace static void do_realtime_coarse(struct timespec *ts)
>  {
>          unsigned long seq;
> @@ -277,6 +298,10 @@ notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
>                  if (do_monotonic(ts) == VCLOCK_NONE)
>                          goto fallback;
>                  break;
> +        case CLOCK_TAI:
> +                if (do_tai(ts) == VCLOCK_NONE)
> +                        goto fallback;
> +                break;
>          case CLOCK_REALTIME_COARSE:
>                  do_realtime_coarse(ts);
>                  break;
> diff --git a/arch/x86/entry/vsyscall/vsyscall_gtod.c b/arch/x86/entry/vsyscall/vsyscall_gtod.c
> index e1216dd95c04..d61392fe17f6 100644
> --- a/arch/x86/entry/vsyscall/vsyscall_gtod.c
> +++ b/arch/x86/entry/vsyscall/vsyscall_gtod.c
> @@ -53,6 +53,8 @@ void update_vsyscall(struct timekeeper *tk)
>          vdata->monotonic_time_snsec = tk->tkr_mono.xtime_nsec
>                  + ((u64)tk->wall_to_monotonic.tv_nsec
>                          << tk->tkr_mono.shift);
> +        vdata->tai_time_sec = tk->xtime_sec
> +                + tk->tai_offset;
>          while (vdata->monotonic_time_snsec >=
>                          (((u64)NSEC_PER_SEC) << tk->tkr_mono.shift)) {
>                  vdata->monotonic_time_snsec -=
> diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
> index fb856c9f0449..adc9f7b20b9c 100644
> --- a/arch/x86/include/asm/vgtod.h
> +++ b/arch/x86/include/asm/vgtod.h
> @@ -32,6 +32,7 @@ struct vsyscall_gtod_data {
>          gtod_long_t     wall_time_coarse_nsec;
>          gtod_long_t     monotonic_time_coarse_sec;
>          gtod_long_t     monotonic_time_coarse_nsec;
> +        gtod_long_t     tai_time_sec;
>
>          int             tz_minuteswest;
>          int             tz_dsttime;
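
For what it's worth, a quick userspace sanity check could look like
the sketch below: with the patch applied, CLOCK_TAI should be served
from the vDSO and run ahead of CLOCK_REALTIME by the current TAI-UTC
offset (37 s as of 2018).

/* Sanity-check sketch: compare CLOCK_TAI against CLOCK_REALTIME. */
#include <stdio.h>
#include <time.h>

int main(void)
{
        struct timespec tai, real;

        clock_gettime(CLOCK_TAI, &tai);
        clock_gettime(CLOCK_REALTIME, &real);
        printf("TAI - UTC = %ld s\n", (long)(tai.tv_sec - real.tv_sec));
        return 0;
}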
