Subject: [PATCH 06/12] time: Add infrastructure to cap clocksource reads to the max_cycles value
When calculating the current delta since the last tick, we
currently have no hard protection against a multiplication
overflow occurring.

This patch introduces infrastructure that caps the read delta
value at the clocksource's max_cycles value, which is the point
at which an overflow would occur.

Since this is in the hotpath, the extra checking is only done
when CONFIG_DEBUG_TIMEKEEPING is enabled.

There was some concern that capping time like this could cause
problems, since we may stop expiring timers. This could become
circular if the timer that triggers time accumulation were
mis-scheduled too far in the future, which would cause time to stop.

However, since the mult overflow would result in a smaller time
value, we would effectively have the same problem there.
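
For illustration only (not part of the patch), here is a rough
userspace sketch of the failure mode, assuming a hypothetical
clocksource mult of 2^22: once the cycle delta grows past roughly
ULLONG_MAX / mult (which is roughly where max_cycles sits), the
64-bit delta * mult product wraps around to a much smaller value:

/*
 * Standalone sketch, not kernel code: demonstrates that an uncapped
 * delta wraps the 64-bit delta * mult product to a *smaller* value.
 * The mult value below is hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t mult = 1u << 22;		/* hypothetical clocksource mult */
	uint64_t max = UINT64_MAX / mult;	/* largest delta whose product still fits */
	uint64_t delta = max + 1;		/* one cycle past that limit */
	uint64_t wrapped = delta * mult;	/* unsigned 64-bit multiply wraps */

	printf("capped:   %llu cycles -> product fits in 64 bits\n",
	       (unsigned long long)max);
	printf("uncapped: %llu cycles -> product wraps to %llu\n",
	       (unsigned long long)delta, (unsigned long long)wrapped);
	return 0;
}

Clamping delta to max_cycles before the multiply avoids exactly this:
time stalls at the cap instead of collapsing to a tiny value.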

Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
kernel/time/timekeeping.c | 39 +++++++++++++++++++++++++++------------
1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 7e9d433..8b9e328 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -134,11 +134,34 @@ static void timekeeping_check_update(struct timekeeper *tk, cycle_t offset)
 				" the %s 50%% safety margin (%lld)\n",
 				offset, name, max_cycles>>1);
 }
+
+static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr)
+{
+	cycle_t cycle_now, delta;
+
+	/* read clocksource */
+	cycle_now = tkr->read(tkr->clock);
+
+	/* calculate the delta since the last update_wall_time */
+	delta = clocksource_delta(cycle_now, tkr->cycle_last, tkr->mask);
+
+	/* Cap delta value to the max_cycles values to avoid mult overflows */
+	if (unlikely(delta > tkr->clock->max_cycles))
+		delta = tkr->clock->max_cycles;
+
+	return delta;
+}
 #else
 static inline
 void timekeeping_check_update(struct timekeeper *tk, cycle_t offset)
 {
 }
+static inline cycle_t timekeeping_get_delta(struct tk_read_base *tkr)
+{
+	/* calculate the delta since the last update_wall_time */
+	return clocksource_delta(tkr->read(tkr->clock), tkr->cycle_last,
+				 tkr->mask);
+}
 #endif
 
 /**
@@ -216,14 +239,10 @@ static inline u32 arch_gettimeoffset(void) { return 0; }
 
 static inline s64 timekeeping_get_ns(struct tk_read_base *tkr)
 {
-	cycle_t cycle_now, delta;
+	cycle_t delta;
 	s64 nsec;
 
-	/* read clocksource: */
-	cycle_now = tkr->read(tkr->clock);
-
-	/* calculate the delta since the last update_wall_time: */
-	delta = clocksource_delta(cycle_now, tkr->cycle_last, tkr->mask);
+	delta = timekeeping_get_delta(tkr);
 
 	nsec = delta * tkr->mult + tkr->xtime_nsec;
 	nsec >>= tkr->shift;
@@ -235,14 +254,10 @@ static inline s64 timekeeping_get_ns(struct tk_read_base *tkr)
 static inline s64 timekeeping_get_ns_raw(struct timekeeper *tk)
 {
 	struct clocksource *clock = tk->tkr.clock;
-	cycle_t cycle_now, delta;
+	cycle_t delta;
 	s64 nsec;
 
-	/* read clocksource: */
-	cycle_now = tk->tkr.read(clock);
-
-	/* calculate the delta since the last update_wall_time: */
-	delta = clocksource_delta(cycle_now, tk->tkr.cycle_last, tk->tkr.mask);
+	delta = timekeeping_get_delta(&tk->tkr);
 
 	/* convert delta to nanoseconds. */
 	nsec = clocksource_cyc2ns(delta, clock->mult, clock->shift);
--
1.9.1

