Subject: Re: [RFC PATCH 13/22 -v2] handle accurate time keeping over long delays

    On Wed, 9 Jan 2008, john stultz wrote:
    > > Index: linux-compile-i386.git/kernel/time/timekeeping.c
    > > ===================================================================
    > > --- linux-compile-i386.git.orig/kernel/time/timekeeping.c	2008-01-09 14:07:34.000000000 -0500
    > > +++ linux-compile-i386.git/kernel/time/timekeeping.c	2008-01-09 15:17:31.000000000 -0500
    > > @@ -448,27 +449,29 @@ static void clocksource_adjust(s64 offse
    > >   */
    > >  void update_wall_time(void)
    > >  {
    > > -	cycle_t offset;
    > > +	cycle_t cycle_now, offset;
    > >
    > >  	/* Make sure we're fully resumed: */
    > >  	if (unlikely(timekeeping_suspended))
    > >  		return;
    > >
    > >  #ifdef CONFIG_GENERIC_TIME
    > > -	offset = (clocksource_read(clock) - clock->cycle_last) & clock->mask;
    > > +	cycle_now = clocksource_read(clock);
    > >  #else
    > > -	offset = clock->cycle_interval;
    > > +	cycle_now = clock->cycle_last + clock->cycle_interval;
    > >  #endif
    > > +	offset = (cycle_now - clock->cycle_last) & clock->mask;
    >
    > It seems this offset addition was to merge against the colliding
    > xtime_cache changes in mainline. However, I don't think it's quite right,
    > and might be causing incorrect time() or vtime() results if NO_HZ is
    > enabled.

    Yeah, this has had a few clashes in its life in the RT kernel.
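
    A rough user-space model of the cycle accumulation scheme being discussed
    may help here. It is only a sketch: struct cs_model, cs_accumulate(), and
    every constant in main() are invented for illustration, and the body of
    cs_accumulate() is a guess inferred from how clocksource_accumulate() is
    called in the hunk quoted below -- after it runs, the leftover cycles live
    in cycle_accumulated rather than in the locally computed offset.

    /* cs_model.c -- illustrative model only, not kernel code */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t cycle_t;

    struct cs_model {
    	cycle_t cycle_last;        /* counter value at last accumulation */
    	cycle_t cycle_accumulated; /* cycles read but not yet folded into xtime */
    	cycle_t cycle_interval;    /* cycles per tick */
    	cycle_t mask;              /* counter wrap mask */
    };

    /* Assumed shape of clocksource_accumulate(), inferred from its call site. */
    static void cs_accumulate(struct cs_model *cs, cycle_t cycle_now)
    {
    	cs->cycle_accumulated += (cycle_now - cs->cycle_last) & cs->mask;
    	cs->cycle_last = cycle_now;
    }

    int main(void)
    {
    	struct cs_model cs = {
    		.cycle_last = 1000,
    		.cycle_interval = 100,
    		.mask = UINT64_MAX,
    	};
    	unsigned long ticks = 0;

    	/* A long NO_HZ-style gap: 7.5 tick intervals pass before we run. */
    	cs_accumulate(&cs, 1750);

    	/* update_wall_time() drains whole intervals; the remainder waits. */
    	while (cs.cycle_accumulated >= cs.cycle_interval) {
    		cs.cycle_accumulated -= cs.cycle_interval;
    		ticks++;
    	}
    	printf("accumulated %lu ticks, %llu cycles pending\n",
    	       ticks, (unsigned long long)cs.cycle_accumulated);
    	return 0;
    }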

    >
    > > +	clocksource_accumulate(clock, cycle_now);
    > > +
    > >  	clock->xtime_nsec += (s64)xtime.tv_nsec << clock->shift;
    > >
    > >  	/* normally this loop will run just once, however in the
    > >  	 * case of lost or late ticks, it will accumulate correctly.
    > >  	 */
    > > -	while (offset >= clock->cycle_interval) {
    > > +	while (clock->cycle_accumulated >= clock->cycle_interval) {
    > >  		/* accumulate one interval */
    > >  		clock->xtime_nsec += clock->xtime_interval;
    > > -		clock->cycle_last += clock->cycle_interval;
    > > -		offset -= clock->cycle_interval;
    > > +		clock->cycle_accumulated -= clock->cycle_interval;
    > >
    > >  		if (clock->xtime_nsec >= (u64)NSEC_PER_SEC << clock->shift) {
    > >  			clock->xtime_nsec -= (u64)NSEC_PER_SEC << clock->shift;
    > > @@ -482,7 +485,7 @@ void update_wall_time(void)
    > >  	}
    > >
    > >  	/* correct the clock when NTP error is too big */
    > > -	clocksource_adjust(offset);
    > > +	clocksource_adjust(clock->cycle_accumulated);
    >
    >
    > I suspect the following is needed, but I haven't been able to test it yet.

    Thanks, I'll pull it in and start testing it.

    -- Steve
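
    As a companion sketch, the interval-draining loop quoted above keeps
    xtime_nsec in "shifted nanoseconds" (nanoseconds << clock->shift), so each
    accumulated interval adds xtime_interval and a whole second is carried into
    xtime.tv_sec once the value crosses NSEC_PER_SEC << shift. The constants
    below are invented purely for illustration:

    /* xtime_model.c -- illustrative model only, not kernel code */
    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_SEC 1000000000ULL

    int main(void)
    {
    	unsigned int shift = 10;                      /* stands in for clock->shift */
    	uint64_t xtime_interval = 999999ULL << shift; /* ~1 ms per tick, shifted */
    	uint64_t xtime_nsec = 0;                      /* shifted ns, i.e. ns << shift */
    	uint64_t tv_sec = 0;
    	unsigned long i;

    	/* Drain 1.2 million intervals, as the while loop in the patch would. */
    	for (i = 0; i < 1200000; i++) {
    		xtime_nsec += xtime_interval;
    		if (xtime_nsec >= (NSEC_PER_SEC << shift)) {
    			xtime_nsec -= NSEC_PER_SEC << shift;
    			tv_sec++;   /* the kernel also calls second_overflow() here */
    		}
    	}
    	printf("advanced %llu s, %llu ns still pending\n",
    	       (unsigned long long)tv_sec,
    	       (unsigned long long)(xtime_nsec >> shift));
    	return 0;
    }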


