Subject: Re: About the new time.c code
André Derrick Balsa wrote:
> We are trying to ensure that gettimeofday() ends up monotonic, but we
> don't have any tools to test this.
>
> What do you suggest we use? You mention you fought this problem some
> time back. Do you still have some code that would help us test/break our
> new gettimeofday()?

I don't have anything any more, sorry.

You can test for monotonicity easily, by just running a background
process that keeps calling `gettimeofday' and checking the results. I
used to fetch the results into a ring buffer, so I could see what was
happening at a discontinuity. The errors were not frequent, but occurred
just enough that the video game I was developing would crash every few
days because of the timer, until I wrote a wrapper to keep
`gettimeofday' monotonic.
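
Roughly, the test loop I have in mind looks like this (a sketch written
from memory for illustration, not the code I actually had, and untested):

#include <stdio.h>
#include <sys/time.h>

#define RING 16

int main(void)
{
    struct timeval ring[RING];
    struct timeval prev, now;
    unsigned int head = 0;
    int i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < RING; i++)
        ring[i] = prev;

    for (;;) {
        gettimeofday(&now, NULL);
        ring[head++ % RING] = now;

        /* Did the clock step backwards? */
        if (now.tv_sec < prev.tv_sec ||
            (now.tv_sec == prev.tv_sec && now.tv_usec < prev.tv_usec)) {
            printf("time went backwards; last %d samples:\n", RING);
            for (i = 0; i < RING; i++) {
                struct timeval *t = &ring[(head + i) % RING];
                printf("  %ld.%06ld\n",
                       (long)t->tv_sec, (long)t->tv_usec);
            }
        }
        prev = now;
    }
}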

I will be happy to look over your code once it's written, to check it,
though I'm not sure I'll have time to test it on a new kernel.

> BTW our TSC calibration code doesn't need any filter: it has no jitter
> at all! TSC calibration is done at boot time before Linux sets up any
> interrupt, with all interrupts disabled.

Are you sure this is ok?

1. Can TSC calibration be guaranteed accurate if done quickly?
What about glitches, say due to the APM BIOS doing some checks
in an SMM interrupt, which even cli() does not disable?

2. Is the PIT (timer chip) oscillator guaranteed a fixed relationship
with the CPU oscillator? I'd imagine it is on all modern
motherboards, with just one master oscillator controlling CPU
and I/O chipset.

3. Power management on modern motherboards slows down the CPU clock
by a factor of 2 or 4 when it is not busy. The TSC frequency will
also change, even on an Intel CPU. I expect intermediate
frequencies while the CPU's internal PLL resynchronises; I don't
know if CPU operation is paused while that happens.

So perhaps you need to calibrate the TSC continuously, as well as
having monotonicity clamps in the reading code.
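
By monotonicity clamps I mean something along these lines (a rough
userspace-style sketch; the function name is made up, and any locking
between concurrent readers is left out):

#include <sys/time.h>

static struct timeval last_tv;

void monotonic_gettimeofday(struct timeval *tv)
{
    gettimeofday(tv, NULL);

    /* Never report a time earlier than the previous reading;
       hold at the last value instead. */
    if (tv->tv_sec < last_tv.tv_sec ||
        (tv->tv_sec == last_tv.tv_sec && tv->tv_usec < last_tv.tv_usec))
        *tv = last_tv;
    else
        last_tv = *tv;
}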

> It also doesn't drift relative to the jiffy clock.

You could ensure this by counting jiffies as a fixed number of TSC ticks
I suppose, but I wouldn't recommend it.

> The only problem we have now is that gettimeofday() is now so fast, that
> a fast CPU could call it twice in the same microsecond, and get twice
> the same timestamp.
>
> Do you see this as a problem?

Not for anything I've ever done. [Aside: For the video game, a zero
inter-frame time would have crashed the program, so I always clamped the
difference at 1. I counted in milliseconds anyway, so it was more likely
to happen. Some things depended on real time, but the game used two
clocks anyway: ticks since program start, and differential time, both
using wraparound counters and wrapping compares.]
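
By a wrapping compare I mean treating the difference between two counter
values as signed, along these lines (my own illustration, not the game's
actual code):

typedef unsigned long tick_t;

/* True if a is later than b, even across counter wraparound, provided
   the two values are less than half the counter range apart. */
static int tick_after(tick_t a, tick_t b)
{
    return (long)(a - b) > 0;
}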

The network code may get confused by this. Gigabit ethernet can handle
several packets in a microsecond, not that any real implementation seems
to manage this _yet_. And even slow networks may have packets processed
by the driver that quickly, if they are buffered and it's a darn good card. I
hope the network timestamp stuff (round trip estimators etc.) handles
cases of zero time difference between successive packets properly.

There's already a `struct timespec' using nanoseconds instead of
microseconds. It's used by `nanosleep'. Perhaps it makes sense to migrate
to that?
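
Just as a trivial illustration of the nanosecond resolution (not a
proposal for the exact kernel interface):

#include <time.h>

int main(void)
{
    /* struct timespec carries seconds plus nanoseconds (tv_sec, tv_nsec),
       so it can express intervals finer than a microsecond. */
    struct timespec ts = { 0, 500000 };    /* 500000 ns = 500 microseconds */

    nanosleep(&ts, NULL);
    return 0;
}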

-- Jamie

