    Subject: Re: 2.6.14-rt13
    On Fri, Nov 18, 2005 at 05:13:03PM -0500, Lee Revell wrote:
    > On Fri, 2005-11-18 at 14:05 -0800, Fernando Lopez-Lezcano wrote:
    > > On Fri, 2005-11-18 at 16:54 -0500, Lee Revell wrote:
    > > > On Fri, 2005-11-18 at 10:02 -0800, Fernando Lopez-Lezcano wrote:
    > > > > You mentioned before that the TSCs on the two CPUs could drift from
    > > > > each other over time. Assuming that is the source of timing (I have no
    > > > > idea), that could explain the behavior of Jack: it gets a reference
    > > > > time from one CPU and then compares that with what it gets from
    > > > > whichever CPU it happens to be running on at a given time. If it is
    > > > > the same CPU all is fine; if it is the other one and it has drifted,
    > > > > the warning is printed.
    > > >
    > > > Yes, JACK uses rdtsc() for microsecond resolution timing and assumes
    > > > that the TSCs are in sync.
    > > >
    > > > I've asked on this list what a better time source could be and didn't
    > > > get any useful responses; people just told me "use gettimeofday()",
    > > > which is WAY too slow.
    > >
    > > Arghhh, at least I take this as a confirmation that the TSCs do drift
    > > and there is no workaround. It currently makes the -rt/Jack combination
    > > not very useful, at least in my tests.
    > >
    > > Is there a way to resync the TSCs?
    > I don't think so. A better question is what mechanism the hardware
    > vendors have provided to replace the apparently-no-longer-reliable TSC
    > for cheap high-res timing on modern machines. Unfortunately I suspect
    > the answer at this point is "nothing, you're screwed".
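
    For context, what JACK does with rdtsc() boils down to a single
    instruction, which is why nothing syscall-based will ever be quite as
    cheap. A minimal sketch (my code, not JACK's), with the caveat from
    above that the value is per-CPU and the counters can drift apart:

    #include <stdint.h>

    /* Read this CPU's time stamp counter. Deltas are only meaningful
     * if both reads happen on the same CPU when the TSCs are not in
     * sync. */
    static inline uint64_t read_tsc(void)
    {
            uint32_t lo, hi;

            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
    }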

    There are many mechanisms to keep time:

    1) RTC: 0.5 sec resolution, interrupts
    2) PIT: takes ages to read, overflows at each timer interrupt
    3) PMTMR: takes ages to read, overflows in approx 4 seconds, no interrupt
    4) HPET: slow to read, overflows in 5 minutes. Nice, but usually not present.
    5) TSC: fast, completely unreliable. Frequency changes, CPUs diverge over time.
    6) LAPIC: reasonably fast, unreliable, per-CPU

    Userspace can only use 1), 4), and 5). mplayer, for example, uses the
    RTC for synchronization, programming it as a 1 kHz interrupt source.
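
    To illustrate the RTC approach: the standard /dev/rtc interface lets a
    process program a periodic interrupt and then block in read() until it
    fires. A rough sketch (error handling omitted; the RTC only does
    power-of-two rates, so "1 kHz" is really 1024 Hz, and rates above
    64 Hz need root or a raised /proc/sys/dev/rtc/max-user-freq):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
            unsigned long data;
            int fd = open("/dev/rtc", O_RDONLY);

            ioctl(fd, RTC_IRQP_SET, 1024);  /* periodic rate: 1024 Hz */
            ioctl(fd, RTC_PIE_ON, 0);       /* enable periodic interrupts */

            for (;;) {
                    read(fd, &data, sizeof(data)); /* blocks until the next tick */
                    /* ~1 ms of periodic work goes here */
            }
    }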

    The kernel does quite a lot of magic and jumps through many hoops to
    combine these into a time source that is both reliable and fast.

    > I've read that gettimeofday() does not have to enter the kernel on
    > x86-64, maybe it's fast enough, though almost certainly orders of
    > magnitude slower than rdtsc().

    It depends on the hardware config and the kernel version. With my latest
    patch it takes approximately 175 ns on a reasonably fast CPU to do a
    gettimeofday() from userspace. Much better results (~5x better) will be
    possible when RDTSCP-enabled CPUs become available.
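
    The reason RDTSCP helps is that besides the TSC it also returns the
    IA32_TSC_AUX MSR, which the kernel can preload with e.g. the CPU
    number, so userspace can tell which CPU's counter it just read.
    Roughly (a sketch, needs an RDTSCP-capable CPU):

    #include <stdint.h>

    /* Read the TSC plus the per-CPU tag the kernel put in TSC_AUX. */
    static inline uint64_t read_tscp(uint32_t *cpu)
    {
            uint32_t lo, hi, aux;

            asm volatile("rdtscp" : "=a" (lo), "=d" (hi), "=c" (aux));
            *cpu = aux;
            return ((uint64_t)hi << 32) | lo;
    }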

    This patch still has problems, so I'll have to rewrite significant
    portions before I release it.

    Anyway, current gettimeofday() on SMP AMD x86-64 can be as bad as 1500 ns.
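
    A crude way to measure this on your own box is to hammer the call in a
    loop (a sketch; assumes the loop is long enough to average out noise
    and that the compiler doesn't elide the calls):

    #include <stdio.h>
    #include <sys/time.h>

    #define LOOPS 1000000

    int main(void)
    {
            struct timeval start, end, tv;
            long i, us;

            gettimeofday(&start, NULL);
            for (i = 0; i < LOOPS; i++)
                    gettimeofday(&tv, NULL);
            gettimeofday(&end, NULL);

            us = (end.tv_sec - start.tv_sec) * 1000000L
               + (end.tv_usec - start.tv_usec);
            printf("%.1f ns per gettimeofday()\n", us * 1000.0 / LOOPS);
            return 0;
    }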

    > It seems like a huge step backwards for
    > any apps with high res timing requirements.

    gettimeofday() is the only mechanism that is guaranteed to work, and
    it's as fast as the hardware allows.
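
    If an app like JACK has to fall back on it, the portable equivalent of
    a raw TSC timestamp is just a microsecond count built on top of
    gettimeofday(). A hypothetical helper (get_usecs() is my name, not
    JACK's; resolution is 1 us and the clock can be stepped by NTP or
    settimeofday()):

    #include <sys/time.h>

    static inline long long get_usecs(void)
    {
            struct timeval tv;

            gettimeofday(&tv, NULL);
            return (long long)tv.tv_sec * 1000000LL + tv.tv_usec;
    }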

    Vojtech Pavlik
    SuSE Labs, SuSE CR
