    Subject: Re: How is user space notified of CPU speed changes?
    From: Lee Revell
    Date: 2004-10-23
    On Sat, 2004-10-23 at 22:19 +0100, Alan Cox wrote:
    > On Sat, 2004-10-23 at 06:10, Lee Revell wrote:
    > > JACK makes extensive use of microsecond-level timers. These must be
    > > calibrated at startup, and recalibrated when the CPU speed changes. How
    > > does JACK register with the kernel to be notified when the CPU speed
    > > changes?
    >
    > It did
    >
    > - The kernel doesn't always know

    OK, this one seems like the hard one. Wouldn't this cause weird
    behavior, though? For example, Linux only calculates the delay loop
    once, at boot time. Does this render *delay() useless?
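
    (As far as actually noticing a change from userspace goes: on kernels
    with cpufreq, the current frequency is exported through sysfs, so an
    application can at least poll it and recalibrate when it moves. Rough,
    untested sketch, assuming the standard scaling_cur_freq file exists;
    the one-second poll interval is arbitrary, and this is not what JACK
    does today:)

    /* Poll cpufreq's view of the current CPU speed and report changes. */
    #include <stdio.h>
    #include <unistd.h>

    #define CUR_FREQ "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

    static long read_khz(void)
    {
            FILE *f = fopen(CUR_FREQ, "r");
            long khz = -1;

            if (!f)
                    return -1;              /* no cpufreq support */
            if (fscanf(f, "%ld", &khz) != 1)
                    khz = -1;
            fclose(f);
            return khz;
    }

    int main(void)
    {
            long last = read_khz(), cur;

            while (last > 0) {
                    sleep(1);
                    cur = read_khz();
                    if (cur > 0 && cur != last) {
                            printf("cpu0: %ld kHz -> %ld kHz, recalibrate\n",
                                   last, cur);
                            last = cur;
                    }
            }
            return 0;
    }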

    > - CPU speed is meaningless in hyper-threading since performance is not
    > x2 for two cores but instead varies

    Doesn't matter. As long as they are both the same speed, JACK's
    calculations will be correct. We calculate the CPU speed at startup.
    Then we just read the TSC and do the math.
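
    Concretely, the scheme is roughly this (untested sketch, not JACK's
    actual code: it assumes x86 and uses gettimeofday() as the reference
    clock purely for illustration; as discussed above, it goes wrong the
    moment the CPU speed changes after calibration):

    /* Calibrate TSC ticks per microsecond once, then time with the TSC. */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/time.h>

    static double ticks_per_usec;           /* set once at startup */

    static inline uint64_t rdtsc(void)
    {
            uint32_t lo, hi;

            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
    }

    static void calibrate(void)
    {
            struct timeval tv0, tv1;
            uint64_t t0, t1;
            long usec;

            gettimeofday(&tv0, NULL);
            t0 = rdtsc();
            usleep(100000);                 /* spin the reference clock ~100 ms */
            gettimeofday(&tv1, NULL);
            t1 = rdtsc();

            usec = (tv1.tv_sec - tv0.tv_sec) * 1000000L +
                   (tv1.tv_usec - tv0.tv_usec);
            ticks_per_usec = (double)(t1 - t0) / usec;
    }

    int main(void)
    {
            uint64_t start, end;

            calibrate();
            start = rdtsc();
            usleep(5000);                   /* whatever we want to time */
            end = rdtsc();
            printf("elapsed: %.1f us\n", (end - start) / ticks_per_usec);
            return 0;
    }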

    > - It doesn't handle split CPU speed SMP - where CPU speeds vary

    We don't have to support this. If someone wants to do it, they will
    have to bind jackd to one CPU or the other.
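
    (If someone does want that, pinning is cheap enough - rough sketch
    using the glibc sched_setaffinity() wrapper, purely illustrative and
    not something jackd does today:)

    /* Pin the calling process to CPU 0 so TSC readings come from one clock. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(0, &mask);              /* CPU 0 only */

            if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
                    perror("sched_setaffinity");
                    return 1;
            }

            /* ... start the realtime work here ... */
            return 0;
    }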

    > - God help you if virtualised

    We don't have to support this.

    Does anyone know how OSX/CoreAudio handles the situation? Apparently
    realtime apps work flawlessly on speed-scaling laptops under OSX.

    Lee


