Date:    Fri, 23 Jul 1999 23:18:48 +0200
From:    David Olofson <>
Subject: Re: real-time threaded IO with low latency (audio)
Paul Barton-Davis wrote:
(...)
> a "real" FX processor can do real-time flanging with a 0.2ms
> delay. the idea that linux (or beos, for that matter) can step into
> this kind of role is absurd, or close to it.
Absurd? My greatest problem seems to be the PCI DMA burst size of the card I'm using now.
Driving the PC speaker at an 8 kHz interrupt rate or higher is a no-brainer with RTL. (I've done it at 85 kHz on a dual P-II 233 with periodic scheduling, but that burns lots of CPU power on scheduling, and the jitter is [probably] bigger than the average latency.)
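For reference, the basic shape of that speaker hack, using the RTLinux v2 pthreads API -- an untested sketch from memory, so take it as such. pthread_make_periodic_np()/pthread_wait_np() do the periodic scheduling, and port 0x61 bit 1 drives the speaker directly when the timer gate in bit 0 is off:

#include <rtl.h>
#include <rtl_sched.h>
#include <pthread.h>
#include <asm/io.h>

static pthread_t thread;

static void *speaker_thread(void *arg)
{
        int state = 0;

        /* 8 kHz interrupt rate -> 125 us (125000 ns) period */
        pthread_make_periodic_np(pthread_self(), gethrtime(), 125000);

        while (1) {
                pthread_wait_np();      /* sleep until next period */
                state ^= 2;             /* flip the speaker data bit */
                outb((inb(0x61) & ~3) | state, 0x61);
        }
        return NULL;
}

int init_module(void)
{
        return pthread_create(&thread, NULL, speaker_thread, NULL);
}

void cleanup_module(void)
{
        pthread_delete_np(thread);
}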
I'll add an RTL hook into the interrupt handler of the hacked driver and put it on the site for download. However, unless I can run the card in non-DMA mode or something, I'll get nowhere near the lowest audio latency possible with RTL.
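The hook itself would look something like this (again just a sketch; rtl_request_irq() steals the line from Linux, the IRQ number is hypothetical, and everything card-specific is left as a comment):

#include <rtl.h>
#include <rtl_core.h>

#define CARD_IRQ 9      /* hypothetical -- whatever the card was given */

static unsigned int audio_irq_handler(unsigned int irq,
                                      struct pt_regs *regs)
{
        /* card-specific: ack the interrupt, move samples around */

        rtl_hard_enable_irq(irq);       /* re-arm the line */
        return 0;
}

int init_module(void)
{
        return rtl_request_irq(CARD_IRQ, audio_irq_handler);
}

void cleanup_module(void)
{
        rtl_free_irq(CARD_IRQ);
}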
> >For the processing thread, it's a lot less than 5 ms. A lot less,
> >because signal processing is CPU intensive, and I want plug-ins to be
> >able to utilize more than a fraction of the available CPU power.
> >There's hardly any room for catching up after a stall under heavy
> >load, so this is very important.
>
> i think (but don't know) that you're wrong. quasimodo does a lot
> (arguably all) of what you are talking about, and it works the way i
> describe. if you are in a tight loop reading from the ADC, processing,
> and then writing to the DAC, the timing i describe is exactly what you
> want.
Ok... Say you have the CPU processing data using 80% of the available cycles. That is, the minimum time it can take to process one buffer is 80% of the time it takes to play it.
Now, for some reason, the processing task doesn't get started in time. If this delay is within the maximum allowed limits, the engine will still be able to deliver the output buffer in time.
However, when the next buffer comes in, it has to *wait* until the current (delayed) one is processed, thus causing the next buffer to be delayed as well! The engine will catch up after a few buffers, as the load is less than 100%.
But what if your process loses the CPU once again before it has caught up...?
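To put some (made up) numbers on it, here's a toy user space simulation: 10 ms buffers, 8 ms processing (80% load), 2 ms of latency margin, and a scheduling stall injected before each of the first two buffers:

#include <stdio.h>

#define N       6
#define PERIOD  10.0    /* ms per buffer               */
#define PROC     8.0    /* ms to process one buffer    */
#define MARGIN   2.0    /* allowed latency past period */

int main(void)
{
        /* scheduling stall hitting each buffer, in ms (made up) */
        double stall[N] = { 3.5, 3.0, 0, 0, 0, 0 };
        double finish = 0.0;
        int i;

        for (i = 0; i < N; i++) {
                double arrive   = i * PERIOD;
                double start    = (finish > arrive ? finish : arrive)
                                  + stall[i];
                double deadline = arrive + PERIOD + MARGIN;

                finish = start + PROC;
                printf("buf %d: finish %5.1f  deadline %5.1f  %s\n",
                       i, finish, deadline,
                       finish > deadline ? "MISSED" : "ok");
        }
        return 0;
}

Either stall alone fits within the margin: buffer 0 squeaks by (11.5 ms against a 12 ms deadline), and a 3 ms stall on its own would have been harmless too. But the second stall lands before the engine has caught up, so buffer 1 misses (22.5 ms against 22 ms).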
(...)
> >"Average latency" and "maximum latency" are two very different things.
>
> no - latency and jitter are two different things. what we have a
> problem with is jitter, not latency. the difference is subtle but its
> important because it affects how you think about solving it.
True, but I was actually thinking about interrupt and scheduling latencies here. I'm being unclear again, obviously. Maximum latency = average latency + maximum deviation due to jitter.

Anyway, the problem is that your code doesn't always get control in time, and/or that it can sometimes lose control before it's done. (For a deadline to be met, the task must actually both be scheduled in time AND get the chance to finish before the deadline.) What more is there to it really?
//David