Subject: Some Audiality info...
Hi,


Back with a description of my project: Audiality.

There is a web site, but there's not much interesting tech info on it yet... It needs to be split into two areas: one that is more technical, and one for audio folks wondering what all this means to end users. Anyway, the site is at

http://www.angelfire.com/or/audiality/audiality.html


Well, here goes...
-----------------------------------------------------

The main goals
--------------

* Provide reliable low latency real time audio on
standard workstation hardware.
+ Reliable means no drop-outs, ever.
+ Low latency means less than 3 ms
audio in -> audio out.

* Provide a powerful API for audio applications
+ Relieves applications of hard real time
requirements.
+ A common interface to drivers, engine and
plug-ins. (events + streams - see the event
sketch below)

* Provide a usable and efficient API for processing
plug-ins.
+ Flexible. (So that it won't be obsoleted
by the next audio processing fad...)
+ Easy to use. (So some people will care to
hack/port plug-ins. :-)
+ Portable. (Plug-ins use only resources
handled by Audiality)
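

To give a rough idea of what the event side of that common
interface could look like, here's a sketch of a timestamped
event structure. All names and fields are just my current
thinking - nothing is final:

	/* Sketch of a timestamped control event. Names and
	 * fields are preliminary and will most likely change.
	 */
	typedef struct audiality_event
	{
		unsigned long	timestamp;	/* in sample frames */
		int		type;		/* param change, note on, ... */
		int		target;		/* which plug-in/port */
		int		index;		/* which control/parameter */
		float		value;		/* new value */
	} audiality_event_t;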


System
------

As some of you have found out the hard way, it's not possible to do this in user space, or even in kernel modules, with the current standard Linux kernels. A fundamental requirement for reliable low latency processing is hard real time support.

RTLinux turns low latency audio into a matter of implementation, rather than endless tweaking of a system that was never designed for that kind of task. RTLinux can reliably handle latency/jitter requirements of 0.05 ms or better. (Meanwhile, some sound cards have DMA burst transfer sizes or internal FIFOs that limit the latency to 0.3 ms or worse...)
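
For those who haven't looked at RTLinux yet: a hard real time
task is just a kernel module talking to the RTLinux scheduler.
A minimal periodic task would look something like this (v1
style API; calls and constants differ a bit between versions,
so treat it as a sketch):

	#include <linux/module.h>
	#include <linux/rt_sched.h>

	static RT_TASK audio_task;

	/* Runs every period, preempting the Linux kernel itself */
	static void audio_loop(int arg)
	{
		while (1) {
			/* hard RT work goes here: move one block of
			 * samples between the card and the engine */
			rt_task_wait();	/* sleep until next period */
		}
	}

	int init_module(void)
	{
		/* 3000 bytes of stack, priority 4 */
		rt_task_init(&audio_task, audio_loop, 0, 3000, 4);
		/* ~1 ms period; RT_TICKS_PER_SEC is the timer tick
		 * rate (exact name/value depends on the version) */
		rt_task_make_periodic(&audio_task, rt_get_time(),
				      RT_TICKS_PER_SEC / 1000);
		return 0;
	}

	void cleanup_module(void)
	{
		rt_task_delete(&audio_task);
	}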


Drivers
-------

Obviously (?), as RTLinux tasks and interrupt handlers preempt the kernel, they can't call kernel functions directly. This means that existing ALSA and OSS drivers will have to be modified or ported to a new driver architecture in order to be usable with Audiality.

I've looked at some drivers in the 2.2.10 kernel, and right now my suggestion would be to split the drivers into two layers: hard real time and soft real time. The former would have to use only pre-allocated resources and no standard kernel calls, while the latter can remain largely untouched. The problem here is synchronization between the layers... A few new kernel calls (sync) would make it possible to use the drivers with or without RTLinux and/or Audiality.
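
The obvious candidate for that synchronization would be the
RTLinux RT-FIFOs, which are safe to touch from RT context on
one end and look like ordinary character devices (/dev/rtf*)
on the other. Sketch of the hard RT half of a recording driver
(simplified; error handling and the card-specific parts left
out - read_dma_block() is just a placeholder):

	#include <linux/module.h>
	#include <linux/rt_fifo.h>	/* header name varies between versions */

	#define CAPTURE_FIFO	1	/* soft half reads /dev/rtf1 */
	#define BLOCK_SIZE	128	/* bytes per DMA block */

	/* pre-allocated; no kmalloc() from RT context */
	static char block[BLOCK_SIZE];

	/* called from the card's RT interrupt handler
	 * when a DMA block completes */
	static void capture_block_done(void)
	{
		read_dma_block(block, BLOCK_SIZE);	/* card-specific */
		rtf_put(CAPTURE_FIFO, block, BLOCK_SIZE);	/* RT safe */
	}

	int init_module(void)
	{
		/* a few blocks of slack between the layers */
		return rtf_create(CAPTURE_FIFO, 4 * BLOCK_SIZE);
	}

	void cleanup_module(void)
	{
		rtf_destroy(CAPTURE_FIFO);
	}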

An alternative would be to design a completely new driver API, or adapt an existing API to RTLinux/Audiality. Looks like ALSA would be best suited for something like that... The point is that there must be no standard kernel calls in the drivers, at least not in areas that will run in RT context under Audiality.

As I'm quite new to Linux kernel hacking and don't exactly know the sound driver code inside out, I'm very interested in hearing what the developers have to say on this. Suggestions, easiest/quickest/best way to go?


Plug-ins
--------

...will be split into two parts: the processing code and the user interface. (When I say "plug-in", I actually refer to the processing part - not the user interface, as that part lives completely outside the engine and the real time environment. It can be part of a client application, a stand-alone client app, a Tcl(/Tk) script or whatever.)

Plug-ins may run under RTLinux, in kernel space, user space or whatever, depending on the Audiality implementation that loads them. This requires that only the plug-in API is used, as a plug-in cannot expect anything but that API to be available.
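
Nothing is carved in stone yet, but the processing part would
basically export a descriptor with a few entry points. Roughly
like this (all names hypothetical; audiality_event is the event
structure sketched above):

	/* Hypothetical plug-in descriptor - just to illustrate
	 * the idea that a plug-in sees nothing but the API it
	 * is handed by the host.
	 */
	typedef struct a_plugin
	{
		const char *name;

		/* set up; may only use resources provided by the host */
		int (*init)(struct a_plugin *p, int sample_rate);

		/* hard RT safe: no malloc(), no syscalls, no blocking */
		void (*process)(struct a_plugin *p,
				float **in, float **out, int frames);

		/* handle a timestamped control event */
		void (*event)(struct a_plugin *p,
				struct audiality_event *ev);

		void (*cleanup)(struct a_plugin *p);

		void *user;	/* instance data, allocated through the host */
	} a_plugin_t;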


Applications
------------

The OSS and ALSA APIs can be supported in a few different ways. What I had in mind was OSS/ALSA compatible virtual drivers that look and act like sound card drivers to applications, but show up as inputs and outputs in the Audiality engine.
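
That is, an unmodified OSS application keeps doing exactly what
it always does - the device it opens just happens to be an
Audiality endpoint instead of a real card:

	#include <sys/ioctl.h>
	#include <sys/soundcard.h>
	#include <fcntl.h>
	#include <unistd.h>

	/* plain OSS playback setup - nothing Audiality specific here */
	int open_dsp(void)
	{
		int fd, fmt = AFMT_S16_LE, stereo = 1, rate = 44100;

		fd = open("/dev/dsp", O_WRONLY);
		if (fd < 0)
			return -1;
		ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
		ioctl(fd, SNDCTL_DSP_STEREO, &stereo);
		ioctl(fd, SNDCTL_DSP_SPEED, &rate);
		return fd;	/* then write() sample data as usual */
	}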

Setup example: The engine is set up as a mixer (input sections, buses, master section etc.), with the virtual output devices showing up as mixer input channels, and the virtual input devices taking their data from the mixer buses. Sound card drivers are connected to the mixer in about the same way, with one stereo output playing the data from the master output. Plug-ins are used for insert and master effects.

The signal routing, loading/unloading/instantiating of plug-ins and so on will need a separate API that "Audiality enhanced" applications may use. Multiple applications will be able to connect to the engine at once, controlling different parts of the setup. By mapping plug-in presets to MIDI controllers, any audio/MIDI sequencer could work with Audiality, supporting full automation through MIDI.
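
What that could look like from a client's point of view (again,
these calls don't exist yet - names made up on the spot):

	/* hypothetical client-side routing calls */
	a_connection *c = a_connect("local_engine");

	/* load a reverb as an insert effect on mixer bus 1 */
	a_plugin_id rev = a_load_plugin(c, "reverb");
	a_insert(c, A_BUS(1), rev);

	/* map MIDI controller 74 to the reverb's "room size"
	 * control, so a sequencer can automate it */
	a_map_control(c, rev, "room size", A_MIDI_CC(74));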


-----------------------------------------------------
That's about what comes to my mind right now... There will probably be lots of questions anyway, so I'll just post this and run off to hack a few lines before my mailbox explodes! ;-)


Regards,

//David


