Core Audio in Mac OS X Tiger to improve audio handling
Apple's next-generation OS will include an updated version of the company's Core Audio technology that will deliver new tools and functions to improve the audio handling experience of Mac OS X applications.
Sources close to Apple say that the new version of Core Audio will introduce two new audio units, aggregate device support, nonnative audio file format support, and other tools that let developers perform live mixing and playback of audio content.
New Audio Units
Tiger's version of Core Audio will present two new audio units that developers can use in their applications.
A file-playback audio unit will make it possible for applications to use an existing sound file as an input source. The audio unit will convert file data from any supported format to linear PCM so that it can be processed by other audio units, sources said. Linear PCM (LPCM) is an uncompressed audio format similar to CD audio but with higher sampling rates and bit depths: it offers up to eight channels at 48kHz or 96kHz sampling rates and 16, 20, or 24 bits per sample, though not every combination at once.
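To get a feel for what "uncompressed" means in bandwidth terms, the raw data rate of an LPCM stream is just channels times sample rate times bit depth. The sketch below is illustrative arithmetic only, not the Core Audio API; the channel/rate/depth combinations are taken from the description above.

```python
# Illustrative only: compute the raw data rate of an LPCM stream.
# LPCM carries one sample per channel per clock tick, so bandwidth is
# simply channels x sample rate x bits per sample.

def lpcm_bytes_per_second(channels: int, sample_rate: int, bits_per_sample: int) -> int:
    """Uncompressed LPCM bandwidth in bytes per second."""
    return channels * sample_rate * bits_per_sample // 8

# Stereo CD-style audio: 2 ch x 44.1 kHz x 16 bit
cd_rate = lpcm_bytes_per_second(2, 44_100, 16)

# A high-end configuration from the article: 8 ch x 96 kHz x 24 bit
hi_rate = lpcm_bytes_per_second(8, 96_000, 24)

print(cd_rate)  # 176400
print(hi_rate)  # 2304000
```

The jump from roughly 176 KB/s to over 2.3 MB/s per stream is why "not every combination at once" is a plausible limit.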
The second audio unit will handle time and pitch transformations of audio data. Developers will be able to use this audio unit to change the playback speed without changing the pitch, or vice versa.
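Why does this need a dedicated audio unit? Naive resampling couples speed and pitch: play the samples faster and the pitch rises by the same factor. The toy sketch below (not the Core Audio API) shows the naive approach and its side effect.

```python
# Conceptual sketch: naive resampling changes speed and pitch together,
# which is why a dedicated time/pitch unit is useful.

def naive_speedup(samples: list[float], factor: float) -> list[float]:
    """Play back 'factor' times faster by striding through the samples.
    Side effect: every waveform period shrinks by the same factor,
    so the pitch rises along with the speed."""
    out = []
    pos = 0.0
    while int(pos) < len(samples):
        out.append(samples[int(pos)])
        pos += factor
    return out

tone = [float(i % 8) for i in range(32)]  # toy periodic signal, period 8
fast = naive_speedup(tone, 2.0)
print(len(fast))  # 16 -- half the duration, but the period halved too
```

A pitch-preserving time stretch instead works on short overlapping grains of audio, repeating or dropping whole grains so the waveform's period, and therefore its pitch, is left intact.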
Aggregate Device Support
Tiger's version of Core Audio will also support the creation of aggregate devices, sources said, allowing developers to combine multiple audio devices together under the auspices of a single audio device. For example, developers can take two devices with two input channels each, combine them to create a single aggregate device with four input channels, and then let Core Audio take care of routing the audio data to the appropriate hardware device.
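The core of the aggregation idea is a routing table from the combined device's channel numbers back to the real hardware. The class below is a hypothetical sketch of that bookkeeping, with invented names; it is not the actual Core Audio implementation.

```python
# Hypothetical sketch of device aggregation: combine several devices'
# input channels into one logical device, and resolve a global channel
# index back to (device, local channel). Names are invented.

class AggregateDevice:
    def __init__(self, devices: dict[str, int]):
        """devices maps a device name to its input-channel count."""
        self.routing = []
        for name, channels in devices.items():
            for local in range(channels):
                self.routing.append((name, local))

    @property
    def input_channels(self) -> int:
        return len(self.routing)

    def route(self, global_channel: int) -> tuple[str, int]:
        """Resolve an aggregate channel to the hardware behind it."""
        return self.routing[global_channel]

# Two 2-input devices become one 4-input aggregate, as in the example above.
agg = AggregateDevice({"USB interface": 2, "FireWire interface": 2})
print(agg.input_channels)  # 4
print(agg.route(3))        # ('FireWire interface', 1)
```

Applications would only ever see the four-channel aggregate; the routing back to the two physical devices stays internal, which matches the article's "let Core Audio take care of routing."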
Nonnative Audio File Format Support
Another planned enhancement to Core Audio is an extension mechanism for supporting audio file formats not natively supported by Core Audio in Mac OS X. This mechanism will reportedly allow developers to create components for reading and writing these audio file formats, while an updated API will aid applications in detecting and utilizing the custom components.
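The general shape of such an extension mechanism is a registry: components announce which formats they can read or write, and applications look them up at runtime. The following is a toy illustration of that pattern, entirely hypothetical, keyed by file extension for simplicity.

```python
# Toy registry illustrating the extension idea: third-party components
# register decoders for formats the system doesn't handle natively,
# and applications discover them at runtime. Entirely hypothetical.

codec_registry = {}  # maps file extension -> decode function

def register_codec(extension: str):
    """Decorator that registers a decode function for an extension."""
    def wrap(fn):
        codec_registry[extension.lower()] = fn
        return fn
    return wrap

@register_codec(".xyz")
def decode_xyz(data: bytes) -> list[int]:
    # Pretend the invented ".xyz" format stores one byte per sample.
    return list(data)

def decode(filename: str, data: bytes) -> list[int]:
    """Dispatch to whichever registered component handles the format."""
    ext = filename[filename.rfind("."):].lower()
    if ext not in codec_registry:
        raise ValueError(f"no component registered for {ext}")
    return codec_registry[ext](data)

print(decode("song.xyz", b"\x01\x02\x03"))  # [1, 2, 3]
```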
Two New APIs
Tipsters also said that Core Audio will add two new APIs to its toolbox.
A new extended audio file API will ease the complexity previously associated with converting files from one format to another. The API will read data in any format supported by Core Audio and then convert it to and from linear PCM format.
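The convenience being described is a reader that hands applications linear PCM regardless of the source encoding. As a minimal sketch of what such a conversion involves, here is a round trip between an assumed unsigned 8-bit sample format and floating-point linear PCM; the source format is invented for illustration, not a real Core Audio type.

```python
# Sketch of format conversion hidden behind a file API: read a source
# encoding, hand back linear PCM, and convert back on write. The
# unsigned-8-bit source format here is an assumption for illustration.

def u8_to_float_pcm(raw: bytes) -> list[float]:
    """Unsigned 8-bit samples -> floating-point linear PCM in [-1, 1)."""
    return [(b - 128) / 128.0 for b in raw]

def float_pcm_to_u8(pcm: list[float]) -> bytes:
    """Convert back, clamping to the representable byte range."""
    return bytes(max(0, min(255, int(x * 128) + 128)) for x in pcm)

pcm = u8_to_float_pcm(b"\x80\xff\x00")
print(pcm)                    # [0.0, 0.9921875, -1.0]
print(float_pcm_to_u8(pcm))   # b'\x80\xff\x00'
```

An API that does this for every supported format, in both directions, is exactly the "ease the complexity" claim above.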
Meanwhile, a new clock API will allow developers to create and track MIDI time formats. The API will reportedly support the use of the MIDI Time Code (MTC) and MIDI Beat Clock protocols to synchronize operations between MIDI-based controllers.
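The timing math behind the two named protocols is simple and fixed by the MIDI specification: Beat Clock sends 24 ticks per quarter note (so its rate depends on tempo), while MIDI Time Code sends 4 quarter-frame messages per video frame (so its rate depends on frame rate). The sketch below computes the message intervals; it illustrates the protocols' constants, not any actual clock API.

```python
# Timing constants from the MIDI spec, not from any Core Audio API:
# MIDI Beat Clock = 24 ticks per quarter note (tempo-dependent);
# MIDI Time Code  = 4 quarter-frame messages per frame (rate-dependent).

def beat_clock_interval(bpm: float) -> float:
    """Seconds between MIDI Beat Clock ticks at a given tempo."""
    seconds_per_quarter = 60.0 / bpm
    return seconds_per_quarter / 24

def mtc_quarter_frame_interval(fps: float) -> float:
    """Seconds between MTC quarter-frame messages at a given frame rate."""
    return 1.0 / (fps * 4)

print(round(beat_clock_interval(120), 6))        # 0.020833
print(round(mtc_quarter_frame_interval(25), 4))  # 0.01
```

A clock API's job is to translate between these protocol ticks and a common timebase so that devices speaking either protocol stay in sync.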
Developer Tools
Finally, sources expect Tiger to ship with a new Audio Unit Lab tool, which will reportedly let developers graphically host audio units, examine the results, and use the hosted units for live mixing and playback of audio content.
In recent weeks AppleInsider has provided extensive coverage of Mac OS X 10.4 Tiger. Previous reports include coverage of Tiger's Spotlight search, Safari with RSS, Mail 2.0 with smart mailboxes, iCal 1.5.3, Resolution Independent UI and 256x256 Icons, AppleScript 1.10, Installer 2.0, web-enabled Help, Fast Logout, Access Control Lists, OpenGL enhancements, adoption of OpenAL, Core Data, PDF Kit, SQLite, and networking-related enhancements.
Comments
This is great news.
People will now be able to add audio filters onto MP3s and AAC files. Or pretty much anything that makes sound waves.
Does Microsoft have something similar in the pipeline? The Mac is becoming a dream machine for graphics, video and audio artists.
Originally posted by Big Red
Aggregate device support sounds really, really, cool. I wonder how much control over routing and all of that stuff we'll be able to have in the end? I'm guessing a new version of Logic to go along with Tiger to better implement all of these fun things?
What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?
Originally posted by kim kap sol
What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?
No I believe it means you can "combine" two hardware audio interfaces into one "logical" interface and keep sync and routing functions transparent to the end user.
The MIDI sync features will be nice as well. Any sync improvements are important in audio, where sync is God.
Originally posted by kim kap sol
Umm...can anyone say iTunes Extreme?
"QuickTime 7."
QuickTime is dead. Long live QuickTime!
It'll be interesting to see what happens with the CoreVIA APIs and the QuickTime brand.
I hope this also indicates that neglected technologies like voice synthesis and voice recognition are going to be improved. I haven't noticed any real improvement in either since 1997, using OS 8.5 on a Performa 6400 at 200 MHz. Now I have a dual 2GHz G5 running the latest OS X, and the biggest improvement is the high-quality Victoria voice. They need to get rid of the 90% of the voices that are jokes (Bad News, Good News, Hysterical, Bubbles) and have little practical application. More human choices would be great. And don't get me started about voice recognition!
Originally posted by kim kap sol
What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?
No, it's a bit more than that. At present Core Audio only allows one input/output stream to be active at any one time. That stream can have multiple channels (my Metric Halo IO has 18 in and out), but you can't have multiple streams. This is a pain if you need large numbers of ins and outs. A G5 could easily handle hundreds of simultaneous channels. MOTU have their own solution for Digital Performer, but it's a little kludgey. Pro Tools of course does a similar proprietary thing. What the update will do (and it's about time) is allow multiple interfaces from multiple vendors to be treated and routed as one single large in/out audio interface at the core level.
Tiger is shaping up to be the sexiest development environment on the planet.
This feature is already available. It is known as JackOSX.
Check it out:
http://www.jackosx.com/
Originally posted by MoBird
This feature is already available. It is known as JackOSX.
Check it out:
http://www.jackosx.com/
Now I may be wrong here but I don't see anywhere on the JACK site where it says you can aggregate I/O hardware. I've heard people talk about JACK but only within the context of routing audio from one app to another in ways that were previously impossible.
Does this mean support for Multiple Audio Interfaces?
Could anyone here answer that question? Thanks!
Originally posted by MoBird
This feature is already available. It is known as JackOSX.
Check it out:
http://www.jackosx.com/
Er...not the same thing at all.
Originally posted by [email protected]
Sources close to Apple say that the new version of Core Audio will introduce two new audio units, aggregate device support, nonnative audio file format support, and other tools that let developers perform live mixing and playback of audio content.
Does this mean support for Multiple Audio Interfaces?
Could anyone here answer that question? Thanks!
Read the thread dude!
Originally posted by AppleInsider
A file-playback audio unit will make it possible for applications to use an existing sound file as an input source. The audio unit will convert file data from any format to linear PCM so that it can be processed by other audio units, sources said. Linear PCM (LPCM) is an uncompressed audio format that is similar to CD audio, but with higher sampling frequencies and quantisations. The format offers up to 8 channels of 48kHz or 96kHz sampling frequency and 16, 20 or 24 bits per sample but not all at the same time.
This is cool, but I wonder how it interacts with the FairPlay DRM?
Jack (the Jack Audio Connection Kit) is a low-latency audio server, written originally for the GNU/Linux operating system, and now with Mac OS X support. It can connect any number of different applications to a single hardware audio device; it also allows applications to send and receive audio to and from each other.
>MIDI time formats. The API will reportedly support the use of the
>MIDI Time Code (MTC) and MIDI Beat Clock protocols to synchronize
>operations between MIDI-based controllers.
Now... if iTunes, and Airport Express both used these MIDI technologies, could I play music in my study SYNCHRONISED with the music in my living room?
Hmmm...
What do you think?