Core Audio in Mac OS X Tiger to improve audio handling

Apple's next-generation OS will include an updated version of the company's Core Audio technology that will deliver new tools and functions to improve the audio handling experience of Mac OS X applications.



Sources close to Apple say that the new version of Core Audio will introduce two new audio units, aggregate device support, nonnative audio file format support, and other tools that let developers perform live mixing and playback of audio content.



New Audio Units



Tiger's version of Core Audio will present two new audio units that developers can use in their applications.



A file-playback audio unit will make it possible for applications to use an existing sound file as an input source. The audio unit will convert file data from any format to linear PCM so that it can be processed by other audio units, sources said. Linear PCM (LPCM) is an uncompressed audio format similar to CD audio, but with support for higher sampling frequencies and bit depths. The format offers up to eight channels at 48kHz or 96kHz sampling frequencies and 16, 20 or 24 bits per sample, though not every combination of channel count, sampling frequency and bit depth can be used at once.
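
To make the linear PCM piece concrete, the sketch below fills out the AudioStreamBasicDescription structure that Core Audio already uses to describe an LPCM stream. The struct and constants come from the existing CoreAudio headers; the particular values (stereo, 48kHz, 24-bit) are only an illustration, not details of the new file-playback unit.

    // Describing a 24-bit, 48kHz stereo linear PCM stream to Core Audio.
    #include <CoreAudio/CoreAudioTypes.h>

    static AudioStreamBasicDescription make_lpcm_format(void) {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate       = 48000.0;                // 48kHz sampling frequency
        asbd.mFormatID         = kAudioFormatLinearPCM;  // uncompressed linear PCM
        asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                               | kAudioFormatFlagIsPacked;
        asbd.mChannelsPerFrame = 2;                      // stereo
        asbd.mBitsPerChannel   = 24;                     // 24 bits per sample
        asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
        asbd.mFramesPerPacket  = 1;                      // PCM carries one frame per packet
        asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
        return asbd;
    }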



The second audio unit will handle time and pitch transformations of audio data. Developers will be able to use this audio unit to change the playback speed without changing the pitch, or vice versa.
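
As a rough sketch of how a host might drive such a unit, the code below opens a time/pitch format-converter audio unit and halves the playback rate while leaving the pitch alone. It uses today's audio unit discovery calls, and the subtype and parameter names (kAudioUnitSubType_TimePitch, kTimePitchParam_Rate) are assumptions based on the headers that later shipped, not confirmed details of the Tiger unit.

    #include <AudioUnit/AudioUnit.h>

    static AudioUnit open_half_speed_time_pitch(void) {
        // Locate Apple's time/pitch format converter among the installed units.
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_FormatConverter,
            .componentSubType      = kAudioUnitSubType_TimePitch,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        if (comp == NULL) return NULL;

        AudioUnit unit = NULL;
        AudioComponentInstanceNew(comp, &unit);
        AudioUnitInitialize(unit);

        // Play at half speed; the unit compensates so the pitch stays the same.
        AudioUnitSetParameter(unit, kTimePitchParam_Rate,
                              kAudioUnitScope_Global, 0, 0.5, 0);
        return unit;
    }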



Aggregate Device Support



Tiger's version of Core Audio will also support the creation of aggregate devices, sources said, allowing developers to combine multiple audio devices into a single logical audio device. For example, developers can take two devices with two input channels each, combine them into a single aggregate device with four input channels, and then let Core Audio take care of routing the audio data to the appropriate hardware device.
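
A rough idea of what that could look like in code, based on the aggregate-device call and dictionary keys that appear in CoreAudio's AudioHardware.h; treat the exact names as assumptions, and note that the device UID strings below are placeholders:

    #include <CoreAudio/CoreAudio.h>
    #include <CoreFoundation/CoreFoundation.h>

    // Combine two two-input interfaces into one four-input aggregate device.
    static OSStatus create_four_input_aggregate(AudioDeviceID *outAggregate) {
        // Sub-devices are identified by their persistent UID strings; real code
        // would query them with kAudioDevicePropertyDeviceUID.
        CFStringRef uids[2] = { CFSTR("ExampleInterfaceA:UID"),
                                CFSTR("ExampleInterfaceB:UID") };

        CFMutableArrayRef subDevices =
            CFArrayCreateMutable(NULL, 2, &kCFTypeArrayCallBacks);
        for (int i = 0; i < 2; i++) {
            CFStringRef key = CFSTR(kAudioSubDeviceUIDKey);
            CFDictionaryRef sub = CFDictionaryCreate(NULL,
                (const void **)&key, (const void **)&uids[i], 1,
                &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            CFArrayAppendValue(subDevices, sub);
            CFRelease(sub);
        }

        const void *keys[] = { CFSTR(kAudioAggregateDeviceNameKey),
                               CFSTR(kAudioAggregateDeviceUIDKey),
                               CFSTR(kAudioAggregateDeviceSubDeviceListKey) };
        const void *vals[] = { CFSTR("Four-Input Aggregate"),
                               CFSTR("com.example.aggregate"),
                               subDevices };
        CFDictionaryRef description = CFDictionaryCreate(NULL, keys, vals, 3,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        // Core Audio builds the aggregate and handles routing to the hardware.
        OSStatus err = AudioHardwareCreateAggregateDevice(description, outAggregate);
        CFRelease(description);
        CFRelease(subDevices);
        return err;
    }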



Nonnative Audio File Format Support



Another planned enhancement to Core Audio is an extension mechanism for supporting audio file formats not natively supported by Core Audio in Mac OS X. This mechanism will reportedly allow developers to create components for reading and writing these audio file formats, while an updated API will aid applications in detecting and utilizing the custom components.
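
Sources did not describe the plug-in interface itself, but an application can already ask the AudioFile API which formats are readable, and a format added through such a component mechanism would presumably show up in the same list. A minimal sketch, using AudioFileGetGlobalInfo from AudioToolbox:

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Print the four-character codes of every audio file type Core Audio can read.
    static void list_readable_types(void) {
        UInt32 size = 0;
        if (AudioFileGetGlobalInfoSize(kAudioFileGlobalInfo_ReadableTypes,
                                       0, NULL, &size) != noErr) return;

        AudioFileTypeID *types = malloc(size);
        AudioFileGetGlobalInfo(kAudioFileGlobalInfo_ReadableTypes,
                               0, NULL, &size, types);

        for (UInt32 i = 0; i < size / sizeof(AudioFileTypeID); i++) {
            UInt32 code = CFSwapInt32HostToBig(types[i]);   // 'AIFF', 'WAVE', ...
            printf("%.4s\n", (char *)&code);
        }
        free(types);
    }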



Two New APIs



Tipsters also said that Core Audio will add two new APIs to its toolbox.



A new extended audio file API will reduce the complexity previously associated with converting files from one format to another. The API will read data in any format supported by Core Audio and convert it to and from linear PCM.
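
For a sense of how much that simplifies things, here is a minimal sketch of the pattern the extended audio file API enables: open a file in whatever format it is stored in, ask for linear PCM on the client side, and read decoded frames. The calls are the ExtAudioFile routines as they later appeared in AudioToolbox, with error handling omitted; the buffer sizes are arbitrary.

    #include <AudioToolbox/AudioToolbox.h>

    static void read_as_pcm(CFURLRef fileURL) {
        ExtAudioFileRef file = NULL;
        ExtAudioFileOpenURL(fileURL, &file);

        // Ask for 16-bit, 44.1kHz stereo PCM regardless of the on-disk format;
        // the necessary converter is created behind the scenes.
        AudioStreamBasicDescription pcm = {0};
        pcm.mSampleRate       = 44100.0;
        pcm.mFormatID         = kAudioFormatLinearPCM;
        pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        pcm.mChannelsPerFrame = 2;
        pcm.mBitsPerChannel   = 16;
        pcm.mBytesPerFrame    = 4;      // 2 channels * 2 bytes per sample
        pcm.mFramesPerPacket  = 1;
        pcm.mBytesPerPacket   = 4;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(pcm), &pcm);

        // Pull one buffer of decoded PCM frames.
        SInt16 samples[4096 * 2];
        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;
        buffers.mBuffers[0].mNumberChannels = 2;
        buffers.mBuffers[0].mDataByteSize   = sizeof(samples);
        buffers.mBuffers[0].mData           = samples;

        UInt32 frames = 4096;
        ExtAudioFileRead(file, &frames, &buffers);   // frames now holds the count read

        ExtAudioFileDispose(file);
    }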



Meanwhile, a new clock API will allow developers to create and track MIDI time formats. The API will reportedly support the use of the MIDI Time Code (MTC) and MIDI Beat Clock protocols to synchronize operations between MIDI-based controllers.
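
A guess at what driving such a clock could look like, using the CAClock names that later turned up in AudioToolbox's CoreAudioClock.h; every identifier here should be read as an assumption about the rumored API rather than a confirmed detail:

    #include <AudioToolbox/AudioToolbox.h>

    static void run_mtc_slaved_clock(void) {
        CAClockRef clock = NULL;
        CAClockNew(0, &clock);

        // Slave the clock to incoming MIDI Time Code instead of the host clock.
        CAClockSyncMode mode = kCAClockSyncMode_MTCTransport;
        CAClockSetProperty(clock, kCAClockProperty_SyncMode, sizeof(mode), &mode);

        CAClockStart(clock);

        // Sample the current transport position, expressed in musical beats.
        CAClockTime now;
        CAClockGetCurrentTime(clock, kCAClockTimeFormat_Beats, &now);

        CAClockStop(clock);
        CAClockDispose(clock);
    }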



Developer Tools



Finally, sources expect Tiger to ship with a new Audio Unit Lab tool, which will let developers graphically host audio units, use them for live mixing and playback of audio content, and examine the results.



In recent weeks AppleInsider has provided extensive coverage of Mac OS X 10.4 Tiger. Previous reports include coverage of Tiger's Spotlight search, Safari with RSS, Mail 2.0 with smart mailboxes, iCal 1.5.3, Resolution Independent UI and 256x256 Icons, AppleScript 1.10, Installer 2.0, web enabled Help, Fast Logout, Access Control Lists, OpenGL enhancements, adoption of OpenAL, Core Data, PDF Kit, SQLite, and networking-related enhancements.

Comments

  • Reply 1 of 27
    zapchud Posts: 844 member
    As I (silently) predicted, Tiger will be more than demoed at WWDC and Apple's website.



    This is great news.
  • Reply 2 of 27
    hmurchison Posts: 12,425 member
    The "beat" goes on!(hardy har har). Really though the enhancements sound great. Core Audio is shaping up to be a very nice architecture for developers. I'm impressed with Apple.
  • Reply 3 of 27
    kim kap sol Posts: 2,987 member
    Umm...can anyone say iTunes Extreme?



    People will now be able to add audio filters onto MP3s and AAC files. Or pretty much anything that makes sound waves.



    Does Microsoft have something similar in the pipeline? The Mac is becoming a dream machine for graphics, video and audio artists.
  • Reply 4 of 27
    Aggregate device support sounds really, really, cool. I wonder how much control over routing and all of that stuff we'll be able to have in the end? I'm guessing a new version of Logic to go along with Tiger to better implement all of these fun things?
  • Reply 5 of 27
    kim kap sol Posts: 2,987 member
    Quote:

    Originally posted by Big Red

    Aggregate device support sounds really, really, cool. I wonder how much control over routing and all of that stuff we'll be able to have in the end? I'm guessing a new version of Logic to go along with Tiger to better implement all of these fun things?



    What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?
  • Reply 6 of 27
    hmurchison Posts: 12,425 member
    Quote:

    Originally posted by kim kap sol

    What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?



    No I believe it means you can "combine" two hardware audio interfaces into one "logical" interface and keep sync and routing functions transparent to the end user.



    The Midi sync features will be nice as well. Any sync improvements are important in audio where sync is God.
  • Reply 7 of 27
    zapchud Posts: 844 member
    I think it's similar to what Propellerheads Reason offer. I'm not sure though, but it sounds like the routing features of Reason. :-)
  • Reply 8 of 27
    amorph Posts: 7,112 member
    Quote:

    Originally posted by kim kap sol

    Umm...can anyone say iTunes Extreme?



    "QuickTime 7."



    QuickTime is dead. Long live QuickTime!
  • Reply 9 of 27
    kickaha Posts: 8,760 member
    Indeed.



    It'll be interesting to see what happens with the CoreVIA APIs and the QuickTime brand.
  • Reply 10 of 27
    Are we going to get a simple built-in voice recorder yet? Remember Simple Text? I shouldn't have to go to iMovie or GarageBand to record a simple voice memo. I've downloaded Audacity but haven't tried it yet. Nevertheless, such a tool ought to come with the OS.



    I hope this also indicates that neglected technologies like voice synthesis and voice recognition are going to be improved. I haven't noticed any real improvement in either since 1997 using OS 8.5 on a Performa 6400 at 200 MHz. Now I have a dual 2GHz G5 using the latest OS X and the biggest thing is Victoria, high-quality. They need to get rid of 90% of the voices that are jokes (Bad News, Good News, Hysterical, Bubbles) and have little practical application. More human choices would be great. And don't get me started about voice recognition!
  • Reply 11 of 27
    vinney57 Posts: 1,162 member
    Quote:

    Originally posted by kim kap sol

    What does that mean anyways? Does it mean you'd be able to plug in a guitar and a keyboard piano and play them simultaneously?



    No, it's a bit more than that. At present Core Audio only allows one input/output stream to be active at any one time. That stream can have multiple channels (my Metric Halo IO has 18 in and out) but you can't have multiple streams. This is a pain if you need large numbers of ins and outs. A G5 could easily handle hundreds of simultaneous channels. MOTU have their own solution for Digital Performer but it's a little kludgey. ProTools of course does a similar proprietary thing. What the update will do (and it's about time) is allow multiple interfaces from multiple vendors to be treated and routed as one single large in/out audio interface at the core level.



    Tiger is shaping up to be the sexiest development environment on the planet.
  • Reply 12 of 27
    mobird Posts: 753 member
    This feature is already available. It is known as JackOSX.



    Check it out:

    http://www.jackosx.com/
  • Reply 13 of 27
    hmurchison Posts: 12,425 member
    Quote:

    Originally posted by MoBird

    This feature is already available. It is known as JackOSX.



    Check it out:

    http://www.jackosx.com/




    Now I may be wrong here but I don't see anywhere on the JACK site where it says you can aggregate I/O hardware. I've heard people talk about JACK but only within the context of routing audio from one app to another in ways that were previously impossible.
  • Reply 14 of 27
    Sources close to Apple say that the new version of Core Audio will introduce two new audio units, aggregate device support, nonnative audio file format support, and other tools that let developers perform live mixing and playback of audio content.



    Does this mean support for Multiple Audio Interfaces?



    Could anyone here answer that question? Thanks!
  • Reply 15 of 27
    vinney57 Posts: 1,162 member
    Quote:

    Originally posted by MoBird

    This feature is already available. It is known as JackOSX.



    Check it out:

    http://www.jackosx.com/




    Er...not the same thing at all.
  • Reply 16 of 27
    vinney57 Posts: 1,162 member
    Quote:

    Originally posted by [email protected]

    Sources close to Apple say that the new version of Core Audio will introduce two new audio units, aggregate device support, nonnative audio file format support, and other tools that let developers perform live mixing and playback of audio content.



    Does this mean support for Multiple Audio Interfaces?



    Could anyone here answer that question? Thanks!




    Read the thread dude!
  • Reply 17 of 27
    programmer Posts: 3,458 member
    Quote:

    Originally posted by AppleInsider

    A file-playback audio unit will make it possible for applications to use an existing sound file as an input source. The audio unit will convert file data from any format to linear PCM so that it can be processed by other audio units, sources said. Linear PCM (LPCM) is an uncompressed audio format that is similar to CD audio, but with higher sampling frequencies and quantisations. The format offers up to 8 channels of 48kHz or 96kHz sampling frequency and 16, 20 or 24 bits per sample but not all at the same time.



    This is cool, but I wonder how it interacts with the FairPlay DRM?
  • Reply 18 of 27
    hmurchison Posts: 12,425 member
    JACK is for routing multiple apps through an interface. That's cool, but many people want to route multiple apps through multiple hardware interfaces. This is done using custom code for many high-end interfaces, but now that Core Audio supports it even the small guys can gang their cards together and reap Core Audio's stability and low latency.



    Quote:

    Jack (the Jack Audio Connection Kit) is a low-latency audio server, written originally for the GNU/Linux operating system, and now with Mac OS X support. It can connect any number of different applications to a single hardware audio device; it also allows applications to send and receive audio to and from each other.



  • Reply 19 of 27
    > Meanwhile, a new clock API will allow developers to create and track MIDI time formats. The API will reportedly support the use of the MIDI Time Code (MTC) and MIDI Beat Clock protocols to synchronize operations between MIDI-based controllers.



    Now... if iTunes, and Airport Express both used these MIDI technologies, could I play music in my study SYNCHRONISED with the music in my living room?



    Hmmm...

    What do you think?
  • Reply 20 of 27
    kickaha Posts: 8,760 member
    I think the vagaries of network latency would eat it alive.