Apple offering musicians financial incentives to mix using Dolby Atmos

in Apple Music · edited December 2023

Apple wants more music supporting its Spatial Audio features to be produced, with it reportedly offering financial incentives to record labels and artists to use the technology.

Apple Music subscribers will be familiar with Spatial Audio, a feature that provides positional audio in a track when they use suitable personal audio devices, such as AirPods Pro. It now appears that Apple wants more tracks on its service to be produced using Spatial Audio and Dolby Atmos.

According to people with knowledge of the incentives who spoke to Bloomberg on Monday, Apple plans to give greater royalty weighting to streams of songs mixed in Dolby Atmos. Tracks mixed in Dolby Atmos can be heard via the Spatial Audio feature.

For artists and record labels, using Dolby Atmos could lead to higher royalty payments than more conventional mixes would earn. Furthermore, it is believed that artists will benefit from the increased weighting even if users never actually listen in Spatial Audio.

The policy apparently won't apply only to new music. Artists and labels are expected to remix older tracks in Dolby Atmos to enjoy the higher royalties.

Using Dolby Atmos does involve some extra cost, but the sources believe mixing with it is "broadly affordable," making it worthwhile for artists to take advantage of the offer.

Apple stands to benefit in a number of ways, including increased subscriber counts from users wanting to listen to Spatial Audio recordings. With Apple's audio range including AirPods that support Atmos playback, it could also see an increase in hardware sales.

Apple declined to comment on the report, and it has yet to make an official announcement about the incentives.

Apple's Spatial Audio support was introduced in 2021. Spatial Audio made with Dolby Atmos gives the effect of listening to music in a simulated 3D audio space, and works with AirPods and speakers in devices like an iPhone, MacBook Pro or HomePod.

When used with personal audio devices like AirPods Pro with head tracking, the tracking element is included as part of the performance, with the music changing depending on how the user moves their head.



  • Reply 1 of 4
    I assume this process is similar to artificially converting a 2D movie to 3D for older recordings that were originally recorded in stereo.
  • Reply 2 of 4
    It's taking the existing multi-track recording and placing each instrument or vocal in a specific place in the "soundstage." A lot of discretion is given to the person remixing the track.

    I have a 5.1 Atmos setup at home (no height speakers) and the Atmos remixes for the most part sound great! Even old mono recordings that are remixed. The new mixes are not all perfect. Some are better than others, but it's a positive experience overall.
  • Reply 3 of 4
    Atmos music is getting better and better. Early mixes were hit or miss. Lately there has been much improvement. The new Atmos mixes of the Beatles' Red and Blue albums are outstanding. I'm specifically talking about the Atmos implementation, not the re-mixing of the music, which is almost all very well done, except for a track or two. Giles Martin went back to re-do the Atmos of Sgt. Pepper a few years back. I think he should re-think the music mix itself on "I Am The Walrus." 
  • Reply 4 of 4
    I assume this process is similar to artificially converting a 2D movie to 3D for older recordings that were originally recorded in stereo.
    Generally speaking this would mean going back to the original multi-track studio recording and remixing the recording from there, rather than creating a simulated three-dimensional output.

    For old-school "normal" mixing of multi-track recordings, the engineers are quite literally assigning what sounds will come out of which speakers, whether mixing for stereo or 5.1 surround or whatever. In that case, no matter where the listener places their speakers in the room, the assigned sound comes out of the assigned speaker. This has always put audio mixing engineers at a disadvantage, because every single listening setup will be a little different and less than the ideal, depending on the actual placement of speakers as well as the acoustics of the room used as a listening space. 

    For Dolby Atmos, the mixing engineer assigns from what direction each sound should come, and the listener's Atmos-capable system determines on the fly which sounds will be assigned to which speakers. The step in between is this: If you have a multi-speaker surround sound system, you will run a setup procedure at least once after arranging your speakers around the room. Using a microphone attached to the amplifier, this setup will emit test tones from each speaker and measure their relative positions in the room. Once that's established, if an Atmos recording has been mixed with instructions that a violin should sound like it's 25 degrees to the right of center and 8 degrees above the horizontal plane, the system will know how much of that sound must come from the center speaker, the right speaker, and an elevated right speaker in order to create that positioning in the listener's actual space. (Original and second-generation big HomePods do this room-measuring on the fly, so they can also do Dolby Atmos playback.)
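    That direction-to-gains step can be sketched with vector-base amplitude panning (VBAP), a standard object-panning technique. Dolby's actual renderer is proprietary, so the three-speaker layout, the angles, and the math below are illustrative assumptions, not the real algorithm:

    ```python
    import numpy as np

    def direction(az_deg, el_deg):
        """Unit vector for an azimuth (degrees right of center) and elevation."""
        az, el = np.radians(az_deg), np.radians(el_deg)
        return np.array([np.sin(az) * np.cos(el),   # x: right
                         np.cos(az) * np.cos(el),   # y: front
                         np.sin(el)])               # z: up

    # Hypothetical speaker directions, as a room-calibration step might measure them:
    # center, right at 30 degrees, and an elevated right speaker.
    speakers = np.column_stack([
        direction(0, 0),
        direction(30, 0),
        direction(30, 30),
    ])

    # The violin from the example: 25 degrees right of center, 8 degrees up.
    target = direction(25, 8)

    # Solve for the per-speaker gains that reproduce that direction,
    # then normalize so the total power stays constant.
    gains = np.linalg.solve(speakers, target)
    gains = np.clip(gains, 0, None)
    gains /= np.linalg.norm(gains)
    ```

    For this layout the right speaker gets the largest share of the signal, with smaller contributions from the center and height speakers, which is the intuition the paragraph above describes.
    
    
    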

    So ideally, a Dolby Atmos mix will start with the original multi-track recording in order to fully implement this object-oriented mixing process. Short of that, for an older recording where that multi-track information simply doesn't exist, an engineer could perhaps simply use the technology to recreate the acoustics of stereo speakers in an ideal listening space. The imagined left and right speakers sit here in this direction and there in that direction, and the natural reverberation off the high ceiling and walls of a perfectly dimensioned imaginary listening room can all be simulated through the seven or more speakers in the Atmos-capable listening room.

    Or, they can use the machine learning tech that the Beatles and Giles Martin have been using to separate multiple instruments and voices from single recorded tracks in order to create a multi-track recording where there wasn't one before, and then mix the whole thing as a full Dolby Atmos deal.

    P.S., for all of this, Apple uses the binaural effect to computationally recreate three-dimensional spatial audio surround sound output in your earbuds or headphones.
