Option for high-quality Apple Music streaming over cellular shows up in iOS 9 beta
It appears Apple is planning to give on-the-go Apple Music users the option to stream tunes at high bitrates when iOS 9 debuts later this year, as the latest beta released Wednesday includes a toggle switch for high quality audio streaming over cellular networks.
Currently, Apple Music automatically adjusts streaming bitrates based on a determination of whether an iPhone or iPad is connected to Wi-Fi or a cellular data network. Through a Settings menu option in iOS 9 beta 3, however, users can force Apple Music into streaming high quality audio at all times.
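The behavior described above amounts to a simple selection rule. Below is a minimal sketch with hypothetical names; Apple has not published how the service actually chooses a stream, and the bitrates shown are the reported (unconfirmed) figures:

```python
# Minimal sketch of the adaptive behavior described above. Names and
# numbers are hypothetical -- Apple has not documented this logic.

HIGH_QUALITY_KBPS = 256   # reported Apple Music maximum (unconfirmed)
REDUCED_KBPS = 128        # assumed lower cellular fallback rate

def select_bitrate(on_wifi: bool, force_high_quality: bool) -> int:
    """Pick a stream bitrate the way the iOS 9 beta 3 toggle appears to work:
    Wi-Fi always streams high quality; cellular does so only when the new
    high-quality-on-cellular switch is enabled."""
    if on_wifi or force_high_quality:
        return HIGH_QUALITY_KBPS
    return REDUCED_KBPS

print(select_bitrate(on_wifi=False, force_high_quality=True))  # -> 256
```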
Shortly after Apple Music launched with iOS 8.4, Apple SVP of Internet Software and Services Eddy Cue confirmed the service varies bitrates depending on whether a device is connected to Wi-Fi or cellular. The measure is commonly instituted by streaming music services to offer customers an acceptable listening experience while saving them money on costly mobile data plans.
Competitors in the space already field similar options, including Jay-Z's Tidal, which actively touts high-bitrate streaming as part of its marketing strategy.
With iOS 9 beta 3, Apple includes a brief warning to users not familiar with high bitrate streaming, saying, "This will use more cellular data and songs may take longer to start playing."
Exact Apple Music bitrates remain undisclosed, though previous reports claimed streams max out at 256kbps, lower than offerings from Spotify and the erstwhile Beats Music. Debate rages on, however, over whether bitrates are an accurate indicator of audio quality. Since higher bitrates carry more information, some argue that higher quality audio logically follows, while others contend that efficient codecs like AAC can achieve equivalent or better sound with lower overhead.
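To put the reported numbers in perspective, streaming data usage scales directly with bitrate. A quick back-of-the-envelope calculation, using the reported 256kbps figure, an assumed 128kbps reduced rate, and Spotify's advertised 320kbps maximum:

```python
# Rough data usage per hour of streaming, to make the cellular-data
# warning concrete. Bitrates are reported/assumed figures, not official.

def megabytes_per_hour(kbps: int) -> float:
    """Convert a stream bitrate in kilobits/second to megabytes per hour."""
    return kbps * 1000 / 8 / 1_000_000 * 3600

for kbps in (128, 256, 320):
    print(f"{kbps} kbps -> {megabytes_per_hour(kbps):.0f} MB/hour")
# 128 kbps -> 58 MB/hour
# 256 kbps -> 115 MB/hour
# 320 kbps -> 144 MB/hour
```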
In any case, the iOS 9 streaming option is a welcome addition for those with a penchant for high-quality sound and who also have large or unlimited data allotments.
Comments
As to more efficient codecs, I agree that's a factor as well. I recall several purchased 128kbps iTunes files sounding better than things I encoded at higher rates, using both AAC and other formats.
The question becomes whether or not you will notice a difference between 128 and 256. Possibly. But not everyone will, and fewer will really care. As a member here said years ago, we've finally reached the point in audio where it's no longer about sounding better, it's about sounding good enough.
But I did meet with the top guy at Meridian in charge of the MQA project there at the end of May. He indicated, without actually saying so, that Apple was a licensee of the format. You know how that goes.
But if that's true, new hardware would be needed to take advantage of it, since both hardware and a software decoder are required for high quality mode. So there's the possibility of new iPhones, iPods, iPads, and Macs that COULD come out with this later this year.
Actually, I disagree. In addition, I find that musicians are the worst people to ask about audio quality. I used to design and build professional audio equipment. My company made recording equipment, amps and speakers for discos, etc. We worked with Barbara Cook and Werner Klemperer. You may have heard of both. When they were in NYC, I had them come to our listening room. They were hard pressed to tell the differences between speakers. They're not alone.
As far as the type of music that's most affected, it's electronic/dance that's affected most, and classical that's affected least. That's because of the way these algorithms work. Since we hear poorly at higher frequencies for a number of sometimes complex reasons, most of the energy removed is from higher frequencies. Perceptually, it's difficult to tell. But if music has a lot of energy in those frequencies, then it becomes more obvious.
Nevertheless, really high end playback equipment will allow more subtle differences to be heard.
The DAC matters as well. Wolfson DACs (which Apple used until 2008) and Cirrus Logic DACs do produce different sounds. It's subtle, but present.
That's my assumption as well. The question is what the low-bitrate option will be. My guess is 128kbps VBR AAC.
That's probably correct. Even now, Apple offers to down-convert 256kbps files to 128kbps.
Interesting, I didn't know you had that background. I completely disagree about the type of music that's most affected. Classical (I hate that term), or orchestral, has many more high frequencies, or is at least more complex. As for telling the difference between speakers, that's another matter. I don't doubt your account, but I don't think it's representative of musicians' ability to discern sound quality.
http://v2.twice.com/news/components/meridian-lining-support-high-res-mqa-audio/55448
"The company (MQA) said 7digital will become the first digital-music platform provider to support MQA-encoded music on the download and streaming platforms that it supplies to third parties. Current 7Digital customers include Samsung, T-Mobile, HMV, BlackBerry, and Yahoo. In the U.S., 7Digital’s platform powers the streaming service of cellular MVNO Rok Mobile, the on-demand streaming service for Pure’s multiroom -speaker system, and radio apps from select startup companies. Outside the U.S., the company is working with Panasonic and Onkyo on their high-resolution download services. 7Digital also offers its own download service direct to consumers in Europe but didn’t announce MQA plans for its site.
Meridian also announced that it is in “advanced discussions” with wireless multiroom-audio platform provider Imagination Technologies, the Tidal CD-quality streaming service, Onkyo, and British hi-fi maker Arcam to provide MQA-encoded music and audio products to consumers."
EDIT: For those who have never heard of Meridian Audio/MQA, an interview is available here:
[VIDEO]
Backward compatibility:
MQA-encoded music, which takes PCM form, can be placed inside any lossless-audio “container” such as FLAC, ALAC or WAV “to take you right to the live performance,” Stuart added during a recent presentation at Meridian’s U.S. headquarters.
An MQA decoder can take the form of an app, a software player or hardware, and it “reconstructs the exact sound approved in the studio along with an indicator to authenticate that what you are hearing is a true rendition of the original master recording,” Stuart’s statement said. A software decoder in a smartphone or tablet app would be “capable of decoding all sources and matching it to the DAC on board,” he told TWICE. Select smartphones already feature 192/24 DACs.
http://www.twice.com/meridian-mqa-audio-tracks-coming-2015/55280
All I can say is that I've known a lot of musicians, mostly jazz guys these days. I also have remained involved in the audio industry, even though we sold the company to JBL many years ago.
While classical, and I hate that term as well, does have subtle quality at higher frequencies, mostly those higher frequencies are muted relative to the midrange, which is around 200Hz to about 1.5kHz. Everything above that can be considered high frequencies, or treble. Remember that the highest fundamental on a piano is just above 4kHz.
If you look at a spectrum analysis of the frequency band from 20Hz-20kHz when playing music, you'll note that for most classical, the highest frequencies are well down from the mids, while with pop they're stronger. With rock of various types, they're stronger still, and they're strongest with music similar to German "Autobahn" music. You know what I mean.
As most of the removed info is from the higher frequencies, the music most affected is that with strong high frequency content. I find it easiest to hear with rock, and hardest with classical. Obviously that is a general statement, because different music may change that. But I can show it with a double blind test on very good, and well set up, equipment such as mine.
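The claim above, that genres with strong high-frequency content suffer most from lossy compression, can be roughly checked by measuring how much of a track's spectral energy sits above a cutoff. A minimal sketch using numpy/scipy, with placeholder file names and an assumed ~10kHz cutoff:

```python
# Rough check of the claim above: what fraction of a track's spectral
# energy lies above ~10 kHz? File names below are placeholders.
import numpy as np
from scipy.io import wavfile

def high_frequency_share(path: str, cutoff_hz: float = 10_000.0) -> float:
    """Fraction of total spectral energy above cutoff_hz in a WAV file."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                     # mix stereo down to mono
        samples = samples.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

# Per the comment, an electronic/rock track should score noticeably
# higher than an orchestral recording:
# print(high_frequency_share("autobahn.wav"), high_frequency_share("symphony.wav"))
```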
I agree with you 100%. I've been in the audio business for many years and have noticed the same thing. I've always attributed this to two reasons. 1. In the case of many rock and pop musicians, their hearing is simply shot. Obvious reasons. 2. For those musicians who still have somewhat normal hearing, their brains do a better job interpolating the missing info than the rest of us. They know what is supposed to be there and therefore don't miss it. Non-musicians are more likely to notice the difference between audio components and playback mediums.
There are two ways this can be used. One is with no decoder at all. When used that way, standard CD quality is heard, according to them. But there is very little information in the higher band of CD quality files. MQA cleverly stuffs the higher quality content, that is, the expanded quality, into the unused higher band of the CD quality file.
In order to hear that, you need a decoder. They have a small one that's about the size of the Apple TV remote, but thicker, and it sells for $289, which isn't bad. That's needed to decode the signal to hear the high band quality. Their hope, which may be realized, is to get one or two small chips into various equipment in order to decode the streams.
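As a toy illustration of the general idea only (Meridian has not published MQA's actual algorithm): extra data can ride in the low-order bits of 16-bit PCM, leaving the stream playable on ordinary gear while a decoder recovers the payload. A minimal sketch:

```python
# Toy illustration of hiding extra data in the least significant bits of
# 16-bit PCM samples. This is NOT MQA's actual (unpublished) algorithm;
# it just shows how a file can stay playable without a decoder while
# carrying more for devices that have one.
import numpy as np

def embed(samples: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of each sample with payload."""
    return (samples & ~1) | payload_bits.astype(np.int16)

def extract(samples: np.ndarray) -> np.ndarray:
    """Recover the payload: a decoder reads the LSBs back out."""
    return samples & 1

pcm = np.array([1000, -2000, 3000, -4000], dtype=np.int16)
bits = np.array([1, 0, 1, 1])
encoded = embed(pcm, bits)          # still valid, nearly identical PCM
assert np.array_equal(extract(encoded), bits)
```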
When I heard the demo, it sounded very good. Unfortunately, Mathew only had MQA files, so there was no way to directly compare. But I have two of the songs on my own system so I could do that.
I was disappointed that they, like so many others, used a Frank Sinatra song in the demo. It's really too bad, but all the Sinatra recordings done in his most prolific recording era with Nelson Riddle are screwed up. I've always heard a buzz from his voice that I know wasn't there, from hearing him in person. It's distortion. I guess that people are so used to hearing it they don't notice, but it always bothers me. Only when I carefully point it out do people notice.
I recently found out why. HBO did a very good, two-part special on Sinatra commemorating the 100th anniversary of his birth. In the second part, we see a recording session. Then briefly, they show the VU meter recording his voice. WHAM! There's the problem. He's being recorded between "0" and +3 VU. Well... That's 3% distortion at 0, and 10% at +3. Given that these are averaging meters, he's probably going up to +6, maybe even 8 or 10! No wonder he's distorted.
Just shows that most people can't, or don't, hear much of this stuff. Even in the industry. I do because I did many live recordings. Not bragging here, really, it's just facts.
I was told, and this was the end of May, beginning of June, that they had 100 hardware and software manufacturers that were already licensed.
Yes, Bob is the guy I spoke to.
That's correct. But it does need to be understood that without the proper hardware, you won't be getting the proper high-res experience. So while software can decode it, the proper hardware is needed to play it. Two different things.
Your number 2 is correct. Musicians hear what they want to hear, though not knowingly. They mentally add bass that's deficient, because they know the bass line is there. They fill in the differences that exist.
It's always been amusing when Stereophile, a high end audio magazine, interviews people in the music business, such as musicians, conductors, composers, etc. At the end, of course, they just have to ask what audio system the subject uses, and the answer is almost always similar:
"My manager got me this. It costs $500, but it's great, it's really worth it."
I remember the interview with Chet Atkins. He took the interviewer to his kitchen. I'm paraphrasing here:
"I had a guy put a phone plug into my table radio here. I plug my phonograph into it. (Putting a record of his on, and playing it). Doesn't that sound real good?"
I agree with you, orchestral and especially jazz are the most affected because of the complex harmonics that get all mashed up by compression algorithms. That's how they work. You'll only really hear that, though, in good listening conditions with good equipment; otherwise the equipment/environment will have a bigger impact.