Rumor: Apple to offer hi-res 24-bit tracks on iTunes in coming months

Comments

  • Reply 41 of 154
    pazuzupazuzu Posts: 1,728member
    About time. This is the reason we still buy CDs.
  • Reply 42 of 154
    sevenfeetsevenfeet Posts: 465member
    Quote:

    Originally Posted by Mr. H View Post

    AAC is not Apple's "codex" (the word is "codec" by the way).

    Both ALAC and FLAC can have metadata embedded. It is WAV that does not support metadata and that's one of the reasons why AIFF is a superior file format to WAV despite both using the same method/encoding to represent the audio data.



    ALAC and FLAC are both lossless, so they achieve exactly the same audio quality. ALAC has also been open-sourced by Apple. Determining which is better than the other requires close attention to technical details such as compression efficiency and computational requirements.

     

    I don't think there is any technical difference between the formats that's worth talking about.  FLAC does have a multi-channel mode defined, but no one uses it.  Every modern processor (desktop, mobile and embedded) has plenty of horsepower to decode and play either of them.  FLAC preceded ALAC by three years (2001 vs. 2004).  FLAC was created to pack the monster sizes of WAV & AIFF recordings into something more manageable.  ALAC came later, since Apple often goes its own way on things, and wasn't open-sourced until 2011.
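    If you want to put rough numbers on that yourself, here's a minimal sketch (my own, assuming ffmpeg is on your PATH; "master.wav" is a stand-in for any source file you have): encode the same PCM to both codecs and compare sizes. Both outputs decode back to bit-identical audio.

    ```python
    # Rough comparison of FLAC vs. ALAC compression on the same source.
    # Assumes ffmpeg is installed; "master.wav" is a hypothetical input file.
    import os
    import subprocess

    SRC = "master.wav"  # hypothetical 16-bit/44.1 kHz source

    for codec, ext in [("flac", "flac"), ("alac", "m4a")]:
        out = f"master_{codec}.{ext}"
        subprocess.run(
            ["ffmpeg", "-y", "-i", SRC, "-c:a", codec, out],
            check=True, capture_output=True,
        )
        ratio = os.path.getsize(out) / os.path.getsize(SRC)
        print(f"{codec.upper():4s}: {os.path.getsize(out):>10,} bytes "
              f"({ratio:.1%} of the WAV)")
    ```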

  • Reply 43 of 154
    sevenfeetsevenfeet Posts: 465member
    Quote:

    Originally Posted by frxntier View Post

    First of all, you don't know what you're talking about. 'Original vinyl recordings.' Nothing was ever recorded on vinyl. It was recorded on tape. You've tried to jump on the 'vinyl is the best' bandwagon but you've mixed a recording format with a delivery format.



    Second of all, vinyl is a lossy format anyway. If you think distortion and crackling are better than original pristine tape then you're a fool. So, yes, there is a better process than your made-up 'original vinyl recordings.' It's expensive hardware taking original tape recordings, with brilliant sound engineers remastering them.



    Thirdly, I don't even know why I'm replying. I hate idiots like you who know nothing about audio.



    Fourthly, Pono. Lol.

     

    It's not necessary to slam him; we all know what he meant.  Magnetic tape recording was made practical during WWII but not used commercially until 1948 (Ampex), and then popularized in recording studios a few years later when Les Paul invented the multitrack recorder.  But before that, popular music was indeed recorded direct to disc: 78 RPM records (usually shellac rather than vinyl) that were hardly high fidelity and had their own problems.

  • Reply 44 of 154
    virtuavirtua Posts: 209member
    Some interesting sound bites lol
  • Reply 45 of 154
    jlanddjlandd Posts: 873member
    Quote:

    Originally Posted by winterspan View Post



    Can an audio engineer or someone else who is qualified please comment on a few things:



    It seems like the audio quality debate perpetually rages on the internet, with one group who thinks 256 kbps AAC is just as good as lossless tracks, while at the other end of the spectrum you have people collecting 24-bit, 96/192 kHz Super Audio CD/DVD-Audio files.



    1) Will 20/24-bit sampling actually make a perceptible difference to anyone using headphones or speakers that don't cost > $5,000?


    2) Wouldn't 16-bit, 44.1 kHz (i.e. CD quality) tracks in Apple Lossless format (or FLAC) be more reasonable for the average user? Would there be any discernible difference between these CD-quality tracks and a theoretical 24-bit, high-sample-rate master track?

     

    Wrote a reply and then saw that Lorin Schultz responded perfectly.  Agree 100%.

     

    Quote:

    Originally Posted by Lorin Schultz View Post


    Here's the bottom line: Increasing the sample rate above 44.1 kHz increases the highest frequency the system can store. That's it. It does not improve the resolution of lower frequencies, reduce the size of the steps or any other meaningless gobbledygook. This increase in high frequencies does NOT affect audible frequencies through additional frequency interactions because those, if they existed, and if they were audible, were CAPTURED BY THE MICROPHONE AT THE TIME OF RECORDING. If you think there's something going on above 22 kHz (this world contains almost nothing over 10 kHz, much less 20) and you think you can hear it (and still think so after listening to a tone generator sweep up to that point), by all means, buy high-sample-rate recordings. Otherwise, refuse to be sucked in by marketing bullshit.


    Yep.  If you've got content there, it will hold useful info.  If not, it won't.  Though I think saying there's almost nothing above 10 kHz is probably the coffee talking   :  )   There's valuable content between 10 and 20 kHz.
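    For anyone who wants to actually run the tone-generator test Lorin mentions, here's a minimal sketch (mine, not from either post) using numpy and the standard-library wave module: it writes a 30-second 20 Hz to 20 kHz logarithmic sweep to a 16-bit/44.1 kHz WAV. Wherever the sweep disappears on your own gear is your practical ceiling.

    ```python
    # Write a 20 Hz -> 20 kHz logarithmic sine sweep to sweep.wav
    # (16-bit mono, 44.1 kHz) so you can test your own hearing/gear.
    import wave
    import numpy as np

    RATE, SECONDS = 44100, 30
    f0, f1 = 20.0, 20000.0
    t = np.linspace(0, SECONDS, RATE * SECONDS, endpoint=False)

    # Phase of a log sweep: instantaneous frequency rises from f0 to f1.
    k = np.log(f1 / f0)
    phase = 2 * np.pi * f0 * SECONDS / k * (np.exp(t / SECONDS * k) - 1)
    pcm = (0.5 * np.sin(phase) * 32767).astype(np.int16)  # -6 dBFS for safety

    with wave.open("sweep.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 2 bytes = 16-bit samples
        w.setframerate(RATE)
        w.writeframes(pcm.tobytes())
    ```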

     

    Quote:
    Originally Posted by Lorin Schultz View Post


    More bottom line: Increasing the word length from 16 bits to 24 bits lowers the volume at which the signal turns to noisy hash. Period. The End. Nothing else. For a classical piece with 100 dB dynamic range this can be beneficial because the really, really, really, really, really quiet parts will sound less grainy (and if you can hear them, you'd better have a seven-thousand-watt amplifier for when the loud part comes in -- do the math: twice the power for every 3 dB). For a Pop piece with the dynamics compressed so hard that the waveform looks like a cylinder, the net benefit of more bits is zero zilch nada FA poodly. Nothing. Increasing dynamic range doesn't magically improve other characteristics.


    This is the whole bit-depth discussion in a nutshell.  The original recording should be at the highest bit depth currently available.  But once you've summed the tracks to a mixed piece of music where there are no drops in level to nearly the noise floor, there's nothing riding in those lowest levels to use 24 bits.  And in modern acoustic music, where each track is compressed and limited and the master is compressed and limited, the dynamic range is lessened enough to nullify any advantage of the final playback product being 24-bit, even if it's so subtle as to be barely perceptible.  Even if nothing were compressed, the instruments and vocals combine into a single wave whose dynamic range is a fraction of what the individual tracks have.  And forget about any music in any genre with a drummer.  The song never goes more than 10 dB below its peak, and usually the wave looks like a flattened-out brick raised to -0.02 dB, moving it even further from the realm where 24-bit playback even comes into the equation.
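    The arithmetic behind all of this is short enough to run yourself. A quick sketch using the standard rules of thumb: about 6.02 dB of theoretical dynamic range per bit of word length, and double the amplifier power for every +3 dB of level (the point from the quote above).

    ```python
    # ~6.02 dB of dynamic range per bit of word length (standard rule of thumb).
    for bits in (16, 24):
        print(f"{bits}-bit: ~{6.02 * bits:.0f} dB theoretical dynamic range")

    # Every +3 dB of playback level needs roughly double the amplifier power,
    # so a peak N dB above a 1 W listening level needs about 10^(N/10) watts.
    for n in (10, 20, 30):
        print(f"peak +{n} dB over a 1 W baseline -> {10 ** (n / 10):,.0f} W")
    ```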

     

    Record at 24 bits.  Use 32- or 64-bit internal processing engines, like all DAWs have.  Keep files at 24-bit if there's more processing to be done.  But for the final product to be 24-bit is pointless for 99.5% of released music when you realize where the levels are.  Good-sounding D/As on the listener's end (as well as what they're listening on and what it is they're listening to, obviously) are the only things that make a difference in the listening experience.  Releasing 24-bit files to the public is just grasping at more ways to get the public to buy the same music again.
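    As a concrete (hypothetical) version of that last step, here's a sketch of requantizing a float mix bus down to 16 bits with TPDF dither, which is the usual way the low-level "grain" gets traded for benign noise rather than correlated distortion:

    ```python
    # Requantize a 32/64-bit float mix to 16-bit with TPDF dither (a sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    def to_16bit(mix: np.ndarray) -> np.ndarray:
        """mix: float samples in [-1.0, 1.0] off a float DAW mix bus."""
        # TPDF dither: sum of two uniform variables, spanning +/-1 LSB.
        dither = (rng.uniform(-0.5, 0.5, mix.shape)
                  + rng.uniform(-0.5, 0.5, mix.shape))
        q = np.round(mix * 32767.0 + dither)
        return np.clip(q, -32768, 32767).astype(np.int16)

    # Example: a -60 dBFS sine (a quiet passage) survives requantization
    # as noise-floor hiss instead of harmonic distortion.
    t = np.arange(44100) / 44100.0
    quiet = 10 ** (-60 / 20) * np.sin(2 * np.pi * 1000 * t)
    print(to_16bit(quiet)[:8])
    ```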


  • Reply 46 of 154
    asciiascii Posts: 5,936member

    4K movies next please.

  • Reply 47 of 154
    jasenj1jasenj1 Posts: 923member

    I'm not a professional audio engineer, but I play one on Sunday mornings; I run the recording board for my church. We use a 128-channel Pro Tools rig and record at 24-bit/48 kHz. We use a dedicated recording room with pretty decent JBL studio monitors.

     

    IMHO, at the recording & mixing stage, using high sample rates & bit depths makes a lot of sense. There is a lot of processing (math) going on, and artifacts can creep into the audible range.

     

    The recording, mixing, & mastering phases have a HUGE effect on the final sound. A bump in a frequency range here, a bit of compression there, can really change how something sounds. I once talked to a professional musician and mentioned that their albums sounded poor. He said in the studio they sound awesome, but they get mastered so as not to break cheap speakers. I also read somewhere that someone sent out a master CD for duplication and the duplicates came back sounding different from the master - the duplication facility had applied EQ & compression!

     

    So there's a pretty long production chain that happens before music gets turned into 16/44.1 PCM or 128 kbps MP3 or whatever you buy. If the engineers are mixing & mastering for earbuds, the music is never going to sound good on $5,000 home rigs. But something recorded, mixed & encoded with attention to fidelity can sound pretty good even at lower bit rates.

     

    What's really needed - and I think we're getting it - is files designed for earbuds (256 kbps AAC) and files designed for home rigs which have better frequency response. The earbud folks can have their volume-maximized, bass-enhanced files for listening on the bus or train. And the audiophiles can have wide-dynamic-range, minimally compressed & EQed files for showing off their expensive rigs.

     

    - Jasen.

     

    P.S. Another step would be to have sonic profiles for speakers & headphones like we have color profiles for displays & printers.
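    That profile idea can be sketched in a few lines today. Here's a rough illustration (all filter values made up for the example): a "speaker profile" is just a list of peaking-EQ bands applied at playback, built from the well-known RBJ audio-EQ-cookbook biquad formulas.

    ```python
    # A "sonic profile" as a list of peaking-EQ corrections (values invented).
    import numpy as np
    from scipy.signal import sosfilt

    def peaking_eq(fs, f0, gain_db, q=1.0):
        """RBJ audio-EQ-cookbook peaking filter as one normalized biquad."""
        a = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b0, b1, b2 = 1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a
        a0, a1, a2 = 1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a
        return np.array([[b0 / a0, b1 / a0, b2 / a0, 1.0, a1 / a0, a2 / a0]])

    # Hypothetical profile for some earbud: tame a 3 kHz peak, lift the lows.
    PROFILE = [(3000, -4.0, 2.0), (100, +2.0, 0.7)]  # (Hz, dB, Q)

    def apply_profile(samples, fs=44100):
        for f0, gain_db, q in PROFILE:
            samples = sosfilt(peaking_eq(fs, f0, gain_db, q), samples)
        return samples

    # Example: run one second of test noise through the profile.
    x = np.random.default_rng(0).standard_normal(44100)
    y = apply_profile(x)
    ```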

  • Reply 48 of 154
    tallest skiltallest skil Posts: 43,388member
    Quote:

    Originally Posted by ascii View Post

    4K movies next please.

     

    We’ll need H.265 for that.

  • Reply 49 of 154
    solipsismxsolipsismx Posts: 19,566member
    I'm glad to see [@]winterspan[/@] posting. I learned a great deal about cellular connectivity from him back around 2007 and 2008.

    Quote:
    Instead of writing Apple's codex it should have been Apple's AAC codex. Sorry for that.

    Hold up! So when I've been writing ALAC, which stands for Apple Lossless Audio Codec, you're 1) reading that as AAC, a lossy audio codec, and 2) thinking that Apple is somehow the owner or creator of AAC? WHAT?! WHAT?! WHAT?!

    First of all, if ALAC is lossless then how is the result any different from FLAC for the same source? The compressed file sizes and the processing needed to compress with each lossless codec differ, but I don't think by much, and I seem to recall ALAC being better in both cases.

    Secondly, Apple had nothing to do with the creation or naming of AAC; they are simply the biggest user of the standard, which is clearly better than MP3. Here's a brief history:
    AAC was developed with the cooperation and contributions of companies including AT&T Bell Laboratories, Fraunhofer IIS, Dolby Laboratories, Sony Corporation and Nokia. It was officially declared an international standard by the Moving Picture Experts Group in April 1997. It is specified both as Part 7 of the MPEG-2 standard, and Subpart 4 in Part 3 of the MPEG-4 standard.
  • Reply 50 of 154
    asciiascii Posts: 5,936member
    Quote:

    Originally Posted by Tallest Skil View Post


    We’ll need H.265 for that.


    Yep. These guys have already implemented H.265 on the Mac so I'm sure Apple can too: http://www.divx.com/en/software/player

    But then Apple would probably prefer hardware decoding for battery reasons.

  • Reply 51 of 154
    evilutionevilution Posts: 1,399member
    Quote:

    Originally Posted by Smallwheels View Post



    I'm looking forward to experiencing music from the Pono music player. It will be better quality than this alleged Mastered for iTunes product. Pono will use FLAC files and be capable of using other industry standard files of lesser quality.



    I doubt any process will be as good as original vinyl recordings on a good system but Pono will certainly be the top of the line standard for a while to come. They will debut in the summer of 2014.

    I never understand people arguing 2 completely different sides. Either you want high quality lossless sound OR you want to listen to hiss, pop and crackle over the top of a song. 

  • Reply 52 of 154
    rob55rob55 Posts: 1,291member

    As others have stated, all Apple needs to do is start selling music in 44.1/16 ALAC and I'll buy all my music from them.

  • Reply 53 of 154
    solipsismxsolipsismx Posts: 19,566member
    ascii wrote: »
    Yep. These guys have already implemented H.265 on the Mac so I'm sure Apple can too: http://www.divx.com/en/software/player
    But then Apple would probably prefer hardware decoding for battery reasons.

    I think it would need to be a commonality in iDevices before we see Apple update iTS with 4K videos or recompress their catalog to be smaller files. I seem to recall that the Galaxy S4 had the H.265 decoder last year. I wonder if the Galaxy S5 has it this year.
  • Reply 54 of 154
    Quote:

    Originally Posted by EricTheHalfBee View Post


    I've been an audiophile since the late '70s. By audiophile I mean someone who's really into music and listening to it on a high-end system. However, I'm not one of those idiots who claims they can hear the difference between a $50 speaker cable and a $2,000 speaker cable, or that a $50,000 amplifier will sound better than a $2,000 amplifier. My entire system (which happens to be mostly Canadian-made) is worth around $10,000, which I consider the point at which you can achieve fantastic sound quality and which I feel is not going to be really improved upon by spending more (much to the chagrin of "true" audiophiles who think $10,000 is a good starting point for a single piece of gear).

     

    I paid $1,000 for my first (and only) high-end turntable and cartridge back in 1982, and it sounded fantastic. You didn't have to play it loud for it to sound good. I also remember the first CD players that came out. While they had fantastic specs on paper, they didn't always sound good. This had a lot to do with how the DACs were made in the early days (I found many to sound rather shrill). DAC quality has greatly improved over the years, and today you can get a $5 chip that sounds as good as a $2,000 dedicated DAC from the early days, yet I still hear "audiophiles" claim they can hear artifacts in digital recordings as if they were still listening to gear from the early '80s.

     

    It's also funny that Sony/Philips claimed that 44.1 kHz was a high enough sample rate to capture all the sound. Yet Sony made digital studio recorders that could also record at 48, 88.2 and 96 kHz. What would be the point of recording at higher sampling rates if 44.1 kHz was "good enough"?

     

    I hope Apple does offer high quality tracks for download. While I have a lot of music from iTunes, I still buy CDs of music I consider worthwhile (where the artist spent time actually creating something that sounds good). While audiophiles can argue about the differences between two obscenely overpriced components, anyone can hear the difference between a compressed MP3 and the original on a reasonably good system. Most people just never bother to actually do a side-by-side, but if they did they'd soon realize that MP3s really aren't as good.

     

    If Apple does this then I'll finally stop buying physical media for my music.


    +1 on this.  I've been hoping that Apple would sell HD music for years.  They certainly can, and we're long past the point where the size of the files from a download standpoint is a problem.  After all, this is the streaming video/Netflix era.  The big problem is that, from a practical standpoint, the only way to accurately play back HD music in the Apple universe has been iTunes playing through a USB DAC.  According to internet sources, AirPlay converts all music being streamed to 16-bit/44.1 kHz regardless of what format it began in.  This makes a lot of sense because the downstream device only has to worry about PCM audio and not about algorithms to convert it from MP3 or AAC to PCM.  But while a 256k AAC file gets upsampled on the way to the Airport Express or the embedded chip in your receiver, HD music right now gets downsampled from 24 bits wide and 48-192 kHz back down to 44.1.

     

    If Apple sold HD music, would AirPlay get updated to stream it in high fidelity too?  Only Apple knows for sure.
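    For the curious, the down-conversion described above is a one-liner with a polyphase resampler. A sketch (not Apple's actual AirPlay code, obviously) going from 96 kHz to 44.1 kHz using the exact rational ratio, since 44100/96000 reduces to 147/320:

    ```python
    # Polyphase sample-rate conversion: 96 kHz PCM down to 44.1 kHz.
    import numpy as np
    from scipy.signal import resample_poly

    hd = np.random.default_rng(1).standard_normal(96000)  # 1 s of stand-in 96 kHz audio
    cd_rate = resample_poly(hd, up=147, down=320)
    print(len(hd), "->", len(cd_rate))  # 96000 -> 44100
    ```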

  • Reply 55 of 154
    asciiascii Posts: 5,936member
    Quote:

    Originally Posted by SolipsismX View Post

    I think it would need to be a commonality in iDevices before we see Apple update iTS with 4K videos or recompress their catalog to be smaller files. I seem to recall that the Galaxy S4 had the H.265 decoder last year. I wonder if the Galaxy S5 has it this year.

    iDevices have pretty small screens. Couldn't they serve up 4K to Macs and Apple TVs and 1080p to iDevices, at least in a transition period? It's already the case that iTunes serves SD videos if it detects your machine doesn't support HDCP, so there is precedent for customising video for the client.

  • Reply 56 of 154
    knowitallknowitall Posts: 1,648member
    I once tried to tell the difference between 320 kbps MP3 and CD.
    It was almost impossible to discern, but I finally knew it for sure.
    To my surprise, it appeared to be the MP3!
    AAC is a better compression format than MP3 and needs about half the bits per second for the same quality.
    That means that 256 kbps AAC is comparable to 512 kbps MP3 and is indistinguishable from the original.
    Keep in mind that it doesn't matter if someone else can find the difference (or claims to be able to) because only your ears count.
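    If you want to test "only your ears count" honestly, do it blind. A bare-bones ABX scorer sketch; play() here is a hypothetical stand-in for whatever player you actually use:

    ```python
    # Minimal ABX trial runner: guess whether X is the lossless file,
    # then see how likely your score would be by pure coin-flipping.
    import math
    import random

    def play(path):
        """Hypothetical stand-in: hand the file to your audio player of choice."""
        input(f"(Listen to {path}, press Enter when done) ")

    def p_by_chance(n, k):
        """Probability of getting at least k of n right by guessing."""
        return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

    def abx(lossless, lossy, n=16):
        correct = 0
        for _ in range(n):
            x_is_lossless = random.random() < 0.5
            play(lossless)                               # A: known lossless
            play(lossy)                                  # B: known lossy
            play(lossless if x_is_lossless else lossy)   # X: unknown
            guess = input("Was X the lossless file? [y/n] ").strip().lower() == "y"
            correct += guess == x_is_lossless
        print(f"{correct}/{n} correct (p = {p_by_chance(n, correct):.3f} by chance)")
    ```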
  • Reply 57 of 154
    solipsismxsolipsismx Posts: 19,566member
    ascii wrote: »
    iDevices have pretty small screens. Couldn't they serve up 4K to Macs and Apple TVs and 1080p to iDevices, at least in a transition period? It's already the case that iTunes serves SD videos if it detects your machine doesn't support HDCP, so there is precedent for customising video for the client.

    I'm thinking that…
    1. Apple will want iTS buyers to be able to use 4K on a wide range of new devices, so you can take that 4K video you DLed from iTunes on your Mac and also play it on your iPad without a lengthy recode process. Although I don't think they'd allow that at all, but would rather you re-download if one of those two had to be done, though I think they'd rather we didn't do that.
    2. I wonder if Apple's data centers are also housing the iTS video library already converted to H.265, so that when they flip the switch it's ready to go live. That would make most video about half the current size, and when they double the NAND again for a given price point, 4K will only be about double the size of a current 1080p video, which makes it doable.
    3. We're getting close to being able to record 4K on the iPhone. You can already do 8Mpx still images at about 20fps with the iPhone 5S HW with what is essentially a hack. If they can add an efficient encoder, that could be a huge win for Apple, since I doubt others have enough control over their HW to make it power-efficient and fast enough.
    4. What about being able to grab a 4K video from Netflix, YouTube, or the Videos app on your phone and push it via AirPlay to the Apple TV? Isn't that still being decoded on the iPhone and not the Apple TV?
  • Reply 58 of 154
    sevenfeetsevenfeet Posts: 465member
    Quote:
    Originally Posted by SolipsismX View Post

    I'm thinking that…

    1. Apple will want iTS buyers to be able to use 4K on a wide range of new devices, so you can take that 4K video you DLed from iTunes on your Mac and also play it on your iPad without a lengthy recode process. Although I don't think they'd allow that at all, but would rather you re-download if one of those two had to be done, though I think they'd rather we didn't do that.

    2. I wonder if Apple's data centers are also housing the iTS video library already converted to H.265, so that when they flip the switch it's ready to go live. That would make most video about half the current size, and when they double the NAND again for a given price point, 4K will only be about double the size of a current 1080p video, which makes it doable.

    3. We're getting close to being able to record 4K on the iPhone. You can already do 8Mpx still images at about 20fps with the iPhone 5S HW with what is essentially a hack. If they can add an efficient encoder, that could be a huge win for Apple, since I doubt others have enough control over their HW to make it power-efficient and fast enough.

    4. What about being able to grab a 4K video from Netflix, YouTube, or the Videos app on your phone and push it via AirPlay to the Apple TV? Isn't that still being decoded on the iPhone and not the Apple TV?


    We all know that:

     

    4K/UHD is coming (already here, really).

    A new Apple TV is coming

    HEVC/H.265 is coming (The Galaxy S4 was an early adopter on this)

     

    It's not too farfetched to speculate that the next Apple TV might have H.265 baked in.  Apple can certainly do this, and do it at the hardware level without having to acquire the technology in a separate chip.  The next-generation iPad/iPhone, I would think, would almost certainly have it.  H.265 is the best way to practically stream 4K video, but it also dramatically shrinks 1080p and 480p video as well.  With more and more video being streamed every day, the advent of new iOS devices that take advantage of this would have a noticeable effect on the bandwidth burden on the Internet as a whole (especially in the wireless space).  Then you could also adapt the existing streaming services that play on these devices, like Netflix and HBO GO, to use the new codec as well, if available (Netflix has been running tests for months).  We might not all get a 4K TV this year due to prices still being high, but a lot of us may see the benefit of HEVC/H.265 this year.
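    To get a feel for the saving on your own clips, here's a sketch assuming an ffmpeg build with libx265 and a hypothetical 1080p H.264 source "clip1080.mp4" (x265's CRF 28 is commonly treated as roughly comparable quality to x264's default CRF 23):

    ```python
    # Re-encode an H.264 clip to H.265 and compare file sizes.
    import os
    import subprocess

    SRC = "clip1080.mp4"        # hypothetical H.264 source
    DST = "clip1080_hevc.mp4"

    # Transcode video to H.265 at CRF 28; copy the audio track untouched.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265", "-crf", "28",
         "-c:a", "copy", DST],
        check=True,
    )

    for f in (SRC, DST):
        print(f"{f}: {os.path.getsize(f):,} bytes")
    ```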

  • Reply 59 of 154
    solipsismxsolipsismx Posts: 19,566member
    sevenfeet wrote: »
    It's not too farfetched to speculate that the next Apple TV might have H.265 baked in.

    I would hope so and would have expected it, but after the Fire TV release I am questioning it. Amazon put a quad-core CPU, 2GB of RAM and a powerful GPU into that device but no H.265 decoder or support for UHD 4K, so that makes me wonder if the technology is still a couple of years away from being ready for Apple, a company that isn't usually an early adopter.
  • Reply 60 of 154
    Quote:

    Originally Posted by knowitall View Post



    I once tried to tell the difference between 320 kbps MP3 and CD.

    It was almost impossible to discern, but I finally knew it for sure.

    To my surprise, it appeared to be the MP3!

    AAC is a better compression format than MP3 and needs about half the bits per second for the same quality.

    That means that 256 kbps AAC is comparable to 512 kbps MP3 and is indistinguishable from the original.

    Keep in mind that it doesn't matter if someone else can find the difference (or claims to be able to) because only your ears count.

     

    It's harder to tell higher-fidelity music from compressed music on the equipment that most people have in their homes or cars.  Let's face it, most consumer-grade music playback equipment out there isn't that good (but still better than 20-30 years ago).  But if you've invested in top-quality equipment that is either audiophile or near-audiophile, you can certainly tell the difference.  I have some music purchased from HD Tracks and Linn.  Nearly all of it sounds a lot sweeter and prettier on my two-channel audiophile rig in my living room (which includes a tube amp), but it also sounds better on my home theater setup, which has awesome tower speakers I acquired a decade ago.

     

    Most people have to deal with much lower-quality music playback sources, or what they hear from their earbuds on mobile devices (as opposed to high-end planar headphones played through a quality amp).  HD music isn't going to take the world by storm and frankly, there are plenty of good reasons to have compressed music (mobile devices, in-car reproduction, etc).  But if you have the right equipment, you can tell the difference.
