
Rumor: Apple to offer hi-res 24-bit tracks on iTunes in coming months - Page 2

post #41 of 155
Quote:
Originally Posted by Smallwheels View Post

I'm looking forward to experiencing music from the Pono music player. It will be better quality than this alleged Mastered for iTunes product. Pono will use FLAC files and be capable of using other industry standard files of lesser quality.

I doubt any process will be as good as original vinyl recordings on a good system but Pono will certainly be the top of the line standard for a while to come. They will debut in the summer of 2014.

First of all, you don't know what you're talking about. 'Original vinyl recordings.' Nothing was ever recorded on vinyl. It was recorded on tape. You've tried to jump on the 'vinyl is the best' bandwagon but you've mixed a recording format with a delivery format.

Second of all, vinyl is a lossy format anyway. If you think distortion and crackling is better than original pristine tape then you're a fool. So, yes, there is a better process than your made up 'original vinyl recordings.' It's expensive hardware taking original tape recordings and using brilliant sound engineers to remaster them.

Thirdly, I don't even know why I'm replying. I hate idiots like you who know nothing about audio.

Fourthly, Pono. Lol.
post #42 of 155
About time. This is the reason we still buy CDs.
 
post #43 of 155
Quote:
Originally Posted by Mr. H View Post


AAC is not Apple's "codex" (the word is "codec" by the way).
Both ALAC and FLAC can have metadata embedded. It is WAV that does not support metadata and that's one of the reasons why AIFF is a superior file format to WAV despite both using the same method/encoding to represent the audio data.

ALAC and FLAC are both lossless so achieve exactly the same audio quality. ALAC has also been open-sourced by Apple. Determining which is better than the other requires close attention to technical details such as compression efficiency, computational requirements etc.

 

I don't think there is any technical difference between the formats that's worth talking about.  FLAC does have a multi-channel component defined, but no one uses it.  Every modern processor (desktop, mobile and embedded) has plenty of horsepower to decode and play either of them.  FLAC preceded ALAC by three years (2001 vs. 2004).  FLAC was created to deal with the monster sizes of WAV & AIFF recordings in something more manageable.  ALAC came later since Apple often goes their own way on things, and wasn't open sourced until 2011.
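For anyone who wants to check the compression-efficiency point for themselves, here's a minimal sketch (assuming ffmpeg with its built-in flac and alac encoders is on your PATH, and using a placeholder input.wav) that encodes the same source both ways and compares the resulting sizes:

```python
# Minimal sketch: compare FLAC and ALAC file sizes for the same source.
# Assumes ffmpeg (with its flac and alac encoders) is installed and on PATH;
# "input.wav" is a placeholder for any local PCM file.
import os
import subprocess

SRC = "input.wav"  # hypothetical source file

for codec, ext in (("flac", "flac"), ("alac", "m4a")):
    out = f"out_{codec}.{ext}"
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:a", codec, out],
        check=True, capture_output=True,
    )
    ratio = os.path.getsize(out) / os.path.getsize(SRC)
    print(f"{codec.upper()}: {os.path.getsize(out)} bytes ({ratio:.1%} of the WAV)")
```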

post #44 of 155
Quote:
Originally Posted by frxntier View Post


First of all, you don't know what you're talking about. 'Original vinyl recordings.' Nothing was ever recorded on vinyl. It was recorded on tape. You've tried to jump on the 'vinyl is the best' bandwagon but you've mixed a recording format with a delivery format.

Second of all, vinyl is a lossy format anyway. If you think distortion and crackling is better than original pristine tape then you're a fool. So, yes, there is a better process than your made up 'original vinyl recordings.' It's expensive hardware taking original tape recordings and using brilliant sound engineers to remaster them.

Thirdly, I don't even know why I'm replying. I hate idiots like you who know nothing about audio.

Fourthly, Pono. Lol.

 

It's not necessary to slam him; we all know what he meant.  Magnetic tape recording was made practical during WWII but not used commercially until 1948 (Ampex), and then popularized in recording studios a few years later when Les Paul pioneered the multitrack recorder.  But before that, popular music was indeed recorded direct to disc... 78 RPM records (mostly shellac rather than vinyl) that were hardly high fidelity and had their own problems.

post #45 of 155
Some interesting sound bites lol
post #46 of 155
Quote:
Originally Posted by winterspan View Post

Can an audio engineer or someone else who is qualified please comment on a few things:

It seems like the audio quality debate perpetually rages on the internet, with one group who thinks 256 kbps AAC is just as good as lossless tracks, while at the other end of the spectrum you have people collecting 24-bit, 96/192 kHz Super Audio CD/DVD-Audio files.

1) Will 20/24-bit sampling actually make a perceptible difference to anyone using headphones or speakers that aren't > $5,000?
2) Wouldn't 16-bit, 44.1 kHz (i.e. CD quality) tracks in Apple Lossless format (or FLAC) be more reasonable for the average user? Would there be any discernible difference between these CD-quality tracks and a theoretical 24-bit, high-sample-rate master track?

 

Wrote a reply and then saw that Lorin Schultz responded perfectly.  Agree 100%.

 

Quote:
Originally Posted by Lorin Schultz View Post
 

 

Here's the bottom line: Increasing the sample rate above 44.1 kHz increases the highest frequency the system can store. That's it. It does not improve the resolution of lower frequencies, reduce the size of the steps or any other meaningless gobbledygook. This increase in high frequencies does NOT affect audible frequencies through additional frequency interactions because those, if they existed, and if they were audible, were CAPTURED BY THE MICROPHONE AT THE TIME OF RECORDING. If you think there's something going on above 22 kHz (this world contains almost nothing over 10 kHz, much less 20) and you think you can hear it (and still think so after listening to a tone generator sweep up to that point), by all means, buy high sample rate recordings. Otherwise, refuse to be sucked in by marketing bullshit.

 

 

 

Yep.  If you've got content there, it will hold useful info.  If not, it won't.  Though I think saying there's almost nothing above 10k is probably the coffee talking :)  There's valuable content between 10 and 20 kHz.
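A quick way to convince yourself of the sample-rate point above is to generate a tone below 22.05 kHz at both rates and look at the spectrum. This is just an illustrative numpy sketch, not anything from the thread:

```python
# Minimal sketch of the Nyquist point: a tone below 22.05 kHz is captured
# identically at 44.1 kHz and 96 kHz; the higher rate only adds room for
# frequencies above 22.05 kHz.
import numpy as np

def peak_frequency(tone_hz, sample_rate, seconds=1.0):
    t = np.arange(int(sample_rate * seconds)) / sample_rate
    x = np.sin(2 * np.pi * tone_hz * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

for rate in (44_100, 96_000):
    print(f"{rate} Hz sampling -> peak at {peak_frequency(10_000, rate):.1f} Hz")
# Both print ~10000.0 Hz; the extra samples at 96 kHz add nothing below Nyquist.
```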

 

Quote:
Originally Posted by Lorin Schultz View Post
 

 

More bottom line: Increasing the word length from 16 bits to 24 bits lowers the volume at which the signal turns to noisy hash. Period, The End, nothing else. For a classical piece with 100 dB of dynamic range this can be beneficial because the really, really, really, really, really quiet parts will sound less grainy (and if you can hear them, you better have a seven-thousand-watt amplifier for when the loud part comes in -- do the math: twice the power for every 3 dB). For a Pop piece with the dynamics compressed so hard that the waveform looks like a cylinder, the net benefit of more bits is zero zilch nada FA poodly. Nothing. Increasing dynamic range doesn't magically improve other characteristics.

 

This is the whole bit-depth discussion in a nutshell.  The original recording should be at the highest bit depth currently available.  But once you've summed the tracks into a mixed piece of music where there are no drops in level anywhere near the noise floor, there's nothing riding in those lowest levels that needs 24 bits.  And in modern acoustic music, where each track is compressed and limited and the master is compressed and limited, the dynamic range is reduced enough to nullify any advantage of the final playback product being 24-bit, even if that advantage were so subtle as to be barely perceivable.  Even if nothing were compressed, the instruments and vocals combine into a single wave whose dynamic range is a fraction of what the individual tracks have.  And forget about any music in any genre with a drummer.  The song never drops more than 10 dB below its peak, and usually the wave looks like a flattened-out brick raised to -0.02 dB, moving it even further away from the realm where 24-bit playback even comes into the equation.

 

Record 24 bits.  Use internal processing engines of 64 or 32 bit, like all DAWs have.  Keep files at 24 bit if there's more processing to be done.  But for the final product to be 24 bit is pointless for 99.5% of released music when you realize where the levels are.  Good sounding D/As on the listener's end (as well as what they're listening on and what it is they're listening to, obviously) are the only things that make a difference in the listening experience.  Releasing 24 bit files to the public is just grasping at more ways to get the public to buy the same music again.
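For reference, the arithmetic behind the word-length and amplifier-power comments above can be sketched in a few lines (6.02N + 1.76 dB is the standard theoretical dynamic range for N-bit PCM):

```python
# Back-of-envelope numbers behind the bit-depth discussion above.
# Theoretical dynamic range of N-bit PCM is roughly 6.02*N + 1.76 dB,
# and amplifier power doubles for every +3 dB of level.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

def power_ratio(db: float) -> float:
    return 10 ** (db / 10)

print(f"16-bit: ~{dynamic_range_db(16):.0f} dB, 24-bit: ~{dynamic_range_db(24):.0f} dB")
print(f"Extra headroom from 8 more bits: ~{dynamic_range_db(24) - dynamic_range_db(16):.0f} dB")
# The "seven-thousand-watt amplifier" point: peaks +20 dB above the average
# level need 100x the power of the average level.
print(f"+20 dB peaks need {power_ratio(20):.0f}x the power")
```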

 

 

 

post #47 of 155

4K movies next please.

post #48 of 155

I'm not a professional audio engineer, but I play one on Sunday mornings; I run the record board for my church. We use a 128 channel Pro Tools rig and record at 24/48. We use a dedicated recording room with pretty decent JBL studio monitors.

 

IMHO, at the recording & mixing stage, using high sample rates & bit depths makes a lot of sense. There is a lot of processing (math) going on, and artifacts can creep into the audible range.

 

The recording, mixing, & mastering phases have a HUGE effect on the final sound. A bump in a frequency range here, a bit of compression there, can really change how something sounds. I once talked to a professional musician and mentioned that their albums sounded poor. He said in the studio they sound awesome, but they get mastered so as not to break cheap speakers. I also read somewhere that someone sent out a master CD for duplication and the duplicates came back sounding different than the master - the duplication facility applied EQ & compression!

 

So there's a pretty long production chain that happens before music gets turned into 16/44.1 PCM or 128 kbps MP3 or whatever you buy. If the engineers are mixing & mastering for earbuds, the music is never going to sound good on $5000 home rigs. But something recorded, mixed & encoded with attention to fidelity can sound pretty good even at lower bit rates.

 

What's really needed - and I think we're getting it - is files designed for earbuds (256 kbps AAC) and files designed for home rigs with better frequency response. The earbud folks can have their volume-maximized, bass-enhanced files for listening to on the bus or train. And the audiophiles can have wide-dynamic-range, minimally compressed & EQed files for showing off their expensive rigs.

 

- Jasen.

 

P.S. Another step would be to have sonic profiles for speakers & headphones like we have color profiles for displays & printers.
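One way to picture that "sonic profile" idea: treat a profile as a set of per-band gain corrections and apply it to the signal. The band edges and gains below are invented for illustration, and a real implementation would use properly smoothed filters rather than brick-wall FFT bands:

```python
# Rough sketch of the "sonic profile" idea: treat a headphone profile as
# per-band gain corrections (in dB) and apply them in the frequency domain.
# The band edges and gains here are made up for illustration only.
import numpy as np

PROFILE = [  # (low Hz, high Hz, gain dB) -- hypothetical correction curve
    (20, 120, -3.0),       # tame boosted bass
    (120, 2_000, 0.0),
    (2_000, 8_000, +2.0),  # lift a recessed presence region
    (8_000, 20_000, 0.0),
]

def apply_profile(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / sample_rate)
    for low, high, gain_db in PROFILE:
        band = (freqs >= low) & (freqs < high)
        spectrum[band] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=samples.size)

# Example: "correct" one second of white noise sampled at 44.1 kHz.
corrected = apply_profile(np.random.randn(44_100), 44_100)
```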

post #49 of 155
Originally Posted by ascii View Post
4K movies next please.

 

We’ll need H.265 for that.

post #50 of 155
I'm glad to see @winterspan posting. I learned a great deal about cellular connectivity from him back around 2007 and 2008.

Quote:
Originally Posted by Smallwheels View Post

Instead of writing Apple's codex it should have been Apple's AAC codex. Sorry for that.

Hold up! So when I've been writing ALAC, which stands for Apple Lossless Audio Codec, you're 1) reading that as AAC, a lossy audio codec, and 2) thinking that Apple is somehow the owner or creator of AAC? WHAT?! WHAT?! WHAT?!

First of all, if ALAC is lossless, then how is the result any different from FLAC for the same source? The compressed file sizes and the processing needed for each lossless codec are different, but I don't think by much, and I seem to recall ALAC being better in both cases.

Secondly, Apple had nothing to do with the creation or naming of AAC; they are simply the biggest user of the standard, which is clearly better than MP3. Here's a brief history:
Quote:
AAC was developed with the cooperation and contributions of companies including AT&T Bell Laboratories, Fraunhofer IIS, Dolby Laboratories, Sony Corporation and Nokia. It was officially declared an international standard by the Moving Picture Experts Group in April 1997. It is specified both as Part 7 of the MPEG-2 standard, and Subpart 4 in Part 3 of the MPEG-4 standard.[5]

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #51 of 155
Quote:
Originally Posted by Tallest Skil View Post
 

 

We’ll need H.265 for that.

Yep. These guys have already implemented H.265 on the Mac so I'm sure Apple can too: http://www.divx.com/en/software/player

But then Apple would probably prefer hardware decoding for battery reasons.
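If you want to see the H.265 size argument on your own clips, a rough sketch (assuming an ffmpeg build with libx264 and libx265, and a placeholder clip.mov) is to encode the same source with both codecs at roughly comparable quality settings and compare output sizes:

```python
# Sketch of the H.265 size argument: re-encode a clip with x264 and x265 at
# roughly comparable quality settings and compare output sizes. Assumes
# ffmpeg was built with libx264/libx265; "clip.mov" is a placeholder.
import os
import subprocess

SRC = "clip.mov"  # hypothetical source clip

for codec, crf, out in (("libx264", "23", "clip_h264.mp4"),
                        ("libx265", "28", "clip_h265.mp4")):
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:v", codec, "-crf", crf,
         "-c:a", "copy", out],
        check=True, capture_output=True,
    )
    print(f"{codec}: {os.path.getsize(out) / 1e6:.1f} MB")
```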

post #52 of 155
Quote:
Originally Posted by Smallwheels View Post

I'm looking forward to experiencing music from the Pono music player. It will be better quality than this alleged Mastered for iTunes product. Pono will use FLAC files and be capable of using other industry standard files of lesser quality.

I doubt any process will be as good as original vinyl recordings on a good system but Pono will certainly be the top of the line standard for a while to come. They will debut in the summer of 2014.

I never understand people arguing 2 completely different sides. Either you want high quality lossless sound OR you want to listen to hiss, pop and crackle over the top of a song. 

post #53 of 155

As others have stated, all Apple needs to do is start selling music in 44.1/16 ALAC and I'll buy all my music from them.

post #54 of 155
Quote:
Originally Posted by ascii View Post

Yep. These guys have already implemented H.265 on the Mac so I'm sure Apple can too: http://www.divx.com/en/software/player
But then Apple would probably prefer hardware decoding for battery reasons.

I think it would need to be common across iDevices before we see Apple update the iTS with 4K videos or recompress their catalog into smaller files. I seem to recall that the Galaxy S4 had an H.265 decoder last year. I wonder if the Galaxy S5 has it this year.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #55 of 155
Quote:
Originally Posted by EricTheHalfBee View Post
 

 

I've been an audiophile since the late 70's. By audiophile I mean someone who's really into music and listening to it on a high-end system. However, I'm not one of those idiots who claims they can hear the difference between a $50 speaker cable and a $2,000 speaker cable or that a $50,000 amplifier will sound better than a $2,000 amplifier. My entire system (which happens to be mostly Canadian made) is worth around $10,000, what I consider a point at which you can achieve fantastic sound quality and that I feel is not going to be really improved upon by spending more (much to the chagrin of "true" audiophiles who think $10,000 is a good starting point for a single piece of gear).

 

I paid $1,000 for my first (and only) high-end turntable and cartridge in 1982 and it sounded fantastic. You didn't have to play it loud for it to sound good. I also remember the first CD players that came out. While they had fantastic specs on paper, they didn't always sound good. This had a lot to do with how the DACs were made in the early days (I found many to sound rather shrill). DAC quality has greatly improved over the years, and today you can get a $5 chip that sounds as good as a $2,000 dedicated DAC from the early days, yet I still hear "audiophiles" claim they can hear artifacts in digital recordings as if they were still listening to gear from the early '80s.

 

It's also funny that Sony/Philips claimed that 44.1 kHz was a high enough sample rate to capture all the sound. Yet Sony made digital studio recorders that could also record at 48, 88.2 and 96 kHz. What would be the point of recording at higher sampling rates if 44.1 kHz was "good enough"?

 

I hope Apple does offer high quality tracks for download. While I have a lot of music from iTunes, I still buy CD's of music I consider worthwhile (where the artist spent time actually creating something that sounds good). While audiophiles can argue about the differences between two obscenely overpriced components, anyone can hear the difference between a compressed MP3 and the original on a reasonably good system. Most people just never bother to actually do a side-by-side, but if they did they'd soon realize that MP3's really aren't as good.

 

If Apple does this then I'll finally stop buying physical media for my music.

 

 

+1 on this.  I've been hoping that Apple would sell HD music for years.  They certainly can, and we're long past the point where the size of the files is a problem from a download standpoint.  After all, this is the streaming video/Netflix era.  The big problem is that, from a practical standpoint, the only way to accurately play back HD music in the Apple universe has been iTunes playing through a USB DAC.  According to internet sources, AirPlay converts all music being streamed to 16-bit/44.1 kHz regardless of the format it started in.  This makes a lot of sense because the downstream device only has to worry about PCM audio and not about algorithms to convert it from MP3 or AAC to PCM.  But while a 256k AAC file gets upsampled on the way to the AirPort Express or the embedded chip in your receiver, HD music right now gets downsampled from 24-bit, 48-192 kHz back down to 16-bit, 44.1 kHz.
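To make the claimed AirPlay behaviour concrete, here's an illustrative sketch (not Apple's actual pipeline) of what converting a 24-bit/96 kHz stream down to 16-bit/44.1 kHz PCM involves, using scipy for the resampling and simple TPDF dither for the requantization:

```python
# Minimal sketch of what a 24-bit/96 kHz stream goes through if a link
# (as claimed above for AirPlay) carries only 16-bit/44.1 kHz PCM:
# resample 96 kHz -> 44.1 kHz, then quantize to 16 bits with TPDF dither.
import numpy as np
from scipy.signal import resample_poly

def to_cd_quality(samples_96k: np.ndarray) -> np.ndarray:
    """samples_96k: float array scaled to [-1, 1]."""
    # 44100 / 96000 reduces to 147 / 320
    x = resample_poly(samples_96k, up=147, down=320)
    dither = (np.random.rand(x.size) - np.random.rand(x.size)) / 32768  # ~1 LSB TPDF
    return np.clip(np.round((x + dither) * 32767), -32768, 32767).astype(np.int16)

# Example: a -6 dBFS 1 kHz tone, one second at 96 kHz.
t = np.arange(96_000) / 96_000
cd = to_cd_quality(0.5 * np.sin(2 * np.pi * 1_000 * t))
```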

 

If Apple sold HD music, would AirPlay get updated to stream it in high fidelity too?  Only Apple knows for sure.

post #56 of 155
Quote:
Originally Posted by SolipsismX View Post


I think it would need to be common across iDevices before we see Apple update the iTS with 4K videos or recompress their catalog into smaller files. I seem to recall that the Galaxy S4 had an H.265 decoder last year. I wonder if the Galaxy S5 has it this year.

iDevices have pretty small screens. Couldn't they serve up 4K to Macs and Apple TVs and 1080p to iDevices, at least in a transition period? It's already the case that iTunes serves SD videos if it detects your machine doesn't support HDCP, so there is precedent for customising video for the client.

post #57 of 155
I once tried to tell the difference between 320 kbps MP3 and CD.
It was almost impossible to discern, but I finally knew it for sure.
To my surprise, it appeared to be the MP3!
AAC is a better compression format than MP3 and needs about half the bits per second for the same quality.
That means that 256 kbps AAC is comparable to 512 kbps MP3 and is indistinguishable from the original.
Keep in mind that it doesn't matter if someone else can find the difference (or claims to be able to), because only your ears count.
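In the "only your ears count" spirit, a bare-bones blind trial can be scripted in a few lines. This sketch assumes a command-line player such as macOS's afplay and two placeholder files you've prepared yourself; swap in whatever player and files you actually use:

```python
# Minimal blind self-test: the script picks A (lossless) or B (lossy) at
# random each trial, plays it without telling you which, and tallies your
# guesses. File names and the player command are placeholders/assumptions.
import random
import subprocess

FILES = {"A": "track_lossless.wav", "B": "track_aac_256.m4a"}  # hypothetical
PLAYER = "afplay"  # macOS command-line player; substitute your own
TRIALS = 10

score = 0
for trial in range(1, TRIALS + 1):
    answer = random.choice("AB")
    subprocess.run([PLAYER, FILES[answer]], check=True)  # plays the clip
    guess = input(f"Trial {trial}: was that A (lossless) or B (lossy)? ").strip().upper()
    score += (guess == answer)
print(f"{score}/{TRIALS} correct; about {TRIALS // 2}/{TRIALS} is what guessing gives you.")
```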
post #58 of 155
Quote:
Originally Posted by ascii View Post

iDevices have pretty small screens. Couldn't they serve up 4K to Macs and Apple TVs and 1080p to iDevices, at least in a transition period? It's already the case that iTunes serves SD videos if it detects your machine doesn't support HDCP, so there is precedence for customising video for the client.

I'm thinking that…
  1. Apple will want iTS buyers to be able to use 4K on a wide range of new devices, so you can take that 4K video you DLed from iTunes on your Mac and also play it on your iPad without a lengthy recode process. Although I don't think they'd allow that at all; they'd rather you re-download the appropriate version if one of those two had to happen, and I think they'd rather we didn't have to do that at all.
  2. I wonder if Apple's data centers are already housing the iTS video library converted to H.265 so that when they flip the switch it's ready to go live. That would make most video about half the current size, and when they double the NAND again for a given price point, 4K will only be about double the size of a current 1080p video, which makes it doable.
  3. We're getting close to being able to record 4K on the iPhone. You can already do 8 MP still images at about 20 fps with the iPhone 5S HW with what is essentially a hack. If they can add an efficient encoder, that could be a huge win for Apple, since I doubt others have enough control over their HW to make it power-efficient and fast enough.
  4. What about being able to grab a 4K video from Netflix, YouTube, or the Videos app on your phone and push it via AirPlay to the Apple TV? Isn't that still being decoded on the iPhone and not the Apple TV?

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #59 of 155
Quote:
Originally Posted by SolipsismX View Post


I'm thinking that…
  1. Apple will want iTS buyers to be able to use 4K on a wide range of new devices, so you can take that 4K video you DLed from iTunes on your Mac and also play it on your iPad without a lengthy recode process. Although I don't think they'd allow that at all; they'd rather you re-download the appropriate version if one of those two had to happen, and I think they'd rather we didn't have to do that at all.
  2. I wonder if Apple's data centers are already housing the iTS video library converted to H.265 so that when they flip the switch it's ready to go live. That would make most video about half the current size, and when they double the NAND again for a given price point, 4K will only be about double the size of a current 1080p video, which makes it doable.
  3. We're getting close to being able to record 4K on the iPhone. You can already do 8 MP still images at about 20 fps with the iPhone 5S HW with what is essentially a hack. If they can add an efficient encoder, that could be a huge win for Apple, since I doubt others have enough control over their HW to make it power-efficient and fast enough.
  4. What about being able to grab a 4K video from Netflix, YouTube, or the Videos app on your phone and push it via AirPlay to the Apple TV? Isn't that still being decoded on the iPhone and not the Apple TV?

 

We all know that:

 

4K/UHD is coming (already here, really).

A new Apple TV is coming

HEVC/H.265 is coming (The Galaxy S4 was an early adopter on this)

 

It's not too farfetched to speculate that the next Apple TV might have H.265 baked in.  Apple can certainly do this, and do it at the hardware level without having to acquire the technology in a separate chip.  The next generation iPad/iPhone, I would think, would almost certainly have it.  H.265 is the best way to practically stream 4K video, but it also dramatically reduces the size of 1080p and 480p video as well.  With more and more video being streamed every day, the advent of new iOS devices that take advantage of this would have a noticeable effect on the bandwidth burden on the Internet as a whole (especially in the wireless space).  Then you could also adapt existing streaming services that play on these devices, like Netflix and HBO GO, to use the new codec as well, if available (Netflix has been running tests for months).  We might not all get a 4K TV this year due to prices still being high, but a lot of us may see the benefit of HEVC/H.265 this year.

post #60 of 155
Quote:
Originally Posted by Sevenfeet View Post

It's not too farfetched to speculate that the next Apple TV might have H.265 baked in.

I would hope so and would have expected it, but after the Fire TV release I am questioning it. Amazon put a quad-core CPU, 2GB of RAM and a powerful GPU into that device but no H.265 decoder or support for UHD 4K, so that makes me wonder if the technology is still a couple of years away from being ready for Apple, a company that isn't usually an early adopter.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #61 of 155
Quote:
Originally Posted by knowitall View Post

I once tried to tell the difference between 320 kbps MP3 and CD.
It was almost impossible to discern, but I finally knew it for sure.
To my surprise, it appeared to be the MP3!
AAC is a better compression format than MP3 and needs about half the bits per second for the same quality.
That means that 256 kbps AAC is comparable to 512 kbps MP3 and is indistinguishable from the original.
Keep in mind that it doesn't matter if someone else can find the difference (or claims to be able to), because only your ears count.

 

It's harder to tell higher fidelity music from compressed music on the equipment that most people have in their homes or cars.  Let's face it, most consumer grade music playback equipment out there isn't that good (but still better than 20-30 years ago).  But if you've invested in top quality equipment that is either audiophile or near audiophile, you can certainly tell the difference.  I have some music purchased from HD Tracks and Linn.  Nearly all of it sounds a lot sweeter and prettier on my two channel  audiophile rig in my living room (which includes a tube amp) but it also sounds better on my home theater setup which has awesome tower speakers I acquired a decade ago.

 

Most people have to deal with a lot lower quality music playback sources, or what they hear from their earbuds on mobile devices (as opposed to high end planar headphones played through a quality amp).  HD Music isn't going to take the world by storm and frankly, there are plenty of good reasons to have compressed music (mobile devices, in car reproduction, etc).  But if you have the right equipment, you can tell the difference.

post #62 of 155
I just want to know what is the best way to listen to Miley Cyrus and Justin Bieber. (expecting some fun answers)

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #63 of 155
Quote:
Originally Posted by SolipsismX View Post


I would hope so and would have expected it, but after the Fire TV release I am questioning it. Amazon put a quad-core CPU, 2GB of RAM and a powerful GPU into that device but no H.265 decoder or support for UHD 4K, so that makes me wonder if the technology is still a couple of years away from being ready for Apple, a company that isn't usually an early adopter.

 

The difference between Apple and Amazon is that Amazon designs their stuff but they don't have the resources yet to design their own silicon.  An old friend of mine is the VP at Kindle (the Kindle Fire is his baby) and I'm pretty comfortable in saying that.  Even iFixit commented in their teardown of the device that the Fire TV has a lot of "battle tested" chips in the case...things that are off the shelf and have been seen in other designs.  Apple designs its own silicon which means that there is nothing keeping them from adding H.265 decoding to the existing A7 or future A8.  Also, HDMI 2.0 is available now for manufacturers but the biggest issue is that not all modes dealing with color properties (12 and 16 bit color) are supported in silicon just yet and may not be this year.  Still, you could easily go to market with what is available now and even HDMI 1.4a has some support of 4K at 24/30 fps (good enough for TV applications).

post #64 of 155
Quote:
Originally Posted by Mr. H View Post

Digital is better in every conceivable way. Except that some people who don't understand it convince themselves that it must be worse than vinyl and go on to perform poorly or not-at-all controlled comparisons which - surprise, surprise - reinforce their original viewpoint.

...whilst it (vinyl) is worse in every way as a medium compared to CD (dynamic range, frequency response, wow & flutter, distortion) vinyl masters do not usually have their dynamic range compressed into oblivion.

 

Wholeheartedly agree.  If a vinyl recording sounds better it's because the source was mastered better, not because of the media.

post #65 of 155
Quote:
Originally Posted by Silver Shadow View Post

How much will they charge me to "upgrade" the songs I've already purchased this time? Last time it was $.69 per song or $.33 per song I think...

Then there was the aggravating issue where songs I had purchased were no longer available on the iTunes Store for whatever reason.

I hope it's a free upgrade if you're a current iTunes Match subscriber.

 

It was 30 cents per song to upgrade.  Songs no longer being available "for whatever reason" was due to licensing agreements with the recording industry.  A free upgrade is unlikely.  iTunes Match is a joke.  Does it surprise you that Apple no longer talks about iTunes Match?
post #66 of 155
Quote:
Originally Posted by SolipsismX View Post

I just want to know what is the best way to listen to Miley Cyrus and Justin Bieber. (expecting some fun answers)

 

With plugs in your ears and Gravol in your tummy.

post #67 of 155
Quote:
Originally Posted by hillstones View Post

iTunes Match is a joke. Does it surprise you that Apple no longer talks about iTunes Match?

1) In what ways is it a joke? I personally think it's great and can't imagine going back to having every single song I own be locally stored if I ever want to play them. I store, maybe, 500 songs on my iPhone and none on my iPad or Macs but I have access to them all.

2) It surprises me you'd say Apple is "no longer talking about iTunes Match" when they talked about it just recently when they announced and launched iTunes Radio.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #68 of 155

Instead of comparing the quality of recordings with other recordings, shouldn't the real comparison be with the actual performance of the music that is being recorded?

post #69 of 155
Quote:
Originally Posted by Lorin Schultz View Post

Depends how you define that.

 

Early anti-aliasing filters were not very good. It's hard to make a good, cheap, steep filter, and easier to make a good, cheap, SHALLOW filter. Raising the sampling frequency allowed the use of cheaper anti-aliasing filters that also sounded better. So systems with higher sample rates sounded better, but NOT because of the higher sampling rate. It was because the filters were better.

 

None of that is relevant anymore. Today the only benefit of higher sampling rate is an increase in the highest frequency that can be recorded. Thus the only argument is whether or not there is any benefit to recording frequencies higher than 22 KHz.

 

That was in response to the sampling rates in use with recorders at the time of the CD's introduction.  I don't recall those higher sampling frequencies getting implemented and appearing in consumer grade equipment until later.

 

Quote:

Reasonable statement, but I would ask you what you expect to hear at 96/24 that you wouldn't at 44/16? Further, what deleterious effects do you believe occur when down converting from 96/24 to 44/16?

 

I don't know what kind of magic people think is going on in recording studios that will be masked by a 44/16 release and only revealed by way of 96/24 media. The laws of physics exist in the studio the same way they do at home. There's nothing going on behind that glass that mysteriously results in sounds so quiet or high in frequency that they can only be heard at 96/24.

 

I'm thinking back to statements I recall from Bob Ludwig several years ago indicating how high res PCM simplifies the mastering process by allowing for a more direct transfer. With vinyl you're obviously having to account for the medium, but Ludwig indicated that the CD was not entirely transparent to the master either. This is something I've heard from a few professional audio engineers as well. In the few cases where I have a CD and a high res track that I know were created under truly comparable conditions, the A-B comparisons I've done fare quite well for the CD, although I have observed some subtle differences.

 

If anything, my own A-B comparisons confirm that the actual practices employed during the mixing and mastering process will matter a helluva lot more than the release format (comparing the 5.1 SACD tracks with the two-channel tracks on Concord Jazz's SACD releases provides an excellent illustration of what happens when layers of processing and compression used in the original stereo mixdown are removed). David Chesky of HDTracks.com indicated that good or bad sound quality is still dictated by the artists and labels who provide the original source tracks. Even with a high res master, most pop tracks are still dynamic range compressed. OTOH, other tracks provided to them are indeed transferred straight from the archival digital master. And for those higher quality tracks, making high res lossless tracks available to consumers puts this entire issue to bed by eliminating all of the supposed sins that occur during the mastering process. "This is the master source that you're listening to, now shut up and move on."

 

At a pragmatic level, a small niche outfit like HDTracks isn't in any position to pressure record labels or artists into getting away from heavy-handed compression or other sound sapping practices. However, if Apple makes a big marketing push with the lossless tracks, it would certainly be in their best interests to pressure the labels and artists into providing them with better master sources, and they certainly have the muscle to put real pressure in this area if they choose. And to me, that would hopefully fix the breaks in the chain that result in lousy sound quality with so many of the tracks I've purchased through the iTunes Store. I don't know who or what is at fault (the original source provided to Apple, the conversion to lossy AAC, artistic decisions, etc.), and frankly I don't care -- I just know the often subpar results.

post #70 of 155
This article is music to my ears (excuse the pun) and I really hope there's truth to it. I've been waiting for Apple to do something like this for years!

It would be just so much more convenient to be able to download Studio Master quality tracks from iTunes using my Apple ID than what I have to do now.

I'm also very glad to see that I'm not the only audiophile on these forums and that there are many others here that want this to happen too!

PS. I have purchased a number of lossy tracks and albums on iTunes because it was just too difficult to find them elsewhere, and while the sound quality on some was passable, the majority were pretty bad under scrutiny. So if this does happen, I hope Apple gives me the option to upgrade them to a higher-resolution format for a reasonable premium.
post #71 of 155
Quote:
Originally Posted by Sevenfeet View Post

It's harder to tell higher fidelity music from compressed music on the equipment that most people have in their homes or cars.  Let's face it, most consumer grade music playback equipment out there isn't that good (but still better than 20-30 years ago).  But if you've invested in top quality equipment that is either audiophile or near audiophile, you can certainly tell the difference.  I have some music purchased from HD Tracks and Linn.  Nearly all of it sounds a lot sweeter and prettier on my two channel  audiophile rig in my living room (which includes a tube amp) but it also sounds better on my home theater setup which has awesome tower speakers I acquired a decade ago.

Most people have to deal with a lot lower quality music playback sources, or what they hear from their earbuds on mobile devices (as opposed to high end planar headphones played through a quality amp).  HD Music isn't going to take the world by storm and frankly, there are plenty of good reasons to have compressed music (mobile devices, in car reproduction, etc).  But if you have the right equipment, you can tell the difference.
You're right, a better sound system makes a difference, as does the dynamic range of the music and (especially) the recording quality and A/D conversion.
I used a DDD CD and a good quality MP3 encoder. The sound system I tested it with was pretty good but was not as good as some high-end systems I've listened to.
So you could be right, but 190 kbps (or so) extra should be enough to capture the difference between the sound systems.
post #72 of 155
Quote:
Originally Posted by Haggar View Post
 

Instead of comparing the quality of recordings with other recordings, shouldn't the real comparison be with the actual performance of the music that is being recorded?

In principle, yes. In practice, this is pretty problematic.

 

The first issue is venue. Recording studios often have their own unique acoustics that ordinary people can't visit. Also many live performances occur in places that the average listener will never visit, whether it be the Musikverein in Vienna or some concert arena in London. Also, the room in which you're playing a recorded performance (like your living room) may have vastly different acoustics than where it was recorded.

 

The second issue is the performance itself. Each performance, whether it be live or a studio recording is a unique performance. For example, take a simple piece that may have been recorded several times over a performer's career, like Arthur Rubinstein playing a Chopin nocturne or Miles Davis playing "Kind of Blue."

 

I've been to plenty of live rock/pop concerts where the songs played are considerably different than the studio versions. As a matter of fact, the artist often wants the live performance to be different than the studio recording. That's part of the draw of a live rock/pop performance. For classical performances, the nuances are far more subtle, but they are there as well.

 

If one really wants to judge recordings to actual performances, one is pretty limited in the type of material that can be used for the assessment. Basically, it will come down to simple vocal music and/or a small number of unamplified musical instruments: things like piano recitals, violin sonatas, a cappella choral music, acoustic guitar/vocals, etc. by living artists who are still performing.

 

For example, I can assess a Murray Perahia recording of Bach solo piano works because he still performs live and I know what a piano sounds like because I have sat down in front of one and played.

 

Once you get into archival/historic recordings, all bets are off the table since the original performer is no longer around and typically the original recording shows its age in terms of its acoustic limitations. Maybe Murray Perahia could play some Chopin on Artur Rubinstein's piano, but you can't really compare the live Perahia performance with the historic Rubinstein recording.

post #73 of 155
Quote:
Originally Posted by Smallwheels View Post
 


I understand the crap the digital techno guys spread. I trust the opinions of the musicians who have compared the Pono to other digital music players. I'll side with them. They get to hear the master tracks that they create. When they say that Pono is better quality than other formats they've heard I believe them.

 

I've seen the video technical explanations about how humans can't tell the difference because of certain parameters and how the rounding of the steps between each digital sample makes digital just as good as analog. It just isn't true. I don't care if people think of me as a "wacked out audiophile".

 

The hardware used in digital music reproduction does have a huge effect on the quality of the sound. If Apple just adopts a higher quality file it will sound better but it just won't be anywhere near as good as a high end Pono music player.


For some years now, every time one of you golden ears pops up and claims you can hear a pin drop on the far side of the moon, I have posted a WAV file that I created that contains 223 kbps AAC segments spliced into the lossless original and asked them to identify the time codes corresponding to the splices. No one has ever been able to discern which bits are compressed and which aren't.

 

So until someone can do that, I will continue to believe that very few people, if any, can hear any quality difference between 16-bit, 44.1 kHz and higher-resolution formats.
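For anyone who wants to run that kind of splice test themselves, here's a rough reconstruction of the idea (not the poster's actual script). It assumes you already have the lossless WAV and a sample-aligned WAV decoded back from the AAC encode, with placeholder file names:

```python
# Rough reconstruction of the splice test described above: given the
# lossless track and the same track round-tripped through AAC and decoded
# back to WAV (same length, same rate), replace a few random stretches of
# the lossless file with the lossy version and note the splice points.
import numpy as np
from scipy.io import wavfile

rate, lossless = wavfile.read("track_lossless.wav")      # placeholder names
_, lossy = wavfile.read("track_aac_decoded.wav")          # must be sample-aligned

rng = np.random.default_rng()
spliced = lossless.copy()
splice_points = []
for _ in range(4):                              # four lossy segments
    start = int(rng.integers(0, len(lossless) - 10 * rate))
    end = start + 10 * rate                     # ten seconds each
    spliced[start:end] = lossy[start:end]
    splice_points.append(start / rate)

wavfile.write("challenge.wav", rate, spliced)
print("Lossy segments begin at (seconds):", sorted(round(s, 1) for s in splice_points))
```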

post #74 of 155
Quote:
Originally Posted by Haggar View Post
 

Instead of comparing the quality of recordings with other recordings, shouldn't the real comparison be with the actual performance of the music that is being recorded?

 

NO!!! Don't! You'll be sorry!

 

Seriously. I came home from a night at the symphony orchestra all excited about one of the pieces I'd heard so I put it up on the stereo.

 

ew...

 

The stereo system I love sounded as much like that orchestra as a photograph looks like a person. Blech. I was so disappointed.

post #75 of 155

In many cases, especially for catalog product, the labels will first convert from a 16-bit, 44.1 kHz Red Book master anyway, because it's either all they have or because it's too costly to remaster the original recording for a relatively small number of sales, so converting to a higher bit depth will accomplish absolutely nothing.  It's like when your TV or player up-resses a DVD to HD.  It only looks very slightly better than watching the DVD natively.

 

Secondly, even if the labels do have source 96/24 masters, 99% of consumers either don't care or can't hear the difference, especially when listening with ear buds or plugging their player into one of those table-top boxes with a 3" speaker, aside from the fact that an iPhone or iPod puts out only about 1/4 watt of power.

 

I'm an ex-recording engineer.  Years ago, I bought a standalone CD recorder that had 96/24 capability.  I was really excited about that, especially for transferring vinyl to CD-R.   But I couldn't hear any difference whatsoever and no one who I ever demonstrated the results to could ever hear the difference either, not even young people who still had all their hearing.   

 

If Apple offered such hi-res recordings at the same price, then they'd have a competitive advantage over other sites, even if it didn't actually sound any better - people would go for it anyway.   But at a higher price, the vast majority of consumers don't give a crap.   That's why DVD-Audio, Blu-ray Audio and other hi-res formats all failed.   

 

The other issue is that most pop recordings today are mixed and mastered with almost no dynamic range.  Everyone wants their recordings to sound the loudest and they want every note to sound the loudest.  If you watch the recordings play on a waveform monitor, they are basically "flatlined".  Decades ago, only recordings from The Who looked like that.  Compare that to most recordings of the '50s-'70s.  The ridiculousness is that vinyl had a dynamic range of about 35 dB and regular CD has a dynamic range of about 96 dB, but there's actually less dynamic range on most pop CDs than there was on vinyl recordings.  When there's no dynamic range, an increased bit depth is almost a moot point.  While the increased bit depth reduces quantization errors, it's unlikely it would ever be audible.  Increased sampling rates only matter if you think you can hear above 22 kHz.  Very few adults can.  It's only likely if you live in the country, don't work in a noisy environment, haven't listened to music with headphones and attend few, if any, live music shows.
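The "flatlined" mastering being described is easy to measure: peak level minus RMS level (the crest factor) collapses on heavily limited masters. A minimal sketch, assuming an integer-PCM WAV with a placeholder name:

```python
# Quick way to see the "flatlined" mastering being described: measure peak
# level, RMS level, and crest factor (peak minus RMS, in dB) of a track.
# Heavily limited pop masters tend to show crest factors under ~10 dB.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("track.wav")          # placeholder; assumes integer PCM
x = data.astype(np.float64)
x /= np.iinfo(data.dtype).max                   # scale integer PCM to [-1, 1]

peak_db = 20 * np.log10(np.max(np.abs(x)))
rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, crest {peak_db - rms_db:.1f} dB")
```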

post #76 of 155

Well, the difference between a 16-bit Redbook CD version and a 24-bit HD Tracks version of the same song is so noticeable that even your average Joe can hear it on a pair of headphones or a decent low-end stereo system.  It's VERY noticeable.

Trust me.  It's VERY noticeable without having "trained ears".  Actually, musicians don't always have better ears; they just know what the original masters sound like compared to what the compressed, downsampled versions sound like.  It's VERY noticeable.  24-bit is actually much closer to the original analog tape, or to the original 24-bit tracks if they were recorded at 24-bit, as a lot of recordings have been lately.  But you probably can't tell much difference if the original recording was tracked at 16-bit; with THOSE you might not hear much of a difference.

 

HD Tracks is mostly releasing analog tape conversions to 24-bit, and you should be able to hear a noticeable difference; it really doesn't take "Golden Ears" or a ripping expensive system to notice that difference.  You could hear those differences with a $190 DAC and a pair of $575 powered speakers, or just about any decent pair of speakers, like Paradigm, Polk, PSB, JBL, etc., that cost $400 a pair or higher running through a decent receiver.  With a decent set of headphones it's also easy to tell the difference.  It's just a matter of whether or not you want to investigate it further.  It's too bad I couldn't have you come over to my place and put a blindfold on you so you could see for yourself that it is VERY noticeable.  But you just have to see for yourself.  Go to a local stereo shop that sells the less expensive stereo equipment (NOT Best Buy), and ask them to play a 24-bit HD Tracks file through an inexpensive DAC vs a 16-bit CD or 16-bit MP3/AAC file.  See for yourself and see what you can hear.  The problem with the big box stores is they have bad listening rooms and are just too noisy, so I always hate listening to anything in those types of stores.  A great but VERY inexpensive USB DAC is made by iFi and only costs $189; it's battery powered, has a headphone jack, and can be connected to your existing stereo or to a pair of powered speakers (the Paradigm Shift A2s are great powered bookshelf speakers).

 

Be as skeptical as you can, but at least be open-minded enough to try.  That's all it takes.  Make it an expedition in sound for a Saturday afternoon.  It doesn't cost you anything to try other than some gas money and time to go to a decent audio store.  But call ahead and make sure they have something along the lines of what I'm suggesting.  And be careful: the audio bug hits you when you least expect it, and once hooked, it's a hard habit to break.  But at least it's legal. :-)


Edited by drblank - 4/11/14 at 12:39pm
post #77 of 155
Quote:
Originally Posted by drblank View Post
 

Well, the difference between a 16-bit Redbook CD version and a 24-bit HD Tracks version of the same song is so noticeable that even your average Joe can hear it on a pair of headphones or a decent low-end stereo system.  It's VERY noticeable.

 

Then the word length is NOT the only difference between the two, because if it were, it would NOT be "very noticeable."

 

If what you claim is true, I will confidently bet my car that the HD Tracks version has been remastered. There's just plain no way that an extra 48 dB of headroom is gonna make a lick of difference to a recording that never drops more than 6-10 dB below 0 dBFS. In other words, if the ONLY difference between two recordings is 16 vs. 24 bit, you won't hear any difference at all. Well, unless the level is really, really low, like -40 dBFS. Then the 24-bit version will sound better, but that's not really a "real-world" comparison.

 

This is what I mean about letting the results of uncontrolled comparisons lead to erroneous conclusions.
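The numbers behind that argument are easy to reproduce: quantize the same tone to 16 and 24 bits and compare the signal-to-quantization-noise ratio at a loud level versus a very quiet one. This is just an illustrative sketch (no dither or noise shaping included):

```python
# Numbers behind the claim above: quantize the same sine at 16 and 24 bits
# and look at the signal-to-quantization-noise ratio at a loud level
# (-6 dBFS) versus a very quiet one (-40 dBFS).
import numpy as np

def snr_after_quantizing(level_dbfs: float, bits: int, rate: int = 44_100) -> float:
    t = np.arange(rate) / rate
    x = 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * 997 * t)
    step = 2.0 / 2 ** bits                    # quantizer step for a [-1, 1] range
    noise = np.round(x / step) * step - x     # quantization error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for level in (-6, -40):
    for bits in (16, 24):
        print(f"{level} dBFS at {bits}-bit: SNR ~{snr_after_quantizing(level, bits):.0f} dB")
# At -6 dBFS even 16-bit leaves the signal ~90+ dB above the quantization
# noise; only at very low levels does the extra word length start to matter.
```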

post #78 of 155

Quote:

Originally Posted by drblank View Post
 

Well, the difference between a 16-bit Redbook CD version and a 24-bit HD Tracks version of the same song is so noticeable that even your average Joe can hear it on a pair of headphones or a decent low-end stereo system.  It's VERY noticeable.

Unless you can verify that the CD and the 24-bit tracks were transferred under comparable conditions, you cannot assert that any differences you observe are due more to the resolution than differences in the settings, levels, etc. used during the mastering process. In my experience, when the CD and high res tracks are mastered under the same conditions, the differences are not "VERY" noticeable.  If anything, my A-B listenings illustrate just how far general mastering practices have strayed away from the CD format's optimal capabilities.

 

Even David Chesky, who owns HDTracks, indicates that their tracks are limited to what the labels provide. They are just a distributor and exercise limited quality control over the tracks themselves. If the labels give HDTracks the same highly compressed masters that were used on the CD transfers, there won't be enough dynamic range to hear much of a difference.

 

That's why outfits like Mobile Fidelity should be used more as a reference point, because they control the entire chain between the archival master and the release track. MoFi also happens to use a highly customized analog-to-digital playback and encoding system, which they use for both the CD and high res encoding. Because of this, their CD/SACD hybrid releases are truer comparisons, and if anything, demonstrate just how good a CD can sound if proper care is taken during the mastering process and a reference quality analog playback setup is used. In comparing a MoFi CD track against a 96/24 PCM track issued by Classic Records, I found that I preferred the MoFi track. And that's solely on MoFi's playback chain and their editorial decisions taken during the mastering process.

 

Quote:

Originally Posted by drblank

Trust me.  It's VERY noticeable without having "trained ears".  Actually, musicians don't always have better ears; they just know what the original masters sound like compared to what the compressed, downsampled versions sound like.  It's VERY noticeable.  24-bit is actually much closer to the original analog tape, or to the original 24-bit tracks if they were recorded at 24-bit, as a lot of recordings have been lately.  But you probably can't tell much difference if the original recording was tracked at 16-bit; with THOSE you might not hear much of a difference.

 

HD Tracks is mostly releasing analog tape conversions to 24-bit, and you should be able to hear a noticeable difference; it really doesn't take "Golden Ears" or a ripping expensive system to notice that difference.  You could hear those differences with a $190 DAC and a pair of $575 powered speakers, or just about any decent pair of speakers, like Paradigm, Polk, PSB, JBL, etc., that cost $400 a pair or higher running through a decent receiver.  With a decent set of headphones it's also easy to tell the difference.  It's just a matter of whether or not you want to investigate it further.  It's too bad I couldn't have you come over to my place and put a blindfold on you so you could see for yourself that it is VERY noticeable.  But you just have to see for yourself.  Go to a local stereo shop that sells the less expensive stereo equipment (NOT Best Buy), and ask them to play a 24-bit HD Tracks file through an inexpensive DAC vs a 16-bit CD or 16-bit MP3/AAC file.  See for yourself and see what you can hear.  The problem with the big box stores is they have bad listening rooms and are just too noisy, so I always hate listening to anything in those types of stores.  A great but VERY inexpensive USB DAC is made by iFi and only costs $189; it's battery powered, has a headphone jack, and can be connected to your existing stereo or to a pair of powered speakers (the Paradigm Shift A2s are great powered bookshelf speakers).

 

Some of the issues that you cite are valid, but again, you have attributed this to the 24-bit resolution without any other basis of comparison.

 

Like I've said before, if Apple does go through with issuing lossless tracks and it results in better master sources, then I have no problem with paying more for them. The generally better mastering practices (along with multichannel) are the primary reason why I purchase SACDs when given a choice. But, my listenings have led me to conclude that resolution is way down the list when trying to zero in on factors that contribute to better sound quality. There are far more urgent issues in the post production chain, such as excessive dynamic range compression and clipping, that need addressing before we tackle the bit depth and sampling rates.

post #79 of 155

Quote:

Originally Posted by hillstones View Post
 

It was 30 cents per song to upgrade.  Songs no longer being available "for whatever reason" was due to licensing agreements with the recording industry.  A free upgrade is unlikely.  iTunes Match is a joke.  Does it surprise you that Apple no longer talks about iTunes Match?

 

Hmmm,  I recall that the purchased songs in my iTunes library were all upgraded to 256k AAC in the background. No charge. No intervention on my part at all. Of course, I originally paid $0.99 for them, and the price after the 256k upgrade stayed the same.

 

As far as iTunes Match being a joke, are you joking? Or rather, do you actually use iTunes Match? I use it every day, on my work computer and my iPhone.  10,000+ songs available anytime. When I add new music to my home media server, those tracks show up on my other devices shortly afterwards. The initial match took a long time, but iTunes Match managed to match close to 3/4 of my music collection on the first pass. The rest of my collection took five days to upload, but it's certainly a lot faster than other music locker services.

 

When I did a new iTunes install and also when I played iTunes Radio, one of the first things that showed up was the iTunes Match screen. I don't know where you have this idea that Apple doesn't talk about iTunes Match, given how often it shows up on iTunes and iTunes Radio.

post #80 of 155
Quote:
Originally Posted by Woochifer View Post
 

Quote:

Unless you can verify that the CD and the 24-bit tracks were transferred under comparable conditions, you cannot assert that any differences you observe are due more to the resolution than differences in the settings, levels, etc. used during the mastering process. In my experience, when the CD and high res tracks are mastered under the same conditions, the differences are not "VERY" noticeable.  If anything, my A-B listenings illustrate just how far general mastering practices have strayed away from the CD format's optimal capabilities.

 

Even David Chesky, who owns HDTracks, indicates that their tracks are limited to what the labels provide. They are just a distributor and exercise limited quality control over the tracks themselves. If the labels give HDTracks the same highly compressed masters that were used on the CD transfers, there won't be enough dynamic range to hear much of a difference.

 

That's why outfits like Mobile Fidelity should be used more as a reference point, because they control the entire chain between the archival master and the release track. MoFi also happens to use a highly customized analog-to-digital playback and encoding system, which they use for both the CD and high res encoding. Because of this, their CD/SACD hybrid releases are truer comparisons, and if anything, demonstrate just how good a CD can sound if proper care is taken during the mastering process and a reference quality analog playback setup is used. In comparing a MoFi CD track against a 96/24 PCM track issued by Classic Records, I found that I preferred the MoFi track. And that's solely on MoFi's playback chain and their editorial decisions taken during the mastering process.

 

Quote:

Some of the issues that you cite are valid, but again, you have attributed this to the 24-bit resolution without any other basis of comparison.

 

Like I've said before, if Apple does go through with issuing lossless tracks and it results in better master sources, then I have no problem with paying more for them. The generally better mastering practices (along with multichannel) are the primary reason why I purchase SACDs when given a choice. But, my listenings have led me to conclude that resolution is way down the list when trying to zero in on factors that contribute to better sound quality. There are far more urgent issues in the post production chain, such as excessive dynamic range compression and clipping, that need addressing before we tackle the bit depth and sampling rates.

All I know is that if the original recordings were done in analog and they were transferred to 24-bit (AIFF), I've compared plenty of them to 16-bit Red Book, AIFF and AAC files, and the 24-bit kicked the living crap out of everything else.  It wasn't even a contest.  But what do I know; I'm just listening to it, and even on a decent stereo, nothing super fancy.  It's OBVIOUSLY better.  Better bass definition, better ability to hear the subtleties of the original recording, much better clarity, etc. etc.  Everything you want.  Obviously, there are a lot of recordings that were done originally in 16-bit digital, and there isn't much they can do to improve on that.


I did read that several top mastering engineers thought Mastered for iTunes is totally acceptable and that there really isn't much of a difference between that and lossless.  The newer software is better than the original process.

 

I've done some listening tests between the newer 16-bit AAC (Mastered for iTunes) and Red Book 16-bit ripped to AIFF, and I couldn't hear any noticeable difference.  But when you get into 24-bit vs 16-bit, the differences are typically very noticeable.  I've talked to and read interviews with top mastering engineers who do mastering for HD Tracks, and they have mentioned that they've updated their converters; the new ones have much better s/n ratio, dynamic range, etc., so the converters are getting far better than what they used to use.  I know there were a few bad recordings that got onto HD Tracks in the beginning, but I think they pulled those.  But everything I've downloaded is far better than the 16-bit AIFF I got from ripping the CD on my Mac.  But I'm talking in generalities.  Yeah, I'm sure it could go the other way if the mastering studio had a crappy 24-bit converter compared to a high-end 16-bit converter, but that generally doesn't happen.  A lot of earlier 16-bit CDs really sucked.  I mean, they were horrific, and sometimes they remastered them using better converters and they sounded better, but I have yet to hear a 24-bit recording from HD Tracks that isn't better than the 16-bit.  I'm sure there are some out there, but I haven't heard one yet.  I have about 20 recordings from them so far and a growing collection from B&W's site.  The biggest problem is digital recordings originally done in 16-bit.  They can't do much with those.

 

I just do the listening test on my system; if there is a difference and it's noticeable, then it's noticeable, and if it's not, then it's not.  For every single 24-bit download I've done so far from HD Tracks, I heard a VERY noticeable difference compared to an AIFF ripped from CD.  And the difference was night and day.  Once I download the 24-bit version and it's that much better, I just delete the old one or keep it for listening to on my iPad or iPhone, since they only do 16-bit.  But for my computer-based stereo?  24-bit is the way to go.  Go listen to Santana's Abraxas; it's much better than the 16-bit lossless.
