I've seen a ton of blind tests and I've never seen anybody, including studio engineers (who are paid big money to hear tiny, tiny blips in sound), who could reliably tell the difference between 256 kbps or higher AAC (or LAME-encoded MP3) and WAV.
I've got about $700 worth of headphones, thousands of dollars worth of speakers, an audiophile external digital-to-analog converter, etc., and I know I can't.
Honestly, I can really only hear a big difference with 128kbps if there are a lot of hard transients.
The psychoacoustic models used today are actually very, very good. They can come a lot closer to transparency at 128 kbps than they could even five years ago.
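For anyone curious how "can't tell the difference" gets established: blind ABX-style comparisons just count correct identifications and ask whether the listener beat pure guessing. A minimal sketch of that scoring (the 12-of-16 and 10-of-16 numbers are made up for illustration):

```python
# Sketch of how ABX-style blind test results are scored: how likely is this
# score if the listener were flipping a coin on every trial?
from math import comb

def p_value_guessing(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example numbers only: 12/16 correct would happen by chance ~3.8% of the time,
# while 10/16 (p ~ 0.23) is entirely consistent with guessing.
print(p_value_guessing(12, 16), p_value_guessing(10, 16))
```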
I've read that people under the age of around 25 can hear some higher frequencies, but this ability quickly diminishes as you age. We old fogies certainly seem unable to hear most of what some claim to be able to.
Back in the tube TV days, when I was much younger, I could distinctly hear a high-pitched buzz coming from the TV. Nobody in my family, all much older, could hear it.
So I tend to believe much of what you say: most people simply cannot hear the difference between bit rates and compression levels. There certainly are people with better hearing than others, but those generally aren't audiophiles; they're people who have protected their hearing over many years by NOT listening to a great deal of (particularly loud) sound/music for long stretches. If anything, audiophiles have worse hearing than most, not better.
Even people who can't tell the difference should encode to a lossless format, for the simple fact that if you lose your CDs or the original WAV files, you're stuck with a lossy version. If you ever have to convert to another lossy format for whatever reason, you'll be losing even more data.
For example, those poor folks converting from lossy WMA to lossy AAC are getting screwed. Convert from a lossy format enough times and you'll be left with a very crappy-sounding music file.
Of course, not everyone has the storage space for lossless versions of their gazillion music CDs that they never listen to.
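To make the transcoding point concrete, here's a minimal sketch of generational loss, assuming the pydub library (which shells out to ffmpeg) is installed; "master.wav" and the output names are hypothetical. Each pass decodes the previous generation and re-encodes it at 128 kbps, so the encoder throws away a little more each time:

```python
# Repeated lossy re-encoding: quality only ever goes down, never comes back.
from pydub import AudioSegment

audio = AudioSegment.from_file("master.wav")   # hypothetical lossless source
for generation in range(1, 6):
    path = f"generation_{generation}.mp3"
    audio.export(path, format="mp3", bitrate="128k")   # lossy encode
    audio = AudioSegment.from_file(path)               # decode and go again
```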
I get the feeling that many people tend to think that AAC and MP3 encoders just randomly throw away digital audio information, and that 256 kbps is therefore "twice as good" as 128 kbps, etc.
It doesn't really work that way. AAC (and MP3) uses a psychoacoustic model when it converts, and 70% of what it ditches is nothing at all. And by nothing, I mean nothing: .wav puts down the same amount of information for pure black silence as it does for a raging guitar solo. MP3 and AAC don't work this way (well, they do if you don't use VBR, but that just means they throw away less info when less info is needed to encode the main audible parts). AAC and current MP3 encoders throw out a bunch of stuff that's well outside the human audible range.

The first thing they do is throw out black silence; there is a surprisingly large amount of pure non-information encoded in a .wav. They then dispense with the inaudible higher frequencies, and then the lower inaudible range (since the lower inaudible frequencies can sometimes be felt, they're cut second). Then the encoder starts cutting out extremely quiet sounds within a few milliseconds of very loud ones, since study after study has shown that humans can't hear very quick, large changes in dynamic level (i.e., it takes some time for the ear to "rebound" after a loud noise).

Just these changes alone will often get you into the 256 kbps range. And if you burned this back to .wav and then re-ripped it, you'd lose no additional audible information, since the encoder would realize, "hey, I can encode this exact file with just 256 kbps by throwing away all this info that is either black silence or looks exactly like a 256 kbps file oddly wrapped up as a .wav."
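Rough numbers behind the "same amount of information for silence as for a guitar solo" point, just to make it concrete:

```python
# Uncompressed CD-style PCM (.wav) has a fixed bitrate regardless of content.
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

pcm_kbps = SAMPLE_RATE * BIT_DEPTH * CHANNELS / 1_000   # ~1411 kbps, always
wav_mb = pcm_kbps * 240 / 8 / 1_000        # a 4-minute track as .wav: ~42 MB
aac_mb = 256 * 240 / 8 / 1_000             # the same track at 256 kbps: ~7.7 MB
print(f"{pcm_kbps:.0f} kbps PCM -> {wav_mb:.1f} MB, vs {aac_mb:.1f} MB at 256 kbps")
```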
The problems start when you get much past this point and the encoder starts throwing away information that "the vast majority of people can't hear". Things like the human voice are still almost completely unaffected at 128 kbps. It's things at either end of the spectrum, the very loud or very soft, that lose a little "definition": cymbals tend not to sound as crisp, and bass notes can get "flabby" and lose some of their tightness.
Also, you shouldn't have much of a problem as long as you use the same encoder, which will make the same choices about what to throw away every time. That is, ripping to 128 kbps with iTunes, then burning to .wav and re-ripping to 128 kbps, won't cause issues unless you do it thousands and thousands of times, where small accumulated errors can start to become a problem.
MP3 encoders were pretty bad in the '90s, and the format tends to get a bad rap because of all the really awful rips that were made back then and perpetuated through Napster. But programs like LAME are actually quite good these days (if a bit clunky to use for the average music listener), and even the iTunes encoder is very good (although it gets some grief from the super-audiophile crowd, which tends to be anti-establishment).
Even children can't hear outside of the 20-20,000 Hz range. And a good encoder can compress a file down quite a bit just by getting rid of info outside that range.
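To illustrate just the band-limiting part of that (real encoders use an MDCT plus a full psychoacoustic masking model, not a bare FFT), here's a small sketch that zeroes out spectral content outside 20 Hz-20 kHz; the 1 kHz and 21 kHz test tones are arbitrary:

```python
import numpy as np

SAMPLE_RATE = 44_100  # CD sample rate, assumed for this example

def band_limit(block: np.ndarray) -> np.ndarray:
    """Drop spectral content below 20 Hz and above 20 kHz, then reconstruct."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(block.size, d=1.0 / SAMPLE_RATE)
    keep = (freqs >= 20.0) & (freqs <= 20_000.0)
    spectrum[~keep] = 0.0          # bins an encoder simply would not spend bits on
    return np.fft.irfft(spectrum, n=block.size)

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 21_000 * t)
audible_only = band_limit(tone)    # the 21 kHz component is gone, the 1 kHz tone remains
```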
Apple seems to be sneakily increasing the bitrate, first with iTunes Plus downloads and now with a change to the default encoding bitrate, in order to require people to buy bigger iPods. The average user wouldn't be aware that by encoding their songs at 256Kbps, they can only fit half as many on an iPod as they can at 128Kbps.
Seems to me to be a risky PR move changing all of the standards and defaults to 256Kbps, as it throws off all of their song capacity claims for their iPods (which currently are based on 128Kbps songs).
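The "half as many songs" arithmetic, for what it's worth (a hypothetical 4-minute song, ignoring tag and container overhead):

```python
def song_size_mb(bitrate_kbps: int, minutes: float = 4.0) -> float:
    """Approximate encoded size of one song in (decimal) megabytes."""
    return bitrate_kbps * 1_000 * minutes * 60 / 8 / 1_000_000

for rate in (128, 256):
    size = song_size_mb(rate)
    print(f"{rate} kbps: {size:.1f} MB per song, roughly {8_000 / size:.0f} songs on an 8 GB iPod")
# 128 kbps lands near the usual "2,000 songs on 8 GB" marketing figure; 256 kbps halves it.
```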
Sneakily? They advertised iTunes Plus on their website for weeks.
Even if they hadn't done so, I wouldn't have minded being surprised to find iTunes-bought music in higher quality.
I find it hard to believe it is slower than iPhoto '09. Then again, I have over 10,000 photos and only 300 songs, so the comparison is not fair.
I have just over 7500 songs in iTunes 8 and 5500 photos in iPhoto 09. Running on MBP (late 2006) with 3GB RAM. For me iTunes is significantly slower than iPhoto.
Let's throw our hands up and pray they reinstate the option to turn off the links to the iTunes Store that annoyingly pop up in song line items when you select them. Until then, I'll stay with good old 7.7.
And I just don't get the whole 256kbps AAC thing. You can already set iTunes to encode at that bit rate when importing a CD.
Back in the tube TV days, when I was much younger, I could distinctly hear a high-pitched buzz coming from the TV. Nobody in my family, all much older, could hear it.
Is your name Carol Anne and aren't you a character from Poltergeist?
For music recording, I doubt a competent engineer would allow much (if any) information outside the 20-20,000 Hz range to make it through to the master. What would you be omitting with compression?
You generally don't cut 10-20 Hz, because you can "feel" those waves, and they make things sound a touch fuller if you're listening through a very large sound system (think 15-inch subwoofers powered by at least 10,000 watts). Also, you get harmonics above 20 kHz, because engineers don't like messing with harmonics, and they have no real reason to anyway.
Exactly right in both cases. For the highs, even though the harmonic frequency by itself may not be audible, its presence in addition to the fundamental and lower harmonics can add "coloration" to the sound. I only brought this up because it seems to contradict your assertion that people cannot distinguish between compressed and non-compressed versions of the same music. You even say that you can tell the difference yourself, if your system is capable of reproducing the lows.

Not trying to be confrontational. I enjoyed talking to you.
Eh, there's a lot of debate about how harmonics the ear can't pick up affect frequencies you can hear. I tend to come down on the "they don't do anything" side. However, it's controversial enough that engineers don't mess with them (well, that and the fact that it's a royal PITA to mess with harmonics). As for the lower-range stuff, I'd certainly not take it out if I were listening through speakers with a LOT of power. However, nothing you're going to listen to in a residential area is going to produce enough power for you to really feel 10 Hz waves.
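For a sense of why there is anything above 20 kHz to argue about in the first place, even an ordinary fundamental stacks overtones up there. A tiny illustration, using C8 (the top note of a piano, about 4186 Hz) purely as an example:

```python
fundamental_hz = 4186.0   # C8, roughly the highest piano fundamental
for n in range(1, 7):
    harmonic = n * fundamental_hz
    note = "audible" if harmonic <= 20_000 else "above the nominal 20 kHz limit"
    print(f"harmonic {n}: {harmonic:8.0f} Hz ({note})")
# the 5th and 6th harmonics land around 20.9 kHz and 25.1 kHz
```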
From what I read on MacRumors 256k will be the new default setting, rather than 128k.
It's absurd to call changing a default setting an improvement. Arrgh!
Thanks for the heads up. I'll be sure to adjust my life to fit your needs.
Cool.
You're welcome.
Could this be related to the March 24th info?
Can't wait- hopefully we get HD movies to buy at a reasonable price.
If sound is that important to you, buy the CD- it still beats lossless.
How do you figure that? It's called lossless because it doesn't lose any quality.
Look up any test comparison- there are other factors involved. The playback of a CD simply sounds better- sampling rates, D/A converters, etc.
Yeah, I think the one thing a lot of the people who talk about lossless don't get is that you'll hear the difference between a good external digital-to-analog converter and your Mac's built-in DAC long before you'll hear the difference between 256 kbps and lossless.