Originally posted by shetline
It seems that most lossy codecs have an Achilles' heel or two, some particular sounds or sequences of sounds which they don't do well at, no matter how many bits you throw at the problem.
I don't consider myself particularly sensitive to these things, but a couple of years back I played with a test signal, a short segment of electronic music with some odd, buzzy-sounding synthesized notes (sawtooth waves, perhaps), which some guy was claiming could be encoded much better with MP3 than AAC. At least with the MP3 and AAC encoders I tried at the time, he was right.
Maybe Apple has tweaked and tuned their AAC encoder since then to fix this one area of encoder weakness, but back then, it failed pretty badly on this particular bit of music. At 192 kbps, what I usually use, a very noticeable hiss was added to the background -- but only when these odd buzzy notes were playing. This was not a subtle, "gee, now I don't feel like I can tell what color shirt the singer is wearing anymore" change, but something I think most people would easily notice in a side-by-side comparison with the uncompressed original.
In this one case, even going to 256 kbps didn't help much -- the problem was still there, only reduced a little bit. Given that that kind of thing can happen, I don't think it's too surprising that for some recordings, and portions of some recordings, bit rate won't matter much. Simply that you've used compression at all will sometimes make a noticeable difference.
What we have to remember about these codecs is that they are psychoacoustic in nature. That is, they take out details we supposedly can't hear.
For the most part they work well, but the lower the bit rate, the more must be taken out. At some point, we can begin to hear it.
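The idea can be sketched with a toy "encoder" that transforms a signal to the frequency domain and throws away anything far below the loudest component, on the theory that it's masked. This is only an illustration (the flat -40 dB floor is made up); real MP3/AAC encoders use filterbanks and per-band masking models, not a single global threshold:

```python
import numpy as np

def toy_lossy_encode(signal, floor_db=-40.0):
    """Crude sketch of perceptual coding: go to the frequency domain
    and zero every bin more than `floor_db` below the loudest bin,
    assuming quieter content is masked. Not a real codec."""
    spectrum = np.fft.rfft(signal)
    mags = np.abs(spectrum)
    threshold = mags.max() * 10 ** (floor_db / 20)
    kept = np.where(mags >= threshold, spectrum, 0)
    return np.fft.irfft(kept, n=len(signal))

# A loud 1 kHz tone plus a much quieter 9 kHz tone (-60 dB relative),
# one second at a 44.1 kHz sample rate.
fs = 44100
t = np.arange(fs) / fs
loud = np.sin(2 * np.pi * 1000 * t)
quiet = 0.001 * np.sin(2 * np.pi * 9000 * t)
decoded = toy_lossy_encode(loud + quiet)

# The quiet tone falls below the floor and is discarded entirely,
# while the loud tone survives almost exactly.
err_vs_loud_only = np.max(np.abs(decoded - loud))
print(err_vs_loud_only)
```

If the quiet tone really was inaudible next to the loud one, nothing perceptible was lost; if it wasn't (the "doesn't conform to the model" case), the loss is audible. That's the whole gamble.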
Due to the nature of sampling, the higher frequencies get the fewest samples per cycle. Normally this won't matter: the Nyquist theorem works well, it has been proven many times over the years, and it underlies all uncompressed formats. But lossy codecs discard even more detail from the high frequencies. If the music conforms to the psychoacoustic and auditory averages the codecs were tuned for, all is well. If it doesn't, there can be an audible problem, and these problems usually show up in those high frequencies first. Any signal with strong high-frequency content will be affected the most: electronic music, rock with its heavy dynamic compression, etc.
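The "fewest samples per cycle" point is just arithmetic; a quick sketch at the CD sample rate:

```python
# At a fixed sample rate, a high-frequency tone is described by far
# fewer samples per cycle than a low one. Nyquist says anything below
# fs/2 is still perfectly recoverable, but there is much less slack
# up top for an encoder that starts discarding detail.
fs = 44100  # CD sample rate

def samples_per_cycle(freq_hz, fs=fs):
    return fs / freq_hz

print(samples_per_cycle(100))    # 441.0 samples per cycle
print(samples_per_cycle(15000))  # 2.94 samples per cycle
print(fs / 2)                    # 22050.0, the Nyquist limit
```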
The lower the bit rate, the earlier those problems appear, and the lower in the frequency range they appear. 128 kb/s is fine for most music, as long as it isn't listened to on wide-band equipment, that is, equipment covering the full audible frequency range with a good S/N ratio. $50 computer speakers and $10 headphones aren't likely to reveal any defects. But the better the equipment, the more likely they WILL show up.
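One blunt way low-bitrate encoders save bits is to band-limit the signal outright, and the lower the rate, the lower the cutoff tends to sit. A brick-wall FFT low-pass (a simplification; real encoders roll off more gently) shows how much of a bright, hiss-like signal is thrown away as the cutoff drops. The cutoff values here are hypothetical, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
noise = rng.standard_normal(fs)  # one second of broadband "hiss"

def energy_kept(signal, cutoff_hz, fs=fs):
    """Fraction of signal energy surviving a brick-wall low-pass --
    a crude stand-in for the bandwidth limiting of low-bitrate encoders."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[freqs > cutoff_hz] = 0
    filtered = np.fft.irfft(spectrum, n=len(signal))
    return np.sum(filtered**2) / np.sum(signal**2)

for cutoff in (20000, 16000, 11000):  # hypothetical cutoffs
    print(cutoff, round(energy_kept(noise, cutoff), 3))
```

On dull material hardly any energy lives up there and the cutoff is harmless; on bright material (cymbals, buzzy synths), a sizable chunk of the signal is simply gone, which is exactly where the audible trouble starts.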
I've found that AAC at 320 kb/s is generally pretty good, even on good equipment. But even then, occasionally some distortion creeps in.