Originally Posted by kaiser_soze
But, your argument is specious.
Nope, but I'm afraid yours is sophistic.
As for the headphones, I mostly listen through a pair of Sennheiser HD580. As for the particular Sennheiser headphones in the post to which you replied, that pair happens to be a DJ model and not one of their best. The closed back has the single advantage of noise isolation, to the detriment of sound quality. The air spring effect can be used to good effect if and only if the driver itself is over damped so as to avoid a high Q resonance. And even in that case, there will still be a cavity resonance, i.e., standing waves set up between the back side of the diaphragm and the wall in back. These are the reasons that Sennheiser's better headphones always have been open back and always will be open back.
Thanks, I'm sure I'd never heard of issues of acoustics & resonance before. Did you know room acoustics are subject to similar issues? :roll eyes: Or is your argument that the person talking about his Sennheisers can't hear the difference because they're 'not one of their best' and just DJ models? (Yet presumably still better than the ear buds that come bundled with the phone/pod.)
But as for your comments, the analogy to display resolution and bit depth and so on is always one that is easy to make. But in and of itself it does not prove anything at all. And the comparison with imaging is a bogus comparison for a fundamental reason. With any image, it is always potentially possible to display it on a display with greater pixel resolution, and for this reason there is always a potential advantage to using a greater quantity of pixels in the image file. But bit depth is another matter. It translates into the amount of fine variation in brightness, hue, and saturation. There is inherently a limit to the ability of human vision to detect these differences. To keep it simple, consider the case of grey scale. Initially, as you increase the bit depth, the brightness of the reproduced image gets closer and closer to the original, i.e., is neither whiter nor blacker than the original. But at a certain point, the human eye simply can no longer perceive the difference. Double the bit depth and scan and encode again, and you cannot tell any difference at all between that copy and the previous one, or between either and the original. Common sense tells you that eventually this will happen. It is not a question of whether it will happen. It is only a question of what the bit depth has to be in order for this to happen. And once you have reached that point and are entirely certain that you have reached that point, there is absolutely no reason to increase the bit depth of the scan any further. It is the same with audio encoding, and even with perceptual encoding.
Ummm... yes, it was an analogy, not a proof. I find pictures difficult to listen to. But as for your comments, you're essentially spending a lot of energy explaining that infinite resolution isn't needed because at some point we can't perceive it? No kidding?
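For what it's worth, the grey-scale point above is easy to sketch numerically. Rough illustration only: the function names here are mine, nothing comes from a real imaging or audio library, and "error" is raw numerical error, not perceived difference. The worst-case quantization error roughly halves with every added bit, which is why past some depth the eye (or ear) stops noticing:

```python
# Sketch: worst-case quantization error vs. bit depth for a signal in [0, 1].
# Illustrative names only; not from any real imaging/audio library.

def quantize(value, bits):
    """Round a value in [0, 1] to the nearest of 2**bits evenly spaced levels."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

def max_error(bits, steps=10000):
    """Worst-case quantization error over a dense ramp of test values."""
    return max(abs(v / steps - quantize(v / steps, bits))
               for v in range(steps + 1))

for bits in (4, 8, 12, 16):
    # Worst case sits at the midpoint between two levels: ~ 1 / (2 * (2**bits - 1))
    print(f"{bits:2d} bits -> max error ~ {max_error(bits):.6f}")
```

The error shrinks geometrically, but whether any given depth is "enough" is an empirical question about human perception, which is the whole argument here.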
The question of whether perceptual encoding can be indistinguishable from the original is a moot question. The only question that is even worth considering is what amount of compression, given a specific encoding scheme, can be tolerated without introducing some artifact by which any listener would be able to hear any difference between that recording and the original. IF you are entirely certain that the bit rate that you have used is perfectly adequate, such that no person could ever detect any difference between that recording and a non-lossy recording with arbitrarily high quantization rate and word size, then there is no discernible reason to use a higher bit rate. Because, IF it is true that no one can hear the difference, THEN it is true that no one can hear the difference. The only meaningful, valid questions are what people can and cannot hear. To dismiss all perceptual coding techniques in the manner that many people do is equivalent to asserting that it is not possible, using a perceptual coding technique, to make a recording that no person would be able to recognize as different from a master using arbitrarily high quantization rate and word size. It is manifestly ludicrous to suggest that this would be the case, yet this is precisely what people are in effect asserting when they criticize perceptual encoding categorically. It is logically preposterous.
"IF it is true that no one can hear the difference, THEN it is true that no one can hear the difference." Jesus, seriously? What a bit of pedantic tossing. To summarize: infinite resolution isn't needed and at some point people can't hear the difference between compressed and uncompressed. Astonishing.
The original argument was whether the lossy low-bit-rate encodings used for PMPs are so good that nobody can tell them from the high-res audio that's out there. I've heard the difference, so I'm going to go with no, they're not that good. Unfortunately you haven't actually helped to resolve whether that's the case or where that point may be. I want access to the higher-res audio for home, while the lower bit rate on my phone is good enough for on the go. Apparently I shouldn't want the better sound because infinite resolution is too much? What!?!?