Originally Posted by kaiser_soze
When you did that casual listening test between the two amps, the absolute most you proved is that the two amps sounded different. It seems unlikely that they would, and I am inclined to think that the two of you tricked yourselves into that conclusion, but even if they really did sound different, that in itself is not proof of the superior quality of one over the other.
I wish I had realized that I first tricked myself, and then tricked my wife, into the conclusion that they sounded different and that the imaging was significantly better (and the sound more pleasant). Different components can sound different, and I'll leave the superiority issue for below.
Originally Posted by kaiser_soze
It is a well-known fact that many people prefer the different sound of tube amps. Over the decades various reasons have been proposed for why tube amps would, in theory, sound more accurate, but none of those supposed reasons stand up to careful scrutiny.
People debate 'accurate' on either side of the tube / solid-state divide, but fuller and smoother? Sure. I've heard the difference there too... not that you're going to believe it.
Originally Posted by kaiser_soze
The difference that is most obvious by far is the difference in the spectral makeup of the harmonic distortion when the amps are driven to clipping, i.e., whether the harmonic components are even-order or odd-order. Here there are differences even within tube amps, so it does not make sense to attribute the effect to the "softer" distortion of one type of clipping over another, and that's before even considering the question of whether the onset and severity of clipping is the same for the two amps. The point is that even if you were truly able to hear a difference, that in and of itself does not constitute proof that the more expensive amp is more accurate than the less expensive amp. Until you can prove otherwise, you have to allow the possibility that you like the sound of a less accurate amp, and the conundrum is that through listening tests alone, it is not possible to prove otherwise. It is a conundrum.
Interestingly, for me it's an outright nundrum. Not if I "were truly able to" hear it. You sound like a color-blind person telling me that red and green are the same. There isn't any question about it, it wasn't a subtle "maybe it's there" difference, there was no friggin' clipping, and both amps were class AB, for god's sake. For the imaging to be better, presumably something in the timing/phase of the signal was more intact. Why? I don't have the amp schematics, but it's just possible that the design and better components of one resulted in better sound. But perhaps 'better' amps are, as you speculate, less accurate than cheaper ones. Seems like cheap ones should then be able to pull off 'better' sound easily, but what do I know.
Of course, none of your tube/solid-state pedagogy, or your disbelief in audible differences between amps, has much to do with whether you (or others) can hear the difference between 128 or 256 kbps AAC and 24/192 audio.
Originally Posted by kaiser_soze
The stuff about the fact that image files can be converted in ways such that the difference is visually apparent does not prove anything at all about the subject at hand and has no relevance whatsoever. Unless of course your point was to argue that it is in fact possible to apply perceptual encoding using a very low bit rate such that the difference would be obvious.
Ah, so a demonstration of what perceptual compression does in other media is irrelevant?
The success of perceptual compression is largely determined by what is in the source material. In compressed digital images there are artifacts, much more so in high-frequency (high-contrast / sharp-edged) areas, and often across areas of similar hue, where you see blocking. To remove these, you sometimes have to turn the 'quality' so high that you effectively save nothing in size.
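You can watch that trade-off directly if you're curious. Here's a minimal Python sketch (assuming Pillow and NumPy are installed; "photo.png" is a hypothetical input file) that round-trips an image through JPEG at a few quality settings and reports file size against the pixel error left behind:

[code]
# Minimal sketch: JPEG quality vs. file size vs. pixel error.
# Assumes Pillow and NumPy; "photo.png" is a hypothetical input file.
from io import BytesIO

import numpy as np
from PIL import Image

original = Image.open("photo.png").convert("RGB")

for quality in (30, 60, 90):
    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    size = buf.tell()  # bytes written at this quality setting
    buf.seek(0)
    decoded = Image.open(buf).convert("RGB")

    # Per-pixel absolute error between the original and the round trip.
    diff = np.abs(np.asarray(original, dtype=np.int16)
                  - np.asarray(decoded, dtype=np.int16))
    print(f"quality={quality}: {size} bytes, "
          f"mean error={diff.mean():.2f}, max error={diff.max()}")
[/code]

If you save that diff array back out as an image, you'll see the error piling up exactly where you'd expect: along sharp edges, and across smooth gradients that get quantized into blocks.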
In both images and audio, parts of the signal are thrown away using perceptual weighting, in an attempt to remove the information deemed insignificant. And there are clearly parts of the audio that are more or less heavily impacted by the compression - cymbals, strings, and brass - much as different areas of an image are more or less susceptible to artifacts.
There are many resources on the net that cover what kinds of artifacts different compression schemes introduce. The fact that image compression produces visible artifacts is intended simply as an analogy - the lack of visualization around audio compression definitely makes it less photogenic. If you listen to recordings with dozens of instruments, like an orchestra - with slight tuning, positioning, and timing variations across them, and a triangle or cymbal tossed in here and there - you may also find that iTunes is not the end-all of audio quality.
Now if you played a 24/192 or SACD signal vs. that same signal reduced to 16/44.1 and then compressed, and re-digitized both at high resolution, you would find differences in the waveforms. The fact that a re-digitized signal is demonstrably different at the output end - the sound going to the listener - seems relevant to anyone listening to that music. Unlike some audio tweaks like cryo-treated outlets, magnetic dampeners, lava lamps, or whatever mumbo-jumbo, there are measurable differences in the output signals that result from various bit rates and compression schemes. It's totally illogical to assert that nobody could possibly hear the differences at the data rates being discussed. It's fine to assert that you can't hear the differences. That's not a problem - it's not like everyone has the same sense of vision, hearing, taste, smell, etc.
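If you'd rather measure than argue, here's a minimal sketch of that round trip in Python (assuming the ffmpeg CLI, NumPy, and SciPy are available; "master.wav" is a hypothetical high-resolution source). One caveat: AAC encoders add a small delay and padding, so a rigorous comparison has to time-align the two signals first; this sketch glosses over that, so treat the number as a rough upper bound on the residual:

[code]
# Minimal sketch: run a master through a lossy round trip, then
# measure the residual. Assumes the ffmpeg CLI, NumPy, and SciPy;
# "master.wav" is a hypothetical high-resolution source file.
import subprocess

import numpy as np
from scipy.io import wavfile

# Encode to 128 kbps AAC, then decode back to PCM WAV.
subprocess.run(["ffmpeg", "-y", "-i", "master.wav",
                "-c:a", "aac", "-b:a", "128k", "lossy.m4a"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "lossy.m4a", "decoded.wav"], check=True)

rate_a, original = wavfile.read("master.wav")
rate_b, decoded = wavfile.read("decoded.wav")
assert rate_a == rate_b, "resample first if the rates differ"

# Compare over the overlapping region (encoder delay is ignored here).
n = min(len(original), len(decoded))
residual = original[:n].astype(np.float64) - decoded[:n].astype(np.float64)
print(f"RMS of the difference signal: {np.sqrt(np.mean(residual ** 2)):.1f}")
[/code]

The residual won't be zero, which is the whole point: something measurable was thrown away. Whether any given pair of ears can pick it out is the only real question, and that one you can't settle by assertion.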
Because it's a challenge to have someone else hear exactly what you do - different ears, different playback equipment - people who either can't or just haven't heard the difference seem to feel justified in saying it's simply not possible for anyone else to do so. If you can't tell the difference... Well, the good news is that you'll save some money on stereo equipment!