Originally Posted by fc3
Even CD quality is, at best, only mid-fi. Remember that CD was a compromise built with what was the best technology available at the time. They were invented in the late 1970s and the first players and CDs began appearing around 1982-83. The first decent sounding CD player that I heard was a $2,500 McIntosh that appeared some time around 1985. 16 bit 44.1 kHz sampling was a big technical challenge years ago. Those specs were determined sufficient to reproduce sound for the average person's hearing. But 44.1 kHz is not enough sampling to smoothly and accurately reproduce the waveforms of high frequencies; thus resulting in brittle sound. The 90 dB dynamic range of CD is adequate, but not generous. Symphony orchestras typically have a dynamic range of 70+ dB from pianissimo to forte. Live Jazz is nearly as demanding, and in some cases, as demanding as orchestral music.
Yadda, yadda, yadda. Mostly self-proclaimed audiophile BS with a hint of truth here and there under the BS.
44.1 kHz is perfectly adequate to reproduce high-frequency sounds up to 20 kHz and a little higher (the Nyquist limit at 44.1 kHz is 22.05 kHz). Poorly implemented hardware designs and badly done recordings might not sound as good as they otherwise might, but that's not the same thing as the sampling rate itself being a problem.
Simple-minded drawings of sampled waveforms, made to look all terribly jaggy next to the beautifully smooth high-frequency sound waves pictured alongside them, are the only "evidence" there is for this stupid meme that will not die about 44.1 kHz not being "good enough" -- and they don't tell the real story. Any little bump or bulge on, say, a 20 kHz waveform represents frequency content higher than 20 kHz.
If 20 kHz is the highest frequency you can hear (a few people can hear a little higher; most people, including most self-acclaimed audiophiles, are lucky if they can hear beyond 15 kHz), then you can't hear the difference between differently-shaped 20 kHz waveforms and a simple 20 kHz sine wave. Once you properly filter a digital output, within practical engineering tolerances, the only thing that's missing -- stupid drawings be damned -- is stuff that most people can't hear, and that, even for those who can hear it, is of highly questionable musical value.
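If you'd rather compute than argue about drawings, here's a minimal numerical sketch (Python/NumPy; the 19 kHz test tone and the sample counts are my own arbitrary choices, not anything from a standard). It samples a near-Nyquist sine at 44.1 kHz, then reconstructs the waveform between the samples using the textbook sum of shifted sinc pulses -- the thing a DAC's analog reconstruction filter approximates -- and compares against the true smooth sine:

```python
import numpy as np

fs = 44_100.0           # CD sampling rate, Hz
f = 19_000.0            # test tone, below the 22.05 kHz Nyquist limit
n = np.arange(20_000)   # sample indices (~0.45 s of signal)
samples = np.sin(2 * np.pi * f * n / fs)   # the "jaggy" sample points

# Ideal reconstruction: a sum of shifted sinc pulses, evaluated on a
# fine time grid in the middle of the signal (to avoid edge effects
# from truncating an in-principle infinite sum).
t = np.linspace(9_900 / fs, 10_100 / fs, 501)
recon = np.sinc(fs * t[:, None] - n) @ samples
truth = np.sin(2 * np.pi * f * t)

print("max reconstruction error:", np.abs(recon - truth).max())
```

The jaggy samples carry enough information to rebuild the smooth 19 kHz sine essentially exactly; the tiny residual error here comes from truncating the sinc sum, not from the 44.1 kHz rate.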
Show me some double-blinded testing that proves that any but a very small number of people (who would mostly be young and unexposed to a lot of loud noise, and most of whom would not be the same people who think they have golden ears) get any real benefit out of crazy, wasteful sampling rates like 96 kHz -- if they can even detect any differences at all between that and 44.1 kHz -- and then maybe you might have a glimmer of a point to make.
As for CD's dynamic range -- which is 96 dB, not 90 -- consider this: let's presume that a reasonable audio technology stops at, or just before, producing sound levels at the threshold of pain. Between the quietest sound that you can just barely hear in an otherwise perfectly quiet room and the threshold of pain is a range of about 130 dB. I'll grant you that CDs fall a bit short of this dynamic range.
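The 96 dB figure isn't folklore; it follows directly from the 16-bit word length via the standard formula for linear PCM, 20·log10(2^bits). A two-line check in Python:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.2f} dB")  # ~96.33 dB
```

That's the theoretical ceiling for plain 16-bit quantization; dithering and noise shaping shuffle where the noise sits, but ~96 dB is the number to argue with, not 90.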
However, it is very rare that even the audiophiles who are out there buying $1000/meter speaker wire and AC power conditioners and $5000 stand-alone DACs have listening rooms with a background noise of less than 40 dB. To get a listening environment much quieter than that you need to get away from traffic, away from refrigerator compressors, away from buzzing fluorescent lights, away from cooling fans and hard drives, away from anyone talking around you. You'll be lucky to get a 20 dB environment when you listen to music.
Turn up your volume just loud enough to hear the quietest sounds a CD can make over the background noise of a 40 dB room, and the loudest sounds a CD can make will hurt you. Get yourself a nice 20 dB room instead, and while the loudest sounds won't cause immediate pain, they will be loud enough that you shouldn't listen at levels like that for extended periods of time (20 dB + 96 dB = 116 dB ~= the sound of a jackhammer operating about 2 meters from your head) if you value your ears very much -- something an audiophile presumably does value.
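Spelled out, the arithmetic for both room scenarios is just the noise floor plus the CD's range (Python; the 96 dB range and the 40/20 dB noise floors are the round figures used above):

```python
CD_RANGE_DB = 96  # 16-bit dynamic range, dB

# Set playback gain so the quietest encodable sound just clears the
# room's noise floor; the loudest sound then lands at floor + range.
for noise_floor in (40, 20):  # dB SPL room noise floors
    peak = noise_floor + CD_RANGE_DB
    print(f"{noise_floor} dB room -> peaks at {peak} dB SPL")
```

The 40 dB room puts peaks at 136 dB SPL, past the ~130 dB pain threshold; even the 20 dB room lands at 116 dB SPL, jackhammer territory.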
Implemented with decent hardware (and "decent" doesn't require megabuck components and oxygen-free copper wire), and coupled with well-made recordings, the CD standard is far above "mid fi". For the great majority of both listeners and listening environments, 44.1 kHz/16-bit sampling is perfectly suitable for producing excellent sound.
There's really only one department where the CD standard falls down: spatial realism. Basic two-channel stereo can't produce a fully realistic sound stage (unless you count binaural recordings, which need to be tuned to specific listeners to fully realize their potential, must be listened to on headphones, and produce a sonic illusion that is shattered simply by turning your head and having the whole imaginary sound stage automatically and unrealistically turn with you).
While many audiophiles do spend a whole lot of time going on (and on, and on) about the spatial qualities of music, most people don't give a damn. If they care about sound localization at all, it's for sound effects in movies, not the seating arrangement of musicians in a band or orchestra. If spatial realism matters a lot to you, fine, but it's hardly reason enough to demote CDs to mid-fi status for their lack of special-effects-grade sound localization.