I heard an odd blip in the EQ somewhere around the 50-second mark, and only for a few seconds.
Speaking frankly, though, this sample contains a LOT of studio wizardry (including post-processing). Unless I knew the "original" recording well (and I do not), I could not presume to distinguish a deliberate artistic choice from a compression artifact (i.e., a bitrate change).
An experiment like this might be more telling with, say, a live classical performance (even solo piano), recorded with as little technical intrusion as possible.
Great, you had a go, unlike a few people I could mention. No edit at that point. It's probably a mixing thing. I will say that there are more than 6 spliced segments in the file.
I know this sounds counter-intuitive, but sparse, classical, solo-instrument material is actually easier to compress than really busy music like heavy metal etc. I originally thought it would be the other way round too, until I did some research.
Five years on and I still haven't found anyone who can hear the supposed quality difference between segments in that file, not even people who claim the difference between even 320 kbps and uncompressed is vast to them.
Some time ago I created an audio file that had sections that were from a 223 kbps AAC rip of the original CD track and sections that were from a lossless WAV rip. You can grab it here.
I would be very interested to hear if anyone thinks they can discern where the quality transitions.
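For anyone curious how such a file can be put together, here's a rough Python sketch of the splicing step. Everything here is illustrative: in practice you'd decode the AAC back to WAV first so both sources line up sample-for-sample, and the segment boundaries below are just toy values, not the actual splice points in my file.

```python
# Hypothetical sketch: alternate segments between a lossless source and a
# lossy-decoded copy at chosen splice indices. The sources are stand-in
# lists here; real code would operate on decoded sample arrays.

def splice(lossless, lossy, boundaries):
    """Return samples that switch source at each boundary index.

    Starts on the lossless source; each boundary flips to the other one.
    """
    out = []
    sources = (lossless, lossy)
    current = 0
    start = 0
    for b in boundaries + [len(lossless)]:
        out.extend(sources[current][start:b])
        current ^= 1  # flip between lossless and lossy
        start = b
    return out

# Toy demonstration with tiny "sample" lists:
a = [0] * 10          # stands in for the WAV rip
b = [1] * 10          # stands in for the decoded AAC rip
mix = splice(a, b, [3, 7])
print(mix)            # [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
```

The only fiddly part in real life is keeping the two decoded streams sample-aligned, since some encoders add padding at the start.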
There is a change in quality at around 1:02. The sound gets duller, the bass is not as full and round and gets a bit distorted, and the guitar loses some of its spaciousness. The sound moves more to the center, with less room around the instruments.
This is on my Sennheiser HD800/Woo Audio WA-2.
Until they come up with a perfectly calibrated, flat-frequency-response, distortionless, totally objective, and genre-neutral human brain, all of these interpretations are purely subjective and heavily colored by personal preferences. Adding another million claims and rebuttals to the mix will not change a thing.
I am not at all surprised with the results of these listening tests with the Pono.
There is a good reason 16-bit/44.1 kHz was chosen for CD. It covers the hearing range of the vast majority of humans, with a bit of extra thrown in (see the Nyquist theorem). The only reason to choose 24-bit/96 kHz (or even 24/192) is for use within digital audio workstations, as all the mixing, effects, etc. are performed using maths, and working at the higher bit depth/sampling rate means fewer rounding errors, less distortion, etc. as signals are mixed/summed together and so on, before final conversion/dithering to 16-bit/44.1 kHz.
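If it helps, here's the back-of-envelope arithmetic behind that, using the standard ~6.02 dB-per-bit SNR figure for an ideally dithered quantizer:

```python
# Theoretical dynamic range of an N-bit quantizer with ideal dither:
# roughly 6.02*N + 1.76 dB (the textbook SNR formula for quantization).

def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
# 16-bit: ~98.1 dB  (already beyond the noise floor of any listening room)
# 24-bit: ~146.2 dB (useful headroom while mixing, overkill for playback)
```

Which is the whole point: 16 bits already exceeds what playback conditions can reveal, while the extra bits only matter before the final dither down.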
Of course modern music and 'mastering' (loudness wars) have ruined any chance of quality audio reproduction with their sausage-shaped audio dynamics/waveforms, crushing the music into a tiny, but LOUD, dynamic range. All bets are off until that is fixed. Classical music is probably better for critical listening, with its wide dynamic range and no need to 'stand out from the crowd on the radio'. But it also all depends on how the music was recorded in the first place.
I find it difficult with most material to tell the difference between a well-encoded 320 kbps MP3 and lossless 16-bit/44.1 kHz FLAC/AIFF using my Sennheiser HD650s or Genelec 1030As (recording & mixing). Blind listening tests have shown again and again that the vast majority cannot hear any differences above this either. Of course our hearing degrades fairly severely with age (loss of high frequencies), so the only people who might perhaps appreciate some slight improvement are probably all under 15 years old and are unlikely to have developed critical listening skills at that point anyway.
There have been blind listening tests ( surely a fair way to test hearing ) where audio professionals/experts/"golden ears" cannot hear the difference between an old coat-hanger and "oxygen free" cable connecting speakers. It's these blind listening tests ( not just the funny coat-hanger example ) and my own personal experience that cause me to be a little dubious of the claims of much of the 'Audiophile' industry.
That is not to say that higher-quality equipment cannot improve the sound of 16-bit/44.1 kHz; it most definitely can. But in my experience this is mostly where there are moving parts/coils, e.g. speakers and headphones! These two items will make the biggest, most noticeable difference in sound 'quality' for the most people, i.e. the point at which sound is converted from an electrical signal to a moving wave. Anything else is, for the most part and for the majority of human beings, in the ear of the beholder.
Now that my 128GB has effectively replaced my iPod Classic as my car music system, I was wondering if I could use it to store lossless audio (ALAC) and get it working as a hi-res player.
Does the line-out for the Classic provide enough fidelity to a decent Receiver for me to get good sound, or is the iPod limited owing to the size and other engineering decisions made to achieve its primary purpose as an MP3 player?
What about using some good headphones with ALAC on the Classic? I know SQ is highly subjective but there must be some quantifiable difference, right? Or is it negligible?
The line out from your classic would probably be indistinguishable from any CD player, so yes, using it as an input to a HiFi system is a very valid strategy. I know I can't hear a difference between my 3rd gen. iPod and my CD player when I have compared them as sources on my system.
You might also want to consider getting an AirPort Express as an input to a HiFi, as you can stream music to it from an iPhone or iPod Touch. This can be very convenient: you can play AAC files, tune in to internet radio stations, YouTube, etc., and control everything from the palm of your hand while the sound comes out of the HiFi.
If you use good-quality headphones with your Classic or iPhone, the sound quality will only be limited by the quality of the headphones.
Seriously - half the comments here are so absurd it's embarrassing.
1. Sony MDRs are not reference quality, just as standard TVs are not HD TVs.
Playing a Blu-ray vs a VHS on a standard TV, then claiming that anyone who owns better technology has been "debunked" because a seriously flawed blind test showed nobody could really tell the difference...
2. I have a very nice pair of Sennheiser reference headphones playing through the new 2.1 AudioQuest Dragonfly. I've had a slew (15+) of people come by my studio and check them out. We even did a 224 kbps MP3 vs lossless test and EVERYONE heard a difference - because there is a difference, a HUGE difference. Night and day - and yes, every single one of you would hear it too.
3. It's also well documented that although most can't "always" hear below 20 Hz (it's a range, not an exact cut-off) - everyone, including deaf people, can feel music down to 5 Hz and lower. And isn't feeling music part of the music experience, and why live music is what a multi-billion-dollar industry tries to reproduce?
It sure is for those of us privileged to own the technology that can do it.
So go on all you ignorant 'audiophile' haters - may as well start bashing people with HD TVs, Retina displays, or people with faster cars, more powerful tools, better camera lenses, faster internet, hotter women, or anything better than what you have or have experienced - pointing to 'things uh red on du innernet must be true' - or your own limited broke-ass technology as foundations for your asinine convictions.
Here - watch this... and don't worry, it doesn't have some bogus HD option.
That is not one of the edit points. By that point in the file there had been a couple transitions already with there being several more to follow.
I imagine there would be a quite understandable suspicion I might not be being entirely honest. I will PM the list of timings to you if you want. Thanks for having a go.
How did you go with identifying the splice points in my file?
Do any of you think you could hear the difference between a piano being played live before you and a 256 kbps recording of it?
cnocbui wrote: »
How did you go with identifying the splice points in my file?
Someone with a PhD in Electrical Engineering thought my method good enough to use the file in their university course. Maybe they know something you don't.
I'm positive most people couldn't tell if they did this as a blind test. Also, just because a player shows a blue light doesn't mean the audio is any better, as Pono claims. The only truth here is that the player costs more and the songs will cost more, but the audio quality is the same thing you can buy off iTunes or anywhere else. SCAM!
I won't spend much time on this, as you folks are arguing over something you haven't experienced. I bought my Pono via the Kickstarter campaign, and I think it's a great product.
I have tested it with a number of folks: some professional musicians, some with audio-editing experience, some who just like to listen to music, and others who couldn't care less. All were amazed. One described it as a life-changing experience!
Using high-end Grado 325e headphones, I have compared the CD version and then the Pono version, and the difference is quite apparent. The difference I hear on my very nice stereo system is not as dramatic, but the end result is that the HD music coming out of the stereo system is much richer and warmer.
Is it worth the $400, and having to buy all the albums over again? I think it is. One noteworthy aspect of these HD music files is that there is no DRM attached, so the ability to share these HD tracks is built into the purchase. Whether I share the files or not, the value of the player and the higher-res files is still worth it to me. Also, I've heard things on some of the HD albums I've been listening to since the 70s that I had never heard before.
I don't know. It would be fun to try. I don't see how the postulate is relevant to whether or not someone can discern a difference between say, 256 kbps compressed, and an uncompressed source.
Last century, a TV program in Australia called Beyond 2000 asked a panel, which included a professor of music, to distinguish between a person playing a violin behind an acoustically transparent curtain and a PCM recording of the playing, reproduced from behind the curtain by a pair of Duntech Sovereign speakers on either side of the violinist. The listeners couldn't tell the sources apart.
I would say under those same circumstances I probably couldn't tell between the violin and a 256 kbps version of the PCM recording. A piano might be a different kettle of fish, however, and there I would mainly be doubting the capability of the PA.
That's a tough one. I'm not sure the track is very well suited to the exercise, since it has so many transitions of its own. I transferred it to CD and played it on my reference system, and I was unable to hear any significant changes. A time-resolved FFT of the data does not show much variation in bandwidth of the kind that is obvious when comparing lower-bitrate music samples, so the compression artifacts are not very apparent.
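For anyone who wants to try the same kind of bandwidth check themselves, here's a rough Python sketch of the idea using synthetic signals. Real analysis would run on the actual audio with a proper time-resolved spectrogram; the 16 kHz cutoff here is just a stand-in for the lowpass that many lossy encoders apply at lower bitrates.

```python
# Sketch of the "bandwidth check": a lossy encoder's lowpass shows up as
# energy vanishing above some cutoff. We fake it with synthetic noise.
import numpy as np

rate = 44100
rng = np.random.default_rng(0)
full = rng.standard_normal(rate)              # stands in for lossless audio
spectrum = np.abs(np.fft.rfft(full))

# Crudely "lowpass" by zeroing everything above ~16 kHz, roughly where
# a low-bitrate encoder might roll off:
cutoff_bin = int(16000 * len(full) / rate)
lp_spectrum = spectrum.copy()
lp_spectrum[cutoff_bin:] = 0.0

def bandwidth_hz(spec, rate, n, frac=0.01):
    """Highest frequency whose magnitude exceeds frac of the peak."""
    above = np.nonzero(spec > frac * spec.max())[0]
    return above[-1] * rate / n

print(bandwidth_hz(spectrum, rate, len(full)))     # close to 22 kHz
print(bandwidth_hz(lp_spectrum, rate, len(full)))  # just under 16 kHz
```

On a spliced file you'd run this on short consecutive windows and look for the estimated bandwidth stepping down and back up at the transitions; as noted above, this particular track doesn't show much of that.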
I have thought about doing it with a purely orchestral piece, but it is a bit time-consuming, and these days there is Foobar, which can take a source and a compressed version at whatever bitrate is of interest and conduct a proper double-blind (ABX) trial: it alternates the sources over multiple instances which the listener chooses between, then spits out a nice report on whether the listener's choices were correct with statistical significance - or not. The user base of Hydrogenaudio.org has been conducting codec trials at varying bitrates for years and has amassed evidence from a lot of such double-blind tests with a multitude of subjects.
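The statistics behind an ABX report boil down to a one-sided binomial test: given n trials, how likely is it to get at least k right by pure guessing? Here's a minimal sketch of that calculation (the trial counts are just example numbers, not from any specific test):

```python
# One-sided binomial tail for an ABX trial: the probability of scoring
# at least `correct` out of `trials` by guessing (p = 0.5 per trial).
from math import comb

def abx_p_value(correct, trials):
    """P(X >= correct) under random guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 right: p ~ 0.038, below the conventional 0.05 threshold
print(round(abx_p_value(12, 16), 3))
# 10 of 16 right: p ~ 0.227, entirely consistent with guessing
print(round(abx_p_value(10, 16), 3))
```

Which is why a single lucky run means little: the listener needs enough trials, and enough of them correct, before "I heard it" separates from chance.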
I assure you the file is as represented and that it does contain sections derived from a compressed version of the source.
I have another file where I did the same thing, but recording the analog output of my iPod and the analog output of my CD player and mixing those together. I have gotten the gripe that my Mac's A/D converter isn't good enough to resolve the difference so I keep that one shelved.