PC World tests your ability to differentiate 128k vs 256k AAC

Posted in iPod + iTunes + AppleTV (edited January 2014)

Comments

  • Reply 1 of 20
    chucker Posts: 5,089 member
    They didn't even bother to make it a blind test?
  • Reply 2 of 20
    ireland Posts: 17,798 member
    I never thought 256 kbps music would interest me, but I listened to a podcast the other day where they were discussing it. I decided that for good-quality music going forward I'd be better off with the higher quality, so I have since re-encoded my full music library at 256, and yes, I can tell the difference.
  • Reply 3 of 20
    hmurchison Posts: 12,425 member
    A blind test would have been nice. He'd really screw people up if tomorrow he revealed that he had purposely reversed the files, labeling the 256k files "128k" and vice versa.



    That would screw a lot of the "Golden Ears" up. I thought I could perceive slightly more presence in the decay of the REM cymbals, although it's so close I wonder if I'm just straining to hear a difference.
  • Reply 4 of 20
    flounder Posts: 2,674 member
    I thought the cymbals on the REM song sounded quite a bit better. Percussion instruments seem to be what I notice between 128 and 256. I've heard people say that horns are dull at 128, but my ears don't seem to pick that up.
  • Reply 5 of 20
    cato988 Posts: 307 member
    I had my sister blind-test me with the classical clip 7 times, and although I could tell and only missed one, it was very, very difficult, and I wasn't too confident in my answers. My feeling on it, along with most of this thread, is that 256 is better, so I'd rather have that. Plus the DRM removal is definitely worth it to me. I hate having that restriction.
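    A quick statistical aside on results like this one: with only 7 trials, getting 6 right is suggestive but not conclusive. A minimal sketch of the check, assuming scipy is available (the 6-of-7 numbers are just cato988's from above):

    ```python
    # How likely is 6-of-7 correct by pure guessing (p = 0.5 per trial)?
    from scipy.stats import binomtest

    result = binomtest(k=6, n=7, p=0.5, alternative="greater")
    print(result.pvalue)  # 0.0625 -- suggestive, but shy of the usual 5% bar
    ```

    In other words, there's about a 1-in-16 chance of doing that well by luck alone; more trials would settle it.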
  • Reply 6 of 20
    addabox Posts: 12,665 member
    What are people playing these clips back through? Because that's going to make a huge difference.



    The average desktop speaker set-up, with powered sub and satellites, isn't really going to resolve much detail, much less, say, laptop speakers.
  • Reply 7 of 20
    Marvin Posts: 15,326 moderator
    For me the difference isn't enough to warrant doubling the file size. I didn't really notice any difference in those clips at all and I'm using some nice Sennheiser headphones. I'm not much of an audiophile though, I can listen to some quite low quality stuff and be content. For me the importance of this move is the removal of DRM.
  • Reply 8 of 20
    cato988 Posts: 307 member
    Quote:
    Originally Posted by addabox View Post


    What are people playing these clips back through? Because that's going to make a huge difference.



    The average desktop speaker set-up, with powered sub and satellites, isn't really going to resolve much detail, much less, say, laptop speakers.



    I was using my Bose QuietComfort 2 headphones.
  • Reply 9 of 20
    shetline Posts: 4,695 member
    Here's a listening test I threw together a few years ago...



    http://www.shetline.com/music/listening_test.html



    It's not about 128 vs. 256 kbps, it's about first vs. second generation 128 kbps compression.



    Most people had no trouble telling second-gen compressed audio from first-gen. While 128 kbps might not be too bad the first time around, anyone who thinks DRM is "no big deal" because "all you have to do is burn the music to a CD and re-rip it" probably has no idea how damaging that process is to sound quality -- at least if you try to compress low bit rate music back down to the same low bit rate.



    I've found that taking 128 kbps music and compressing it back to 192 kbps works pretty well, however. Although I haven't tried it yet, I'll bet it would be very, very hard for most people -- quite unlike the case with 128 kbps music -- to tell a second-gen 256 kbps recording from a first gen.



    Because of that, DRM on 256 kbps music would be nearly worthless anyway. It would still be a nuisance barrier, but the sound quality and/or file size penalty for working around the DRM would, for all practical purposes, be gone.
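    For anyone who wants to reproduce shetline's generational-loss experiment, here's a minimal sketch using ffmpeg's built-in AAC encoder (assumes an ffmpeg binary on PATH; all file names are hypothetical):

    ```python
    # Produce first- and second-generation AAC encodes for blind comparison.
    import subprocess

    def encode(src, dst, bitrate="128k"):
        # -y overwrites output; -c:a aac selects ffmpeg's native AAC encoder
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-c:a", "aac", "-b:a", bitrate, dst],
            check=True,
        )

    encode("master.wav", "gen1_128.m4a")            # first generation
    encode("gen1_128.m4a", "gen2_128.m4a")          # second generation, same rate
    encode("gen1_128.m4a", "gen2_192.m4a", "192k")  # shetline's 128-to-192 variant
    ```

    Comparing gen1_128.m4a against gen2_128.m4a blind should make the second-generation damage shetline describes easy to hear.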
  • Reply 10 of 20
    cosmonut Posts: 4,872 member
    It's interesting: On the PC World test, I could tell a difference on the REM song much more easily than the Mozart piece. I figured it'd be the other way around.



    I also found that the more I listened to it the better I got at hearing the difference.
  • Reply 11 of 20
    sdw2001 Posts: 18,016 member
    I heard the difference. I heard the oboe better in the Mozart piece right off the bat. I also felt the high pitches had more "pop" to them and there was better clarity all around. The REM piece sounded a little muddy in the drums on the 128K version (before I even heard the 256 version). There was a better soundstage to it with the 256, and the clarity of the lower pitches and drums was a lot better. I listened on my built-in speakers.
  • Reply 12 of 20
    SpamSandwich Posts: 33,407 member
    I'm afraid many of us in the over 40 crowd aren't going to be able to tell the difference. Ruined eardrums, don'cha know.
  • Reply 13 of 20
    hmurchison Posts: 12,425 member
    Quote:
    Originally Posted by SpamSandwich View Post


    I'm afraid many of us in the over 40 crowd aren't going to be able to tell the difference. Ruined eardrums, don'cha know.



    Who are you telling? I knew my eardrums were toast when I forgot my earplugs at the range at Fort Leonard Wood, Missoura, during Basic Training. The first shot pretty much took my hearing down to next to nothing for a while. It all came back, of course, but methinks that kind of thing has a cumulative negative effect. I doubt I can hear 15k all that easily.
  • Reply 14 of 20
    mac voyer Posts: 1,294 member
    I've been itching to weigh in on this one. I am a musician with perfect hearing and a well-trained ear. I know that my opinion is probably counter-intuitive at first, but wait for the punch line.



    I think the bit rate debate is all a smokescreen. Granted, I have purchased music from the iTunes Store that had audible compression artifacts. But I contend that the same music would have the same artifacts at double the bit rate. Not all compression is created equal. I also have music from the same store that is achingly beautiful and rich enough in detail to bring tears to your eyes. It will not get better with a higher bit rate. There is enough anecdotal evidence to suggest this is broadly true.



    The second issue, though, is even more important. I currently listen to three sources for entertainment purposes. My speakers are JBL Spots, ($99), Shure e2c plugs, ($99), and B & O Form 2 phones, ($99). See a pattern? They are all $99 products designed for computer and portable audio. The vast majority of that audio is highly compressed. They are not designed to the same specs as premium home entertainment speakers and pro musician gear. I contend that even if it was possible to hear the difference in well encoded music recorded at different bit rates, most people still could not hear it because the speakers, plugs, and phones they use cannot play back the frequencies in question. It does not matter how much more data you pump into speakers that cannot play the data. Watching a hi-def movie on a standard-def screen, if it plays at all, will only be standard-def. You cannot get more information from a screen or from speakers that were not designed to play it.



    I suspect that the lion's share of people who claim to be put off by the low bit rate of iTunes music have speakers that are too good for such music. It is ridiculous to play any compressed music over $10,000 speakers. In fact, those people who complain about the bit rate should probably not even be listening to CDs. They should be purchasing only SACD- or DVD-Audio-formatted music. It is like watching downloaded movies on an 80" plasma and complaining that the picture does not look good. Those movies look spectacular on the screens they were designed for. And compressed music sounds spectacular on the speakers it was designed for.



    Even the most beautiful woman is somewhat less attractive if you view her skin under a microscope. That is not how she was intended to be appreciated. Take a few steps back and gain perspective. If you have to crank up the music and put your ear to the speaker to hear a slight compression artifact, you are missing the whole point of the music. Music appreciation is not about audio perfection. It is about the ineffable quality of tonal expression that communicates directly to the soul of mankind. We live in a time when the technology of music delivery has never been greater. Yet we seem to appreciate music less than any other generation that has come before. If less than A.M. quality music could touch the hearts of your grandparents and make them fall in love, I would say that we have nothing to complain about.
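    Picking up mac voyer's point about frequencies the playback gear can't reproduce: the audible differences between a 128 and a 256 kbps encode tend to live in the top octaves, and you can check that yourself. A rough sketch, assuming the two encodes have been decoded to WAV first (the file names and the 14 kHz cutoff are illustrative):

    ```python
    # Compare how much spectral energy each decoded clip keeps above a cutoff.
    import numpy as np
    from scipy.io import wavfile

    def hf_fraction(path, cutoff=14000.0):
        rate, data = wavfile.read(path)
        if data.ndim == 2:
            data = data.mean(axis=1)            # mix stereo down to mono
        power = np.abs(np.fft.rfft(data)) ** 2  # power spectrum of the whole clip
        freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
        return power[freqs > cutoff].sum() / power.sum()

    print(hf_fraction("clip_128.wav"), hf_fraction("clip_256.wav"))
    ```

    If the 128k figure comes out much lower, the encoders differ mostly in exactly the range that cheap transducers (and older ears) roll off anyway, which is mac voyer's argument in a nutshell.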
  • Reply 15 of 20
    res Posts: 711 member
    Quote:
    Originally Posted by Mac Voyer View Post




    The second issue, though, is even more important. I currently listen to three sources for entertainment purposes. My speakers are JBL Spots, ($99), Shure e2c plugs, ($99), and B & O Form 2 phones, ($99). See a pattern? They are all $99 products designed for computer and portable audio. The vast majority of that audio is highly compressed. They are not designed to the same specs as premium home entertainment speakers and pro musician gear. I contend that even if it was possible to hear the difference in well encoded music recorded at different bit rates, most people still could not hear it because the speakers, plugs, and phones they use cannot play back the frequencies in question. It does not matter how much more data you pump into speakers that cannot play the data. Watching a hi-def movie on a standard-def screen, if it plays at all, will only be standard-def. You cannot get more information from a screen or from speakers that were not designed to play it.



    Extremely true. When I'm listening to music on my iPod I cannot tell the difference between 128 kbps and 320 kbps (which is what I ripped most of my CDs at). Nor can I tell the difference when using my MacBook Pro's speakers. It is only when I send the music on my computer through my MH Mobile I/O 2882 into my Event monitors that I can make out the difference.
  • Reply 16 of 20
    denton Posts: 725 member
    Quote:
    Originally Posted by hmurchison View Post


    A blind test would have been nice. He'd really screw people up if tomorrow he revealed that he had purposely reversed the files, labeling the 256k files "128k" and vice versa.



    That would screw a lot of the "Golden Ears" up. I thought I could perceive slightly more presence in the decay of the REM cymbals, although it's so close I wonder if I'm just straining to hear a difference.



    I would guess that this is what you are doing. As you say, if it were revealed tomorrow that the files had been purposely mislabelled, how surprised would you be? (Even though you could check whether that's the case simply by comparing the file sizes.)



    The PC World blogger, Dahl, is no scientist. His survey is completely useless because, as Shetline says, there is no attempt at blinding (a proper ABX protocol removes that bias; see the sketch after this post). By knowing up front which is the supposedly better file, you can search for features that back up that knowledge. And even when there is no difference, people can convince themselves that there is. I'm not about to test this, but I suspect that if you asked people to differentiate between identical clips, told them the first is 128 and the second is 256, and gave them the choice of "no difference," "first is better," and "second is better," the "second is better" option would be chosen more than 2 to 1 over "first is better."



    Quote:
    Originally Posted by shetline View Post


    Here's a listening test I threw together a few years ago...



    http://www.shetline.com/music/listening_test.html



    It's not about 128 vs. 256 kbps, it's about first vs. second generation 128 kbps compression.



    Now, Shetline, on the other hand: good test. I only had to listen once to hear the difference, whereas I can't tell the difference between the 128/256 encodings. I hope I'm correct that the first and fourth are the 2nd-gen rips. If you're not a scientist, you may have missed your calling.



    physguy also attempted to create a better version of the PC World test in this thread. The only hiccough was that in the poll you could only choose one option, rather than the two that he intended.
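    For what denton describes, the standard fix is an ABX protocol: the listener hears known clips A and B, then an unknown X, and must identify X, with no labels to bias them. A bare-bones sketch (the play() stub and file names are hypothetical; wire playback up to afplay, ffplay, or an audio library of your choice):

    ```python
    # Minimal command-line ABX trial runner.
    import random

    def play(path):
        print(f"(playing {path})")  # placeholder for real audio playback

    def abx(clip_a, clip_b, trials=16):
        correct = 0
        for _ in range(trials):
            x_is_a = random.choice([True, False])
            play(clip_a); play(clip_b)
            play(clip_a if x_is_a else clip_b)  # the unknown X
            guess = input("X sounded like [a/b]? ").strip().lower()
            correct += (guess == "a") == x_is_a
        print(f"{correct}/{trials} correct (~{trials // 2} expected by chance)")

    abx("clip_128.m4a", "clip_256.m4a")
    ```

    Scoring near chance over 16 trials is strong evidence the listener can't actually tell the clips apart, whatever their sighted impressions were.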
  • Reply 17 of 20
    I had a chance to compare 160 kbps AAC VBR versus 256 kbps AAC VBR "rips" of Jean Michel Jarre's new album Téo & Téa on my 2G iPod nano using my new Etymotic Research ER-6i headphones, and believe me, while the bass quality is almost the same, the treble quality is distinctly superior at 256 kbps AAC, especially with the extensive synthesizer sounds Jarre uses on the new album.



    (For those who don't know, Etymotic Research makes some of the best in-ear headphones around, period.)
  • Reply 18 of 20
    SactoMan01
    Quote:
    Originally Posted by Mac Voyer View Post


    The second issue, though, is even more important. I currently listen to three sources for entertainment purposes. My speakers are JBL Spots, ($99), Shure e2c plugs, ($99), and B & O Form 2 phones, ($99). See a pattern? They are all $99 products designed for computer and portable audio. The vast majority of that audio is highly compressed. They are not designed to the same specs as premium home entertainment speakers and pro musician gear. I contend that even if it was possible to hear the difference in well encoded music recorded at different bit rates, most people still could not hear it because the speakers, plugs, and phones they use cannot play back the frequencies in question. It does not matter how much more data you pump into speakers that cannot play the data. Watching a hi-def movie on a standard-def screen, if it plays at all, will only be standard-def. You cannot get more information from a screen or from speakers that were not designed to play it.



    Methinks you might try a better set of headphones for your portable music player. I use the well-regarded Etymotic Research ER-6i and given the neutral, balanced sound and excellent noise isolation of the ER-6i, you can definitely tell the difference between 128 kbps AAC and 256 kbps AAC, especially with treble sounds.
  • Reply 19 of 20
    mac voyer Posts: 1,294 member
    Quote:
    Originally Posted by SactoMan01 View Post


    Methinks you might try a better set of headphones for your portable music player. I use the well-regarded Etymotic Research ER-6i and given the neutral, balanced sound and excellent noise isolation of the ER-6i, you can definitely tell the difference between 128 kbps AAC and 256 kbps AAC, especially with treble sounds.



    I know there are better headphones out there. I have had the privilege of playing with exotic studio gear. I have listened to speakers that were $10,000 a pair. For certain types of things, I do need better speakers. My point is that the vast majority of music consumers listening to music on a computer or mp3 player really do not have the gear to tell the difference in most music.



    There is a factor much more important than bit rate, IMO. All engineers use exotic, precise monitors for mixing. Few regular human beings have such monitors to listen to music at home. Therefore, you will never hear what the engineer hears. It is simply not possible. Also, not all engineers are created equal. Just considering CDs, some music sounds quite aurally pleasing. Other music sounds like garbage. Some of the CDs from my favorite bands growing up sound worse than some of the stuff I mixed when I was first learning how. There are still other songs that I absolutely hate that provide a true aural experience. The difference has absolutely nothing to do with compression bit rates. It has everything to do with the skill of the engineer and the sound he was trying to produce. Again, the highest possible bit rate of a poorly produced song will sound worse than a masterfully mastered song at 128. People are putting too much faith in bit rate without understanding everything else that goes into the audio listening equation.
  • Reply 20 of 20
    mac voyer Posts: 1,294 member
    A bit of grist for the mill...



    http://www.reghardware.co.uk/2007/04...quality_claim/



    Ads police say 128Kbps AAC is CD quality





    By Tony Smith

    25th April 2007 13:05 GMT


    Nokia - unlike Creative - has been allowed by UK advertising watchdog the Advertising Standards Authority (ASA) to claim that its Nokia 5300 Xpress Music phone can deliver CD quality sound from compressed, lossy audio formats.



    The ASA today said it had ruled against a complaint that Nokia's claim was misleading. The complainant said compressed files played on the 5300 did not deliver the same bit-rate as a CD does and therefore the sound can't be said to be CD quality.





    That's essentially the complaint raised against advertising employed by Creative to promote its Xmod sound system, though to be fair Creative over-egged it a tad by alleging its kit delivered "better than CD quality". The ASA ruled against Creative.



    Most folk will say a 128Kbps AAC file can't possibly deliver the same audio quality as the 1411Kbps CD, but Nokia claimed that since an ISO survey, Report on the MPEG-2 AAC Stereo Verification Tests, found listeners largely unable to distinguish between the two, 128Kbps AAC could be said to be of CD quality - and a 160Kbps file playable on the 5300 certainly could.







    "We considered that readers [of the ad] would interpret the claim 'CD quality sound' to mean that when they listened to files played on the Nokia 5300 Xpress Music the sound would be indistinguishable from CD sound," the ASA said.



    "We considered the tests provided proved that most listeners were unable to distinguish between compressed AAC files encoded at 128Kbps and CD sound. We concluded that Nokia had substantiated the claim 'CD quality sound' and it was unlikely to mislead." So there, cloth-ears...