We could already rip AAC and MP3 in just about any bit rate forever.
Comments
Eh, there's a lot of debate about whether harmonics the ear can't pick up affect the frequencies you can hear. I tend to come down on the "they don't do anything" side. However, it's controversial enough that engineers tend not to mess with them (well, that and the fact that it's a royal PITA to mess with harmonics). As to the lower-range stuff, I'd certainly not take it off if I were listening through speakers with a LOT of power. However, nothing you're going to listen to in a residential area is going to produce enough power for you to really feel 10 Hz waves.
I think it has to do with additive properties of waves and subtle phase cancellation effects.
Thank goodness my neighbors don't have that system. What they do have is almost a military weapon as it is.
You generally don't cut 10-20 Hz, because you can "feel" those waves, and it makes things sound a touch fuller if you are listening through a very large sound system (think 15-inch subwoofers powered by at least 10,000 watts). Also, you get harmonics above 20 kHz, because engineers don't like messing with harmonics, and they have no real reason to anyway.
Pure BS.
I think it has to do with additive properties of waves and subtle phase cancellation effects.
Thank goodness my neighbors don't have that system. What they do have is almost a military weapon as it is.
One last thought, then I'm done. If you're listening through stock Apple earbuds, you just plain aren't going to hear the difference between 128 and 256 kbps. Those things won't reproduce the differences.
Pure BS.
I'm not saying it's necessarily right, just that engineers don't cut those frequencies for the most part. Almost every CD you can find has a lot going on above and below the 20 Hz-20,000 Hz range.
And you can definitely feel a 15 Hz wave with enough power.
I'm guessing 256 kbps will just be the default now. Most people don't know that you can import at pretty much whatever bitrate you want, with a whole slew of other options.
I wonder why they don't use VBR as the standard, though. I guess people feel more assured if they see a single number next to all their files instead of a varying one.
It has been VBR for a while now; you just see the easy settings. I think version 8 made VBR the default.
One last thought, then I'm done. If you're listening through stock Apple earbuds, you just plain aren't going to hear the difference between 128 and 256 kbps. Those things won't reproduce the differences.
I agree with this qualification.
I get the feeling that many people think AAC and MP3 encoders just randomly throw away digital audio information, and that 256 kbps is "twice as good" as 128 kbps, and so on.
It doesn't really work that way. AAC (and MP3) uses a psychoacoustic model when it converts, and 70% of what it ditches is nothing at all. And by nothing, I mean nothing: .wav puts down the same amount of information for pure black silence as it does for a raging guitar solo. MP3 and AAC don't work this way (well, they do if you don't use VBR, but that just means they throw away less info when less info is required to encode the main audible parts).
AAC and current MP3 encoders throw out a bunch of stuff that's well outside the human audible range. The first thing they do is throw out black silence; there is a surprisingly large amount of pure non-info encoded in a .wav. They then dispense with the inaudible higher frequencies, and then the inaudible lower range (since the lower inaudible frequencies can sometimes be felt, they're cut second). Then they start cutting out extremely quiet sounds within milliseconds of very loud ones, since studies have shown again and again that the ear can't pick up a quiet sound right after a loud one (it takes some time for the ear to "rebound").
Just these changes alone will oftentimes get you into the 256 kbps range. And if you burned the result to .wav and then re-ripped it, you'd lose no additional audible information, since the encoder would in effect realize, "hey, I can encode this exact file at just 256 kbps by throwing away all this info that is either black silence or looks exactly like a 256 kbps file oddly wrapped up in a .wav."
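To put those bitrates in perspective, here is a quick back-of-the-envelope sketch using the standard CD audio parameters (the 4-minute track length is just an example):

```python
# Rough size comparison: uncompressed CD audio vs. a 256 kbps lossy rip.
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

cd_bitrate = SAMPLE_RATE * BIT_DEPTH * CHANNELS   # bits per second
print(cd_bitrate // 1000)                         # 1411 kbps raw, regardless of content

track_seconds = 4 * 60
cd_bytes = cd_bitrate * track_seconds // 8        # size of a 4-minute .wav payload
aac_bytes = 256_000 * track_seconds // 8          # same track at a flat 256 kbps
print(f"CD/WAV: {cd_bytes / 1e6:.1f} MB, 256 kbps: {aac_bytes / 1e6:.1f} MB")
```

A .wav spends that full 1411 kbps on silence and guitar solos alike, which is exactly the waste the psychoacoustic model goes after.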
There's actually a great deal more thrown away than what you're claiming in your first two paragraphs. The *lossless* formats usually have zero problem compressing the silence and that's how *they* achieve the bulk of their compression. The lossy formats, as you mention in that last paragraph and beyond, throw away frequency ranges that would usually not be heard by most people.
That said, 256 kbps would usually retain most of the important information. But it's still throwing away data. Silence is data; 0 isn't null. Every self-respecting music enthusiast would rip their music in a lossless format to retain *ALL* the information.
Of course, people who just don't care, just don't care, so I don't care what they do.
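On the point that lossless formats compress silence easily, a quick standard-library sketch makes it concrete. zlib here is just a stand-in for FLAC or ALAC (which use audio-specific prediction), but the principle is the same:

```python
import os
import zlib

# One second of 16-bit stereo "digital black" silence vs. noise-like data.
FRAME_BYTES = 44_100 * 2 * 2        # samples/sec * channels * bytes per sample
silence = bytes(FRAME_BYTES)        # all zero bytes
noise = os.urandom(FRAME_BYTES)     # incompressible random bytes

# Silence collapses to almost nothing; noise barely compresses at all.
print(len(zlib.compress(silence)))
print(len(zlib.compress(noise)))
```

That's why a quiet album shrinks dramatically under lossless compression while a wall-of-sound one doesn't.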
Look up any test comparison; there are other factors involved. The playback of a CD simply sounds better: sampling rates, D/A converters, etc.
And if you're using the same D/A converter and the same sampling rate? What makes it better then, magical elves?
I'm not saying its necessarily right, just saying that engineers don't cut those frequencies for the most part. Almost every CD you can find has a lot of stuff going on above and below 20-20000 htz.
And you can definitely feel a 15 htz wave with enough power.
Here's a book that might help a lot of folks out there:
http://www.amazon.com/Mastering-Audi.../dp/0240805453
Dave
There's actually a great deal more thrown away than what you're claiming in your first two paragraphs. The *lossless* formats usually have zero problem compressing the silence and that's how *they* achieve the bulk of their compression. The lossy formats, as you mention in that last paragraph and beyond, throw away frequency ranges that would usually not be heard by most people.
That said, 256 kbps would usually retain most of the important information. But it's still throwing away data. Silence is data; 0 isn't null. Every self-respecting music enthusiast would rip their music in a lossless format to retain *ALL* the information.
Of course, people who just don't care, just don't care, so I don't care what they do.
"Coke! No, Pepsi NO! Coke!"
IMHO, there's a simple solution to this whole debate... test YOURSELVES!
Rip one of the most scintillating pieces of music you have with different codecs and different bit rates. Do a blind audio test, or better yet, a double-blind test. If you don't have someone to help you with testing, you can "blindfold" yourself by playing the pieces on shuffle and taking notes. After you're done, flip back through what you've played and compare to your notes. Voila, you've found your very own personal bit-rate threshold.
I actually do this every now and then with test tones, hoping that one day I'll actually hear 17.5k again.
Cheers,
Dave
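If you want to script the self-blinding, here's a minimal sketch. The filenames are hypothetical, and the actual listening still happens in your player of choice:

```python
import random

def make_trial_order(files, trials=10, seed=None):
    """Build a hidden playlist: each trial picks one version at random."""
    rng = random.Random(seed)
    return [rng.choice(files) for _ in range(trials)]

def score(order, guesses):
    """Compare your noted guesses against the hidden labels afterwards."""
    correct = sum(1 for actual, guess in zip(order, guesses) if actual == guess)
    return correct, len(order)

# Hypothetical rips of the same track at different bitrates:
files = ["track_128.m4a", "track_256.m4a", "track_lossless.m4a"]
order = make_trial_order(files, trials=12, seed=42)
# ...play each file in `order` without peeking, write down your guesses,
# then call score(order, your_guesses) to find your personal threshold.
```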
Hopefully, rather than adding gobs of new "features", they tweaked all the current "features" to work correctly/better.
Thanks for the heads up. I'll be sure to adjust my life to fit your needs.
Actually you should. His is better advice to live your (audio) life by.
That said, 256 kbps would usually retain most of the important information. But it's still throwing away data. Silence is data. 0 isn't null.
Erm, no. In digital, 0 *is* null, in the sense that it can be compressed away losslessly.
Every self-respecting music enthusiast would rip its music in a lossless format to retain *ALL* the information.
Audio enthusiast != music enthusiast.
Audio enthusiast != music enthusiast.
QFT!
I don't sit around critiquing the playback; I care most about whether the music moves me or not.
It was never called iTunes+, a term that was reserved for "AAC Purchased" files as compared to "AAC Protected" files. I.e. commercially purchased AAC files without DRM.
But being without DRM didn't mean the files don't contain a string that identifies which iTMS account purchased the file, i.e. if you buy an iTunes+ tune and then share it with the entire world, the RIAA lawyers can still sue you, because they know who bought the track and made it available.
So for something to be called iTunes+ I'd expect it to change from "AAC" to "AAC purchased" format, and possibly have embedded into it your iTMS account name.
Maybe that's one of the ways Apple got the labels to drop DRM on the purchased songs, by now tagging all your ripped songs with your iTMS account name?
For people who don't distribute ripped music all over the internet, that shouldn't matter, but others might possibly find themselves in hot water, if that's what's really implied by the iTunes+ designation.
Time will tell...
"Coke! No, Pepsi NO! Coke!"
IMHO, there's a simple solution to this whole debate... test YOURSELVES!
Rip one of the most scintillating pieces of music you have with different codecs and different bit rates. Do a blind audio test, or better yet, a double-blind test. If you don't have someone to help you with testing, you can "blindfold" yourself by playing the pieces on shuffle and taking notes. After you're done, flip back through what you've played and compare to your notes. Voila, you've found your very own personal bit-rate threshold.
I actually do this every now and then with test tones, hoping that one day I'll actually hear 17.5k again.
Cheers,
Dave
The best test is a 3-way test. Example: use two copies of the lossless, and one copy of the compressed (or one lossless and two compressed). Then try to tell which one is different from the other two.
It's a much better test than one-to-one. In a one-to-one, someone may hear a difference, but may not be able to guess which one is the lossless. That would be logged as a failure, when in fact they did hear a difference.
On another point, it's been shown (through frequency analysis) that compression adds artifacts to the silence in the music. If you're talking Apple standard earbuds, no big deal. If you're talking $20,000 stereo system, very big deal.
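For what it's worth, the odds in a 3-way test are easy to check: chance is 1/3 per trial, so a one-sided binomial tail tells you whether a run of correct picks is likely to be luck (the 9-of-12 example below is hypothetical):

```python
from math import comb

def triangle_p_value(correct, trials, p_chance=1/3):
    """Probability of getting at least `correct` right by pure guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

# E.g. picking the odd one out 9 times in 12 trials:
p = triangle_p_value(9, 12)
print(f"p = {p:.4f}")  # well under 0.05: very unlikely to be chance
```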
How about importing songs... will that be sped up?
Yes, if you buy the optional $6K Nehalem-based Mac Pro.
Is your name Carol Anne and aren't you a character from Poltergeist?
Back in the day I could walk into a room and tell you if there was a monitor on. At the end of the day I just had to open the lab door to hear if anything was still on. Now I have to go look. My hearing is clipped at about 13,600 Hz. My students think it's a riot; that's when I crank the generator up to 17 kHz, and then they play nice. Played through a loudspeaker in public, it's known as a "yob deterrent" in the UK.
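Anyone who wants to map their own cutoff can generate test tones with nothing but the Python standard library. This is a rough sketch; the filename and the 17 kHz frequency are just examples, and keep the volume low:

```python
import math
import struct
import wave

def write_tone(path, freq_hz, seconds=2.0, rate=44_100, amplitude=0.3):
    """Write a mono 16-bit sine-wave WAV; modest amplitude to protect your ears."""
    n = int(seconds * rate)
    frames = b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)

write_tone("tone_17000hz.wav", 17_000)
# Step through 13 kHz .. 18 kHz to find where your own hearing clips.
```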
This has to be a mistake in the article, in that I have been doing it at 256 kbps for ages.
256 would actually be "better" than lossless, however, in that your ears cannot tell the difference and you save oodles of space. Unless you are purposely creating a digital archive of your CD collection as a backup, lossless makes little sense as a format.
For you, maybe.
For me, not very likely. See, I use my Mac Pro as a media server and have every single CD I own, and some vinyl too, ripped to it as lossless AIFF files so I can play them back through an obscenely expensive DAC, and then through the rest of my obscenely expensive audio equipment.
I would bet 10,000 US dollars you would be able to hear the difference between an AIFF file and the same song at 256 kbps within 30 seconds.
http://appldnld.apple.com.edgesuite....nes64Setup.exe
http://appldnld.apple.com.edgesuite....TunesSetup.exe
Edit: Now available from the download page!
From the Read Me:
iTunes 8.1 is now faster and more responsive. You will enjoy noticeable improvements when working with large libraries, browsing the iTunes Store, preparing to sync with iPod or iPhone, and optimizing photos for syncing.
In addition, iTunes 8.1 provides many other improvements and bug fixes, including:
• Supports syncing with iPod shuffle (3rd generation)
• Allows friends to request songs for iTunes DJ
• Adds Genius sidebar for your Movies and TV Shows
• Improves performance when downloading iTunes Plus songs
• Provides AutoFill for manually managed iPods
• Allows CDs to be imported at the same sound quality as iTunes Plus
• Includes many accessibility improvements
• Allows iTunes U and the iTunes Store to be disabled separately using Parental Controls