Rumor: Apple to offer hi-res 24-bit tracks on iTunes in coming months


Comments

  • Reply 121 of 154
    Quote:

    Originally Posted by Benjamin Frost View Post



    Analogue is better than digital.

     

    Probably, but we'll never know since every analogue delivery medium is so fraught with fatal flaws.

  • Reply 122 of 154
    The super-deluxe editions of the first 3 Zep albums coming out in June also have a code to download a digital copy of the album and the extra tracks on the second disc. It would be great if, like the rumour, the code allowed us to download a hi-res digital version from iTunes!

    Or somebody has been reading the comments on this forum and decided to spin it into a rumour.

    As if you need a high quality version of some jangling guitars!
  • Reply 123 of 154
    AAAAAAAAAAUUUUUUGHHHH! Not again! I thought we resolved all this crap in the 90s?

    I am so SICK of people who have never even READ Nyquist's sampling theorem, who have ZERO understanding of the math, regurgitating utter nonsense like "rounding off the steps" as if there were some validity to their meaningless remarks!

    Here's the bottom line: Increasing the sample rate above 44.1 KHz increases the highest frequency the system can store. That's it. It does not improve the resolution of lower frequencies, reduce the size of the steps or any other meaningless gobbledygook. This increase in high frequencies does NOT affect audible frequencies through additional frequency interactions because those, if they existed, and if they were audible, were CAPTURED BY THE MICROPHONE AT THE TIME OF RECORDING. If you think there's something going on above 22 KHz (this world contains almost nothing over 10 KHz much less 20) and you think you can hear it (and still think so after listening to a tone generator sweep up to that point), by all means, buy high sample rate recordings. Otherwise, refuse to be sucked in by marketing bullshit.

    More bottom line: Increasing the word length from 16 bits to 24 bits lowers the volume at which the signal turns to noisy hash. Period, The End, nothing else. For a classical piece with 100 dB dynamic range this can be beneficial because the really, really, really, really, really quiet parts will sound less grainy (and if you can hear them, you better have a seven thousand watt amplifier for when the loud part comes in -- do the math: twice the power for every 3 dB). For a Pop piece with the dynamics compressed so hard that the waveform looks like a cylinder, the net benefit of more bits is zero zilch nada FA poodly. Nothing. Increasing dynamic range doesn't magically improve other characteristics.

    If you want something that sounds better, start with reducing or removing the data compression. Apple's move from 128 kbps to 256 kbps some years ago was a really good one. Another step like that would be good.

    But if someone tells you a recording sounds better because it has more bits and higher sampling rate, tell them to save that bullshit for distant recordings of bats. You don't have to have golden ear training to see through the marketing hype, you just need a working calculator. Don't let them sucker you.

    In theory, anything above 20K or so shouldn’t make any difference because we can't hear it. But I don't think it's quite as simple as that. Sound is a complex thing, and harmonics affect each other across the spectrum. So my feeling is that very high frequencies probably have an effect on the lower ones and vice versa. Just because you can't hear those high notes doesn't mean that they don't alter your perception of the lower ones. Similarly for very low frequencies: with those, you can sometimes feel them, even if you can't hear them.

    But yes, you'd probably get far more improvement from reducing the compression.
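
    A quick back-of-envelope sketch of the arithmetic in the post above (plain Python). The 6.02 dB-per-bit figure and the "3 dB doubles the power" rule are the only inputs; the wattage example is purely illustrative.

    [CODE]
    # Rough arithmetic behind the bit-depth and amplifier-power claims above.
    # Assumes ideal converters (~6.02 dB of dynamic range per bit) and the
    # usual "every 3 dB doubles the power" rule of thumb.

    def dynamic_range_db(bits: int) -> float:
        """Theoretical dynamic range of an ideal N-bit PCM channel, in dB."""
        return 6.02 * bits + 1.76

    def power_ratio(db: float) -> float:
        """Power ratio corresponding to a level difference in dB."""
        return 10 ** (db / 10)

    for bits in (16, 24):
        print(f"{bits}-bit: about {dynamic_range_db(bits):.0f} dB of dynamic range")

    # A 100 dB classical swing means the peaks carry ten billion times
    # the power of the quietest passages.
    print(f"100 dB swing = {power_ratio(100):.0e}x the power")

    # The "seven thousand watt amplifier" quip: 7,000 W is roughly 38.5 dB
    # above a 1 W listening level (10 * log10(7000) = 38.45).
    print(f"+38.5 dB above 1 W is about {power_ratio(38.5):.0f} W")
    [/CODE]
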
  • Reply 124 of 154
    solipsismx wrote: »
    I just want to know what is the best way to listen to Miley Cyrus and Justin Bieber. (expecting some fun answers)

    In hell, at least for the former.
  • Reply 125 of 154
    NO!!! Don't! You'll be sorry!

    Seriously. I came home from a night at the symphony orchestra all excited about one of the pieces I'd heard so I put it up on the stereo.

    ew...

    The stereo system I love sounded as much like that orchestra as a photograph looks like a person. Blech. I was so disappointed.

    That's what comes from being greedy. You should have savoured the performance, rather than sully its memory with an inferior recording.
  • Reply 126 of 154
    mpantone wrote: »
    Improvements in recording technologies primarily benefit styles that feature a wide sonic range. By definition, that is classical music and to a lesser degree jazz.

    If you listen to contemporary music (rock, pop, rap, country, whatever), the benefits are nearly non-existent. 

    For sure. This is really all about classical music. Pop music was invented for small tinny speakers in noisy environments. It couldn't be better suited to compression, like speech.
  • Reply 127 of 154
    Quote:

    Originally Posted by drblank View Post

     

    You are asking the impossible. Most of the 16 bit CDs that have been on the market were done years ago compared to the newer 24 bit masters, so you're asking for something that's almost impossible to verify. The only comparison I can do is listen to a 16 bit CD vs a 24 bit version on the same stereo, side by side, at the same volume level. If it's noticeable, then it's noticeable, and every one I've compared so far has been noticeably better sounding. Most of the time it's DRASTICALLY better sounding. On some of the recordings I can hear the buzz coming from the guitar amps in the faint background right before a song starts. They're hardly using any compression. I asked one of the mastering engineers who has done HD Tracks and he told me his process. He said there was very little, if any, compression used, almost no EQ; he was very careful not to tarnish the transfer. Bob Ludwig has also been interviewed, as he did the entire Rolling Stones 24 bit remasters. A lot of these mastering studios have been upgrading their converters with much better dynamic range, S/N ratios, etc.

    But I look at the final end result from a consumer's standpoint. Does the 24 bit AIFF sound better than a 16 bit AIFF from CD? If so, that's all I need to compare, and the differences I've heard are so noticeable from the first couple of seconds. Some blew me away at how much better they sound. Cymbals aren't harsh and distorted, transients are much clearer, it's like a whole new experience. Obviously, how much better will sometimes depend on how good your DAC, speakers, amps, etc. are, but my current system, which I've been listening on for the past 6 months, is nothing special. Just a decent DAC running through Shift A2's with decent RCA interconnect cables, and this is in my bedroom/office.

    I also listen to recordings in the 80 dB to 90 dB range with peaks at around 95 dB on occasion. Some of the 16 bit recordings sound horrible when played at the higher volume levels whereas the 24 bit don't. But I generally don't crank my system up; most of the time it's hovering around 78 to 85 dB, which is a normal listening level.


    It's not impossible. You just have to know which sources were transferred under conditions as close as possible. The Rolling Stones remasters were issued on CD/SACD hybrid discs. If you want to hear for yourself any difference from just the resolution, that would be a good starting point, as would any other CD/SACD hybrid, since those transfers were typically done in the same mastering session. The releases from Mobile Fidelity are a great A-B comparison, because the CD and SACD layers are created from the same master source during the same session. My other reference is Neil Young's Greatest Hits CD/DVD combo. That is a session in which both the 16-bit and 24-bit masters were encoded simultaneously.  Both sound fantastic, and both sound very similar. But, it's not a "night and day" difference when you can isolate the variables to just the bit-depth.

     

    The problem with your thesis is that EVERY single one of the attributes that you ascribe to 24-bit resolution can also be applied to improvements made by remastering. One of my other reference discs is a classical recording that Classic Records issued as a 96/24 disc. I've A-B'd this against Mobile Fidelity's CD/SACD version. The CD layer on the Mobile Fidelity release blows the Classic Records release away. 

     

    Why?  Editorial decisions made during the mastering sessions.

     

    Classic Records used a first generation vinyl pressing as their reference. The equipment and settings they used in the mastering session were intended to emulate the sound of the original release as closely as possible. Mobile Fidelity, on the other hand, uses a highly customized playback rig and tweaks the settings to optimize the sound quality as they see it, without targeting the original release as the reference. Both releases sound great, and both improve upon the original CD release (which I also own). But, the differences are almost entirely due to decisions made at the mastering stage, rather than the release format.

     

    Think about it this way, have you ever heard big differences in sound quality between two versions of the same CD release? If so, then you've already negated your argument about 24-bit because you've confirmed the audible influence of mastering when the bit depth is not one of the variables.

     

    Quote:


    Let me ask you a couple of questions.

     

    1. Are you trying to compare in a studio environment just the difference between 16 bit and 24 bit with everything else the same? 

    2.  If so, what speakers are you using?

    3.  What AD and DA converters are you using?

    4.  What other equipment that you are routing everything through are you using?

     

    Why do I ask?  The reason is that some equipment limits what you can hear and you might have some equipment that's not allowing you to hear any differences.



     

    Ah, the "your equipment might not be good enough" retort. FYI, I'm using a midrange Yamaha receiver, with a Sony ES SACD player and Denon universal player. Both of the players use the same Burr-Brown 17XX series DACs, and when I do this kind of comparison listening, I use the analog outs to the receiver, which has an analog bypass when the bass management is switched off. The speakers are Paradigm Studio 40s (2 1/2 way standmounts) and I use an SPL meter to level match the output. In addition, I use acoustic panels in my living room, and have measured their attenuation levels.
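
    One way to isolate bit depth without hunting for perfectly matched masters is to requantize a 24-bit file to 16 bits in software and level-matched A-B the two yourself. A minimal sketch, assuming numpy is available and the audio is already loaded as floating-point samples; the TPDF dither here is the textbook approach, not any particular mastering engineer's chain.

    [CODE]
    import numpy as np

    def requantize(x, bits=16, dither=True):
        """Quantize float samples in [-1, 1] to an N-bit grid, with optional TPDF dither.

        Only the word length changes; EQ, compression and every other mastering
        decision stay identical, which is the whole point of the comparison.
        """
        step = 2.0 / (2 ** bits)                      # size of one quantization step
        if dither:
            # Triangular (TPDF) dither, one step peak-to-peak
            d = (np.random.uniform(-0.5, 0.5, x.shape) +
                 np.random.uniform(-0.5, 0.5, x.shape)) * step
            x = x + d
        return np.clip(np.round(x / step) * step, -1.0, 1.0)

    # Example: a -60 dBFS tone, quiet enough that 16-bit graininess shows up first
    t = np.arange(48000) / 48000.0
    quiet = 0.001 * np.sin(2 * np.pi * 440 * t)
    noise = requantize(quiet, bits=16) - quiet
    print("noise added by the 16-bit word length: %.1f dBFS"
          % (20 * np.log10(np.sqrt(np.mean(noise ** 2)))))
    [/CODE]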

  • Reply 128 of 154
    drblank Posts: 3,385
    Quote:

    Originally Posted by Woochifer View Post

     

    It's not impossible. You just have to know which sources were transferred under conditions as close as possible. The Rolling Stones remasters were issued on CD/SACD hybrid discs. If you want to hear for yourself any difference from just the resolution, that would be a good starting point, as would any other CD/SACD hybrid, since those transfers were typically done in the same mastering session. The releases from Mobile Fidelity are a great A-B comparison, because the CD and SACD layers are created from the same master source during the same session. My other reference is Neil Young's Greatest Hits CD/DVD combo. That is a session in which both the 16-bit and 24-bit masters were encoded simultaneously.  Both sound fantastic, and both sound very similar. But, it's not a "night and day" difference when you can isolate the variables to just the bit-depth.

     

    The problem with your thesis is that EVERY single one of the attributes that you ascribe to 24-bit resolution can also be applied to improvements made by remastering. One of my other reference discs is a classical recording that Classic Records issued as a 96/24 disc. I've A-B'd this against Mobile Fidelity's CD/SACD version. The CD layer on the Mobile Fidelity release blows the Classic Records release away. 

     

    Why?  Editorial decisions made during the mastering sessions.

     

    Classic Records used a first generation vinyl pressing as their reference. The equipment and settings they used in the mastering session were intended to emulate the sound of the original release as closely as possible. Mobile Fidelity, on the other hand, uses a highly customized playback rig and tweaks the settings to optimize the sound quality as they see it, without targeting the original release as the reference. Both releases sound great, and both improve upon the original CD release (which I also own). But, the differences are almost entirely due to decisions made at the mastering stage, rather than the release format.

     

    Think about it this way, have you ever heard big differences in sound quality between two versions of the same CD release? If so, then you've already negated your argument about 24-bit because you've confirmed the audible influence of mastering when the bit depth is not one of the variables.

     

     

    Ah, the "your equipment might not be good enough" retort. FYI, I'm using a midrange Yamaha receiver, with a Sony ES SACD player and Denon universal player. Both of the players use the same Burr-Brown 17XX series DACs, and when I do this kind of comparison listening, I use the analog outs to the receiver, which has an analog bypass when the bass management is switched off. The speakers are Paradigm Studio 40s (2 1/2 way standmounts) and I use an SPL meter to level match the output. In addition, I use acoustic panels in my living room, and have measured their attenuation levels.

     


    The reason why I asked is that there are recording studio engineers who frequently use these NS-10's, which are just awful speakers. So I wasn't sure what people are using. Then there are a lot of computer users who only have $250-or-less "computer speakers," and I wouldn't recommend they invest in high-res audio. It's just to get some kind of idea of where the person is coming from, not to put them down.

    I've had heated arguments with industry veterans who have been around the world in top-end studios; they seem to love these NS-10's and told me they are flat, colorless, etc., and I told them they were "full of it," and then I let them listen to a better system and they realized they were wrong. So I've just had those discussions, and it's not a personal attack, it's just straightening them out. The playback system in the studio is so critical, but there are a lot of marginal speakers that people use that I wouldn't use in the home, and in the studio they are fatiguing to listen to. These audio engineers tailor the sound for THEIR speakers, and I wouldn't want to use THEIR speakers in my listening environment. It's very much like what BOSE does with their demo CDs. They make their BOSE speakers sound great with purposely mastered demo CDs meant to make BOSE sound good, but that's not what the rest of the music is meant for.

    Some people using computers simply don't have their system together for playing 24 bit audio, and there's going to be a learning curve for a lot of folks. Yeah, I don't think someone with a pair of $50 computer speakers will get much out of high-res audio. They'll probably argue they can't hear a difference. Well, yes, I would agree with them if that's what they are listening through.



    I wish we could get more in-depth background on each 24 bit recording as to where they got the source and any other information, but unfortunately not all digital download sites give us this information. These days, we are lucky to even know who wrote the songs and who the musicians are on each track.

  • Reply 129 of 154
    Quote:



    Originally Posted by Benjamin Frost View Post

     

    So my feeling is 


     

    Whoa whoa whoa, you don't get to do that. You can't substitute intuition for demonstrable fact. It doesn't matter how much it FEELS like the Sun orbits the Earth, the fact is it doesn't and your feelings don't change that.

     

    Now in this case you happen to be right, but the point still stands -- it doesn't matter what your gut says if the math proves otherwise.

     


    Quote:

    Originally Posted by Benjamin Frost View Post

     

    So my feeling is that very high frequencies probably have an effect on the lower ones and vice versa.


     

    Yes, the interaction creates what are called "beat frequencies." If we assume there's something in the air that's higher than 20 KHz, and it interacts with something in the air that lands in the audible spectrum, it will be picked up by the microphone at the time of the recording! The sound is recorded, stored and played back, not created in the air in your listening room.

     

    So the existence of beat tones is not an argument for high bandwidth playback.
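
    For anyone who would rather see the beat-tone point than take it on faith, here is a minimal numpy sketch. The two ultrasonic frequencies are arbitrary illustration values, and the squaring term is just a crude stand-in for whatever nonlinear mixing might happen in the air or an instrument before the microphone.

    [CODE]
    import numpy as np

    fs = 192000                          # rate high enough to hold both ultrasonic tones
    t = np.arange(fs) / fs               # one second of signal
    f1, f2 = 23000.0, 24000.0            # two inaudible tones, 1 kHz apart

    linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    mixed = linear + 0.1 * linear ** 2   # crude stand-in for a nonlinear medium

    def level_at(x, freq_hz):
        """Level (dB) of the 1 Hz-wide FFT bin at freq_hz."""
        spectrum = np.abs(np.fft.rfft(x)) / len(x)
        return 20 * np.log10(spectrum[int(round(freq_hz))] + 1e-16)

    # A purely linear sum has no energy at the 1 kHz difference frequency;
    # the audible tone only exists once something nonlinear mixes the pair,
    # and any mixing that happens acoustically is already in front of the mic.
    print("1 kHz in linear sum:       %6.1f dB" % level_at(linear, 1000))
    print("1 kHz after nonlinear mix: %6.1f dB" % level_at(mixed, 1000))
    [/CODE]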

  • Reply 130 of 154
    drblank Posts: 3,385
    Quote:
    Originally Posted by Lorin Schultz View Post

     

     

    Whoa whoa whoa, you don't get to do that. You can't substitute intuition for demonstrable fact. It doesn't matter how much it FEELS like the Sun orbits the Earth, the fact is it doesn't and your feelings don't change that.

     

    Now in this case you happen to be right, but the point still stands -- it doesn't matter what your gut says if the math proves otherwise.

     

     

    Yes, the interaction creates what are called "beat frequencies." If we assume there's something in the air that's higher than 20 KHz, and it interacts with something in the air that lands in the audible spectrum, it will be picked up by the microphone at the time of the recording! The sound is recorded, stored and played back, not created in the air in your listening room.

     

    So the existence of beat tones is not an argument for high bandwidth playback.


     Here's a site that discusses "BEATING". I'm hoping that this is what you are referring to.  http://hyperphysics.phy-astr.gsu.edu/hbase/sound/beat.html

  • Reply 131 of 154
    drblank Posts: 3,385
    Quote:

    Originally Posted by Lorin Schultz View Post

     

     

    Whoa whoa whoa, you don't get to do that. You can't substitute intuition for demonstrable fact. It doesn't matter how much it FEELS like the Sun orbits the Earth, the fact is it doesn't and your feelings don't change that.

     

    Now in this case you happen to be right, but the point still stands -- it doesn't matter what your gut says if the math proves otherwise.

     

     

    Yes, the interaction creates what are called "beat frequencies." If we assume there's something in the air that's higher than 20 KHz, and it interacts with something in the air that lands in the audible spectrum, it will be picked up by the microphone at the time of the recording! The sound is recorded, stored and played back, not created in the air in your listening room.

     

    So the existence of beat tones is not an argument for high bandwidth playback.


    I think the best argument for high bandwidth playback is to give the listener a better sense of realism in what they are listening to.



    Example. If you have two instruments playing the same note or passage together, the way our ears distinguish one from the other is based on the timbre of each instrument, as long as they are both in tune and playing together. But if we have a large degree of resolution in terms of hearing these upper frequencies more accurately, we can better distinguish one from the other, and it's these upper frequencies that give us that. But at what point are they really distinguishable? Can it be done well enough by reproducing 20 Hz to 20 kHz? That's the all-important question. I've read interviews with various engineers who design expensive recording equipment and who argued that our ears may not "HEAR" above 20 kHz even in the best scenario, but our bones and various parts of our body can sense it. I know very low bass frequencies certainly can rattle our insides, as even our body fat vibrates when bass notes are played at loud volumes while we're sitting down or even standing up. We, as humans, FEEL the music at some level. Do these upper frequencies make the outer layer of skin inside our ear canal vibrate at some level that we aren't conscious of and that no one has measured yet? I don't know the answer to that one.

     

    The problem for many years is that most recordings don't use microphones that really capture above 20 kHz that well, if at all. There have been a few on the market for a few years that can do it, but they aren't normally used. But they do exist and are used more and more. Earthworks is one brand that has microphones capable of capturing above 30 kHz, I believe. Then there is the rest of the chain on down. Can the cables accurately send frequencies that high without destroying the information? Are the preamps, etc., able to let those frequencies pass through properly?



    Every piece of equipment that records and plays back audio is actually a filter on some level as audio signals are sent through it, and the goal is just to send the signal through its path with as little degradation along the way as possible from noise, artifacts, and other factors.

     

    I know at some point it becomes ridiculously expensive to make equipment that doesn't degrade audio signals on even the most minute level, but there seem to be more and more companies pushing the upper edge of the envelope, since there are better measurement tools that can test more minute changes in resistance, etc.

     

    I know this discussion gets futile at some point. Have we passed that point yet? Maybe a long time ago. :-)
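
    As a rough way to put a number on "at what point are they really distinguishable," here is a toy sketch. The 1/k harmonic rolloff is an invented spectral envelope, not a measured instrument; it only illustrates how little of a 440 Hz note's energy would sit above 20 kHz even with sixty partials present.

    [CODE]
    import numpy as np

    f0 = 440.0                               # concert A, as an example note
    k = np.arange(1, 61)                     # first 60 partials
    freqs = f0 * k

    # Toy spectral envelope: amplitude falls off as 1/k. Real instruments differ,
    # and that difference is exactly what timbre is; this is only an illustration.
    amps = 1.0 / k
    energy = amps ** 2

    above_20k = freqs > 20000
    print("partials above 20 kHz:", int(above_20k.sum()))
    print("share of total energy above 20 kHz: %.2f%%"
          % (100 * energy[above_20k].sum() / energy.sum()))
    [/CODE]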

  • Reply 132 of 154
    solipsismx Posts: 19,566
    I found this interesting

    [QUOTE]Healthy, young humans are able to hear sounds over a frequency range from 20 Hz to 20 kHz. We are most sensitive to frequencies between 2000 to 4000 Hz which is the frequency range of spoken words. The frequency resolution is 0.2% which means that one can distinguish between a tone of 1000 Hz and 1002 Hz. A sound at 1 kHz can be detected if it deflects the tympanic membrane (eardrum) by less than 1 Angstrom, which is less than the diameter of a hydrogen atom. This extreme sensitivity of the ear may explain why it contains the smallest bone that exists inside a human body: the stapes (stirrup). It is 0.25 to 0.33 cm long and weighs between 1.9 and 4.3 mg.

    [LIST]
    [*] http://en.wikibooks.org/wiki/Sensory_Systems/Auditory_System
    [/LIST][/QUOTE]

    An Angstrom is 1.0 × 10^-10 meters or 1.0e-10 meters or one ten-billionth of a meter or 0.1 nanometer.

    Meaning…

    [QUOTE]When registering the quietest sounds, the bones of the middle ear vibrate by less than the diameter of a hydrogen atom.[/QUOTE]
  • Reply 133 of 154
    I'll take an iPhone 6 with an FPGA DAC please...
  • Reply 134 of 154
    haar Posts: 563
    AAAAAAAAAAUUUUUUGHHHH! Not again! I thought we resolved all this crap in the 90s?

    I am so SICK of people who have never even READ Nyquist's sampling theorem, who have ZERO understanding of the math, regurgitating utter nonsense like "rounding off the steps" as if there were some validity to their meaningless remarks!

    Here's the bottom line: Increasing the sample rate above 44.1 KHz increases the highest frequency the system can store. That's it. It does not improve the resolution of lower frequencies, reduce the size of the steps or any other meaningless gobbledygook. This increase in high frequencies does NOT affect audible frequencies through additional frequency interactions because those, if they existed, and if they were audible, were CAPTURED BY THE MICROPHONE AT THE TIME OF RECORDING. If you think there's something going on above 22 KHz (this world contains almost nothing over 10 KHz much less 20) and you think you can hear it (and still think so after listening to a tone generator sweep up to that point), by all means, buy high sample rate recordings. Otherwise, refuse to be sucked in by marketing bullshit.

    More bottom line: Increasing the word length from 16 bits to 24 bits lowers the volume at which the signal turns to noisy hash. Period, The End, nothing else. For a classical piece with 100 dB dynamic range this can be beneficial because the really, really, really, really, really quiet parts will sound less grainy (and if you can hear them, you better have a seven thousand watt amplifier for when the loud part comes in -- do the math: twice the power for every 3 dB). For a Pop piece with the dynamics compressed so hard that the waveform looks like a cylinder, the net benefit of more bits is zero zilch nada FA poodly. Nothing. Increasing dynamic range doesn't magically improve other characteristics.

    If you want something that sounds better, start with reducing or removing the data compression. Apple's move from 128 kbps to 256 kbps some years ago was a really good one. Another step like that would be good.

    But if someone tells you a recording sounds better because it has more bits and higher sampling rate, tell them to save that bullshit for distant recordings of bats. You don't have to have golden ear training to see through the marketing hype, you just need a working calculator. Don't let them sucker you.


    I believe you forgot to consider phase response... (and the antialiasing filter really does play a part)
    The Nyquist theorem is about an infinite number of samples representing a waveform... it is very difficult to encode two frequencies (or more, such as harmonics) being played together in a short amount of time if there aren't enough samples to average the data...
    Nyquist sampling theory means you can encode any frequency under half the sampling frequency, it just doesn't mention how many samples you need to encode the harmonics exactly... that is where higher sampling frequencies are useful because you have more samples to reconstruct the frequencies or the harmonics in a shorter amount of time.
  • Reply 135 of 154
    Quote:
    Originally Posted by haar View Post

     
    Nyquist sampling theory means you can encode any frequency under half the sampling frequency, it just doesn't mention how many samples you need to encode the harmonics exactly...


     

    Nope, that's incorrect. Nyquist does dictate exactly how many samples are required to reconstruct ANY waveform, including harmonics: a sampling rate just over twice the highest frequency being recorded.

     

    You don't have one set of ears for the fundamental and another for harmonics, it's all one sound. When you place a microphone in front of a guitar you record not just the fundamental frequency of each string, but also all the harmonics and beat tones it creates. They combine to create a complex waveform. That's what the sound of a guitar *IS*.

     

    Quote:
    Originally Posted by haar View Post

     
    […] that is where higher sampling frequencies are useful because you have more samples to reconstruct the frequencies or the harmonics in a shorter amount of time.


     

    Incorrect. A higher sampling rate does not add samples in the audible spectrum, it simply EXTENDS the spectrum. A 96 KHz recording has exactly the same number of samples at 5000 Hz as a 44.1 KHz recording.
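
    A minimal sketch of the reconstruction claim, assuming numpy and scipy are installed. The tone and the 20 ms window are toy values chosen so the signal is exactly periodic over the window and the FFT-based resampler adds no windowing error of its own; the point is only that the 44.1 kHz samples by themselves determine the band-limited waveform, harmonics and all.

    [CODE]
    import numpy as np
    from scipy.signal import resample

    fs = 44100
    T = 0.02                                  # 20 ms analysis window
    n = np.arange(int(fs * T))                # 882 samples at 44.1 kHz

    def tone(t):
        # Fundamental plus two harmonics, everything well below fs/2
        return (np.sin(2 * np.pi * 1000 * t)
                + 0.5 * np.sin(2 * np.pi * 3000 * t + 0.7)
                + 0.25 * np.sin(2 * np.pi * 9000 * t + 1.3))

    x441 = tone(n / fs)

    # Rebuild a 10x denser version of the waveform purely from the 44.1 kHz samples
    num = len(x441) * 10
    dense = resample(x441, num)
    t_dense = np.arange(num) * (T / num)

    print("max reconstruction error:", np.max(np.abs(dense - tone(t_dense))))
    [/CODE]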

  • Reply 136 of 154
    Quote:

    Originally Posted by Lorin Schultz View Post

     

     

    Whoa whoa whoa, you don't get to do that. You can't substitute intuition for demonstrable fact. It doesn't matter how much it FEELS like the Sun orbits the Earth, the fact is it doesn't and your feelings don't change that.

     

    Now in this case you happen to be right, but the point still stands -- it doesn't matter what your gut says if the math proves otherwise.

     

     

    Yes, the interaction creates what are called "beat frequencies." If we assume there's something in the air that's higher than 20 KHz, and it interacts with something in the air that lands in the audible spectrum, it will be picked up by the microphone at the time of the recording! The sound is recorded, stored and played back, not created in the air in your listening room.

     

    So the existence of beat tones is not an argument for high bandwidth playback.


    Ah, ok. You obviously know your stuff. I'm not an engineer, just a musician. You're probably right; I'd love to have a longer conversation with you about it! It just takes a bit of getting the head around. For instance, when I think of two piano strings reverberating together and all the harmonics created, it strikes me that the very high frequencies will change the sound lower down. But as you say, if that is captured, then I can see how it might not matter if the high stuff isn't recorded. But I need my brain to work on the concept more.

  • Reply 137 of 154
    haar Posts: 563
    Nope, that's incorrect. Nyquist does dictate exactly how many samples are required to reconstruct ANY waveform, including harmonics: a sampling rate just over twice the highest frequency being recorded.

    You don't have one set of ears for the fundamental and another for harmonics, it's all one sound. When you place a microphone in front of a guitar you record not just the fundamental frequency of each string, but also all the harmonics and beat tones it creates. They combine to create a complex waveform. That's what the sound of a guitar *IS*.


    Incorrect. A higher sampling rate does not add samples in the audible spectrum, it simply EXTENDS the spectrum. A 96 KHz recording has exactly the same number of samples at 5000 Hz as a 44.1 KHz recording.

    um, incorrect!... A sample rate of 96 kHz equals one sample every 10.41667 microseconds, therefore one cycle of a 5 kHz waveform sampled at 96 kHz will give you 19.2 samples, while a 5 kHz waveform sampled at 44.1 kHz gives you 8.82 samples per cycle... thus you have more samples at a higher sample rate.


    The Nyquist theorem assumes a steady-state sine wave...
    If you record two sine waves phase-shifted from each other by 15°, how many samples do you need in order to accurately represent that phase difference?...
    More than 2x plus one... (in the real world, due to the antialiasing filters required for D/A converters)
    But this does not break the Nyquist theorem; it just means that you need a continuing series of samples in order to represent that phase difference...
    You also benefit from a higher sampling rate because the rolloff filter that you need to prevent aliasing distortion can be shallower...
    meaning you need to roll off all frequencies before the cutoff frequency...
    which for audio is approximately 20 kHz, so with a sample frequency of 44.1 kHz you need to roll off all frequencies within one octave....
    which is a 16th-order filter... (96 dB of attenuation in one octave)... and a 176.4 kHz sample rate gives you FOUR octaves...
    Have you looked at the phase response of a 16th-order filter?... Not pretty!...
    The phase shift in a fourth-order filter is 360°, "which sort of gets you back to where you started"... but because the phase response of a low-pass filter is nonlinear, it subtly changes the sound of the music... which is a minor reason why the master tapes sound better than the digital copy... because the music hasn't gone through an antialiasing filter.


    TL;DR... a 4x sample rate (176.4 kHz) for music (20 kHz bandwidth) sounds better because the low-pass filter required to prevent aliasing can be a lower-order filter with less phase shift, and that makes the music sound better.
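
    The arithmetic behind that trade-off, as a small plain-Python sketch. The octave counts here measure the room between 20 kHz and the Nyquist frequency, which is one reasonable place to put the stopband edge, so they won't match the post's figures exactly; the 6 dB-per-octave-per-order slope is the usual rule of thumb.

    [CODE]
    import math

    def samples_per_cycle(tone_hz, rate_hz):
        return rate_hz / tone_hz

    def octaves_of_room(rate_hz, passband_top_hz=20000.0):
        """Octaves between the top of the audio band and the Nyquist frequency."""
        return math.log2((rate_hz / 2) / passband_top_hz)

    def slope_needed(attenuation_db, octaves):
        """dB/octave required to reach the target attenuation in that room."""
        return attenuation_db / octaves

    print("5 kHz at 96 kHz:   %.1f samples per cycle" % samples_per_cycle(5000, 96000))
    print("5 kHz at 44.1 kHz: %.1f samples per cycle" % samples_per_cycle(5000, 44100))

    for rate in (44100.0, 96000.0, 176400.0):
        room = octaves_of_room(rate)
        slope = slope_needed(96.0, room)      # ~96 dB, roughly a 16-bit noise floor
        print("%.1f kHz: %.2f octaves of room, ~%.0f dB/octave (~order %.0f)"
              % (rate / 1000, room, slope, slope / 6))
    [/CODE]
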
  • Reply 138 of 154
    Quote:

    Originally Posted by haar View Post



    um, incorrect!... A sample rate of 96 kHz equals one sample every 10.41667 microseconds, therefore one cycle of a 5 kHz waveform sampled at 96 kHz will give you 19.2 samples, while a 5 kHz waveform sampled at 44.1 kHz gives you 8.82 samples per cycle... thus you have more samples at a higher sample rate.

     

    I phrased that really poorly, didn't I? My bad.

     

    The point is that increasing the sample rate doesn't increase the resolution at lower frequencies, it just raises the limit on the highest frequency that can be captured. 

     

    Quote:
    Originally Posted by haar View Post



    The Nyquist theorem assumes a steady-state sine wave...

     

    It does? According to whom? Neither Nyquist nor Shannon says that. I'm not aware of any reason the Nyquist theorem wouldn't apply to complex waveforms, but I'm prepared to be enlightened.

     


    Quote:
    Originally Posted by haar View Post



    You also benefit from a higher sampling rate because the rolloff filter that you need to prevent aliasing distortion can be shallower...

    meaning you need to roll off all frequencies before the cutoff frequency...

    which for audio is approximately 20 kHz, so with a sample frequency of 44.1 kHz you need to roll off all frequencies within one octave....

    which is a 16th-order filter... (96 dB of attenuation in one octave)... and a 176.4 kHz sample rate gives you FOUR octaves...

    Have you looked at the phase response of a 16th-order filter?... Not pretty!...

     

    Except that nobody uses analog filters. Oversampling with a digital filter means zero phase distortion from a converter with a nominal rate of 44.1 kHz.
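
    A small sketch of the "digital filter, no phase distortion" point, assuming scipy is available. The tap count and sample rate are arbitrary illustration values, not what any particular converter ships; the thing to notice is that a symmetric FIR delays every passband frequency by the same amount.

    [CODE]
    import numpy as np
    from scipy.signal import firwin, group_delay

    fs = 176400                                  # an oversampled rate, for illustration
    taps = firwin(255, cutoff=20000, fs=fs)      # linear-phase FIR low-pass at 20 kHz

    # Symmetric taps are the signature of a linear-phase filter: constant group
    # delay, i.e. no frequency-dependent phase shift across the passband.
    print("taps symmetric:", np.allclose(taps, taps[::-1]))

    w, gd = group_delay((taps, [1.0]), fs=fs)
    passband = w < 20000
    print("group-delay spread across the passband: %.2e samples" % np.ptp(gd[passband]))
    [/CODE]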

  • Reply 139 of 154
    Lorin Schultz, finally we have a thinking person who knows how digital audio works and still participates in such discussions! I gave up a long time ago fighting against stair-steps in discrete signals ;) It's so hard to explain that the listening room brings in so much noise that people can't even tell 12 bits from 16 bits. I admire your patience!
  • Reply 140 of 154
    hillstones wrote: »

    Does it surprise me that they don't talk about iTunes Match?

    I'm not sure what you're talking about. I spoke directly with Apple recently concerning this. They sent me an e-mail saying they were going to charge me again to renew my service. I paid the renewal fee. My bank has statements confirming my communication with Apple, Inc.

    My wallet says that it's not some rumor on this site. It's a real thing.

    Not sure why you gave iTunes Match a "rumor type" response...