Apple details headphone jack improvements on new MacBook Pro
The new 14-inch MacBook Pro and revised 16-inch MacBook Pro feature an improved headphone jack, which Apple says adapts to low- and high-impedance headphones without external amps.
Apple has improved the headphone jack on the new MacBook Pro
During the launch of the 14-inch and 16-inch MacBook Pro, Apple referred to how the improved audio features included a revised headphone jack. Now the company has detailed what those revisions are, and how they benefit users.
"Use high-impedance headphones with new MacBook Pro models," is a new support document about the improvements.
"When you connect headphones with an impedance of less than 150 ohms, the headphone jack provides up to 1.25 volts RMS," it says. "For headphones with an impedance of 150 to 1k ohms, the headphone jack delivers 3 volts RMS. This may remove the need for an external headphone amplifier."
As well as this impedance detection and adaptive voltage output, the new 3.5mm headphone jack features "a built-in digital-to-analog converter." Apple says this supports up to 96kHz, and means users "can enjoy high-fidelity, full-resolution audio."
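The quoted figures amount to a simple rule keyed on the detected impedance. As an illustration only (the function, and the decision to treat loads above 1k ohms as undocumented, are assumptions, not Apple's implementation), the behaviour can be sketched like this, along with the resulting maximum power into a load (P = V²/R):

```python
def max_output_vrms(impedance_ohms: float) -> float:
    """Maximum RMS output voltage, per the support document's figures."""
    if impedance_ohms < 150:
        return 1.25          # low-impedance headphones
    if impedance_ohms <= 1000:
        return 3.0           # high-impedance headphones (150 ohms to 1k)
    raise ValueError("behaviour above 1k ohms is not documented")

def max_power_mw(impedance_ohms: float) -> float:
    """Maximum power delivered into the load, P = V^2 / R, in milliwatts."""
    v = max_output_vrms(impedance_ohms)
    return v * v / impedance_ohms * 1000.0
```

For 300-ohm studio headphones, for example, that works out to 3² / 300 = 30 mW, which is why the document says an external amplifier may no longer be needed.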
Read on AppleInsider
Comments
I assume that is not the case here?
How do they calculate the bit rate? Is the encoder 96 kHz total, or 96 kHz per channel? If it's 96 kHz per channel, it would be 192 kHz 'combined' between the two channels. They also don't comment on the quality of the built-in D-A converter, since that can be more important than the actual bit rate.
If you’re someone for whom the difference between 96 kHz and 192 kHz actually matters, you’re probably better off using a digital output and your own D-A converter.
Do these hardware divisions even talk to each other?
96kHz isn't really notable either - my 2019 iMac has 96kHz sample rate support for both audio in and audio out, and I'm pretty sure my 2015 MBP has 96kHz too. The notable point is solely that the jack can drive high-impedance headphones.
Come on guys, this is supposed to be a tech blog and you obviously don't understand what you're writing about, this is pretty basic stuff.
Now, where this does relate to human hearing's maximum frequency: humans generally can't hear above 20kHz. Sampling at a bit more than double that rate means a 20kHz signal can be captured without aliasing errors - where frequencies above half the sample rate fold back down and appear as spurious lower tones. That half-the-sample-rate limit is known as the Nyquist frequency. On playback, the waveform between samples is reconstructed by band-limited interpolation (not simple averaging), and a higher sample rate doesn't make that reconstruction any more accurate within the audible band. Audiophiles claim they can hear the difference, but double-blind tests have shown that almost no one actually can - and Nyquist says 44.1kHz is plenty to accurately reconstruct a 20kHz signal, which suggests the higher sample rates are pointless for playback.
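The folding behaviour described above can be demonstrated numerically: a tone above half the sample rate produces exactly the same samples (up to a sign flip) as a lower "alias" tone. A minimal NumPy sketch:

```python
import numpy as np

fs = 44_100                # CD sample rate; Nyquist limit is 22_050 Hz
f_in = 30_000              # tone above the Nyquist limit
f_alias = fs - f_in        # 14_100 Hz: where the tone folds down to

n = np.arange(1024)
sampled = np.sin(2 * np.pi * f_in / fs * n)
alias = np.sin(2 * np.pi * f_alias / fs * n)

# The two sample sequences are identical apart from a sign flip, so the
# sampled 30 kHz tone is indistinguishable from a 14.1 kHz tone.
assert np.allclose(sampled, -alias)
```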
The bit rate is inversely related to how much of the original audio is thrown away, and how much the MP3/AAC/whatever decoder has to "guess" to reconstruct the audio.
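For a sense of scale (these are standard figures, not from the article): uncompressed CD-quality PCM runs at sample rate × bit depth × channels, and a typical 256 kbps AAC stream keeps under a fifth of that.

```python
cd_bitrate = 44_100 * 16 * 2     # 1_411_200 bits/s, i.e. ~1411 kbps
aac_bitrate = 256_000            # a common lossy streaming bitrate

ratio = cd_bitrate / aac_bitrate
print(f"CD PCM is {ratio:.1f}x the AAC bitrate")   # ~5.5x
```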
Regarding audio quality and this 192 kHz sample rate: earlier this year, Apple announced immediate availability of Apple Music's catalog in 192 kHz/24-bit Hi-Res Lossless format (although limited to a subset of the catalog at first) - at no extra cost!!
What is mind-boggling is that Apple's own hardware is limited to 96 kHz - why?
My antique 20+ year old Denon AV receiver happily supports uncompressed multi-channel audio at 192 kHz, but neither my Apple TV 4K 2nd gen. nor my Mac mini M1 is able to take advantage of Apple Music's reference-class format! AFAIK, there is no technical reason for this HDMI-output buzz kill 🤒
The Nyquist theorem states that the digital sampling frequency must be at least twice the maximum analogue frequency to be reproduced. Because any higher analogue frequencies present at sampling lead to 'aliasing', i.e. lower-frequency artefacts, a low-pass filter must be used at the source. Hence 44.1 kHz was chosen for the CD standard - enough to reproduce audio up to 20 kHz, with a little room for the (rather harsh) low-pass filters.
Higher sampling rates are used for audiophile applications 1) in order to use less harsh low-pass filters when recording, and 2) because of the timing difference between the ears. At 20 kHz, the wavelength of sound is around 1.7 cm. Although you may not be able to hear 20 kHz, your brain can certainly determine a phase difference of that order from a source - at all frequencies. Hence high-resolution audio is exactly that: not about the frequency range, but about the timing to the ears.
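The wavelength figure quoted above can be checked directly; the speed of sound used here is an assumed round number for room-temperature air.

```python
c = 343.0        # speed of sound in air at ~20 degrees C, m/s (assumed)
f = 20_000       # 20 kHz tone

wavelength_cm = c / f * 100
print(f"{wavelength_cm:.1f} cm")   # ~1.7 cm
```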
24-bit is much like HDR. You need to listen at very high volume in a quiet room to appreciate any difference from 16-bit, just as you need a display capable of great brightness in a dim room to appreciate HDR over SDR. Moreover, no microphone is capable of a 24-bit (144 dB) signal-to-noise ratio, so any audio created to take full advantage of the extra dynamic range is going to be somewhat contrived and artificial - much like HDR movies!
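The 144 dB figure follows from the standard rule of thumb of roughly 6.02 dB of dynamic range per bit of ideal PCM quantisation:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of ideal PCM quantisation, ~6.02 dB per bit."""
    return 6.02 * bits

print(dynamic_range_db(16))   # ~96 dB (CD audio)
print(dynamic_range_db(24))   # ~144 dB
```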
There is no practical benefit whatsoever from going above 96kHz. That sampling rate can accurately reproduce any signal up to 48 kHz.
In fact, unless you are talking about super high-end converters, going to extremely high sample rates is likely to introduce ADDITIONAL intermodulation distortion that can affect the high frequencies. All double-blind tests where people have been able to distinguish super-high sample rates were cases where people were actually hearing distortion.