Apple takes media on tour of audio lab in run-up to HomePod launch
Apple recently took a well-known blog on a tour of its audio labs, using the occasion to talk up its first smart speaker, the HomePod, shipping on Feb. 9 for $349.
The labs are the same as those used to test products like iPhones, iPads, and AirPods, Apple marketing head Phil Schiller explained to The Loop. Devices are sent there several times during their development until sound performs to Apple's expectations.
The HomePod project dates back six years, and was born of the idea of making a speaker that would sound good in any room it was placed in, according to Apple's Hardware Engineering VP, Kate Bergeron. Development was originally limited to a small team, but executive approval allowed it to pull in teams handling aspects like thermals and wireless technology.
"We think we've built up the biggest acoustics and audio team on the planet," claimed the company's senior director of Audio Design and Engineering, Gary Geaves.
"We've drawn on many of the elite audio brands and universities to build a team that's fantastic. The reason we wanted to build that team was certainly for HomePod, but to also to double-down on audio across all of Apple's products."
For the HomePod, though, the company built a new anechoic test chamber, said to be one of the biggest in the U.S. and resting on springs to minimize outside vibrations. A second chamber was built to help with Siri voice detection, since the speaker needs to be able to hear Siri commands even over loud music or ambient noise.
A third chamber on the tour is meant to detect any unwanted noise from the speaker itself, such as a buzz. It rests on 28 tons of concrete, with foot-thick walls weighing another 27 tons. Between the chamber and the concrete slab are 80 isolating mounts. The combination of factors pushes the chamber's noise floor to -2 dBA, below the threshold of human hearing.
Work on the HomePod is feeding into other projects, Geaves said, without offering specific hints.
"There's been certain catalysts in the development of HomePod that are feeding other products," he remarked. "That's one of our advantages -- we work on a bunch of different areas of audio."
Comments
And 1 Infinite Loop isn't going away. My understanding is that it's Apple's leased facilities around Cupertino that will be phased out.
I will never use Alexa in my home, nor Google Home, even if they paid me.
Actually, most places just set up sound rooms; they do not bother with full anechoic chambers. Very few of these exist in the world. Apple uses them because of the echo cancellation on the phones as well as the dynamic sound adjustments of the HomePod. To just test a speaker or a mic, a sound room is good enough. I was in a fully isolated anechoic chamber once, and when they close the door it is so quiet you can hear your own heartbeat; it is the weirdest experience you will ever have.
Phil told a little white lie; this one is the biggest in the world, since you can put full-size cars in it:
https://eckelusa.com/anechoic-chamb
2) Why would a real-world use case make someone an "internet wannabe"? Apple sends out review units to these people for a reason. Note: We've seen that one of Apple's shortcomings is not doing enough real-world testing, but I don't think that will be an issue with this product.
A sound room is completely different from an anechoic chamber when it comes to doing sound analysis. Most speaker companies do test for frequency response, amplitude, and resonances in sound rooms, which are just rooms with absorbers on the walls. But in order to do real-time signal processing and analysis, you have to be in a perfect room and then introduce objects to see how the algorithm responds. Apple is doing something completely different from just pushing out sound; they are changing that sound in real time. I suspect they are showing this off so the competitors cannot say "me too."
The comment about the wannabe is more that they are trying to do a side-by-side comparison without controlling anything in the environment. The first question is simply how good their hearing is: can they actually determine differences other than which one is lower, and can they detect distortion in the sound? I have gone into sound rooms to listen to speakers, and some you cannot tell apart at all; others behave very differently, but that does not mean one is better than another, and when other people are listening, they hear things you do not. The internet blogger just says they like this one better and cannot say why.
Siri could be used by 700 million people and Alexa by 7 million, but until or unless Siri becomes at least as useful as Alexa for me, I'll continue using the latter.
I love me some "Apple juice", but Siri reminds me of a "dumb blonde" while Alexa reminds me of a "smart brunette". Google and Cortana are the red-headed step-sisters.
The expected beam patterns for the speaker arrays can be roughly determined through mathematics and modeling, but they must still be verified in an actual speaker enclosure sitting on a representative reflective surface of the kind the speaker will encounter in actual use, since the enclosure and table surface directly influence real-world beamforming performance in ways that are difficult to predict accurately beforehand. Each individual microphone and speaker, i.e., transducer, has a known spatial response at specific frequencies, but the beamforming process, by definition, involves the very critical time-based combination of signals from multiple overlapping transducers physically oriented in the actual enclosure to correctly form beams. Also, music and natural sounds like human voices are unpredictable, complex waveforms composed of multiple overlapping frequencies, and transducers are frequency dependent, so there is no way to really know how well the beamforming is working unless measurements are done in an anechoic chamber with a lot of sample data.
Only after the baseline beamforming characteristics of the arrays are established and verified through DSP programming and measurements in the chamber can the engineers tune their adaptive algorithms, which govern how beamforming and sound intensity shape the spatial performance of the microphone and speaker arrays when additional reflective and partially reflective surfaces are introduced into the test chamber in a very controlled manner.
Obviously this is much more controlled than anything you’d ever be able to do with a sound/listening room. But it’s very necessary if you want to have a more predictable response when the speaker is placed in a real-world environment. Apple could never “tune” the HomePod for all possible customer deployments, but they can make it much easier for themselves to predict and explain why the HomePod operates how it does in just about any given environment. It will never be perfect, but it will never be random either.
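For readers curious what the time-based combination of transducer signals described in the comment above looks like in practice, here is a minimal delay-and-sum beamforming sketch in Python/NumPy. It is a generic textbook illustration, not Apple's DSP: the six-microphone ring geometry, radius, sample rate, source angle, and noise level are all assumptions chosen for the example, and a real adaptive beamformer is considerably more sophisticated.

```python
# Illustrative delay-and-sum beamformer for a small circular mic array.
# Generic textbook sketch; array geometry, sample rate, and angles are assumed.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in room-temperature air
SAMPLE_RATE = 48_000     # Hz (assumed)
NUM_MICS = 6             # six-mic ring (assumed geometry)
RADIUS = 0.05            # array radius in metres (assumed)

# Microphone positions on a ring in the horizontal plane.
angles = 2 * np.pi * np.arange(NUM_MICS) / NUM_MICS
mic_xy = RADIUS * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def steering_delays(look_angle_rad):
    """Per-mic delays (seconds) that align a plane wave from the look direction."""
    direction = np.array([np.cos(look_angle_rad), np.sin(look_angle_rad)])
    # Projecting each mic position onto the look direction gives the extra
    # path length; dividing by the speed of sound converts it to a delay.
    return mic_xy @ direction / SPEED_OF_SOUND

def delay_and_sum(mic_signals, look_angle_rad):
    """Delay each channel so the look direction adds coherently, then average."""
    num_samples = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / SAMPLE_RATE)
    delays = steering_delays(look_angle_rad)
    spectrum = np.fft.rfft(mic_signals, axis=1)
    # Fractional-sample delays are applied as phase shifts in the frequency domain.
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectrum * phase).mean(axis=0), n=num_samples)

if __name__ == "__main__":
    # Simulate a 1 kHz plane wave arriving from 30 degrees (mics nearer the
    # source lead in time) plus independent noise on every channel.
    rng = np.random.default_rng(0)
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    arrival = steering_delays(np.deg2rad(30))
    mics = np.stack([np.sin(2 * np.pi * 1000 * (t + d)) for d in arrival])
    mics += 0.5 * rng.standard_normal(mics.shape)
    on_target = delay_and_sum(mics, np.deg2rad(30))
    off_target = delay_and_sum(mics, np.deg2rad(120))
    print("RMS steered toward source:", np.sqrt(np.mean(on_target**2)))
    print("RMS steered away from source:", np.sqrt(np.mean(off_target**2)))
```

Steering toward the simulated source angle sums the channels coherently and averages down the uncorrelated noise, while steering elsewhere does not; that directional gain is, roughly, the baseline behavior the chamber measurements are meant to verify against the modeled beam patterns before any adaptive tuning.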
Amazon and Alexa are massive in the U.K. for a start.