Apple's Siri & rivals being hampered by poor microphone tech
Apple's Siri -- and other voice assistants from the likes of Amazon, Google, and Microsoft -- are being held back by the state of microphone hardware, a report noted on Thursday.

Since the launch of the iPhone 5 in 2012, microphone technology hasn't advanced significantly, IHS Markit analyst Marwan Boustany explained to Bloomberg. As a result, mics still have difficulty picking up distant voices and filtering out background sounds. Even without these issues, keeping a mic on all the time -- needed for voice triggers like "Hey Siri" -- can sometimes consume too much battery life.
Companies like Apple are said to be looking to suppliers for better mics to fix these problems, as well as to achieve a higher acoustic overload point that is less likely to be tripped by loud sounds. At the same time, size and power consumption need to be kept under control.
Partly to compensate for poor pickup, device makers have gradually been adding more microphones in recent years. While the first iPhone had a single mic, there are three in the iPhone 6 and four in the iPhone 6s. The Amazon Echo, which needs to hear users from anywhere in a room, has seven.
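One common reason extra mics help -- not something the report spells out -- is that their signals can be time-aligned toward the talker and summed, reinforcing the voice while uncorrelated room noise averages down. A minimal delay-and-sum sketch in Swift, where the signals and delays are made-up illustrative values rather than anything any vendor has confirmed:

```swift
import Foundation

// Minimal delay-and-sum sketch: advance each mic's signal by the number of
// samples it hears the talker late, then average the aligned copies.
// All delays and signals here are illustrative assumptions only.
func delayAndSum(channels: [[Float]], arrivalDelays: [Int]) -> [Float] {
    precondition(channels.count == arrivalDelays.count)
    let length = channels.map { $0.count }.min() ?? 0
    var output = [Float](repeating: 0, count: length)
    for (channel, delay) in zip(channels, arrivalDelays) {
        for i in 0..<length {
            let j = i + delay                 // line this mic up with the reference mic
            if j >= 0 && j < channel.count {
                output[i] += channel[j]
            }
        }
    }
    let scale = 1.0 / Float(channels.count)
    return output.map { $0 * scale }
}

// Two mics: the second hears the same pulse 3 samples later. Steering by
// that delay lines the copies up, so the voice adds while noise does not.
let mic1: [Float] = [0, 0, 0, 1, 0.5, 0, 0, 0]
let mic2: [Float] = [0, 0, 0, 0, 0, 0, 1, 0.5]
print(delayAndSum(channels: [mic1, mic2], arrivalDelays: [0, 3]))
```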
Some possible solutions may include mics with built-in audio processing algorithms, or ones using piezoelectric technology.
Microphones could become extremely important to Apple if the company is indeed developing an Echo competitor, whether in the form of a standalone product or an upgraded Apple TV. It's unknown how many mics the "iPhone 7" might be equipped with.

Comments
Siri is completely disabled when not connected.
This is exactly what Apple should've fixed by now... on-device real-time processing of voice, even if only for the initial stages. A dedicated low-power chip could do that, much like their M-series chips for tracking motion. I call it the V-series chip, and let's hope that Apple is wise enough to implement something like this!
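The cheap front end such a chip would run can be sketched in a few lines: a per-frame energy gate that only wakes the expensive recognizer when something speech-like shows up. The frame size and threshold below are invented values, and this is only a stand-in for whatever Apple might actually do in silicon:

```swift
import Foundation

// Toy "always listening" gate: a cheap per-frame energy check of the kind a
// low-power core could run continuously, waking the heavy recognizer only
// when needed. Frame size and threshold are arbitrary illustrative values.
struct WakeGate {
    let frameSize = 320                 // 20 ms at an assumed 16 kHz sample rate
    let energyThreshold: Float = 0.01   // made-up trigger level

    func shouldWakeRecognizer(frame: [Float]) -> Bool {
        guard frame.count == frameSize else { return false }
        let energy = frame.reduce(0) { $0 + $1 * $1 } / Float(frameSize)
        return energy > energyThreshold
    }
}

let gate = WakeGate()
let quietFrame = [Float](repeating: 0.001, count: 320)
let loudFrame  = [Float](repeating: 0.2, count: 320)
print(gate.shouldWakeRecognizer(frame: quietFrame)) // false
print(gate.shouldWakeRecognizer(frame: loudFrame))  // true
```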
That might be a lot harder than you think. For one, the phone probably would not have enough storage space for all the language files. Two, you would not get any improvements or fixes until the next OS update; with a server, they can update it continuously. Furthermore, I think you underestimate the massive processing power required, not to mention the battery drain.
Microphones do not filter frequencies or distinguish between foreground and background noises.
A microphone 'just' registers a frequency spectrum with a specific dynamic range, nothing more, nothing less.
Interpretation (parsing) of this spectrum is 'calculated' and must be done by a CPU or a dedicated piece of hardware; so registering sound is more or less passive and uses barely any energy, while interpreting it requires processing and costs energy.
I would be surprised if the dynamic range of current microphones isn't enough to parse correctly; it is more likely that the filters used are insufficient, and a human given the same input would understand it perfectly.
One of the reasons to direct Siri to the cloud servers is that they have a lot more computational power and memory than an iPhone, so they can 'crack' the registered sounds much better that way.
Unfortunately, the neural nets used to parse it are unreliable and nonlinear in their learning (meaning more training doesn't necessarily lead to better results) and far from optimal; also, servers bombarded with Siri requests from millions of iPhones may have less computational power to spare per request than your local iPhone.
Something to think about ...
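To put rough numbers on the registering-versus-interpreting point: the dynamic range falls straight out of the capture bit depth, while even a basic spectrum already costs real computation. A toy Swift sketch using a naive DFT, with illustrative values only:

```swift
import Foundation

// "Registering" is just a stream of samples with a fixed dynamic range;
// any interpretation (even a plain spectrum) is work the CPU has to pay for.
// Naive O(n^2) DFT, purely illustrative.

// Dynamic range implied by a given bit depth, in dB.
func dynamicRangeDB(bits: Int) -> Double {
    return 20.0 * log10(pow(2.0, Double(bits)))
}

// Magnitude spectrum of a real signal via a naive DFT.
func magnitudeSpectrum(_ samples: [Double]) -> [Double] {
    let n = samples.count
    return (0..<n).map { k in
        var re = 0.0, im = 0.0
        for (t, x) in samples.enumerated() {
            let angle = -2.0 * Double.pi * Double(k) * Double(t) / Double(n)
            re += x * cos(angle)
            im += x * sin(angle)
        }
        return sqrt(re * re + im * im)
    }
}

print(dynamicRangeDB(bits: 16))   // ~96 dB for 16-bit capture
let tone = (0..<64).map { sin(2.0 * Double.pi * 8.0 * Double($0) / 64.0) }
print(magnitudeSpectrum(tone)[8]) // energy concentrated at bin 8
```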
state of microphone hardware?
That doesn't mean Siri will automatically work for any command in iOS 10 when disconnected from the Internet (so far, it's still down when you're in Airplane mode), but it also doesn't mean that Apple couldn't make certain aspects of Siri work offline. When Apple first incorporated Siri into iOS 5 for the iPhone 4S, they canned their previous iOS voice commands (e.g., "Call Benjamin Frost.") from working in any capacity if Siri was enabled on your system; they could have allowed the old commands to be interpreted first and then passed to Siri if not understood, but they didn't. Perhaps we will start to see a hybrid model that allows for offline and online Siri functionality as early as iOS 11.
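That hybrid is essentially a fallback chain: try a small offline command set first, hand anything unrecognized to the online assistant. A throwaway Swift sketch of the shape I mean, with an entirely made-up offline command set (nothing here reflects how Siri is actually built):

```swift
import Foundation

// Hypothetical fallback chain: try a small offline command set first,
// pass anything unrecognized to the online assistant.
// Names and commands are invented for illustration.
enum VoiceResult {
    case handledLocally(String)
    case sentToServer(String)
}

func handleUtterance(_ text: String, isOnline: Bool) -> VoiceResult {
    let localCommands = ["call", "play", "open"]   // toy offline grammar
    let firstWord = String(text.lowercased().split(separator: " ").first ?? "")
    if localCommands.contains(firstWord) {
        return .handledLocally("Executing '\(text)' on device")
    }
    guard isOnline else {
        return .handledLocally("Sorry, only basic commands work offline")
    }
    return .sentToServer(text)   // full assistant-style processing in the cloud
}

print(handleUtterance("Call Benjamin Frost", isOnline: false))
print(handleUtterance("What's the weather tomorrow?", isOnline: true))
```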
I have also been thinking that the new iPhone and even the Apple Watch 2 would feel a little thin. Hopefully there is something else that hasn't been leaked, like an Apple TV unit better than the Echo and Alexa.
Lastly, I'll really be impressed if October sees new iMacs and MacBooks with Touch ID (not just the MBP that has been leaked all over).