Apple's Siri & rivals being hampered by poor microphone tech

Posted in General Discussion
Apple's Siri -- and other voice assistants from the likes of Amazon, Google, and Microsoft -- are being held back by the state of microphone hardware, a report noted on Thursday.




Since the launch of the iPhone 5 in 2012, microphone technology hasn't advanced significantly, IHS Markit analyst Marwan Boustany explained to Bloomberg. As a result, mics still have difficulty picking up distant voices and filtering out background sounds. Even setting those issues aside, keeping a mic on all the time -- needed for voice triggers like "Hey Siri" -- can consume too much battery.

Companies like Apple are said to be looking to suppliers for better mics to fix these problems, as well as for a higher acoustic overload point (the loudness at which a mic's output begins to distort) that is less likely to be tripped. At the same time, size and power consumption need to be kept under control.

Partly to compensate for poor pickup, device makers have gradually been adding more microphones in recent years. While the first iPhone had a single mic, there are three in the iPhone 6 and four in the iPhone 6s. The Amazon Echo -- a device that needs to hear users from anywhere in a room -- has seven.

Possible solutions include mics with built-in audio processing algorithms, or ones using piezoelectric technology.
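To make that "built-in audio processing" concrete: one classic multi-mic technique is delay-and-sum beamforming, which time-aligns the channels toward the talker so the voice adds up coherently while room noise doesn't. The sketch below is a minimal Swift illustration; the array geometry, sample rate, and steering math are generic textbook assumptions, not details from the report.

    import Foundation

    // Minimal delay-and-sum beamformer for a uniform linear mic array.
    // All parameters are illustrative, not taken from any shipping device.
    struct DelayAndSumBeamformer {
        let sampleRate: Double   // samples per second, e.g. 16_000 for speech
        let micSpacing: Double   // meters between adjacent mics
        let speedOfSound = 343.0 // m/s in room-temperature air

        // Steer toward `angle` (radians from broadside): shift each channel
        // by its arrival-time difference, then average. The talker's voice
        // adds coherently; uncorrelated room noise largely does not.
        func beamform(channels: [[Float]], angle: Double) -> [Float] {
            guard let frameCount = channels.first?.count else { return [] }
            var output = [Float](repeating: 0, count: frameCount)
            for (micIndex, channel) in channels.enumerated() {
                let delaySeconds = Double(micIndex) * micSpacing * sin(angle) / speedOfSound
                let delaySamples = Int((delaySeconds * sampleRate).rounded())
                for i in 0..<frameCount {
                    let j = i - delaySamples
                    if j >= 0 && j < frameCount { output[i] += channel[j] }
                }
            }
            let scale = 1 / Float(channels.count)
            return output.map { $0 * scale }
        }
    }

Summing N aligned channels grows the target voice's amplitude by N while uncorrelated noise grows by only roughly the square root of N, which is one reason four mics in an iPhone 6s, or seven in an Echo, hear distant voices better than one.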

Microphones could become extremely important to Apple if the company is indeed developing an Echo competitor, whether in the form of a standalone product or an upgraded Apple TV. It's unknown how many mics the "iPhone 7" might be equipped with.

Comments

  • Reply 1 of 38
volcan Posts: 1,799 member
    Although there are multiple mics, I'm pretty sure Siri only uses one of them.
  • Reply 2 of 38
Soli Posts: 10,035 member
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
  • Reply 3 of 38
This is one of the few areas where Nokia was ahead pre-Microsoft. They had tech in the mic hardware that allowed for an extreme dynamic range, something that very few devices have these days. Examples of devices with it would be the PureView 808 and Lumia 1020. It allowed you to be at the front of the stage at a large concert, and the loudest sounds would be as clear as the softest sounds without clipping. That's something phones, including Apple's and Samsung's, still get wrong today. Imagine a few more years of development, especially without Microsoft.
  • Reply 4 of 38
volcan Posts: 1,799 member
    Soli said:
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is analyzed by the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result. I always give Siri the first crack because it is just a home button hold, but more often than not I end up launching Google, because much of the time Siri just can't deliver. Siri is excellent at some things though -- setting calendars, reminders, alarms, calling, texting and, surprisingly, getting accurate up-to-the-minute sports scores.
edited August 2016
  • Reply 5 of 38
     "Hey Siri" on my iPad pro 9.7 is pretty bad. I literally have to shout at it from about 2 inches away most of the time or the room has to be dead silent. Now i just use it  to piss off my wife. That's basically the only negative thing about my ipad. 
    edited August 2016
  • Reply 6 of 38
I'm sure Apple is working on their own mic tech and not just waiting for manufacturers to come up with something better.
edited August 2016
  • Reply 7 of 38
knowitall Posts: 1,648 member
    volcan said:
    Soli said:
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is sent to the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result. I always give Siri the first crack because it is just a home button hold, but more often than not I end up launching Google, because much of the time Siri just can't deliver. Siri is excellent at some things though -- setting calendars, reminders, alarms, calling, texting and, surprisingly, getting accurate up-to-the-minute sports scores.
Sound is sent to the server and parsed there, not on your device.
Siri is completely disabled when not connected.
  • Reply 8 of 38
coolfactor Posts: 2,239 member
    knowitall said:
    volcan said:
    Soli said:
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is sent to the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result. I always give Siri the first crack because it is just a home button hold, but more often than not I end up launching Google, because much of the time Siri just can't deliver. Siri is excellent at some things though -- setting calendars, reminders, alarms, calling, texting and, surprisingly, getting accurate up-to-the-minute sports scores.
Sound is sent to the server and parsed there, not on your device.
Siri is completely disabled when not connected.

This is exactly what Apple should've fixed by now... on-device real-time processing of voice, even if it's only the initial stages. A dedicated low-power chip could do that, much like their M-series chips for tracking motion. I call it the V-series chip, and let's hope that Apple is wise enough to implement something like this!
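(To sketch what such a first stage might look like: a trivial always-on gate, the kind of cheap check a low-power coprocessor could run continuously so the expensive recognizer only wakes on speech-like input. Purely illustrative Swift; the threshold and frame size are made-up values, and nothing here is Apple's actual design.)

    import Foundation

    // Toy energy-based voice activity detector: a cheap first-stage check
    // that a low-power coprocessor could run all the time, waking the main
    // CPU (and the full trigger-phrase recognizer) only when needed.
    struct EnergyVAD {
        let rmsThreshold: Float = 0.02 // arbitrary "possible speech" level

        // True if the frame's RMS energy suggests someone is talking.
        func isSpeechLike(_ frame: [Float]) -> Bool {
            guard !frame.isEmpty else { return false }
            let meanSquare = frame.reduce(0) { $0 + $1 * $1 } / Float(frame.count)
            return meanSquare.squareRoot() > rmsThreshold
        }
    }

    // Usage: gate the costly recognizer behind the cheap check.
    let vad = EnergyVAD()
    let frame = [Float](repeating: 0.05, count: 1024) // stand-in mic buffer
    if vad.isSpeechLike(frame) {
        // hand the buffer to the full "Hey Siri"-style keyword model
    }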
  • Reply 9 of 38
volcan Posts: 1,799 member
    knowitall said:

Sound is sent to the server and parsed there, not on your device.
Siri is completely disabled when not connected.
    I was making an assumption that the server parsing the sound was a different server than the one searching the internet for the request.
  • Reply 10 of 38
cali Posts: 3,494 member
    knowitall said:
    volcan said:
    Soli said:
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is sent to the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result. I always give Siri the first crack because it is just a home button hold, but more often than not I end up launching Google, because much of the time Siri just can't deliver. Siri is excellent at some things though -- setting calendars, reminders, alarms, calling, texting and, surprisingly, getting accurate up-to-the-minute sports scores.
Sound is sent to the server and parsed there, not on your device.
Siri is completely disabled when not connected.

This is exactly what Apple should've fixed by now... on-device real-time processing of voice, even if it's only the initial stages. A dedicated low-power chip could do that, much like their M-series chips for tracking motion. I call it the V-series chip, and let's hope that Apple is wise enough to implement something like this!
    Interesting that you bring that up because I've been wondering what Apple could fill the headphone jack space with if they indeed were to remove it.
  • Reply 11 of 38
volcan Posts: 1,799 member

    coolfactor said:

This is exactly what Apple should've fixed by now... on-device real-time processing of voice, even if it's only the initial stages. A dedicated low-power chip could do that, much like their M-series chips for tracking motion. I call it the V-series chip, and let's hope that Apple is wise enough to implement something like this!
That might be a lot harder than you think. For one, the phone probably would not have enough storage space for all the language files. Two, you would not get any improvements or fixes until the next OS update, whereas a server can be updated continuously. Furthermore, I think you probably underestimate the massive processing power required, not to mention the battery drain.
  • Reply 12 of 38
knowitall Posts: 1,648 member
Better not listen to this analyst.
Microphones do not filter frequencies or distinguish between foreground and background noises.
A microphone 'just' registers a frequency spectrum with a specific dynamic range, nothing more, nothing less.
Interpretation (parsing) of this spectrum is 'calculated' and must be done by a CPU or a dedicated piece of hardware; registering sound is more or less passive and uses hardly any energy, while interpreting it requires processing and costs energy.
I would be surprised if the dynamic range of current microphones isn't enough to parse correctly; it is more likely that the filters used are insufficient, and a human given the same input would understand it perfectly.
One of the reasons to direct Siri to the cloud servers is that they have a lot more computational power and memory than an iPhone, so they can 'crack' the registered sounds much better.
Unfortunately, the neural nets used to parse it are unreliable and nonlinear in learning (meaning more training doesn't necessarily lead to better results) and far from optimal; also, servers bombarded with iPhone Siri requests have less computational power per request than your local iPhone.
Something to think about ...
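(For rough numbers behind "a specific dynamic range," my own back-of-envelope, not from the comment above: an ideal capture path yields about 6.02 dB of dynamic range per bit, so 16-bit capture tops out near 96 dB and 24-bit near 144 dB. A quick Swift check of that arithmetic, as an illustration only:)

    import Foundation

    // Back-of-envelope dynamic range of an ideal capture path:
    // ~6.02 dB per bit of resolution.
    func idealDynamicRangeDB(bits: Int) -> Double {
        20 * log10(pow(2, Double(bits)))
    }

    print(idealDynamicRangeDB(bits: 16)) // ~96.3 dB
    print(idealDynamicRangeDB(bits: 24)) // ~144.5 dB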
  • Reply 13 of 38
cali Posts: 3,494 member
    knowitall said:
Better not listen to this analyst.
Microphones do not filter frequencies or distinguish between foreground and background noises.
A microphone 'just' registers a frequency spectrum with a specific dynamic range, nothing more, nothing less.
Interpretation (parsing) of this spectrum is 'calculated' and must be done by a CPU or a dedicated piece of hardware; registering sound is more or less passive and uses hardly any energy, while interpreting it requires processing and costs energy.
I would be surprised if the dynamic range of current microphones isn't enough to parse correctly; it is more likely that the filters used are insufficient, and a human given the same input would understand it perfectly.
One of the reasons to direct Siri to the cloud servers is that they have a lot more computational power and memory than an iPhone, so they can 'crack' the registered sounds much better.
Unfortunately, the neural nets used to parse it are unreliable and nonlinear in learning (meaning more training doesn't necessarily lead to better results) and far from optimal; also, servers bombarded with iPhone Siri requests have less computational power per request than your local iPhone.
Something to think about ...
This is the first thing I thought of when I read that Nokia comment about their microphones. I don't think Nokia's microphones themselves compress sound, avoid clipping, and EQ the whole mix.
  • Reply 14 of 38
revenant Posts: 621 member
    "are being held back by the state microphone hardware"

     state of microphone hardware?


  • Reply 15 of 38
Soli Posts: 10,035 member
    knowitall said:
    volcan said:
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is sent to the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result. I always give Siri the first crack because it is just a home button hold, but more often than not I end up launching Google, because much of the time Siri just can't deliver. Siri is excellent at some things though -- setting calendars, reminders, alarms, calling, texting and, surprisingly, getting accurate up-to-the-minute sports scores.
Sound is sent to the server and parsed there, not on your device.
Siri is completely disabled when not connected.
    Those statements are related, but not necessarily connected. Here's an article where Apple is using localized DNNs for speech recognition in iOS 10.

That doesn't mean Siri will automatically work for any command in iOS 10 when disconnected from the internet (so far, it's still down when you're in Airplane Mode), but it also doesn't mean that Apple couldn't make certain aspects of Siri work offline. When Apple first incorporated Siri into iOS 5 for the iPhone 4S, they blocked their previous iOS voice commands (e.g., "Call Benjamin Frost") from working in any capacity if Siri was enabled on your system; they could have allowed old commands to be interpreted first and then passed to Siri if not understood, but they didn't. Perhaps we will start to see a hybrid model that allows for offline and online Siri functionality as early as iOS 11.
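(For reference, iOS 10 does expose a public Speech framework for app-level transcription, separate from Siri itself. A minimal sketch; the locale and file URL are placeholders, and the framework, not this code, decides whether recognition runs on Apple's servers or locally:)

    import Speech

    // Minimal iOS 10 Speech framework sketch: transcribe a recorded file.
    func transcribe(fileAt url: URL) {
        SFSpeechRecognizer.requestAuthorization { status in
            guard status == .authorized,
                  let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
                  recognizer.isAvailable else { return }

            let request = SFSpeechURLRecognitionRequest(url: url)
            recognizer.recognitionTask(with: request) { result, error in
                if let result = result, result.isFinal {
                    // Final transcript once recognition completes.
                    print(result.bestTranscription.formattedString)
                } else if let error = error {
                    print("Recognition failed: \(error)")
                }
            }
        }
    }

(Note this is app-level speech-to-text, not Siri; it simply shows the recognition plumbing Apple shipped alongside iOS 10, the sort of hook a hybrid offline/online Siri could build on.)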
  • Reply 16 of 38
I'm sure Apple is working on their own mic tech and not just waiting for manufacturers to come up with something better.
I certainly hope so. Given the extremely small size of the microphone, one would think that advances would need to be made in order to really get some good range out of them. On a similar note, I certainly hope that the speaker used in the earpiece slot of the iPhone gets better and louder. I have the thing turned up all the way and it's still difficult to hear in everyday city environments. I guess this is the price we pay for Apple shrinking the thickness of the phone year after year.
  • Reply 17 of 38
Soli Posts: 10,035 member
I'm sure Apple is working on their own mic tech and not just waiting for manufacturers to come up with something better.
I certainly hope so. Given the extremely small size of the microphone, one would think that advances would need to be made in order to really get some good range out of them. On a similar note, I certainly hope that the speaker used in the earpiece slot of the iPhone gets better and louder. I have the thing turned up all the way and it's still difficult to hear in everyday city environments. I guess this is the price we pay for Apple shrinking the thickness of the phone year after year.
    My "held up to the ear" speaker became weaker to the point that even in a quite room it was very difficult to understand what was being said. I took my 6S Plus into the Apple Store and they replaced it under warranty. Now it's very audible, and I'm guessing even better than the original, much thicker iPhone from 2007.
edited August 2016
  • Reply 18 of 38
sdw2001 Posts: 18,015 member
My issue is how Siri functions through CarPlay and the car's mic. It's FAR less accurate than just the phone.
  • Reply 19 of 38
Siri sucks! No matter what situation I am in, it never works, can't find what I need, or can't understand me. I use a BT headset, BT in the car, and talk directly to the phone. Yet when I use Google Now on my iPhone it works without issue: it finds directions, calls a contact, looks up a company, and it doesn't require six attempts. Apple made a great move buying Siri, then was as reckless as a teenager with their new BMW and totaled it. Why even bother buying the company if they stripped out most of the features, so that now we can't do squat with it? Say what you want about Android; at least Google allows you to change the virtual assistant to any other one on Android OS. Apple never will allow that.
  • Reply 20 of 38
k2kw Posts: 2,075 member
    knowitall said:
    volcan said:
    Soli said:
    I believe it. Amazon Echo is amazing with what it accurately understands, and I really don't think Amazon's speech-to-text technology is better than Apple's.
I rarely have any trouble with Siri parsing the words. I can tell because it gets printed to the screen very accurately. The trouble usually happens after that text is sent to the server, where the actual intent is often completely misunderstood. I've never tried Echo, but I find Google is much better than Siri at returning the desired result.
This is exactly what Apple should've fixed by now... on-device real-time processing of voice, even if it's only the initial stages. A dedicated low-power chip could do that, much like their M-series chips for tracking motion. I call it the V-series chip, and let's hope that Apple is wise enough to implement something like this!
Hopefully Apple comes out with a big upgrade to Siri with the iPhone 7. I expect the new phone to have wide color and increased screen resolution (1K and quad HD), hopefully with stereo speakers and more and better microphones. They need a got-to-have-it factor since they are taking away the headphone jack.

I have also been thinking that the new iPhone and even the Apple Watch 2 could feel a little thin. Hopefully there is something else that hasn't been leaked, like an Apple TV unit better than the Echo and Alexa.

Lastly, I'll really be impressed if October sees new iMacs and MacBooks with Touch ID (not just the MBP that has been leaked all over).