Google Home Mini preview hardware defect causes near-constant listening, uploads to server...

Posted:
in General Discussion
The first batch of Google Home Mini hardware distributed at assorted release events had a serious flaw that caused the device to listen in on nearly every noise generated in a house. The issue has since been pinned down to a hardware fault, and a key feature has been temporarily disabled by a firmware update.

A multi-day account from Android Police detailed the author's experience with a Home Mini given out at the Google launch event. After installation, the Google Home Mini turned itself on several times a minute, listened to background noise or the television, and attempted to respond -- generally saying that it didn't understand.

However, following examination of the upload logs, author Artem Russakovskii discovered "thousands of items" uploaded to Google's servers, all attributed to the development codename for the device, "Mushroom."

Google's approach to voice recognition is different from Siri's. Google stores everything users tell the Assistant on its servers unless they explicitly disable the feature. If the feature is disabled, Google says that recognition and parsing suffer.

With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated, but also the few seconds of sound recorded before the actuation.
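This pre-actuation capture is typically implemented with a rolling buffer held on the device: audio is continuously written into a short, fixed-length buffer, and only when a wake word or touch activation fires is the buffered pre-roll attached to what gets uploaded. A minimal sketch in Python -- the chunk names, buffer length, and method names are illustrative assumptions, not Google's actual implementation:

```python
from collections import deque

class PrerollBuffer:
    """Keep only the last few chunks of audio in memory.

    Nothing leaves the device until an activation occurs; at that
    point the buffered pre-roll is prepended to the spoken request.
    """

    def __init__(self, max_chunks=3):
        # deque with maxlen discards the oldest chunk automatically
        self.buffer = deque(maxlen=max_chunks)

    def feed(self, chunk):
        # Called continuously as the microphone produces audio
        self.buffer.append(chunk)

    def on_activation(self, utterance_chunks):
        # The upload payload: buffered pre-roll plus the actual request
        return list(self.buffer) + list(utterance_chunks)

buf = PrerollBuffer(max_chunks=3)
for chunk in ["tv-noise-1", "tv-noise-2", "tv-noise-3", "tv-noise-4"]:
    buf.feed(chunk)

# Only the three most recent chunks of pre-roll survive
payload = buf.on_activation(["ok-google", "what-time-is-it"])
```

Because the buffer is bounded, older sound is discarded continuously; only an activation event causes anything to be retained and sent.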

"The Home Mini quietly turns on, flashes its lights, then shuts off after recording every sound. When the volume increases, it actually attempts to respond to random queries," wrote Russakovskii. "I was even able to get it to turn on just by knocking on the wall."

After the unit was returned at Google's request, the company determined that the touch sensor on the device was faulty -- and that it wasn't an isolated incident. Google rolled out a software update on Oct. 7 that temporarily disables the long-press used to activate Google Assistant.

On Oct. 10, Google declared that the flaw was limited to the 4,000 Home Minis distributed by Google at pop-up events, and to attendees of the launch event. Google says that it has deleted all activity generated by the spurious touch control hardware -- and assures pre-orderers that their hardware is unaffected by the flaw.

AppleInsider contacted Google support on Wednesday morning, and was told that the feature would be enabled for customers who pre-ordered the device, without elaborating further -- suggesting that another firmware update is imminent.

Google unveiled the Google Home Mini at an event on Oct. 4. The pebble-shaped speaker was designed as a lower-cost way to get a Google Home into more homes, and offers 360-degree, room-filling sound, according to Google.

At $49, the Google Home Mini is a direct competitor with Amazon's Echo Dot -- and arrives on Oct. 17. It also significantly undercuts Apple's HomePod, which will launch this December for $349.

Comments

  • Reply 1 of 36
    zroger73
    "Defect", huh?
  • Reply 2 of 36
    zroger73 said:
    "Defect", huh?
    No, a feature. We just aren't ready for Big Brother (a.k.a. Google) to listen to our every move just yet. This feature will probably get enabled again by this time next year.
    Room 101 awaits you!

  • Reply 3 of 36
    macxpress Posts: 5,930 member
    This is exactly why you don't buy Google crap. 
  • Reply 4 of 36
    Rayz2016 Posts: 6,957 member
    I wonder why it needs to record sounds a few seconds before you start speaking. 

    Sounds a bit odd to me. 

    Well, rather than speculate, I’ll just wait for the Gatorsplanation. 
  • Reply 5 of 36
    adm1
    If it were any other company I'd accept it was a mistake, but not google.
  • Reply 6 of 36
    gatorguy Posts: 24,624 member
    zroger73 said:
    "Defect", huh?
    This was discovered because it was relatively quick and easy for Artem to take a look at his Google account to review every voice request it had recorded, and of course the activity indicator lights were a dead giveaway. That made it obvious what was happening. Since it was possible other field units distributed at the event also had a defective tap switch (perhaps fabric stretched too tight across the switch? Dunno but that was one of the guesses) they disabled that hardware button on all of them. Shipping units will supposedly have that fixed before they are available for sale.

    FWIW, for those who don't understand how "always listening" microphones like those used on some Apple, Google and Amazon devices actually work: they "listen" for a keyword. Until then there is no connection to a server, so Apple, Amazon and Google can't actually be "listening"; the device itself uses onboard hardware, waiting for a wake word -- or in the case of some devices a tap of a button -- before requesting a server connection to Apple/Google/Amazon. That button caused the Home problem in the preview units. Good thing that both Google and Apple (and I think Amazon too, but not certain) release a small number of review units in advance of retail versions to help catch stuff like this.
  • Reply 7 of 36
    gatorguy Posts: 24,624 member
    Rayz2016 said:
    I wonder why it needs to record sounds a few seconds before you start speaking. 

    Sounds a bit odd to me. 

    Well, rather than speculate, I’ll just wait for the Gatorsplanation. 
    It's for diagnostics in the event of a failure to recognize and/or properly parse a request. If there is a particular sound or tone that consistently interferes with understanding user activation or action requests it can be "tuned out" via software fixes and updates. For example the new Google Home Max can reportedly recognize the user's voice request spoken at a normal volume from across the room even with music blasting from the speaker. Google's AI understands the difference between the sounds it hears.

    Tuning out extraneous noise is essential to a good user experience and so it's important to discover "noise" that interferes with that. That's why Apple does the same thing, uploading and storing voice requests and related sound samples so that the Siri experience can be improved.
  • Reply 8 of 36
    Rayz2016 Posts: 6,957 member
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
  • Reply 9 of 36
    gatorguy Posts: 24,624 member
    Rayz2016 said:
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
    Common sense would say: On board hardware. Some chip is digitizing, recording and examining a second or so of sound all the time. How else to recognize a keyword since it relies on a word/phrase that was already spoken?

    Another example of the "always listening but not sending to a server" is the song recognition feature in the new Pixels. Yes that on-board microphone is always "listening" but without a connection to Google servers. All the recognition is done directly on the device with no need for any data sent to Google. But if the user then wants to get more information, perhaps read up on the artist, see what else they've got, maybe start a radio station using that song as the basis they can actively request a server connection to accomplish those actions. 

  • Reply 10 of 36
    eightzero Posts: 3,142 member
    I can't find a use for any of this @internetofshit. I see the commercials on TV, and none of it looks interesting or desirable. 

    One notable exception: Siri on CarPlay. In my house? Meh. 

    In another thread, I recounted the number of Apple branded devices in my home. Upwards of 20. Number of google devices? 0 
  • Reply 11 of 36
    Rayz2016 Posts: 6,957 member
    gatorguy said:
    Rayz2016 said:
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
    Common sense would say: On board hardware. Some chip is digitizing, recording and examining a second or so of sound all the time. How else to recognize a keyword since it relies on a word/phrase that was already spoken?

    Another example of the "always listening but not sending to a server" is the song recognition feature in the new Pixels. Yes that on-board microphone is always "listening" but without a connection to Google servers. All the recognition is done directly on the device with no need for any data sent to Google. But if the user then wants to get more information, perhaps read up on the artist, see what else they've got, maybe start a radio station using that song as the basis they can actively request a server connection to accomplish those actions. 

    Right. 

    Yes, I got the part about the “listening”, I just didn’t realise it was recording all the time too. I thought it only recorded stuff when you started speaking. 

    Blimey. 

  • Reply 12 of 36
    brucemc Posts: 1,541 member
    zroger73 said:
    "Defect", huh?
    Got caught...
  • Reply 13 of 36
    maestro64 Posts: 5,043 member

    So a software company discovers that you can't always fix a hardware issue with software, and that the "it's good enough, we can fix it later" business model could be a problem.

    Imagine that!!! Google the greater had to cripple their product so it did not fill up all their hard drives.

  • Reply 14 of 36
    gatorguy Posts: 24,624 member
    Rayz2016 said:
    gatorguy said:
    Rayz2016 said:
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
    Common sense would say: On board hardware. Some chip is digitizing, recording and examining a second or so of sound all the time. How else to recognize a keyword since it relies on a word/phrase that was already spoken?

    Another example of the "always listening but not sending to a server" is the song recognition feature in the new Pixels. Yes that on-board microphone is always "listening" but without a connection to Google servers. All the recognition is done directly on the device with no need for any data sent to Google. But if the user then wants to get more information, perhaps read up on the artist, see what else they've got, maybe start a radio station using that song as the basis they can actively request a server connection to accomplish those actions. 

    Right. 

    Yes, I got the part about the “listening”, I just didn’t realise it was recording all the time too. I thought it only recorded stuff when you started speaking. 

    Blimey. 

    I doubt it goes back to this morning. :) It's not unlike Live Photos. It keeps the prior second of sound as part of the voice activation process diagnostics. 
  • Reply 15 of 36
    StrangeDays Posts: 13,085 member
    gatorguy said:
    Rayz2016 said:
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
    Another example of the "always listening but not sending to a server" is the song recognition feature in the new Pixels. Yes that on-board microphone is always "listening" but without a connection to Google servers. All the recognition is done directly on the device with no need for any data sent to Google. But if the user then wants to get more information, perhaps read up on the artist, see what else they've got, maybe start a radio station using that song as the basis they can actively request a server connection to accomplish those actions. 
    How does the Pixel analyze heard media without talking to a server of audio fingerprints for matching? There’s a local copy of these fingerprints on every Pixel? 
  • Reply 16 of 36
    radarthekat Posts: 3,901 moderator
    zroger73 said:
    "Defect", huh?
    No, a feature. We just aren't ready for Big Brother (a.k.a. Google) to listen to our every move just yet. This feature will probably get enabled again by this time next year.
    Room 101 awaits you!

    BBKit (Big Brother Kit)
  • Reply 17 of 36
    tzeshan Posts: 2,351 member
    adm1 said:
    If it were any other company I'd accept it was a mistake, but not google.
    Yes, Google is god. It cannot do wrong. If Google removed the 3.5 mm jack from Pixel 2, it means god has approved what Apple did to iPhone 7.  Question ended. 
  • Reply 18 of 36
    slurpy Posts: 5,389 member
    eightzero said:
    I can't find a use for any of this @internetofshit I see the commercials on TV, and none of it looks interesting or desirable. 

    One notable exception: Siri on CarPlay. In my house? Meh. 

    In another thread, I recounted the number of Apple branded devices in my home. Upwards of 20. Number of google devices? 0 
    I use Siri for HomeKit tasks in my place dozens of times a day. That's a good use case. 

  • Reply 19 of 36
    lkrupp Posts: 10,557 member
    Isn’t it ironic that those who castigate Apple for bugs in GM releases, those who cackle on about how Apple has too many irons in the fire, those who solemnly pronounce that Apple’s QA is declining, suddenly find themselves in the same position with the object of their worship, Google? 
  • Reply 20 of 36
    gatorguy Posts: 24,624 member
    gatorguy said:
    Rayz2016 said:
    With the feature enabled, Google sends not only what the user says when a keyword is spoken or a device is activated -- but the sound recorded a few seconds before the actuation

    How can it record sounds before it was activated?
    Another example of the "always listening but not sending to a server" is the song recognition feature in the new Pixels. Yes that on-board microphone is always "listening" but without a connection to Google servers. All the recognition is done directly on the device with no need for any data sent to Google. But if the user then wants to get more information, perhaps read up on the artist, see what else they've got, maybe start a radio station using that song as the basis they can actively request a server connection to accomplish those actions. 
    How does the Pixel analyze heard media without talking to a server of audio fingerprints for matching? There’s a local copy of these fingerprints on every Pixel? 
    Yes, there is. Approximately 20,000 of them, which will get updated as new music comes out. All song recognition is done on-device with zero data sent to Google.
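The on-device matching described in the thread -- a local table of compact audio fingerprints consulted without any server round-trip -- can be sketched roughly as follows. The hashing scheme, function names, and database layout here are invented for illustration; a real matcher uses a proper acoustic fingerprint algorithm, not a plain hash:

```python
import hashlib

# Hypothetical on-device database: fingerprint -> (artist, title).
# A real system would ship tens of thousands of compact entries,
# refreshed periodically as new music comes out.
LOCAL_DB = {}

def fingerprint(samples):
    """Reduce an audio window to a short, matchable key.

    Stand-in for a real acoustic fingerprint; here we just quantize
    the samples and hash them deterministically.
    """
    quantized = bytes(abs(s) % 256 for s in samples)
    return hashlib.sha1(quantized).hexdigest()[:12]

def register(samples, artist, title):
    LOCAL_DB[fingerprint(samples)] = (artist, title)

def identify(samples):
    # Pure local dictionary lookup -- no network connection involved
    return LOCAL_DB.get(fingerprint(samples))

register([3, -1, 4, 1, 5, 9], "Example Artist", "Example Song")
match = identify([3, -1, 4, 1, 5, 9])   # found locally
miss = identify([0, 0, 0])              # unknown audio -> None
```

The point of the design is that recognition never requires a server: only a follow-up action (reading about the artist, starting a radio station) triggers a connection.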