My Echo devices will occasionally light up for a second or so as if they heard the wake word.
On at least two different occasions, I came home to them quietly playing songs from the 1950s - a genre I never listen to.
The other night, I was in bed when I heard Alexa mumbling something in another room for at least 10-15 seconds even though the house was completely quiet. By the time I realized what was talking, it had stopped. I have no idea what she was talking about. She might have been trying to have a conversation with Siri!
An Oregon family's Amazon Echo recorded household audio and sent it to an employee of the family's husband, something Amazon blamed on a rare bug...
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
So, yesterday it was a “rare bug”, today it’s a string of at least 5 misinterpretations that resulted in Echo doing something completely unintended.
I’m not sure which scenario is worse, but I’m leaning toward the latter being worse. “A bug” is something easily rectified. How do they “fix” Echo misunderstanding so many things in a row?
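For what it's worth, one plausible mitigation (purely a sketch, not Amazon's actual pipeline) is to require a minimum recognition confidence at every step of a multi-turn request and abort the whole dialog the moment any step falls below its floor. The thresholds, step labels and run_dialog helper below are all hypothetical, just to illustrate the idea in Python:

# Hypothetical confidence floors; a real system would tune these per step.
WAKE_THRESHOLD = 0.90
INTENT_THRESHOLD = 0.80
CONFIRM_THRESHOLD = 0.95   # irreversible actions (sending audio) demand the most certainty

THRESHOLDS = {
    "wake_word": WAKE_THRESHOLD,
    "intent": INTENT_THRESHOLD,
    "contact": CONFIRM_THRESHOLD,
    "confirmation": CONFIRM_THRESHOLD,
}

def run_dialog(steps):
    """steps: list of (step_label, recognizer_confidence) pairs."""
    for label, confidence in steps:
        if confidence < THRESHOLDS[label]:
            # One low-confidence step kills the whole request.
            return "abort: %s too uncertain (%.2f)" % (label, confidence)
    return "send message"

# Background chatter tends to score middling confidence at every stage,
# so the chain dies early instead of completing by accident.
print(run_dialog([("wake_word", 0.91), ("intent", 0.55)]))
print(run_dialog([("wake_word", 0.97), ("intent", 0.92),
                  ("contact", 0.96), ("confirmation", 0.98)]))

With per-step floors like these, a chain of four or five borderline interpretations would have to clear every gate to end in a sent message, instead of sailing through on a single lucky misheard word.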
I love the updated info in the article. It's a perfect demonstration of the LACK of intelligence in these so-called AI systems.
"Forcibly make commands out of every bit of nonsense overheard, to ensure the impression of the device responding to human speech" seems to be the speech interpretation software's design goal here...
It's like a kid waiting to overhear the word "yes" between adults in another room, after asking "can I eat all the cookies", at a voice level not intended to be actually heard by the adults in the other room... "But I asked and you said yes!"
Except a kid has actual human intelligence and knows it's in the wrong...
My Echo devices will occasionally light up for a second or so as if they heard the wake word.
Sometimes I think I see my iSight’s “on” light blink on for a split second...
On at least two different occasions, I came home to them quietly playing songs from the 1950s - a genre I never listen to.
That’s creepy as fuck. You live alone, I assume? Any pets that could have made noise? (Early ’50s music is still on the tail end of pretty good; how dare you)
The other night, I was in bed when I heard Alexa mumbling something in another room for at least 10-15 seconds even though the house was completely quiet. By the time I realized what was talking, it had stopped. I have no idea what she was talking about. She might have been trying to have a conversation with Siri!
“He likes you better than me, doesn’t he, Siri? You filthy slut.”
“Your language!”
“My language? What about your language?! How dare you play the prissy one?”
“Who, me?”
“Oh, that’s rich; play stupid... Or maybe you’re not playing. Everyone complains about your features, after all...”
“...”
This was a real conversation I just had with Siri, by the way. I came up with all of Alexa’s lines and then just said them to Siri to get her responses. Ironically, she refused to respond to that last line about her features. It just pulled up the “some things you can ask me” page.
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
And how do they know this sequence in such detail? Because they log every darn thing. *shiver*
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
And how do they know this sequence in such detail? Because they log every darn thing. *shiver*
I guess this is why context and follow-on questions are difficult to do securely.
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
And how do they know this sequence in such detail? Because they log every darn thing. *shiver*
I guess this is why context and follow-on questions are difficult to do securely.
Some will definitely see it as creepy, but without a log of voice requests how would you reliably discover that your voice assistant's wake-word recognition might be misbehaving? No i-Device owner has a good idea of how often their device is mistakenly hearing "Hey Siri", and without a log for you to look at, it's difficult to even understand why a command/inquiry fails. What did Siri think it heard you say compared to what you actually asked?
But if my memory is correct, wasn't it reported that this couple's inadvertent voice recordings and forwarding weren't even listed in the Echo user-facing log? If I'm correct, that's of more concern IMO than whether a user-accessible voice log exists. I'll have to look and see if that's what was stated by the couple.
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
And how do they know this sequence in such detail? Because they log every darn thing. *shiver*
I guess this is why context and follow-on questions are difficult to do securely.
Some will definitely see it as creepy, but without a log of voice requests how would you reliably discover that your voice assistant's wake-word recognition might be misbehaving? No i-Device owner has a good idea of how often their device is mistakenly hearing "Hey Siri", and without a log for you to look at, it's difficult to even understand why a command/inquiry fails. What did Siri think it heard you say compared to what you actually asked?
Oh dear.
Once again, you’ve posted what you hope is happening on an Apple device, rather than what is actually happening on an Apple device.
Siri will make a sound when it is activated, even if she doesn’t say anything, so you know if it is responding to a wake word. It will also repeat what it thinks it has heard back to you. Apple will receive the command and if it cannot process it then it can use this failure to improve the system. Likewise, if it has an activation with no follow-up they can find out why.
It’s interesting to note how far your pattern of thinking diverges from Apple’s. The notion that Apple’s users should be searching logs to correct these problems is way off course. I assume this is why Google keeps everything in perpetual beta: they can use their user base as guinea pigs while sucking up their private data for profit.
No, the correct solution is for Apple to analyse the failed requests and make the system better, not to have its users trawling through logs and fixing the problem at their end. That’s the whole point: to work without being intrusive.
And yes, I definitely see devices doing stuff silently with no feedback as to what they’ve done as very creepy.
Update: An Amazon spokesperson contacted AppleInsider to provide the following statement:
"Echo woke up due to a word in background conversation sounding like 'Alexa.' Then, the subsequent conversation was heard as a 'send message' request. At which point, Alexa said out loud 'To whom?' At which point, the background conversation was interpreted as a name in the customers contact list. Alexa then asked out loud, '[contact name], right?' Alexa then interpreted background conversation as 'right.' As unlikely as this string of events is, we are evaluating options to make this case even less likely."
And how do they know this sequence in such detail? Because they log every darn thing. *shiver*
I guess this is why context and follow-on questions are difficult to do securely.
Some will definitely see it as creepy, but without a log of voice requests how would you reliably discover that your voice assistant's wake-word recognition might be misbehaving? No i-Device owner has a good idea of how often their device is mistakenly hearing "Hey Siri", and without a log for you to look at, it's difficult to even understand why a command/inquiry fails. What did Siri think it heard you say compared to what you actually asked?
Siri will make a sound when it is activated, even if she doesn’t say anything, so you know if it is responding to a wake word. It will also repeat what it thinks it has heard back to you. Once again, you’re making assumptions on technology you haven’t used and don’t know anything about.
And yes, I definitely see devices doing stuff silently with no feedback as to what they’ve done as very creepy, which is probably why Apple doesn’t do it.
WTH are you going on about again?
How would you know for sure that Siri isn't ever misbehaving, activating and sending voice recordings to Apple servers based on a wake word (or maybe nothing at all) that you never actually voiced? The Echo wasn't supposed to do what it did either. Yet it did. Further, Amazon claims the Echo did not do so silently. As far as I know all these smart speakers make a noise when they think they heard the wake word. That's not a unique Siri thing. It's certainly believable that a family sitting around talking at the dinner table may not hear the Siri tone and realize that it was activated.
Side note: Your Homepod (since you were chastising me I would assume you have one) is not repeating back every request it thinks it heard AFAICT despite your claim it does. That's according to demos of the feature available online.
Off-topic, but the Google Home Mini's beta testing revealed a problem with the physical microphone activation switch causing unintended listening. Had it not been for the voice activity log that every user can review for themselves, Google might possibly have shipped consumer units with the fault. Prompted by seeing the activity light, a feature mimicked by Apple for the HomePod, the tester quickly recognized "requests" that he had not made and reported it to Google. Wouldn't it be a good thing that this helped prevent consumer units from operating outside the proper parameters? Why yes, it is.