Google Assistant voice recordings reviewed by humans include private conversations

Posted in General Discussion
As with Amazon Alexa, some voice snippets uploaded by Google Assistant are being reviewed by human contractors, and reports say they can include sensitive conversations.

Google Home Max


Belgium's public broadcaster, VRT, recently gained access to over 1,000 audio files from a Google contractor tasked with reviewing Assistant audio, according to Wired. The audio was captured from devices such as phones, smart speakers, and security cameras.

Google has acknowledged anonymously transcribing 0.2% of uploads to improve its technology, but recordings made when Assistant is triggered accidentally appear to capture private information. In the case of a couple in Waasmunster, Belgium, audio included their address, their status as grandparents, and the voices of their son and grandchild.

Google's practices may violate the European Union's General Data Protection Regulation (GDPR), scholars suggested. One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases.

A Google spokesperson said that the company has launched an investigation into the contractor mentioned by VRT, on the basis that it violated data security policies.

While Amazon and Google have been subject to the most scrutiny for their voice assistant policies, a technology policy researcher at the Alan Turing Institute in London -- Michael Veale -- has filed a complaint in Ireland regarding Apple's Siri, saying it violates the GDPR because users can't access uploaded audio. Both Amazon and Google let users review and delete voice samples.

According to Veale, Apple has responded by saying that its systems handle data well enough that his recordings don't qualify as personal data.

Comments

  • Reply 1 of 33
gatorguy Posts: 21,106, member
While the recordings supplied for transcription are anonymized and not connected to any accounts, that plainly does not mean that someone with the time, money, sources and a reason to do so could not connect the dots within specific snippets. In this case some contractor illegally supplied anonymous recordings to a journalist (?) for the purpose of attempting to do just that. They were seemingly successful in at least one case where the owner's address was mentioned.

    Anonymous may not mean you cannot be identified given the time and resources. I'd also be shocked if humans were not involved in understanding problematic Siri recordings as well, and yes those are also anonymized and not connected to specific accounts. If "machines" misunderstood what was said or misinterpreted a wake word then using another machine to understand why it happened hardly seems appropriate. 
edited July 11
  • Reply 2 of 33
leeeh2 Posts: 29, member
“A Google spokesperson said that the company has launched an investigation into the contractor mentioned by VRT, on the basis that it violated data security policies.”

    This statement can be interpreted in many ways:

But one interpretation feels like Google murdered someone with a knife, handed the subcontractor the bloody knife, and then accused them of murder.

Another interpretation is that Google is investigating a whistleblower for violating some term in an agreement not to share what Google provided to them.
  • Reply 3 of 33
jungmark Posts: 6,716, member
It's a feature for that more personal touch. Any wonder why I don't have these things in my house?
  • Reply 4 of 33
haml87 Posts: 1, member
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
  • Reply 5 of 33
    It is pretty simple really.

Google (and others) will do anything they can to gather data on each and every one of us. Legal or not, it does not matter. The revelation that they keep all those purchase confirmation emails even if you delete your Gmail account should be proof that they don't care about the legality of the data they have collected.

Stop using Google directly for anything. Starve them of useful data. Delete your Gmail accounts, use a different search engine, etc.

    I don't use Google unless I have no alternative. The same applies to the likes of Amazon.
  • Reply 6 of 33
AppleExposed Posts: 1,520, unconfirmed member
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.

I'm shocked Android users still make excuses for Google.

    And no, I'm not listening to your conversations. I just know you have an iKnockoff by the pathetic excuses you provide for Daddy Google.
  • Reply 7 of 33
AppleExposed Posts: 1,520, unconfirmed member
    It is pretty simple really.

Google (and others) will do anything they can to gather data on each and every one of us. Legal or not, it does not matter. The revelation that they keep all those purchase confirmation emails even if you delete your Gmail account should be proof that they don't care about the legality of the data they have collected.

Stop using Google directly for anything. Starve them of useful data. Delete your Gmail accounts, use a different search engine, etc.

    I don't use Google unless I have no alternative. The same applies to the likes of Amazon.

    The problem is, Google alternatives SUCK.

There's a YouTube backlash right now and people are migrating to BitChute. Besides having a terrible name, that site is hideous.

    Apple needs to enter these spaces now.
  • Reply 8 of 33
MacPro Posts: 18,368, member
    gatorguy said:
While the recordings supplied for transcription are anonymized and not connected to any accounts, that plainly does not mean that someone with the time, money, sources and a reason to do so could not connect the dots within specific snippets. In this case some contractor illegally supplied anonymous recordings to a journalist (?) for the purpose of attempting to do just that. They were seemingly successful in at least one case where the owner's address was mentioned.

    Anonymous may not mean you cannot be identified given the time and resources. I'd also be shocked if humans were not involved in understanding problematic Siri recordings as well, and yes those are also anonymized and not connected to specific accounts. If "machines" misunderstood what was said or misinterpreted a wake word then using another machine to understand why it happened hardly seems appropriate. 

    https://www.cnbc.com/2019/07/11/google-admits-leaked-private-voice-conversations.html


Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

    These conversations are used by companies such as Google and Amazon -- which takes clips from the Amazon Echo -- to improve voice responses from their smart assistants. They are supposed to be kept confidential.

    But Belgian news site VRT said on Wednesday that a contractor provided it with samples of these sound samples, which VRT then used to identify some of the people in the clips. It also examined the sorts of conversations that Google collects when people say "OK Google," into a phone or a Google Home product. Among other things, VRT heard customer addresses. Sources who talked to the publication also described hearing recordings of a woman in distress and people talking about medical conditions.

  • Reply 9 of 33
gatorguy Posts: 21,106, member
    MacPro said:
    gatorguy said:
While the recordings supplied for transcription are anonymized and not connected to any accounts, that plainly does not mean that someone with the time, money, sources and a reason to do so could not connect the dots within specific snippets. In this case some contractor illegally supplied anonymous recordings to a journalist (?) for the purpose of attempting to do just that. They were seemingly successful in at least one case where the owner's address was mentioned.

    Anonymous may not mean you cannot be identified given the time and resources. I'd also be shocked if humans were not involved in understanding problematic Siri recordings as well, and yes those are also anonymized and not connected to specific accounts. If "machines" misunderstood what was said or misinterpreted a wake word then using another machine to understand why it happened hardly seems appropriate. 

    https://www.cnbc.com/2019/07/11/google-admits-leaked-private-voice-conversations.html


Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

    These conversations are used by companies such as Google and Amazon -- which takes clips from the Amazon Echo -- to improve voice responses from their smart assistants. They are supposed to be kept confidential.

    But Belgian news site VRT said on Wednesday that a contractor provided it with samples of these sound samples, which VRT then used to identify some of the people in the clips. It also examined the sorts of conversations that Google collects when people say "OK Google," into a phone or a Google Home product. Among other things, VRT heard customer addresses. Sources who talked to the publication also described hearing recordings of a woman in distress and people talking about medical conditions.

    Did you not read the AI article?
That was all in there, including blue links to the source report (which wasn't CNBC).
    edited July 11
  • Reply 10 of 33
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
  • Reply 11 of 33
gatorguy Posts: 21,106, member
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old and not actionable for preventing or interfering with whatever was heard.
  • Reply 12 of 33
lolliver Posts: 351, member
    gatorguy said:
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old and not actionable for preventing or interfering with whatever was heard.
This would have to be your worst defence of Google ever.
  • Reply 13 of 33
gatorguy Posts: 21,106, member
    lolliver said:
    gatorguy said:
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old and not actionable for preventing or interfering with whatever was heard.
This would have to be your worst defence of Google ever.
Google doesn't need defending in this case from what I can tell. Human input is a vital piece of improving the service, and I'd be shocked if even the privacy-minded Apple isn't using real humans to listen to and transcribe Siri recordings to determine the speaker's actual intent and identify cases where Siri thought it "heard" a wake phrase but did not. A machine would certainly not be dependable for discovering another machine heard wrong and why. Apple is almost assuredly doing the same thing, whether with their own employees or with contracted companies.

EDIT: Just read Apple's white paper on the subject and yes, they too acknowledge having humans transcribing what they hear in a user's recording compared to what Siri thought was said. I would seriously doubt Apple is sharing what they hear with authorities even if a gunshot followed by a scream is heard. Anonymity and user privacy are of the utmost importance.
    edited July 11
  • Reply 14 of 33
lolliver Posts: 351, member
    gatorguy said:
    lolliver said:
    gatorguy said:
    haml87 said:
It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old and not actionable for preventing or interfering with whatever was heard.
This would have to be your worst defence of Google ever.
Google doesn't need defending in this case from what I can tell. Human input is a vital piece of improving the service, and I'd be shocked if even the privacy-minded Apple isn't using real humans to listen to and transcribe Siri recordings to determine the speaker's actual intent and identify cases where Siri thought it "heard" a wake phrase but did not. A machine would certainly not be dependable for discovering another machine heard wrong and why. Apple is almost assuredly doing the same thing, whether with their own employees or with contracted companies.
First you complain that someone's rant is based on source information obtained by a journalist directly from a contractor with first-hand knowledge of the situation, only to go on your own rant based on what you "think" Apple does with no evidence whatsoever? Hypocrite much?


Also, the issues around whether Google has a responsibility to act on what the contractor believes they heard are very complex, and we really don't have enough information to know for sure what the right/moral thing would be in this case, or whether Google already has appropriate policies in place or not. I'm not arguing that.

I'm just saying your weak excuse that if there was a crime being overheard it was in the past and therefore didn't matter anymore is deplorable. If the woman was being attacked as the contractor suspected, then how do we know it wasn't an ongoing situation? Even if it wasn't an ongoing situation, we don't give up on prosecuting crimes because "that was in the past". All crimes were committed in the past. Are you suggesting that unless Google has some sort of Minority Report level of technology then it's irrelevant because the crime has already occurred? That is just ridiculous. There are many factors as to why it may not be possible/appropriate for Google to act on what is overheard by these contractors, but you really missed the mark on this one.


I think it's important to consider all sides of an argument, and as this site can tend to be very pro-Apple your comments do sometimes help to broaden the discussion. Sure, you tend to be overly pro-Google and make comments that ignore sound reasoning on occasion, but then there are plenty who make comments on Apple with the same overly biased attitude towards that company. But this goes beyond a slight bias into absurdity. According to this logic, should we just remove the notion of a statute of limitations on crimes because if anything has occurred in the past it's too late to act on it now anyway?
  • Reply 15 of 33
    lolliver Posts: 351 member
    gatorguy said:
    lolliver said:
    gatorguy said:
    haml87 said:
    It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it, then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
    You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW, the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old stuff, and not actionable for preventing or interfering with whatever was heard. 
    This would have to be your worst defence of Google ever. 


    EDIT: Just read Apple's white paper on the subject, and yes, they too acknowledge having humans transcribing what they hear in a user's recording compared to what Siri thought was said. I would seriously doubt Apple is sharing what they hear with authorities even if a gunshot followed by a scream is heard. Anonymity and user privacy are of utmost importance. 
    Glad you are actually researching, even if it is in hindsight. But again, my response was about your lame defence of Google in relation to the recorded conversation being "old stuff" that didn't matter anymore. It's interesting that when you get called out on something like this (as I have done in the past with other arguments of yours) you try to ignore what you have actually been called out on. You then respond over and over again, not addressing the initial point until several responses later. 

    Can we just skip that whole process this time and admit that stating this was "old stuff" was neither relevant to the conversation nor a moral/appropriate argument to make? 
  • Reply 16 of 33
    gatorguy Posts: 21,106 member
    lolliver said:
    gatorguy said:
    lolliver said:
    gatorguy said:
    haml87 said:
    It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it, then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
    You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW, the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old stuff, and not actionable for preventing or interfering with whatever was heard. 
    This would have to be your worst defence of Google ever. 
    Google doesn't need defending in this case from what I can tell. Human input is a vital piece of improving the service, and I'd be shocked if even the privacy-minded Apple isn't using real humans to listen to and transcribe Siri recordings to determine the speaker's actual intent and identify cases where Siri thought it "heard" a wake phrase but did not. A machine would certainly not be dependable for discovering that another machine heard wrong, and why. Apple is almost assuredly doing the same thing, whether with their own employees or with contracted companies. 
    ...the issues around whether Google has a responsibility to act on what the contractor believes they heard are very complex, and we really don't have enough information to know for sure what the right/moral thing would be in this case, or whether Google already has appropriate policies in place. I'm not arguing that. 

    I'm just saying your weak excuse that if there was a crime being overheard it was in the past and therefore didn't matter anymore is deplorable. If the woman was being attacked as the contractor suspected, then how do we know it wasn't an ongoing situation? Even if it wasn't an ongoing situation, we don't give up on prosecuting crimes because "that was in the past". All crimes were committed in the past. Are you suggesting that unless Google has some sort of Minority Report level of technology then it's irrelevant because the crime has already occurred? That is just ridiculous. There are many factors as to why it may not be possible/appropriate for Google to act on what is overheard by these contractors, but you really missed the mark on this one...
    Apple has acknowledged using human curators to determine the intent of certain Siri recordings and whether a person's voice was accurately heard and appropriately handled. Do you believe, based on Apple's vaunted protection of users' privacy at (nearly) all cost, that they would voluntarily pivot to sending private voice recordings to authorities for investigation into possible crimes? Would that be deplorable of Apple as well if they do not? I think that's speaking directly to the crux of the matter, as you requested. 
    edited July 11
  • Reply 17 of 33
    lolliver Posts: 351 member
    gatorguy said:
    lolliver said:
    gatorguy said:
    lolliver said:
    gatorguy said:
    haml87 said:
    It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it, then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
    You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW, the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old stuff, and not actionable for preventing or interfering with whatever was heard. 
    This would have to be your worst defence of Google ever. 
    Google doesn't need defending in this case from what I can tell. Human input is a vital piece of improving the service, and I'd be shocked if even the privacy-minded Apple isn't using real humans to listen to and transcribe Siri recordings to determine the speaker's actual intent and identify cases where Siri thought it "heard" a wake phrase but did not. A machine would certainly not be dependable for discovering that another machine heard wrong, and why. Apple is almost assuredly doing the same thing, whether with their own employees or with contracted companies. 
    ...the issues around whether Google has a responsibility to act on what the contractor believes they heard are very complex, and we really don't have enough information to know for sure what the right/moral thing would be in this case, or whether Google already has appropriate policies in place. I'm not arguing that. 

    I'm just saying your weak excuse that if there was a crime being overheard it was in the past and therefore didn't matter anymore is deplorable. If the woman was being attacked as the contractor suspected, then how do we know it wasn't an ongoing situation? Even if it wasn't an ongoing situation, we don't give up on prosecuting crimes because "that was in the past". All crimes were committed in the past. Are you suggesting that unless Google has some sort of Minority Report level of technology then it's irrelevant because the crime has already occurred? That is just ridiculous. There are many factors as to why it may not be possible/appropriate for Google to act on what is overheard by these contractors, but you really missed the mark on this one...
    Apple has acknowledged using human curators to determine the intent of certain Siri recordings and whether a person's voice was accurately heard and appropriately handled. Do you believe, based on Apple's vaunted protection of users' privacy at (nearly) all cost, that they would voluntarily pivot to sending private voice recordings to authorities for investigation into possible crimes? Would that be deplorable of Apple as well if they do not? I think that's speaking directly to the crux of the matter, as you requested. 
    No, it's not speaking to the crux of the matter. You keep trying to bring it back to whether Google (or Apple, as you later brought into the discussion) should or shouldn't be required to disclose this information. As I stated right from the beginning, that is a complex question and not one I was attempting to answer based on the limited information provided in the original story. I simply stated that saying the recorded call would only contain "old stuff" implied that it didn't matter what may or may not have happened to the woman, as it was in the past. This is what I took issue with.

    You continue to ignore/deflect from what you are being called out for, which unfortunately is your usual approach. I never said it was deplorable for Google not to send the information to the police, as you are now trying to imply I did. In fact, I said we don't know what policies Google already has in place around this. I acknowledged right from the start that it's a complex issue, and yet you continue to try and argue with me about points I never made while ignoring the comment I did make. Or even worse, trying to twist it into something I never said. 

    I'm not trying to attack Google or compare them to Apple, although you seem to be very preoccupied with the whole Apple vs Google battle. I'm simply saying that the fact the recorded call occurred in the past and only contained "old stuff" is not a reason for it to be dismissed. You have come up with a lot more intelligent talking points after your initial knee-jerk reaction but still will not address what you initially said. 

    It speaks volumes that you still continue to ignore the one critique I raised at the very beginning. 
  • Reply 18 of 33
    cropr Posts: 961 member
    gatorguy said:
    Google doesn't need defending in this case from what I can tell.

    I've read the original Dutch article, which has a lot more comments and references than the translated English article, and if it is correct then Google clearly violated the GDPR. The Belgian privacy commission has already opened an inquiry and is letting people make a formal complaint if they feel that Google has violated their privacy. 

    If you think that Google does not need to defend itself, you are naive. The fines for GDPR violations are huge.
  • Reply 19 of 33
    gatorguy Posts: 21,106 member
    lolliver said:
    gatorguy said:
    lolliver said:
    gatorguy said:
    lolliver said:
    gatorguy said:
    haml87 said:
    It’s really surprising that people are shocked by this. People who are involved in the design and development of the product need to know it’s working and also need to find ways to improve it. If companies only relied on customer feedback on how to improve it, then improvements wouldn’t be as quick. They need to be proactive, and listening to recordings enables them to do this.
    To an extent, I agree with you. But the general public are often surprised when they discover more details about how something works - a lot of the time, when something provides a feature, I don't care much how it works under the hood because I assume that the process follows the dictates of my personal morality. Should I discover that a different moral standard is being applied, I worry about it and seek alternatives.

    One of the concerns I have about Google is that it appears to be decidedly amoral - it treats data as data and seeks understanding without applying judgement. In a lot of situations, this is fine. But as this article and the source show, the real world contains a lot of ambiguity and judgement needs to be applied so that our society continues to function. The most disturbing line to me was:

    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but also knows that there is a moral obligation to help someone in serious trouble. It's tempting to place blame on the worker, but we don't know how certain he was about what had occurred, how afraid he was of affecting his employment, and a host of other factors. He made the judgement call that it was somebody else's decision but he didn't know how to escalate it.

    So, why does Google not have a procedure for dealing with situations like this? Every adult on the planet knows that this sort of thing occurs (as much as we wish it wouldn't), how does the management at Google treat this as a low priority issue or refuse to acknowledge that it is relevant?

    On the one hand, we see Google argue that their mission is merely to organise the world's information: OK, fine, if that's the activity being undertaken then that clearly means there is no obligation to act, regardless of the information being managed. But on the other hand, Google is using the analysis of the data to provide services - that too is fine, taken in separation, but it clearly negates the neutrality implied by the stated mission. If you want to collect and analyse ever more intrusive amounts of information and then act on it, you need to act in a manner that benefits society as a whole - do that and you've earned the right to make a profit. If not, then you're a surveillance mechanism that will be resisted to the best of our ability.

    <end rant>
    You're basing your "rant" on a journalist's second-hand report of what a single former contract worker had to say: that Google didn't have any policies in place on how certain things were to be dealt with. Was he/she even in a position to know? FWIW, the contractors weren't listening to live streams anyway. Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old stuff, and not actionable for preventing or interfering with whatever was heard. 
    This would have to be your worst defence of Google ever. 
    Google doesn't need defending in this case from what I can tell. Human input is a vital piece of improving the service, and I'd be shocked if even the privacy-minded Apple isn't using real humans to listen to and transcribe Siri recordings to determine the speaker's actual intent and identify cases where Siri thought it "heard" a wake phrase but did not. A machine would certainly not be dependable for discovering that another machine heard wrong, and why. Apple is almost assuredly doing the same thing, whether with their own employees or with contracted companies. 
    ...the issues around whether Google has a responsibility to act on what the contractor believes they heard are very complex, and we really don't have enough information to know for sure what the right/moral thing would be in this case, or whether Google already has appropriate policies in place. I'm not arguing that. 

    I'm just saying your weak excuse that if there was a crime being overheard it was in the past and therefore didn't matter anymore is deplorable. If the woman was being attacked as the contractor suspected, then how do we know it wasn't an ongoing situation? Even if it wasn't an ongoing situation, we don't give up on prosecuting crimes because "that was in the past". All crimes were committed in the past. Are you suggesting that unless Google has some sort of Minority Report level of technology then it's irrelevant because the crime has already occurred? That is just ridiculous. There are many factors as to why it may not be possible/appropriate for Google to act on what is overheard by these contractors, but you really missed the mark on this one...
    Apple has acknowledged using human curators to determine the intent of certain Siri recordings and whether a person's voice was accurately heard and appropriately handled. Do you believe, based on Apple's vaunted protection of users' privacy at (nearly) all cost, that they would voluntarily pivot to sending private voice recordings to authorities for investigation into possible crimes? Would that be deplorable of Apple as well if they do not? I think that's speaking directly to the crux of the matter, as you requested. 
    No, it's not speaking to the crux of the matter. You keep trying to bring it back to whether Google (or Apple, as you later brought into the discussion) should or shouldn't be required to disclose this information. As I stated right from the beginning, that is a complex question and not one I was attempting to answer based on the limited information provided in the original story. I simply stated that saying the recorded call would only contain "old stuff" implied that it didn't matter what may or may not have happened to the woman, as it was in the past. This is what I took issue with.

    You continue to ignore/deflect from what you are being called out for, which unfortunately is your usual approach. I never said it was deplorable for Google not to send the information to the police, as you are now trying to imply I did. In fact, I said we don't know what policies Google already has in place around this. I acknowledged right from the start that it's a complex issue, and yet you continue to try and argue with me about points I never made while ignoring the comment I did make. Or even worse, trying to twist it into something I never said. 

    I'm not trying to attack Google or compare them to Apple, although you seem to be very preoccupied with the whole Apple vs Google battle. I'm simply saying that the fact the recorded call occurred in the past and only contained "old stuff" is not a reason for it to be dismissed. You have come up with a lot more intelligent talking points after your initial knee-jerk reaction but still will not address what you initially said. 

    It speaks volumes that you still continue to ignore the one critique I raised at the very beginning. 
    @lolliver
     My apologies if I misunderstood the point you were attempting to make when you wrote:
    "One worker screening audio said he encountered a recording in which it sounded like a woman was being physically attacked, but that Google didn't have clear guidelines on what to do in such cases."

    This is a sign of dysfunction. Firstly, there's a conflict experienced by the worker: he knows that sharing this personal information is contrary to the contractual obligation with Google, but (he) also knows that there is a moral obligation to help someone in serious trouble."

    I read that as a concern on your part that an active crime might be underway and no one was helping. My response to you was that it was not a current conversation being transcribed but one from the past, saying and I quote:
    "Whatever was heard in those short snippets they were tasked with transcribing was almost certainly old stuff and not actionable for preventing or interfering with whatever was heard." 

    What wasn't accurate based on what I understood you to be saying? There was nothing that could be done at that time to prevent or interfere with the possible crime taking place. Now if you want to question whether Google (or Apple or Amazon) should report the old snippet to the police for investigation that's an entirely separate conversation. 
    edited July 12
  • Reply 20 of 33
    gatorguy Posts: 21,106 member
    cropr said:
    gatorguy said:
    Google doesn't need defending in this case from what I can tell.

    I've read the original Dutch article, which has a lot more comments and references than the translated English article, and if it is correct then Google clearly violated the GDPR. The Belgian privacy commission has already opened an inquiry and is letting people make a formal complaint if they feel that Google has violated their privacy. 

    If you think that Google does not need to defend itself, you are naive. The fines for GDPR violations are huge.
    IMO Google doesn't need defending any more than Apple does, both being looked at for the same failure to clearly disclose the fact that humans are listening to ANONYMIZED voice snippets. Apple's is actually an active and official Irish investigation. The Belgians appear to be inquiring about Google at this point, but yes, they might start an official investigation too. In neither case do I think Apple or Google intentionally, or even inadvertently, thumbed their nose at EU privacy laws.

    There is a difference in the amount of identifiable information supplied to those human transcribers between Amazon and Google/Apple, according to reports. The former attached the customer's first name, gender, account number, and device serial number to the transcriber's screen. Both Apple and Google use only anonymized, numbered snippets for the human transcription process. 
    edited July 12