Mark Zuckerberg may have lied to Congress about who can see WhatsApp messages

Posted in General Discussion, edited September 7
The Facebook-owned WhatsApp regularly boasts of using end-to-end encryption and keeping communications between users private, but a report alleges that some monitoring of messages does take place, and that Mark Zuckerberg may not have told the truth to the U.S. Senate.




In 2016, WhatsApp announced it had enabled end-to-end encryption for all communications on its platform, covering everything from messages to file transfers. End-to-end encryption is intended to offer users a level of privacy and security, but the report suggests the picture is more complicated than the messaging app lets on.

In a report by ProPublica, it is claimed that WhatsApp employs more than 1,000 contract workers in Austin, Texas, Dublin, and Singapore, specifically to examine "millions of pieces of users' content." The workers "use special Facebook software" to review messages and content that have been flagged by WhatsApp users and screened by AI systems.

The reviews occur in spite of an assurance that appears in the app before users send messages for the first time, claiming "No one outside of this chat, not even WhatsApp, can read or listen to them."

In 2018 testimony to the U.S. Senate, Facebook CEO Mark Zuckerberg claimed "We don't see any of the content in WhatsApp."

The report adds that the claims are bolstered by a whistleblower complaint filed with the U.S. Securities and Exchange Commission in 2020. The complaint detailed WhatsApp's use of external contractors, AI systems, and account information to monitor user messages, images, and videos, and alleged that WhatsApp's claims about protecting user privacy are false.

"We haven't seen this complaint," said a WhatsApp spokesperson. The SEC has also, so far, not acted on the complaint in public.

While the claims of message monitoring suggest WhatsApp's end-to-end encryption is not the complete privacy guarantee it appears to be, there is still some truth to the company's privacy credentials.

As the content is encrypted, automated systems cannot scan it by default. Instead, when a user reports a message, that message and the four preceding ones, along with any supporting content, are sent to WhatsApp in unencrypted form.

In effect, end-to-end encryption is maintained with no backdoor in the encryption itself, but the WhatsApp client can pass information out at either end of the conversation.
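The mechanism described above can be sketched in miniature. This is a hypothetical toy model, not WhatsApp's actual protocol or code; the `Client` class and its methods are illustrative assumptions. The structural point it shows: the server never decrypts anything, yet a recipient's deliberate report forwards the flagged message plus the four before it in readable form.

```python
# Toy model of report-based moderation alongside E2EE (not WhatsApp's
# real implementation). Plaintext exists only on the clients; a report
# is the client re-sending plaintext it already holds.

from dataclasses import dataclass, field

@dataclass
class Client:
    # Decrypted message history on this device, oldest first.
    history: list = field(default_factory=list)

    def receive(self, plaintext: str) -> None:
        # After E2EE decryption, plaintext lives only on this device.
        self.history.append(plaintext)

    def report(self, n_context: int = 4) -> list:
        # Reporting sends the flagged (most recent) message plus up to
        # n_context prior messages to the moderators in readable form.
        # The encryption itself is never broken; the endpoint chooses
        # to disclose what it can already read.
        return self.history[-(n_context + 1):]

recipient = Client()
for msg in ["hi", "a", "b", "c", "d", "spam offer!!"]:
    recipient.receive(msg)

reported = recipient.report()
print(reported)  # the flagged message and the four before it
```

The design choice this illustrates is why both sides of the argument can be right: no ciphertext is ever decrypted by the server, but moderators still end up reading messages, because an endpoint volunteered them.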

In a statement, WhatsApp said it builds the app "in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive." WhatsApp also emphasized the trust and safety team, the work of security experts, and its introduction of new privacy features.

This is not the first time WhatsApp has had to deal with allegations surrounding its encrypted messaging service. In 2017, it was claimed that a backdoor had been discovered that allowed Facebook to see the contents of encrypted messages. At the time, WhatsApp denied any backdoor was in use.

In 2021, privacy policy changes caused another headache for WhatsApp, with the company updating its terms so that business chat logs could be stored on Facebook servers. Users were wary of the change, seeing it as a grab by Facebook for personal data.

Updated at 11:36 A.M. Eastern: Facebook reached out to AppleInsider shortly after publication to reiterate points that we already made in the article. In turn, we asked how WhatsApp's moderation squares with Zuckerberg's testimony before Congress in 2018.

"WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat," Facebook said in a return email. "This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption."


Comments

  • Reply 1 of 14
    crowley Posts: 8,888 member
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
  • Reply 2 of 14
    MplsP Posts: 3,356 member
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    Except lying to Congress under oath is a felony…
  • Reply 3 of 14
    So if I forward a screen snap of my WhatsApp conversation to somebody else, does that mean that Mark Z is lying to Congress under oath and is committing a felony?
  • Reply 4 of 14
    That’s why I deleted WhatsApp, Instagram, and Facebook accounts and apps from all my devices. Never regretted this move.
  • Reply 5 of 14
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    The whole idea with end-to-end encryption is that the content is only available between the communicating parties, so WhatsApp and Facebook should not have access to the content.
  • Reply 6 of 14
    gatorguy Posts: 23,252 member
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    The whole idea with end-to-end encryption is that the content is only available between the communicating parties, so WhatsApp and Facebook should not have access to the content.
    In a followup on another site, it's being reported that the messages Facebook sees are actually provided by the recipient when they complain about a spammy, vulgar, or offensive message. It's not Facebook or WhatsApp seeing them for themselves using some backdoor. The E2EE claim appears to be truthful.
  • Reply 7 of 14
    MplsP said:
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    Except lying to Congress under oath is a felony…
    …that never gets punished.
  • Reply 8 of 14
    hexclock said:
    MplsP said:
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    Except lying to Congress under oath is a felony…
    …that never gets punished.
    Sometimes it does. The government filed charges against Roger Clemens for lying to Congress during baseball's steroid scandal, though Clemens was ultimately found not guilty.
  • Reply 9 of 14
    MplsP said:
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    Except lying to Congress under oath is a felony…
    Sure. And no one lied, because no one reads end-to-end encrypted messages.
    Facebook only reads the reports that users send. The same would be true if I took a screenshot of a message and posted it on a forum: everyone could read the screenshot, but the original message would stay private unless it were shared in some form, such as a screenshot or a report.
  • Reply 10 of 14
    urahara said:
    MplsP said:
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    Except lying to Congress under oath is a felony…
    Sure. And no one lied. Because no one reads end-to-end encrypted messages. 
    Facebook only reads reports the users send. The same would be true if I made a screenshot of the message and posted it on a forum - now everyone can read the information on the screenshot, but not the original message (unless it is shared as e.g. screenshot or report etc.).
    Exactly. Computer security, when you distill it down to its core, is all about trust. Do you trust that the technology was developed well? Do you trust the claims about using e2e encryption? Do you trust that they are using industry-standard algorithms? Do you trust the other parties on the other end not to disseminate the messages imparted to them after they have been decrypted?
  • Reply 11 of 14
    crowley Posts: 8,888 member
    crowley said:
    So it only gets triggered when one of the conversation participants reports the contact?  That seems entirely in line with expectations, what else would a user expect to happen when they report content?  If the complaint is that the user who gets reported should have their privacy respected then that seems decidedly bogus to me.

    No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
    The whole idea with end-to-end encryption is that the content is only available between the communicating parties, so WhatsApp and Facebook should not have access to the content.
    And they don’t. Except when a user takes a specific deliberate action to report a contact. The user could just as easily take a screenshot of the conversation and send it to Mark Zuckerberg.

    I think the spirit of what Mark Z said to Congress was true; Facebook and WhatsApp have no ability to snoop on private conversations. The fact that conversations can be user forwarded to them doesn’t change that or make him a liar. 
  • Reply 12 of 14
    Of course, the only thing he does is lie.
  • Reply 13 of 14
    Of course, the only thing he does is lie.
    Except, as @Crowley and a few others have pointed out multiple times in this thread, he hasn't lied on this specific topic.
  • Reply 14 of 14
    gatorguy Posts: 23,252 member
    ProPublica's original hit piece, which led to an uproar over nothing, first said this:

    "An assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

    Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute […]

    Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment."


    But then today ProPublica says this:

    "A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption."

    Unintended confusion? Seriously?

    Of course, that's AFTER some number of readers will have only seen ProPublica claiming Facebook was lying about E2EE, and other articles around the web parroting it. Writers need to be far more careful about fluffing up claims, or even spreading intentional misinformation, for no reason other than attracting eyeballs.
