Mark Zuckerberg may have lied to Congress about who can see WhatsApp messages
The Facebook-owned WhatsApp regularly boasts of using end-to-end encryption and keeping communications between users private, but a report alleges that some monitoring of messages does take place, and that Mark Zuckerberg may not have told the truth to the U.S. Senate.
In 2016, WhatsApp announced it was using end-to-end encryption for all communications on its platform, covering everything from messages to file transfers. The use of end-to-end encryption is intended to offer users a level of privacy and security, but it seems that may not be true for the messaging app.
In a report by ProPublica, it is claimed WhatsApp employs more than 1,000 contract workers in Austin, Texas, Dublin, and Singapore, specifically to examine "millions of pieces of users' content." The workers "use special Facebook software" to look through messages and content that have been flagged by WhatsApp users and then screened by AI systems.
The reviews occur in spite of an assurance that appears in the app before users send messages for the first time, claiming "No one outside of this chat, not even WhatsApp, can read or listen to them."
In 2018 testimony to the U.S. Senate, Facebook CEO Mark Zuckerberg claimed, "We don't see any of the content in WhatsApp."
The report adds that the claims are bolstered by a whistleblower complaint filed with the U.S. Securities and Exchange Commission in 2020. The complaint detailed WhatsApp's use of external contractors, AI systems, and account information to monitor user messages, images, and videos, and alleged that WhatsApp's claims about protecting user privacy are false.
"We haven't seen this complaint," said a WhatsApp spokesperson. The SEC has also, so far, not acted on the complaint in public.
While the claims of message monitoring suggest WhatsApp's end-to-end encryption may not be as airtight as advertised, there is still some truth to its privacy credentials.
As content is encrypted, automated systems cannot scan it by default. Instead, when a user reports a message, that message and the four preceding ones, along with supporting content, are sent to WhatsApp in unencrypted form.
In effect, end-to-end encryption is maintained with no backdoors in the encryption itself, but WhatsApp's app is able to leak the information out at either end of the communication.
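The mechanism described above can be illustrated with a short sketch. This is a hypothetical model, not WhatsApp's actual implementation: all class names, methods, and the context-window size are assumptions drawn only from the article's description that a report forwards the flagged message plus four preceding ones.

```python
# Hypothetical sketch of how client-side abuse reporting can coexist with
# end-to-end encryption. Names and structure are illustrative only and do
# not reflect WhatsApp's real code.

from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str  # plaintext, as decrypted and stored on the recipient's device


class ChatClient:
    """Each endpoint holds decrypted message history locally; during
    normal operation the server only ever relays ciphertext."""

    REPORT_CONTEXT = 5  # the reported message plus four preceding ones

    def __init__(self) -> None:
        self.history: list[Message] = []

    def receive(self, msg: Message) -> None:
        # In a real E2EE app, ciphertext would be decrypted here with keys
        # held only by the endpoints; the server cannot read it in transit.
        self.history.append(msg)

    def report(self, index: int) -> list[Message]:
        # Reporting forwards the flagged message and its recent context to
        # the moderation service in plaintext. The encryption itself is
        # never broken: the plaintext leaves from an endpoint that
        # already held it in readable form.
        start = max(0, index - (self.REPORT_CONTEXT - 1))
        return self.history[start : index + 1]
```

The point the sketch makes is the one in the paragraph above: nothing in the encrypted channel is weakened, but once a message is decrypted at an endpoint, that endpoint can forward the plaintext anywhere, including to the provider.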
In a statement, WhatsApp said it builds the app "in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive." WhatsApp also emphasized the trust and safety team, the work of security experts, and its introduction of new privacy features.
This is not the first time WhatsApp has had to deal with allegations surrounding its encrypted messaging service. In 2017, there were claims a backdoor had been discovered that allowed Facebook to see the contents of encrypted messages. At the time, WhatsApp denied there was a backdoor in use.
In 2021, privacy policy changes caused another headache for WhatsApp, as the company began allowing business chat logs to be stored on Facebook servers. Users were wary of the change, insisting it was a grab by Facebook for personal data.
Updated at 11:36 A.M. Eastern: Facebook reached out to AppleInsider shortly after publication to reiterate points that we already made in the article. In turn, we asked how WhatsApp's moderation squares with Zuckerberg's testimony before Congress in 2018.
"WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat," Facebook said in a return email. "This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption."
Comments
No fan of Facebook or WhatsApp, but I don't see anything to be concerned about here, save perhaps the use of external contractors.
Facebook only reads reports the users send. The same would be true if I took a screenshot of the message and posted it on a forum: now everyone can read the information on the screenshot, but not the original message (unless it too is shared, e.g. as a screenshot or report).
I think the spirit of what Mark Z said to Congress was true; Facebook and WhatsApp have no ability to snoop on private conversations. The fact that conversations can be user forwarded to them doesn’t change that or make him a liar.
"An assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute […]
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment."
But then today ProPublica says this:
"A previous version of this story caused unintended confusion about the extent to which WhatsApp examines its users’ messages and whether it breaks the encryption that keeps the exchanges secret. We’ve altered language in the story to make clear that the company examines only messages from threads that have been reported by users as possibly abusive. It does not break end-to-end encryption."
Unintended confusion? Seriously?
Of course that's AFTER some number of readers will have only seen ProPublica claiming Facebook was lying about E2EE, with other articles around the web parroting it. Writers need to be far more careful about fluffing up claims, and even spreading intentional misinformation, for no reason other than attracting eyeballs.