Apple's Federighi says child protection message was 'jumbled,' 'misunderstood'
Craig Federighi has said that Apple was wrong to release three child protection features at the same time, and wishes the confusion had been avoided.
Apple's Craig Federighi talking to Joanna Stern of the Wall Street Journal
Apple's senior vice president of software engineering, Craig Federighi, has spoken out about the controversy over the company's new child protection features. In detailing how the iCloud Photos and Messages features differ, he confirms what AppleInsider previously reported.
But he also talked about how the negative reaction has been received within Apple.
"We wish that this had come out a little more clearly for everyone because we feel very positive and strongly about what we're doing, and we can see that it's been widely misunderstood," he told the Wall Street Journal in an interview.
"I grant you, in hindsight, introducing these two features at the same time was a recipe for this kind of confusion," he continued. "It's really clear a lot of messages got jumbled up pretty badly. I do believe the soundbite that got out early was, 'oh my god, Apple is scanning my phone for images.' This is not what is happening."
Federighi then stepped through the processes involved in both the iCloud Photos scanning of uploaded CSAM (Child Sexual Abuse Material) images and what happens when a child is sent such an image over Messages.
Criticisms of the features centered on the perception that Apple was analyzing photos on users' iPhones. "That's a common but really profound misunderstanding," said Federighi.
"This is only being applied as part of a process of storing something in the cloud," he continued. "This isn't some processing running over the images you store in Messages, or Telegram... or what you're browsing over the web. This literally is part of the pipeline for storing images in iCloud."
Asked whether there had been pressure to release this feature now, Federighi said there hadn't.
"No, it really came down to... we figured it out," he said. "We've wanted to do something but we've been unwilling to deploy a solution that would involve scanning all customer data."
Comments
It is just like the 1,000 gas-powered cars that catch on fire and get not a word of national attention, but one Tesla goes up in flames and there's a worldwide news bulletin like it's the end of the world.
- Mass surveillance of a billion iPhone users for what – now that every criminal has been warned?
- On the device so that every security scientist knows what happens – no, they don't know if there is more in iCloud
- Since it is on the device, it looks like a first step; the second step could be a neural network detecting images
To reiterate: after buying a new iPhone every year since 2007, I will not update to iOS 15 and will not buy an iPhone 13 Pro until this is sorted out. The same applies to macOS Monterey.
On the other hand, Apple prides itself on protecting the privacy of its customers. A key selling point of buying into the Apple ecosystem is that Apple does everything it possibly can to protect your data. It even fights court orders requiring it to add back doors to the iPhone's local encryption.
Under Apple's new policy, every image you upload to your iCloud library will be scanned and compared against a list of blacklisted images. If you have too many blacklisted images, you will be reported to the authorities.
Initially, the blacklist will only contain child porn images. I can easily imagine a narcissistic leader ordering Apple to add to that list images which are critical of the government. Imagine a photo of a President that makes him look foolish, shows him in a compromising position, or reveals a public statement to be a lie. Such a President would have a strong incentive to add these photos to the list. Remember, Apple doesn't know what the blacklisted photos look like; Apple only has digital fingerprints of these images (it would be illegal for Apple to possess child porn, even if it were for a good cause).
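For what it's worth, the reporting step described here is threshold-based rather than triggered by a single match. A rough Swift sketch of that behaviour follows; the threshold value and function names are placeholders assumed for illustration, not Apple's published design.

```swift
// Rough sketch of threshold-based flagging: no single match triggers anything;
// an account is only surfaced for review once the count of matching images
// crosses a threshold. The threshold value here is assumed purely for
// illustration.
let reviewThreshold = 30

func accountShouldBeReviewed(matchResults: [Bool]) -> Bool {
    // matchResults[i] is true if uploaded image i matched a known fingerprint.
    matchResults.filter { $0 }.count >= reviewThreshold
}
```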
Apple fights to keep out back doors because they know that once those tools exist, at some point they will be used inappropriately by the government. While I applaud Apple's goal of fighting child abuse, I know that once this tool exists, at some point it will be used inappropriately by the government.
One of the key beliefs in the US Constitution is a healthy distrust of government. We have three branches of government, each to keep watch on the other two. We have a free press and free speech so that the public can stay informed as to what government is doing. We have free and fair elections so the people can act as the ultimate watchdog by voting out representatives that are not doing what they should be doing.
I strongly urge Apple to protect the privacy of our images by killing this feature. At best, it will get criminals to use other services for sharing their photos. At worst, it is a tool that can be used by the government to prevent free speech.
Let the legal system in place handle this stuff.
Apple has opened a can of worms.
You are free to not pay to use iCloud Photos and your problem is solved.
These are numeric hash comparisons to the hash signatures of known child porn. The hash is not the photo, it’s a numeric representation of the photo — in the same way that your iPhone doesn’t store your FaceID or TouchID images, but instead stores a numeric hash which is used for authentication.
Man how do you not get this yet.
There is nothing new here. You’re not forced to use any of these commercial cloud services to host your images online.
And reporting child porn libraries to the authorities is not some choice Apple makes — they’re required to by law. As are the other cloud storage services. You can’t store child porn, it’s against the law.
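To make the hash comparison described above concrete, here is a minimal Swift sketch. It deliberately uses a plain SHA-256 digest as a stand-in; Apple's actual system uses NeuralHash, a perceptual hash, together with a private set intersection protocol, so the names and placeholder values below are illustrative rather than Apple's implementation.

```swift
import Foundation
import CryptoKit

// Hypothetical database entries: fingerprints of known images, not the images.
let knownFingerprints: Set<String> = [
    "placeholder-fingerprint-1",
    "placeholder-fingerprint-2"
]

// A fingerprint is a fixed-length number derived from the bytes; the photo
// cannot be reconstructed from it, much as Face ID stores a mathematical
// representation rather than a picture of your face.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

func matchesKnownImage(_ imageData: Data) -> Bool {
    knownFingerprints.contains(fingerprint(of: imageData))
}
```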
Fix the problem instead of this mea culpa farce, people.
It's not about CSAM.
No one was confused about anything here, Craig. Everyone knows exactly where this leads. Audit away, assholes.
Apple's system uses their own NeuralHash for matching, not PhotoDNA.
But the biggest difference is that Apple is running the scan on your phone.
It's odd, because with this much of a backlash, Google and Microsoft would've thrown in the towel and sat round the campfire for a rethink.
Apple keeps on insisting that the problem is the dissenters: we just don't understand. We understand just fine, Craig; we just disagree with you.
Apple is determined to drive this through no matter what, and you have to wonder why. I mean, they already scan images on their servers, so why are they so determined to get spyware running on your phone?
I think the reason is that, after a couple of false starts, Cupertino is ready to go all in on its next big product: advertising. But how do you do this while keeping up the 'privacy' mantra? How do you get into user tracking when you've spent the past three or four years crucifying all the other companies who make money doing it?
Well, to start with, you release a client-side tracker, give it a noble purpose, and then try to convince people that their privacy is protected because it is not moving around a real image, just a hashed representation of the image.
If you can get people to accept that, then it's a lot easier to get them to accept step 2: a client-side tracker that logs what you're doing on the phone, which developers and marketers can hook into to extract information. But here's the clever part: the info they extract is a machine-learned representation of you that gets a unique number so it can be tracked across applications. It doesn't contain any real details: not your name, address, health records, nothing. As long as they know that 884398443894398 exercises three times a week, goes to a lot of cookery classes and has a subscription to PornHub, that's all they really care about. Why do they need to know your real name? They can serve relevant ads to that person without knowing who they are. Only Apple knows that, and they will not allow that information out. The APIs to access this pseudo-you might even incur a subscription charge.
But to make this work, they would need the user base to accept loggers running on their phones. And that's where we are now: Step 1. That's why the client-side tool cannot be dropped. Without it, the whole plan is screwed.
Of course, this would work for apps, but once you get out onto the web there's no API, so for that to work, Apple would need some kind of private relay that could replace your details with your avatar when you make web requests.
The message Apple is trying to get across is that your privacy is not compromised, because we're just dealing with a representation of your data, not the data itself.
No they don't. They perform the checks on their servers. They could certainly build something like that into Android, but it's open-source, so someone else would just build a version without it, or write an app to disable it.