Apple reportedly plans to make iOS detect child abuse photos
A security expert claims that Apple is about to announce photo identification tools that would identify child abuse images in iOS photo libraries.

Apple's iPhone
Apple has previously removed individual apps from the App Store over child pornography concerns, but it is now said to be about to introduce such detection system-wide. Using photo hashing, iPhones could identify Child Sexual Abuse Material (CSAM) on-device.
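Apple has not described an implementation, so the following is only a minimal sketch of what matching a photo against a list of known-image hashes might look like. The hash value, set name, and function names are hypothetical, and an exact SHA-256 digest is used purely for illustration; real systems of this kind rely on perceptual hashes so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical digests of known images, distributed to the device.
# The value below is a made-up placeholder, not an entry from any real database.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def photo_digest(image_bytes: bytes) -> str:
    # Exact digest of the photo's raw bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def photo_matches_known_list(image_bytes: bytes) -> bool:
    # A photo is flagged only if its digest appears in the known-hash set.
    return photo_digest(image_bytes) in KNOWN_HASHES
```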
Apple has not confirmed this, and so far the sole source is Matthew Green, a cryptographer and associate professor at the Johns Hopkins Information Security Institute.
I've had independent confirmation from multiple people that Apple is releasing a client-side tool for CSAM scanning tomorrow. This is a really bad idea.
— Matthew Green (@matthew_d_green)
According to Green, the plan is initially to be client-side -- that is, to have all of the detection done on a user's iPhone. He argues, however, that this could be the start of a process that leads to surveillance of data traffic sent and received by the phone.
"Eventually it could be a key ingredient in adding surveillance to encrypted messaging systems," continues Green. "The ability to add scanning systems like this to E2E [end to end encryption] messaging systems has been a major 'ask' by law enforcement the world over."
"This sort of tool can be a boon for finding child pornography in people's phones," he said. "But imagine what it could do in the hands of an authoritarian government?"
Green and his cryptography students have previously reported on how law enforcement may be able to break into iPhones. He and Johns Hopkins University have also previously worked with Apple to fix a security bug in Messages.
Read on AppleInsider
Comments
The biggest problem with this story is how short it is on details. We really need to know how it will be used and what safeguards are put in place to protect privacy. We can ASSUME Apple has that covered, but I'll believe it when I see the details.
1. It's not in the hands of Apple. It's not in the hands of anyone. It's, thus far, an unsubstantiated rumor from a security researcher.
2. If it comes to fruition that Apple does enable the AI feature, wouldn't they be bound by law to report the info to authorities (idk, ianal)? If the offending data is stored in iCloud, then it would also be subject to worldwide government data requests, requests that Apple has honored ~80% of the time on average.
3. Keeping in mind this is only a claim by a researcher, and not Apple, the question would then have to be asked: what constitutes child pornography to the AI? Is it reviewed by a human for higher-level verification? If so, an Apple employee or a 3rd-party source (like the original voice recordings)? What triggers reporting to authorities, and who bears responsibility for errors?
A parent sending pics of the kids in the bubble bath to grandparents. A photo of a young-looking 18-year-old topless at a nude beach. Scouts shirtless around a campfire.
Would any one of those trigger the AI? What if all three were on the same phone? It's entirely possible and not far-fetched.
I can't stress enough that this isn't Apple going after child abusers. This is a researcher making a claim. But if Apple were going to do so, it would most definitely affect that "government access, authoritarian or otherwise" query made by the researcher, in myriad ways not even addressed in my comment.
I mean, I’m sorry, but I don’t care WHAT type of government it is. CSM (child sex material) is CSM. Repressive governments definitely don’t have issues “taking out the trash”.
As far as finding the pics or videos, there is “hashing” or “fingerprinting”. Governments or organizations like the ICAC (Internet Crimes Against Children) Task Force use these fingerprints to search the web (and I assume the dark web) for images like this. I assume that Apple would just use these hashes.
Still, I don’t think Apple should avoid taking on challenges where the penalty for imperfection is high when it’s the right thing to do. Being forthright up-front about the fallibility of what they are taking on will probably not make a difference later on when they are sued, and they most certainly will be sued at some point. That’s just a cost they’ll have to accept, and the law schools will be grateful to Apple for their high-minded decision.
Thus, in Russia, iPhones will come with the Russian-approved software installation options, servers for Russian and Chinese users are in their respective countries, subject to access by the local authorities, and if laws are passed requiring that messages be scanned for certain content (once they build the capability), Apple will comply, too.
Or do you seriously think they will drop the Chinese market over human rights concerns? 🤣
That’s exactly why jailed systems are bad: once something like this is baked in, the user has no chance of modifying the system to prevent it.
So, no, it’s not in the hands of Apple, at least not in any realistic sense of the word.
Going to be fun for security researchers, journalists, whistleblowers, and law enforcement officers: how will the system distinguish between holding evidence and peddling?
Then you have excellent, verifiable answers to any questions, and there won't be a problem.
As opposed to the zero scrutiny they currently face? What has been lost here? Nothing you've described is a nightmare; all are very easily solvable with a simple conversation.
2,3. The system Dr. Green talked about isn't AI in any meaningful sense. It's fuzzy hashing. It matches only images which are close to existing known images. The concern is that fuzzy hashing is, by its very nature, imprecise. While normal hashes match an exact chunk of data, fuzzy hashes are more likely to match "similar" data, and our idea of similar may not be the hash's idea.
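Neither Green's report nor Apple has specified an algorithm, but as a minimal sketch of the idea, assume a dHash-style perceptual fingerprint compared by Hamming distance; the grid size, threshold, and function names below are illustrative assumptions, not anything attributed to Apple.

```python
def dhash(gray: list[list[int]]) -> int:
    # Difference hash of an 8-row x 9-column grayscale grid: one bit per
    # adjacent-pixel comparison, so visually similar images yield similar bits.
    bits = 0
    for row in gray:                            # 8 rows
        for left, right in zip(row, row[1:]):   # 8 comparisons per 9-pixel row
            bits = (bits << 1) | (1 if left > right else 0)
    return bits                                 # 64-bit fingerprint

def hamming(h1: int, h2: int) -> int:
    # Number of bits that differ between two fingerprints.
    return bin(h1 ^ h2).count("1")

def fuzzy_match(photo_hash: int, known_hashes: set, threshold: int = 10) -> bool:
    # A photo "matches" if any known fingerprint is within `threshold` bits.
    # That tolerance is exactly what makes the matching imprecise: images that
    # are merely similar to a known image can also fall inside the threshold.
    return any(hamming(photo_hash, k) <= threshold for k in known_hashes)
```

Where an exact digest such as SHA-256 changes completely if a single pixel changes, a perceptual fingerprint is designed not to; that tolerance is both the point of the technique and the source of the false positive concern discussed below.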
These systems are normally used forensically, after someone is already suspected of possession of CSAM. The system directs investigators to specific files, then the investigators confirm. We don't have good studies of false positive rates, because when the systems are used, it's typically on drives which contain thousands of CSAM images. And the drives people use to store CSAM tend to contain little else, limiting false positives. What's a false positive here or there, when you confirm a hundred of the images are CSAM?
When a false positive can basically destroy your ability to live in modern society, we really need a solid understanding of how likely they are.
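To put rough numbers on that concern: every figure below is an assumption chosen for illustration, not a measurement from Apple, Green, or any study; the point is only how a tiny per-image error rate compounds across large photo libraries and a very large installed base.

```python
# All three numbers are assumptions for illustration only.
false_positive_rate = 1e-6        # assumed chance a benign photo matches
library_size = 20_000             # assumed photos in one user's library
active_devices = 1_000_000_000    # rough order of magnitude of active iPhones

expected_hits_per_user = false_positive_rate * library_size
users_with_at_least_one_hit = active_devices * (1 - (1 - false_positive_rate) ** library_size)

print(expected_hits_per_user)               # about 0.02 false matches per user on average
print(round(users_with_at_least_one_hit))   # roughly 19.8 million users flagged at least once
```

Even at a one-in-a-million per-image rate, tens of millions of users could see at least one false match, which is why the real rate, and the review process behind any match, matter so much.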
As for the authoritarian government angle: if such a capability were built, China would definitely demand it be used to report people in China who have a copy of the Tank Man photo.
And these are the nice scenarios. I know some cops and a detective who fabricate evidence to get people in trouble. They would love this feature! Send a photo to someone and arrest them.
Yes, this is a bad idea, because it will then lead to other back doors. This is how the government operates: they use an excuse to advance toward an end goal that isn't moral.
I don’t buy it, and no one would buy another iPhone if they thought their lives could be ruined by buggy software.