German government wants Tim Cook to reconsider CSAM plans
The German parliament is wading into the Apple CSAM debate, with a Digital Agenda committee chief writing to Apple CEO Tim Cook to express concerns about mass surveillance.

Apple's CSAM scanning system continues to draw criticism, largely due to misinterpretation of what the system does and is capable of doing. That criticism has now reached the German Bundestag, the country's national parliament, which is stepping into the affair.
A letter sent by Digital Agenda committee chairman Manuel Hoferlin claims Apple is treading a "dangerous path" and undermining "secure and confidential communication," according to Heise Online. The letter to Cook urges Apple not to implement the system, both to protect society's data and to avoid "foreseeable problems" for the company itself.
The CSAM tools are considered the "biggest breach of the dam for the confidentiality of communication that we have seen since the invention of the Internet," a translated extract from Hoferlin's letter reads. Without confidential communication, the internet would become "the greatest surveillance instrument in history."
The proposed tools actually handle two separate tasks. The main one is a scanner that checks hashes of images being uploaded to iCloud Photos against a database of known CSAM image hashes, rather than inspecting the content of the files themselves.
The second is in Messages, which will warn young users about harmful material they may be about to view and can inform family account administrators of the incident. While this system uses on-device machine learning to inspect the images, no data about the scans is sent back to Apple.
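As a rough illustration of that Messages flow, here is a minimal Swift sketch. The classifier and all names are hypothetical stand-ins, since Apple has not published its on-device model or API; the point is simply that the check, the warning, and any parental notification stay on the family's devices.

```swift
import Foundation

// Hypothetical sketch only: the classifier below is a stand-in for Apple's
// unpublished on-device machine-learning model. Nothing here reports to Apple.

struct IncomingImage {
    let data: Data
}

// Placeholder for the on-device ML check; always returns a fixed value here.
func looksExplicit(_ image: IncomingImage) -> Bool {
    false
}

func handleIncomingImage(_ image: IncomingImage,
                         recipientIsChild: Bool,
                         parentalNotificationsEnabled: Bool) {
    guard recipientIsChild, looksExplicit(image) else { return }

    // The warning is shown locally, before the image is displayed.
    print("Blurring image and warning the recipient on-device")

    if parentalNotificationsEnabled {
        // The family account organizer is alerted; no data is sent to Apple.
        print("Notifying the family account organizer")
    }
}

handleIncomingImage(IncomingImage(data: Data()),
                    recipientIsChild: true,
                    parentalNotificationsEnabled: true)
```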
Despite the narrow scope of the tools and reassurances from Apple, the parliamentarian insists that even a narrow backdoor is still a backdoor. Requests to open that backdoor to scanning other types of content are inevitable, Hoferlin adds, and Apple could risk losing access to major international markets if it rejects them.
The Bundestag letter arrives one day after a German journalist union called for the European Commission to look into the matter, due to the perceived "violation of the freedom of the press."
Comments
Since scanning data on users' devices is prohibited in many countries, Apple may also be in legal trouble (even if the feature is officially US-only), since the company acts as the gatekeeper of this spyware and could change its behavior at any time (even for specific users)!
Or I don't know, maybe people actually value their privacy and Apple just messed up big time?
It's not an objection to the scanning, it's an objection to the scanning being done on your device without you having the option to disable it.
It’s quite possible that any detection system that actually works will receive a lot of criticism. Not because loads of people are guilty of what is being looked for, but because most of us probably have files/photos we would rather not have looked at, as they may be marginally or actually illegal.
The black economy is no doubt way bigger than child pornography, and we fear it being discoverable by whatever government we are hiding from.
Politicians may be extra nervous.
It's no more a slippery slope than Facebook, Instagram, Google Photos, YouTube, et al. each scanning content and hashes upon upload. To address the worry that this would devolve into a tool to suppress political dissent: it's trivial to rework a political message to evade a hash by reinventing the content (indeed, this happens naturally even without such restrictions), while it's difficult to rework a series of photos or videos to evade filters (see also copyright filters).
Now onto the hysterics: there is nothing preventing nefarious governments from requesting nefarious deeds from all of the above-named services, so the suggestion that governments will uniquely lobby Apple to do this is a shaky argument (especially considering the above services all have more users than iCloud Photo Library).
I also don't buy the privacy argument: your Photo library already hashes your photos; that's why you can search your library descriptively (e.g., type "Cat" to get photos iOS "thinks" are cat photos in your library). This is merely adding a new source of hashes and doing something about matches that should be getting attention. Furthermore, it only occurs when using a specific online photo service provided by Apple.
Client-side CSAM detection is incontrovertibly better for privacy than server-side CSAM detection, which Apple currently does. To do the scanning on the server side, the server (and by extension, Apple) has to be able to see the photos you send to them. With client-side scanning, your photos can be encrypted before Apple ever gets them, so Apple can't ever see them. There have been several known incidents where employees of photo sync/sharing sites have saved customers' private images and shared them with other employees without the consent of the people in the photos. NSA employees are known to have done the same with photos caught in their surveillance dragnets. Client-side CSAM scanning and sending only encrypted images to Apple is specifically meant to address that type of issue.
Whether the scanning should happen at all is definitely worth debating. The legal teams of every major photo sharing site clearly believe US law currently requires this scanning to happen. Dropbox, Facebook, Flickr, Google, Instagram, OneDrive, and more all do exactly the same scanning server-side which Apple does today.
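To make the client-side ordering concrete, here is a minimal Swift sketch under stated assumptions: a plain SHA-256 digest stands in for Apple's perceptual NeuralHash, the hash list is a placeholder, and real deployments use private set intersection rather than a local set. The point is only that the match check happens on the device and the photo is encrypted before anything is uploaded.

```swift
import Foundation
import CryptoKit

// Sketch only: SHA-256 stands in for a perceptual hash, and the "database"
// is a placeholder. A real system would not store raw hashes in the clear.
let knownBadHashes: Set<String> = ["<placeholder hash>"]

func hexDigest(of data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// The check runs on-device, then the photo is encrypted before upload,
// so the server only ever receives ciphertext plus a match indicator.
func prepareForUpload(_ photo: Data, key: SymmetricKey) throws -> (ciphertext: Data, matched: Bool) {
    let matched = knownBadHashes.contains(hexDigest(of: photo))
    let sealedBox = try AES.GCM.seal(photo, using: key)
    // `combined` is non-nil when the default nonce size is used.
    return (sealedBox.combined!, matched)
}

let key = SymmetricKey(size: .bits256)
do {
    let result = try prepareForUpload(Data("example photo bytes".utf8), key: key)
    print("Uploading \(result.ciphertext.count) encrypted bytes; matched: \(result.matched)")
} catch {
    print("Encryption failed: \(error)")
}
```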
Here’s how Apple handles the same problem (a rough code sketch follows the steps)…
1. Your Apple device (not Apple) hashes a photo before it is uploaded to iCloud Photos and only when it is about to be uploaded to iCloud Photos.
2. That hash is compared against a local database of CSAM hashes and flagged if there’s a match.
3. Apple then looks through iCloud photo libraries on their server for flagged photos.
4. If a threshold is hit (X number of flagged photos), then Apple manually checks the hashes with a database they keep on the server side. If these are valid CSAM hashes then the account is flagged and reported.
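A minimal sketch of steps 1 through 4, assuming plain SHA-256 in place of Apple's perceptual NeuralHash, an empty placeholder database, and an illustrative threshold (Apple has not published the real value):

```swift
import Foundation
import CryptoKit

// Sketch of the flow above. The hash function, database contents, and
// threshold are all stand-ins, not Apple's actual implementation.
let localCSAMHashes: Set<String> = [] // shipped with the OS in the real design
let matchThreshold = 30               // illustrative only

func photoHash(_ photo: Data) -> String {
    SHA256.hash(data: photo).map { String(format: "%02x", $0) }.joined()
}

// Steps 1-2: hash each photo as it is queued for iCloud Photos upload and
// flag any that match the local database.
func flaggedPhotos(in uploadQueue: [Data]) -> [Data] {
    uploadQueue.filter { localCSAMHashes.contains(photoHash($0)) }
}

// Steps 3-4 (server side): only when enough photos are flagged does Apple
// re-check the hashes against its own database and review the account.
func accountNeedsReview(flaggedCount: Int) -> Bool {
    flaggedCount >= matchThreshold
}

let queue = [Data("holiday snapshot bytes".utf8)]
let flagged = flaggedPhotos(in: queue)
print("Flagged: \(flagged.count); review needed: \(accountNeedsReview(flaggedCount: flagged.count))")
```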
Nowhere in any of those steps does Apple ever scan your photos or perform any image recognition. They do not know what any of your images are or contain at any time. They do not look at the images; they only compare the hashes.
There is no way this system can be abused by a third party, as the actual images are unknown to Apple. If the CSAM database on the local device were somehow hacked and filled with hashes of some other type of image, they would not match in the secondary server-side check unless the hacker were also able to completely replace Apple’s CSAM hash database.
If I had to guess why Apple has developed this system, it is because they eventually plan on end-to-end encryption for all user files and data. That would allow photos to be encrypted on device before being sent to the cloud while still enabling Apple to check for CSAM material, which will probably become a legal requirement soon, making online hosting/storage services liable for storing such material. This is not a back door… it’s a loophole. It allows Apple to be compliant with the law without giving law enforcement (or anyone else) access to all your data.
I am happy with the change; it means Apple doesn't need to see the contents of my pictures to thwart child abuse.
The Messages update may be more nefarious; I don't want authorities notified about what images I receive.
No data is scanned on user devices. This is similar to how data is encrypted: the encryption scheme doesn’t care what the data is; it just encrypts it. Creating a hash of data is the exact same thing. And this only happens when a photo is being uploaded to Apple’s iCloud Photos. It’s more secure and more privacy-focused to do this before the upload, as it opens the possibility of the photo itself being encrypted on device before it reaches the server. This means Apple (or anyone else) would not be able to look at it.
Furthermore…
How do you think search works on devices? All data is scanned and indexed.
How do you think virus/malware protection works? Everything is scanned for a fingerprint.
How do you think data detectors work? Data is scanned and pattern matched.
How do you think handwriting recognition works? Images are scanned for text.
How do you think facial recognition works? Voice recognition? etc…
If it were illegal to scan data on user devices, most of these devices would be completely worthless paperweights.