Tests confirm macOS Finder isn't scanning for CSAM images
Apple isn't checking images viewed within the macOS Finder for CSAM content, an investigation into macOS Ventura has determined, with analysis indicating that Visual Lookup isn't being used by Apple for that particular purpose.


The macOS Finder isn't scanning your images for illegal material.
In December, Apple announced it had given up on plans to scan iPhone photos uploaded to iCloud for Child Sexual Abuse Material (CSAM), following considerable backlash from critics. However, rumors apparently lingered alleging that Apple was still performing checks in macOS Ventura 13.1, prompting an investigation from a developer.
According to Howard Oakley of Eclectic Light Co. in a blog post from January 18, a claim started to circulate that Apple was automatically sending "identifiers of images" that a user had browsed in Finder, doing so "without that user's consent or awareness."
The plan for CSAM scanning would've involved a local on-device check of images for potential CSAM content, using a hashing system. The hash of the image would then be sent off and checked against a list of known CSAM files.
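That flow can be sketched in a few lines. This is a toy illustration only: Apple's actual proposal used a perceptual "NeuralHash" (robust to resizing and re-encoding) plus blinded matching and a match threshold, none of which is reproduced here. The hash list and functions below are stand-ins.

```python
import hashlib

# Hypothetical local list of known-bad image hashes (stand-in values only).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(data: bytes) -> str:
    """Stand-in for a perceptual hash. A real NeuralHash survives re-encoding,
    whereas a cryptographic hash like SHA-256 changes on any byte difference."""
    return hashlib.sha256(data).hexdigest()

def matches_known_list(data: bytes) -> bool:
    """The on-device check: hash the image locally, compare against the list."""
    return image_hash(data) in KNOWN_HASHES
```

Only on a match (and, in Apple's design, only after a threshold of roughly 30 matches) would anything have been surfaced for further review.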
While the idea of scanning images and creating a neural hash to be sent to Apple to describe an image's characteristics could feasibly be used for CSAM scanning, Oakley's testing indicates it's not actively being used that way. Instead, it seems that Apple's Visual Lookup system, which allows macOS and iOS to identify people, objects, and text in an image, could be mistaken for this sort of behavior.
No evidence in tests
As part of testing, macOS 13.1 was run within a virtual machine, and the application Mints was used to scan a unified log of activities on the VM instance. On the VM, a collection of images was viewed for a period of one minute in Finder's gallery view, with more than 40,000 log entries captured and saved.
If the system were being used for CSAM analysis, there would be repeated outgoing connections from mediaanalysisd to an Apple server for each image. The mediaanalysisd process is an element of Visual Lookup that lets Photos and other tools display information about detected items in an image, such as "cat" or the names of objects.
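The check described here boils down to filtering a captured log for entries from a given process. A minimal sketch of that filter (the sample log lines are invented for illustration; the real test used Mints over a macOS unified log export):

```python
def count_process_entries(log_lines, process="mediaanalysisd"):
    """Count captured log entries emitted by (or mentioning) a given process."""
    return sum(1 for line in log_lines if process in line)

# Invented sample entries standing in for a real unified-log export.
sample = [
    "10:00:01.123 Finder: gallery view displayed item",
    "10:00:01.456 WindowServer: flushed display",
]
print(count_process_entries(sample))  # prints 0: no mediaanalysisd entries
```

A nonzero count for mediaanalysisd during Finder browsing would have been the first sign of the alleged behavior; the test found none.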
The logs instead showed that there were no entries associated with mediaanalysisd at all. A further log extract was then found to be very similar to Visual Lookup as it appeared in macOS 12.3, indicating that the system hasn't materially changed since that release.
Typically, mediaanalysisd doesn't contact Apple's servers until very late in the process, as it requires the neural hashes generated by prior image analysis. Once those are sent off and a response is received from Apple's servers, the returned data is used to identify elements within the image for the user.
Further trials determined that there were some other attempts to send off data for analysis, but only to enable Live Text to function.
In his conclusion, Oakley writes that there is "no evidence that local images on a Mac have identifiers computed and uploaded to Apple's servers when viewed in Finder windows."
Images viewed in apps with Visual Lookup support do have neural hashes produced, which can be sent to Apple's servers for analysis. However, trying to harvest those neural hashes for detecting CSAM "would be doomed to failure for many reasons."
Local images in QuickLook Preview also undergo normal analysis for Live Text, but "that doesn't generate identifiers that could be uploaded to Apple's servers."
Furthermore, Visual Lookup can be disabled by turning off Siri Suggestions. External mediaanalysisd look-ups could also be blocked using a software firewall configured to block port 443, though "that may well disable other macOS features."
Oakley concludes the article with a warning that "alleging that a user's actions result in controversial effects requires full demonstration of the full chain of causation. Basing claims on the inference that two events might be connected, without understanding the nature of either, is reckless if not malicious."
CSAM still an issue
While Apple has gone off the idea of performing local CSAM detection processing, lawmakers still believe Apple isn't doing enough about the problem.
In December, the Australian e-Safety Commissioner attacked Apple and Google over a "clearly inadequate and inconsistent use of widely available technology to detect child abuse material and grooming."
Rather than directly scanning for existing content, which would be largely ineffective given Apple's fully end-to-end encrypted photo storage and backups, Apple instead seems to want to take a different approach: one that can detect nudity in photos sent over iMessage.
Comments
The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law-abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else. Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.
So how do law enforcers deal with law breakers? How they always have — which doesn't include privacy invasions like file scanning without a search warrant. It may not be the ideal approach in light of the tech we have today, but it's the only approach that protects citizens from unlawful search and seizure.
Sounds to me like you're the type who wants Big Government to force its hand pretty much anytime you personally feel it is "for the greater good." That is just as frightening as government getting tip-offs to the content of my hard drive. If you're happy with file scanning on your devices, great. But to go beyond that and force everyone else to capitulate to what makes YOU feel good, well, that's a different matter altogether. And that remains true regardless of what you wish to argue about "hashes."
There's a reason why Apple set a 30 "hit" limit before an iOS user would be flagged with its once-proposed on-device CSAM scanning. It was to reduce the chances of errors and false accusations caused by the inherent inaccuracy of CSAM scanning software.
https://www.eff.org/deeplinks/2022/10/eu-lawmakers-must-reject-proposal-scan-private-chats
> It’s difficult to audit the accuracy of the software that’s most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should be sending up red flags, not encouraging lawmakers to move forward.
> Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces significant numbers of false accusations. Once the EU votes to start running the software on billions more messages, it will lead to millions of more false accusations. These false accusations get forwarded on to law enforcement agencies. At best, they're wasteful; they also have potential to produce real-world suffering.
> The false positives cause real harm. A recent New York Times story highlighted a faulty Google CSAM scanner that wrongly identified two U.S. fathers of toddlers as being child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by the failed AI system, and defended their opaque human review process.
> With regards to the recently published Irish data, the Irish national police verified that they are currently retaining all personal data forwarded to them by NCMEC—including user names, email addresses, and other data of verified innocent users.
Perhaps it's the way you think it works, that is wrong.
Or do you suppose that many people prior to the covid pandemic would have been in favor of all of the restrictions that came about? Maybe 5% of the population, at most. It was obvious to some from the beginning that the restrictions were not about public safety but conditioning. They were criticized as fools but time is showing they were correct (cloth mask efficacy, etc.).
Mass manipulation is accomplished over time, because people have stupidly short memories. So the manipulators push the boundaries in steps, enduring the initial backlash, then when enough people get used to the idea they take another step, and the people completely forget about their former ideals, believing themselves to have become more educated when in reality they’re just easily manipulated by their overlords. Nowadays an alarming number of Americans aren’t even bothered by the idea of having an overlord (as long as it’s not Trump, at least).
It's lose.. not loose. Give me a break with the woke BS... You don't know what it means so stop using the term. I would hardly call trying to combat abusive child images "woke." It's a bit half-baked and a serious intrusion of privacy, but woke? Hahaha. The term has morphed into a completely different meaning and it is silly at this point.
Also, what do you think analytics are for? In a sense it’s Apple (or other vendors) monitoring what you do with your phone. I guess Apple will be losing your business. I’m curious to know which phone you go with that won’t have any analytics happening.
The false positives in which people in that last link are affected by Google's own stupid policies after being exonerated are a minuscule outlier. Meanwhile:
There are multiple human layers in play here, all members of which have one of the most horrible and difficult jobs in the world. Mistakes will be made. Surely there are false negatives as well. Some would argue that's an acceptable compromise in order to protect the most vulnerable from abuse.
But choosing iMessage for this purpose in the first place is not very smart, as there exist tons of other instant messaging platforms, some of which are arguably much more popular than iMessage (WhatsApp) or more secure (Signal).
As for blocking port 443 to ensure no calls are made from the user device to back-end servers — it's laughable. Who came up with this infantile suggestion? My router listens on port 8443 for (encrypted) HTTPS traffic, and I could make it listen on any available port. Besides, who said that the communication must necessarily be via HTTP/S? There is no web browser even involved here. There are tons of other secure, end-to-end encrypted data transmission protocols out there (if security even was such a huge concern for Apple in this scenario), all of which could potentially be set up to use any port in a specific application. If the implication here was that Apple would have to use a "well-known" port number to guarantee they can get through a common firewall or one configured with default settings, first — firewalls are primarily set up to protect against incoming requests, not so much against those initiated from the client device, whatever the protocol used. And there are better ways to obscure your "spying" communication with an external/back-end server.
A better and more useful way to control Apple software's (including macOS, iOS, and any Apple apps) extracurricular communications is to block the smoot.apple.com domain for starters, and certain other domains, in a filtering DNS service, or just in the firewall. Although this is of course not a 100% defense, since Apple could access their back-end servers by IP address, distributing the addresses used for a specific purpose simply as part of the iCloud connection (iCloud software, which is part of macOS and iOS itself), completely unbeknownst to anybody, since none of this software is open source and able to be verified by the community as to what exactly it does.
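The DNS-level block described here can be as simple as a hosts-file override (illustrative only; as noted, this may break Siri Suggestions, Visual Lookup, and other features, and does nothing if the software connects by IP address instead of by name):

```
# /etc/hosts entry (illustrative): resolve the domain to an unroutable address
0.0.0.0    smoot.apple.com
```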