Tests confirm macOS Finder isn't scanning for CSAM images

Posted in macOS, edited January 2023
Apple isn't checking images viewed within the macOS Finder for CSAM content, an investigation into macOS Ventura has determined, with analysis indicating that Visual Lookup isn't being used by Apple for that particular purpose.

The macOS Finder isn't scanning your images for illegal material.


In December, Apple announced it had given up on plans to scan iPhone photos uploaded to iCloud for Child Sexual Abuse Material (CSAM), following considerable backlash from critics. However, rumors apparently lingered alleging that Apple was still performing checks in macOS Ventura 13.1, prompting an investigation from a developer.

According to Howard Oakley of Eclectic Light Co. in a blog post from January 18, a claim started to circulate that Apple was automatically sending "identifiers of images" that a user had browsed in Finder, doing so "without that user's consent or awareness."

The plan for CSAM scanning would've involved a local on-device check of images for potential CSAM content, using a hashing system. The hash of the image would then be sent off and checked against a list of known CSAM files.
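To illustrate the general idea only, and not Apple's actual NeuralHash and private-set-intersection design, the sketch below shows how hash-based matching of local images against a list of known hashes could work in principle. It assumes the third-party Pillow and imagehash Python packages and a hypothetical known_hashes.txt file.

```python
# A deliberately simplified sketch of hash-based image matching, NOT Apple's
# actual NeuralHash/private-set-intersection design. Assumes the third-party
# Pillow and imagehash packages and a hypothetical known_hashes.txt file
# containing one hex-encoded perceptual hash per line.
from PIL import Image
import imagehash

def load_known_hashes(path="known_hashes.txt"):
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

def matches_known(image_path, known, max_distance=4):
    """True if the image's perceptual hash is within max_distance bits of any
    known hash; perceptual hashes tolerate small crops and re-encodes."""
    h = imagehash.phash(Image.open(image_path))
    return any(h - k <= max_distance for k in known)

if __name__ == "__main__":
    known = load_known_hashes()
    print(matches_known("photo.jpg", known))
```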

While the idea of scanning images and creating a neural hash that is sent to Apple to describe an image's characteristics could feasibly be used for CSAM scanning, Oakley's testing indicates it isn't being used that way. Instead, it seems that Apple's Visual Lookup system, which allows macOS and iOS to identify people, objects, and text in an image, could be mistaken for this sort of behavior.

No evidence in tests

As part of testing, macOS 13.1 was run in a virtual machine, and the Mints application was used to scan a unified log of activity on the VM instance. On the VM, a collection of images was viewed in Finder's gallery view for one minute, with more than 40,000 log entries captured and saved.

If the system were being used for CSAM analysis, there would be repeated outgoing connections from "mediaanalysisd" to an Apple server for each image. Mediaanalysisd is the process Visual Lookup relies on so that Photos and other tools can display information about detected items in an image, such as "cat" or the names of objects.
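For readers who want to try a similar check themselves, a rough approximation of the log capture (Oakley's actual test used the Mints app) is to query macOS's unified log for mediaanalysisd activity with the built-in log tool:

```python
# Rough approximation of the check described above, using macOS's built-in
# `log` tool rather than the Mints app: dump the last minute of unified-log
# entries for the mediaanalysisd process and count them.
import subprocess

result = subprocess.run(
    ["log", "show", "--last", "1m",
     "--predicate", 'process == "mediaanalysisd"'],
    capture_output=True, text=True, check=True,
)

entries = [line for line in result.stdout.splitlines() if line.strip()]
print(f"{len(entries)} unified-log lines mention mediaanalysisd in the last minute")
```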

The logs instead showed no entries associated with mediaanalysisd at all. A further log extract closely matched Visual Lookup's behavior as it appeared in macOS 12.3, indicating the system hasn't materially changed since that release.

Typically, mediaanalysisd doesn't contact Apple's servers until very late in the process, as it first requires the neural hashes generated by on-device image analysis. Once those are sent off and a response is received from Apple's servers, the returned data is used to identify elements within the image for the user.

Further trials found some other attempts to send data off for analysis, but these were to enable Live Text to function.

In his conclusion, Oakley writes that there is "no evidence that local images on a Mac have identifiers computed and uploaded to Apple's servers when viewed in Finder windows."

Images viewed in apps with Visual Lookup support do have neural hashes produced, which can be sent to Apple's servers for analysis. However, trying to harvest those neural hashes for detecting CSAM "would be doomed to failure for many reasons."

Local images in QuickLook Preview also undergo normal analysis for Live Text, but "that doesn't generate identifiers that could be uploaded to Apple's servers."

Furthermore, Visual Lookup can be disabled by turning off Siri Suggestions. External mediaanalysisd look-ups could also be blocked using a software firewall configured to block port 443, though "that may well disable other macOS features."
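If you experiment with a firewall rule like that, a quick way to confirm that outbound port 443 is actually blocked is a simple connection test; the host below is only a placeholder, since the article doesn't name the servers mediaanalysisd contacts.

```python
# Quick check that an outbound-443 firewall rule is in effect: attempt a TCP
# connection and report the result. "apple.com" is a placeholder host; the
# article does not name the specific servers mediaanalysisd contacts.
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("apple.com"))  # expect False once outbound 443 is blocked
```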

Oakley concludes the article with a warning that "alleging that a user's actions result in controversial effects requires full demonstration of the full chain of causation. Basing claims on the inference that two events might be connected, without understanding the nature of either, is reckless if not malicious."

CSAM still an issue

While Apple has abandoned the idea of performing local CSAM detection, regulators and lawmakers still believe it isn't doing enough about the problem.

In December, the Australian e-Safety Commissioner attacked Apple and Google over a "clearly inadequate and inconsistent use of widely available technology to detect child abuse material and grooming."

Rather than directly scanning for existing content, which would be largely ineffective now that Apple offers fully end-to-end encrypted photo storage and backups, Apple instead appears to be taking a different approach: detecting nudity in photos sent over iMessage.

Read on AppleInsider

Comments

  • Reply 1 of 20
    Someone post this on Louis Rossman’s reaction video. It would be good to see this being cleared out
  • Reply 2 of 20
    jdw Posts: 1,324 member
    While forced local scanning on a Mac by government order is a frightful 1984-style nightmare for all citizens (law abiding or not), the upside is that Little Snitch would likely work to block any outgoing transfers.

    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    So how do law enforcers deal with law breakers?  How they always have — which doesn't include privacy invasions like file scanning without a search warrant.  It may not be the ideal approach in light of the tech we have today, but it's the only approach to protect citizen from unlawful search and seizure.
  • Reply 3 of 20
    That's Eclectic Light Company, not Electric Light Company. You should make that correction in the article.
  • Reply 4 of 20
    entropys Posts: 4,152 member
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
  • Reply 5 of 20
    fastasleep Posts: 6,408 member
    jdw said:

    Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  
    That’s not how hashes work, at all.
  • Reply 6 of 20
    fastasleep Posts: 6,408 member
    entropys said:
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
  • Reply 7 of 20
    jdw Posts: 1,324 member
    entropys said:
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
    Why so hostile toward me, and so defensive of file scanning?  ("Horseshit" is a word that indicates hostility toward me, and I do not appreciate it at all.)

    Sounds to me like you're the type who wants Big Government to force its hand pretty much anytime you personally feel it is "for the greater good."  That is just as frightening as government getting tip-offs to the content of my hard drive.  If you're happy with file scanning on your devices, great.  But to go beyond that and force everyone else to capitulate to what makes YOU feel good, well, that's a different matter altogether.  And that remains true regardless of what you wish to argue about "hashes."
  • Reply 8 of 20
    Apple should stop thinking about monitoring anything I do on my iPhone. Its not their business. If they don’t understand this, Apple will loose a lot of business including me (apple fan for years). #nowokeonmyphone
  • Reply 9 of 20
    davidw Posts: 2,036 member
    jdw said:

    Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  
    That’s not how hashes work, at all.


    There's a reason why Apple set a 30 "hit" limit, before an iOS user would be flagged, with their once proposed on device CSAM scanning. It was to reduce the chances of errors and false accusations because of inherent inaccuracy of the CSAM scanning software. 

    https://www.eff.org/deeplinks/2022/10/eu-lawmakers-must-reject-proposal-scan-private-chats

    > It’s difficult to audit the accuracy of the software that’s most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should be sending up red flags, not encouraging lawmakers to move forward. 

    • A Facebook study found that 75% of the messages flagged by its scanning system to detect child abuse material were not “malicious,” and included messages like bad jokes and memes.
    • LinkedIn reported 75 cases of suspected CSAM to EU authorities in 2021. After manual review, only 31 of those cases—about 41%—involved confirmed CSAM.  
    • Newly released data from Ireland, published in a report by our partners at EDRi (see page 34), shows more inaccuracies. In 2020, Irish police received 4,192 reports from the U.S. National Center for Missing and Exploited Children (NCMEC). Of those, 852 referrals (20.3%) were confirmed as actual CSAM. Of those, 409 referrals (9.7%) were deemed “actionable” and 265 referrals (6.3%) were “completed” by Irish police.

    Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces significant numbers of false accusations. Once the EU votes to start running the software on billions more messages, it will lead to millions of more false accusations. These false accusations get forwarded on to law enforcement agencies. At best, they’re wasteful; they also have potential to produce real-world suffering. 

    The false positives cause real harm. A recent New York Times story highlighted a faulty Google CSAM scanner that wrongly identified two U.S. fathers of toddlers as being child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by the failed AI system, and defended their opaque human review process.

    With regards to the recently published Irish data, the Irish national police verified that they are currently retaining all personal data forwarded to them by NCMEC—including user names, email addresses, and other data of verified innocent users. <


    Perhaps it's the way you think it works, that is wrong. 

  • Reply 10 of 20
    davidw Posts: 2,036 member
    entropys said:
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?

    The bigger problem are people like you that think that CSAM scanning software is a "feature". If the government can get enough people to fall for that or ... to "Think of the children.", then the hardest part of getting "nefarious" software to scan everyone's data, is done. Now all the government need to do is to load the hashes of the images they want to look for, into the database.

    Neither Apple, Microsoft, Google, Facebook or any other companies offering the services of storing consumers data, knows what the hashes they are scanning for, are images of. That's the way it works. The main source of the database is the NCMEC. Which is suppose to be a private entity but gets funding from the government. Not even the NCMEC is positive that all the hashes are images of CSAM. So all the government (good or bad) need to do is to load hashes of the images they want to look for, into the database. Who, except the government, would know that their hashes of images like the Gay Flag, Swastika, pictures of known terrorist, images of assault rifles, Winnie the Pooh ( https://www.bbc.com/news/blogs-china-blog-40627855 ), or any other images of whatever the government might be searching for, are in the database that these companies use in their search?


    >We also offer an Industry Hash Sharing platform, which enables select companies to share their own CSAM hashes with each other. We are ensuring that any company that is willing and able to proactively detect this material has all of the tools it needs to do so and that companies can share their own CSAM hashes with each other. Google is the largest contributor to this platform with approximately 74% of the total number of hashes on the list.<



  • Reply 11 of 20

    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
    This is actually the mindset that results in government overreach, because it misses the reality that people are conditioned over time, step by step, to accept what they previously wouldn’t have accepted.

    Or do you suppose that there were many people prior to the covid pandemic would have been in favor of all of the restrictions that came about? Maybe 5% of the population, at most. It was obvious to some from the beginning that the restrictions were not about public safety but conditioning. They were criticized as fools but time is showing they were correct (cloth mask efficacy, etc.).

    Mass manipulation is accomplished over time, because people have stupidly short memories. So the manipulators push the boundaries in steps, enduring the initial backlash, then when enough people get used to the idea they take another step, and the people completely forget about their former ideals, believing themselves to have become more educated when in reality they’re just easily manipulated by their overlords. Nowadays an alarming number of Americans aren’t even bothered by the idea of having an overlord (as long as it’s not Trump, at least). 
  • Reply 12 of 20
    danox Posts: 2,801 member
    jdw said:
    While forced local scanning on a Mac by government order is a frightful 1984-style nightmare for all citizens (law abiding or not), the upside is that Little Snitch would likely work to block any outgoing transfers.

    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    So how do law enforcers deal with law breakers?  How they always have — which doesn't include privacy invasions like file scanning without a search warrant.  It may not be the ideal approach in light of the tech we have today, but it's the only approach to protect citizen from unlawful search and seizure.
    Do the paperwork, and go before a judge and get a search warrant, that should be the only way any government entity gets any information about private citizens suspected of doing wrong, there shouldn’t be any fishing trips without the proper paperwork. If a warrant is served to Apple, obviously they should comply and that should be it.
  • Reply 13 of 20
    jdw said:
    While forced local scanning on a Mac by government order is a frightful 1984-style nightmare for all citizens (law abiding or not), the upside is that Little Snitch would likely work to block any outgoing transfers.

    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    So how do law enforcers deal with law breakers?  How they always have — which doesn't include privacy invasions like file scanning without a search warrant.  It may not be the ideal approach in light of the tech we have today, but it's the only approach to protect citizen from unlawful search and seizure.
    Just to be clear, nobody is concerned about false positives. Everyone clutches their pearls and pretends like that’s the real thing they’re scared of. I used to work in ministry, and everybody always acted as if the worst thing that could happen to you was a false complaint, because after all all complaints are false.

    You know what happened when I witnessed a prominent minister harassing multiple guys half his age? Everyone worried about his reputation. 

    You know what is infinitely worse than being harassed by the cops? Being kidnapped, stripped  and violated over and over and having that filmed and shared with millions of creeps on the internet, and then having selfish morons in forums debate about whether or not reasonable measures to identify and apprehend the people creating and consuming the content is the end of the world. 

    The reason this isn’t getting implemented is because as soon as they announced that they were doing something very reasonable, apple was hit by a barrage of threats from well connected people who totally don’t consume this material. 

    But unfortunately when you get a threat from the Vatican, people still listen
  • Reply 14 of 20
    jcs2305 Posts: 1,336 member
    jdw said:
    entropys said:
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
    Why so hostile toward me, and so defensive of file scanning?  ("Horseshit" is a word that indicates hostility toward me, and I do not appreciate it at all.)

    Sounds to me like you're the type who wants Big Government to force its hand pretty much anytime you personally feel it is "for the greater good."  That is just as frightening as government getting tip-offs to the content of my hard drive.  If you're happy with file scanning on your devices, great.  But to go beyond that and force everyone else to capitulate to what makes YOU feel good, well, that's a different matter altogether.  And that remains true regardless of what you wish to argue about "hashes."
    Look at what was being quoted.... That wasn't your comment. His initial response to you was saying that wasn't how hashes worked at all when you mentioned someone being wrongly accused.
    dutchlord said:
    Apple should stop thinking about monitoring anything I do on my iPhone. Its not their business. If they don’t understand this, Apple will loose a lot of business including me (apple fan for years). #nowokeonmyphone
    It's lose.. not loose. Give me a break with the woke BS... You don't know what it means so stop using the term. I would hardly call trying to combat abusive child images "woke" it's a bit half baked and a serious intrusion of privacy, but woke?  Hahaha The term has morphed into a completely different meaning and it is silly at this point.

  • Reply 15 of 20
    dutchlord said:
    Apple should stop thinking about monitoring anything I do on my iPhone. It’s not their business. If they don’t understand this, Apple will loose a lot of business including me (apple fan for years). #nowokeonmyphone
    It IS their business. The CSAM scanning was only going to happen when images were being uploaded to iCloud, which is a feature that can be turned off. Obviously, Apple owns all the “iCloud hardware”. So, if you chose to have CSAM on your phone AND use iCloud Photo Library then your images would be scanned at upload with any matching hashes being flagged.

    Also, what do you think analytics are for? In a sense it’s Apple (or other vendors) monitoring what you do with your phone. I guess Apple will be losing your business. I’m curious to know which phone you go with that won’t have any analytics happening.
  • Reply 16 of 20
    jdw said:
    entropys said:
    The concern here is the same as CSAM scanning on the iPhone. It's more than a matter of personal privacy. It's a concern centered on the possibility that an error could result in a law abiding person being reported to law enforcement, which cares more about filling quotas and busting so-called bad guys than anything else.  Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  

    It was always more than that. The concern was that child abuse was a smoke screen for a broader purpose, eg identifying people that did not agree with an authoritarian government by what pictures they were looking at.
    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
    Why so hostile toward me, and so defensive of file scanning?  ("Horseshit" is a word that indicates hostility toward me, and I do not appreciate it at all.)

    Sounds to me like you're the type who wants Big Government to force its hand pretty much anytime you personally feel it is "for the greater good."  That is just as frightening as government getting tip-offs to the content of my hard drive.  If you're happy with file scanning on your devices, great.  But to go beyond that and force everyone else to capitulate to what makes YOU feel good, well, that's a different matter altogether.  And that remains true regardless of what you wish to argue about "hashes."
    I clearly was not responding to you. Do you have a multitude of matching CSAM files hashes on YOUR hard drive? If not, then you don't need to worry about anyone being "tipped off". Regardless, this was going to happen on upload to iCloud, which is Apple's hard drives, not yours. They have the right and responsibility to make sure they're not storing such materials, and absolutely should tip off government.
  • Reply 17 of 20
    davidw said:
    jdw said:

    Having an accused person's time wasted, or worse, being arrested for something they didn't do only because a computer secretly misread a file on their computer is something no citizen of any nation should stand for.  
    That’s not how hashes work, at all.


    There's a reason why Apple set a 30 "hit" limit, before an iOS user would be flagged, with their once proposed on device CSAM scanning. It was to reduce the chances of errors and false accusations because of inherent inaccuracy of the CSAM scanning software. 

    https://www.eff.org/deeplinks/2022/10/eu-lawmakers-must-reject-proposal-scan-private-chats

    > It’s difficult to audit the accuracy of the software that’s most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should be sending up red flags, not encouraging lawmakers to move forward. 

    • A Facebook study found that 75% of the messages flagged by its scanning system to detect child abuse material were not “malicious,” and included messages like bad jokes and memes.
    • LinkedIn reported 75 cases of suspected CSAM to EU authorities in 2021. After manual review, only 31 of those cases—about 41%—involved confirmed CSAM.  
    • Newly released data from Ireland, published in a report by our partners at EDRi (see page 34), shows more inaccuracies. In 2020, Irish police received 4,192 reports from the U.S. National Center for Missing and Exploited Children (NCMEC). Of those, 852 referrals (20.3%) were confirmed as actual CSAM. Of those, 409 referrals (9.7%) were deemed “actionable” and 265 referrals (6.3%) were “completed” by Irish police.

    Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces significant numbers of false accusations. Once the EU votes to start running the software on billions more messages, it will lead to millions of more false accusations. These false accusations get forwarded on to law enforcement agencies. At best, they’re wasteful; they also have potential to produce real-world suffering. 

    The false positives cause real harm. A recent New York Times story highlighted a faulty Google CSAM scanner that wrongly identified two U.S. fathers of toddlers as being child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by the failed AI system, and defended their opaque human review process.

    With regards to the recently published Irish data, the Irish national police verified that they are currently retaining all personal data forwarded to them by NCMEC—including user names, email addresses, and other data of verified innocent users. <


    Perhaps it's the way you think it works, that is wrong. 

    That Facebook stat, if you actually read it in context, was a result of a self-imposed audit of a very small sample, and "While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem" and "Our work is now to develop technology that will apply the intent taxonomy to our data at scale." All of this is a work in progress.

    The false positives in which people in that last link are affected by Google's own stupid policies after being exonerated are a minuscule outlier. Meanwhile:
    Images of children being exploited or sexually abused are flagged by technology giants millions of times each year. In 2021, Google alone filed over 600,000 reports of child abuse material and disabled the accounts of over 270,000 users as a result. Mark’s and Cassio’s experiences were drops in a big bucket.

    There are multiple human layers in play here, all members of which have one of the most horrible and difficult jobs in the world. Mistakes will be made. Surely there are false negatives as well. Some would argue that's an acceptable compromise in order to protect the most vulnerable from abuse.

  • Reply 18 of 20
    georgie01 said:

    Which was also a horseshit argument. If a hostile government wanted to force Apple to install nefarious software features, why would they need this feature as a smokescreen? And would you expect Apple to comply with something like that?
    This is actually the mindset that results in government overreach, because it misses the reality that people are conditioned over time, step by step, to accept what they previously wouldn’t have accepted.

    Or do you suppose that there were many people prior to the covid pandemic would have been in favor of all of the restrictions that came about? Maybe 5% of the population, at most. It was obvious to some from the beginning that the restrictions were not about public safety but conditioning. They were criticized as fools but time is showing they were correct (cloth mask efficacy, etc.).

    Mass manipulation is accomplished over time, because people have stupidly short memories. So the manipulators push the boundaries in steps, enduring the initial backlash, then when enough people get used to the idea they take another step, and the people completely forget about their former ideals, believing themselves to have become more educated when in reality they’re just easily manipulated by their overlords. Nowadays an alarming number of Americans aren’t even bothered by the idea of having an overlord (as long as it’s not Trump, at least). 
    Good grief. Covid restrictions — are you for real? "It was obvious" to paranoid, selfish, libertarian types such as yourself, but most rational people (which numbered far more than 5%) were complying with restrictions out of concern for the health and safety of themselves and others. And, cloth masks were always more effective than no mask at all. It was a novel virus, one which experts were learning about just as we were. The fools were the ones thinking they were smarter than the experts and started eating livestock dewormer and shitting out their intestinal linings thinking they were owning the libs instead of getting vaccinated.
  • Reply 19 of 20
    iMessage communications are (supposed to be) E2E encrypted. So if Apple wants to do CSAM image identification in iMessage, the iMessage app itself would have to do what Finder’s ImageViewer was being suspected of doing, which was the subject of this “investigation.”

    But choosing iMessage for this purpose in the first place is not very smart, as there exist tons of other instant messaging platforms, some of which are arguably much more popular than iMessage (WhatsApp) or more secure (Signal).

    But choosing iMessage for this purpose could be Apple’s limited but sufficiently defensible response to the calls to combat CSAM, since iMessage is their proprietary technology, and they are in fact responsible for it not facilitating illegal activity.  They will be able to say “we did our part, we did enough.”

    As for blocking port 433 to ensure no calls are made from the user device to back-end servers — it’s laughable. Who came up with this infantile suggestion? My router listens to port 8433 for (encrypted) HTTPS traffic, and I could make it listen to any available port. Besides, who said that the communication must be necessary via HTTP/S? There is no web browser involved here even. There are tones of other secure, end-to-end encrypted data transmission protocols out there, which are no less secure (if security even was such a huge concern for Apple in this scenario), all of which potentially could be set up to use any port in a specific application. If the implication here was that Apple would have to use a “well-known” port number to guarantee they can get through a common firewall or the one configured with default settings, first  — firewalls primarily set up to protect from incoming requests, not that much from those initiated from the client device, whatever the protocol used. Then, there are better ways to obscure your “spying” communication with an external/back-end server.
    Why not use QUIC protocol, for example?

    A better and more useful way to control Apple software’s (including MacOS, iOS, and any Apple apps) extracurricular communications is to block smoot.apple.com domain for starters, and certain others domains, in the filtering DNS service, or just in the firewall. Although this is of course not a 100% defense, since Apple could access their backend servers by IP address, and distribute IP addresses used for a specific purpose at the time simply as part of their iCloud connection (iCloud software, which is part of the very MacOS and iOS) and do it completely unbeknownst to anybody, since none of this software is not open source to be verified by the community on a subject of what exactly it does. 

    That is precisely what makes investigations like this simply a smoke screen for incompetent in IT people. This “investigation” is done for the IT expertise level of US Congressmen, house wives, and “college-uneducated white male,” whom Apple Insider for some reason deems their audience. 
  • Reply 20 of 20
    dutchlord said:
    Apple should stop thinking about monitoring anything I do on my iPhone. Its not their business. If they don’t understand this, Apple will loose a lot of business including me (apple fan for years). #nowokeonmyphone
    Tackling child sexual abuse is woke now?  You alpha chads.