What you need to know: Apple's iCloud Photos and Messages child safety initiatives


Comments

  • Reply 101 of 162
DAalseth Posts: 2,783 member
One thought. If this is scanning for hashes of known existing images, then all the bad guys have to do is keep making new images. Stay ahead of the trackers. Each one becomes a use-once-and-delete item. The end result will be MORE abuse to make new images.

    This is an unbelievably bad idea.
GeorgeBMac, baconstang
  • Reply 102 of 162
killroy Posts: 276 member
    netrox said:
    The fact that people are freaking out over the implementation of Apple's detection of child pornography shows that they have absolutely no ability to think logically and understand that they have no risk if they don't distribute child pornography.  

    It's simple as that. Literally. 

Also, the government has your ID, your SSN, your taxes, your public records, your passports, your registrations, your work history, your residence, and so on. You ain't as private as you think you are.  


Don't forget Google, Facebook, YouTube and others.
    watto_cobra
  • Reply 103 of 162
macplusplus Posts: 2,112 member
    gatorguy said:
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps, behind the scenes, the pressure was growing immense.

Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, terrorism may be other beacons that are sent out in the future.

    I know, there will be discussions over who makes the decisions, how is it vetted, errors, misuse, hacking, etc. But essentially, to me, Apple seeking to control the process of what they see are inevitable laws that they would need to comply with and with much worse outcomes for users. One way beacons that need to be further vetted to ensure no false positives is an effort to ameliorate law enforcement concerns while still protecting legitimate private information is perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government enforced alternatives might be), it may begin to make sense.

In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? You going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database, no one is going to see them. While false positives are rare, they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts. Doing nothing, as most posters are suggesting and as is the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.


    Just a positive and forced optimistic description of Apple's delicate situation. Or we can put it the other way: the Government can already monitor everything by deep packet inspection. Instead they want to pass the burden to Tim Apple... 
    No, they can’t. E2E encryption prevents man in the middle. That’s the entire reason law enforcement doesn’t want encryption.
    That depends on the implementation.

    In iMessages for instance you can have more than 2 people in an E2E encrypted exchange, and that's by design. There's nothing to prevent adding additional participants, even a "man-in-the-middle" ghost observer, someone you would have no idea being involved, and the message session still being E2E encrypted. Unless only two people are allowed in an encrypted message session there can be danger of some "authority" insisting they be added as an undisclosed 3rd participant, an eavesdropper in practical terms, and by national law Apple not being permitted to discuss it. So no, E2E would not necessarily prevent a man-in-the-middle listener. 

    Before you brush it off, yes that was exactly what was proposed by a major intelligence service sometime back, and no further public mention of it since. 
    Yes. "Creative" solution from the Brits.
  • Reply 104 of 162
elijahg Posts: 2,759 member
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps, behind the scenes, the pressure was growing immense.

Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, terrorism may be other beacons that are sent out in the future.

    I know, there will be discussions over who makes the decisions, how is it vetted, errors, misuse, hacking, etc. But essentially, to me, Apple seeking to control the process of what they see are inevitable laws that they would need to comply with and with much worse outcomes for users. One way beacons that need to be further vetted to ensure no false positives is an effort to ameliorate law enforcement concerns while still protecting legitimate private information is perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government enforced alternatives might be), it may begin to make sense.

In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? You going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database, no one is going to see them. While false positives are rare, they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts. Doing nothing, as most posters are suggesting and as is the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.
Nailed it. By doing something “for the children”, Apple is likely positioning itself to add E2E encryption for iCloud hosting and take away the only criticism the government has about it. 
    And what exactly is the point in end to end encryption if on behalf of a government Timmy can snap his fingers and upload any photo desired from the device pre-encryption? The best protection "for the children" would be to remove all encryption. Or better, shut down the Internet entirely. Why don't we do that instead since apparently we should compromise everything for the 0.000001% that abuse children? Don't get me wrong - it's one seriously sick individual that abuses children and the police should do everything reasonable to stop it, but that doesn't extend to having microphones in every house that are listening out for illegal conversations. Well actually, maybe Cook will reveal HomePods now listen in to conversations in case there's speak of child abuse.
edited August 2021 omair, macplusplus, baconstang, Rayz2016
  • Reply 105 of 162
larryjw Posts: 1,031 member
    It's a slippery slope. 

    There are no good guys. 
  • Reply 106 of 162
radarthekat Posts: 3,842 moderator
Delightful ad hominem and continued slippery slope. You asked about my kids yesterday in a poorly thought-out example that didn't apply, and I care more about them than I do about a hypothetical Apple contractor who hypothetically has PTSD and is hypothetically being treated poorly by Apple. And there's a lot of could, maybe, and in the future here.

    Did you even read this article to its conclusion, or did you just start commenting? I feel like if you had read the article, that last paragraph would be a little different.

    You've been complaining about Apple hardware and software for a long time here. Some of it is warranted and some of it is not. If this is so unacceptable to you, take the final steps and go. That simple.

    There is no extreme danger from this system as it stands. As with literally everything else in this world, there is good and bad, and we've talked about it here. Every decision we all make weighs the good and the bad.

You're welcome to believe that there is extreme danger based on hypotheticals. When and if it develops, we'll write about it.
I will just cut to the chase. The article you wrote above is mostly apologist in nature. Your strongest argument is that it is OK if Apple does this because other companies have been doing it for years.

    You didn't address the elephant in the room: Apple has been selling itself and its products on the idea that they will keep our private data private. They stated or strongly implied that all of our data would be strongly encrypted and that governments and hackers would not be able to access it and that even Apple could not access it due to the strength of the encryption. All of those statements and implied promises were false if encrypted data from our phones is unencrypted when it is stored in iCloud. The promise of privacy is turned into a farce when Apple itself violates it and scans our data without our permission. Yes other companies have been doing things like this with our data for years but that's why we were buying Apple products, right?

    I could pick apart your article a point at a time but it would be tedious. Example: Our ISPs can scan our data for IP content. Yes but we can and do use VPNs to work around that issue. Heck most employers require the use of a VPN to keep their company secrets secret (I bet Apple does too).

    BTW I hope you appreciate that I incorporated your arguments into my own. I did read what you wrote. I just happen to completely disagree with it.
    Wait, it’s okay that ISPs snoop because we can use VPNs but it’s not okay if Apple does even though you could use PhotoVault?  Examine what you’re saying, please. 
killroy, watto_cobra
  • Reply 107 of 162
radarthekat Posts: 3,842 moderator
    I'm not saying that I'm for or against this system. I am unbiased for now. I don't have any position yet, I just have some technical and procedural questions.

1. Why would CSAM or Apple use a hash algorithm that has a 1 in 1,000,000,000,000 (a trillion) chance of a mismatch, when the MD5 hashing algorithm which is already built into macOS has a 1 in 70,000,000,000,000,000,000,000,000,000 (70 octillion) chance of a mismatch? Yes, I know this is an image hash and not a byte-wise file hash, but even so, why was the image hash algorithm designed with such an amazingly high collision chance? Is it impossible to design an image hash that has an error rate in the octillions? Why did they settle for an error rate as surprisingly large as one in a trillion? I want to know. (See the rough sketch at the end of these questions for why the two kinds of hash behave so differently.)

2. What happens if the news media releases a picture of a child in a victimized situation but blurs or hides the indecent parts, in order to help get the public to identify the child, and what if this image gets sent to my iCloud? Is that going to trigger a match? The image has lots in common with the original offending image. Sure, a small part of the image was cropped out, but the CSAM algorithm results in matches even when images are, as they say, "slightly cropped." They said this: "an image that has been slightly cropped or resized should be considered identical to its original and have the same hash." This means it could trigger a match. Am I entitled to know exactly how much of a change will typically cause the match to fail? Or is this something the public is not allowed to learn about?

3. When Apple detects a matching photo hash, how does Apple (or the US government) take into account the location of the culprit when a match occurs? Suppose the culprit who owns the offending photo resides in, say, Russia. What happens then? The US can't put that person on trial (although since 2004 the US has been putting a growing number of foreign terrorists on trial after dragging them into the US, and then using tenuous links like accessing an American bank account as justification for charging them in a US Federal court.) About 96% of the people in the world do not live in the US, so does that mean a high percentage of the cases will never go to trial? Does the US government send a list of suspects to Vladimir Putin every week when they find Russians who have these illegal images in their iCloud? Or do Russian culprits get ignored? What about Canadian culprits, since Canada is on good terms with the US? Does the US government notify the Canadian government, or does the US wait until the culprit attempts to cross the border for a vacation? I want to see the list of countries that the US government shares its suspect list with. Or is this something the public is not allowed to learn about?

    And now for a couple of hypothetical questions of less urgency but similar importance:

    4. If a friendly country like Canada were to develop its own database, would Apple report Canadian matches to the Canadian authority, or to the US authority, or to both governments? In other words, is Apple treating the US government as the sole arbitrator of this data, or will it support other jurisdictions? 

    5. What happens if an unfriendly country like China builds its own child abuse database, would Apple support that, and then would Apple report to Chinese authorities? And how would Apple know that China hasn't included images of the Tiananmen massacre in its own database?

    And now a final question that comes to my mind:

    Since the crimes Apple is trying to fight occur all over the world, shouldn't the ICC (International Criminal Court) be creating a CSAM database? Frankly, I blame the ICC for not tackling this problem. I'm aware that US Republicans generally oppose the ICC, but Democrats sound much more open to using the ICC. Biden has been silent on the ICC since getting elected, but he has said that his administration: "will support multilateral institutions and put human rights at the center of our efforts to meet the challenges of the 21st century.” Reading between the lines, that sounds like he supports the ICC. And since 70% of Americans support the ICC, according to a poll, maybe this is a popular issue that Biden can hang his hat on.

    My main concern with this whole topic is that there are so many important questions like these that are not being considered.
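    To make my first question concrete, here's a rough sketch of why a cryptographic hash and an image hash behave so differently. This is an ordinary "average hash" written for illustration, not Apple's NeuralHash (which isn't published here); the file names and byte strings are made up, and it needs the Pillow library installed:

    # Illustration only: cryptographic hash vs. a simple perceptual hash.
    # Not Apple's NeuralHash. Requires Pillow (pip install pillow); file names are hypothetical.
    import hashlib
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Shrink to size x size, convert to grayscale, threshold each pixel on the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return int(bits, 2)

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # A cryptographic hash changes completely when a single byte changes:
    print(hashlib.sha256(b"original image bytes").hexdigest()[:16])
    print(hashlib.sha256(b"original image bytes.").hexdigest()[:16])

    # A perceptual hash of a slightly resized or cropped copy stays close instead:
    # h1 = average_hash("photo.jpg")
    # h2 = average_hash("photo_resized.jpg")
    # print(hamming(h1, h2))  # small distance -> treated as the "same" image

    The design goal of an image hash is precisely that near-duplicates collide, which is why its false-match rate can't be compared directly with MD5's.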
    Let’s start with the answers you got to these questions from your ISP, from Google and from Microsoft, among others.  For surely you didn’t wait until Apple joined them in applying CSAM databases to attempt to catch bad actors.  Surely?  So what answers did you get when you asked these questions in the applicable forums for those other vendors? And we’ll go from there, shall we?   
killroy, watto_cobra
  • Reply 108 of 162
radarthekat Posts: 3,842 moderator
    amar99 said:
Last Apple product I purchase, last software update I install. Now that they're in bed with the government, their politically-charged "excuse" for invading every user's privacy cannot outweigh the long-term consequences. Governments want in, and Apple has just given them an avenue. The future is dark for Apple if they go down this path of complicity.
    I’m super curious what you’ll replace your Apple gear with.  
killroy, watto_cobra, fastasleep
  • Reply 109 of 162
GeorgeBMac Posts: 11,421 member
    Domestic terrorists are a greater threat than some guy saving a picture he found on the internet.  Shouldn't Apple be scanning for them?

    Likewise, an angry, disaffected person with an AR15 is a greater threat to kids in school than some guy saving a picture he found on the internet.  Shouldn't Apple be scanning for them?
edited August 2021 elijahg, baconstang
  • Reply 110 of 162
IreneW Posts: 303 member
    crowley said:
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you're not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially if using them for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s  attempt to adopt self-declared law enforcement role to “prevent crimes before they occur” is Orwellian !
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent-looking non-explicit photos collected as a result of long abusive practices? So, the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or in some other media.
    Have you even bothered to read these articles? Like even bothered? They do NOT evaluate the subject of your photos. They are specific hash matches to *known* child pornography, cataloged in the CSAM database. 

    Seriously fucking educate yourself before clutching your pearls. If you can’t read the article you’re commenting on, try this one:

    https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope
    Apparently you fucking educated yourself enough to still not understand that an innocent looking photo may still point to child abuse but Apple’s scheme will miss it thus it is ineffective. Crime is a very complex setup, it cannot be reduced to a couple of hashes.
    No one is claiming that this system will solve the problem of child abuse. 
    Well, Apple _could_ improve even further by implementing more intelligent, AI based, scanning. Something along the lines of what Google and others are using. Or even just use Google's implementation:

    "How does Google identify CSAM on its platform?

    We invest heavily in fighting child sexual exploitation online and use technology to deter, detect and remove CSAM from our platforms. This includes automated detection and human review, in addition to relying on reports submitted by our users and third parties such as NGOs, to detect, remove and report CSAM on our platforms. We deploy hash matching, including YouTube’s CSAI match, to detect known CSAM.  We also deploy machine-learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritise abuse content for human review. 

    Both CSAI match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms. Please see here for more details."
    (From https://support.google.com/transparencyreport/answer/10330933#zippy=,how-does-google-identify-csam-on-its-platform)

Why settle for known hashes, if this is such a good thing?
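    To be clear about the distinction I'm drawing, here's a rough sketch (the hash values are made up, and neither function is Apple's NeuralHash matcher nor Google's Content Safety classifier): hash matching can only flag images already catalogued in a database, while classifier-based scanning scores the content of any image, including never-before-seen ones.

    # Illustrative contrast only; hash values are hypothetical placeholders.
    KNOWN_HASHES = {"a3f1c9-example", "9bc04e-example"}   # stand-in for a known-image hash database

    def matches_known_database(image_hash: str) -> bool:
        # "Known hashes" approach: only images already in the database can ever match.
        return image_hash in KNOWN_HASHES

    def classifier_flags(image_bytes: bytes) -> bool:
        # "AI scanning" approach: a trained model scores the content of any image,
        # then humans review what it flags. (Stub; a real system needs a trained model.)
        raise NotImplementedError("requires a trained ML classifier")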
    dewme
  • Reply 111 of 162
crowley Posts: 10,453 member
    IreneW said:
    crowley said:
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you're not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially if using them for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s  attempt to adopt self-declared law enforcement role to “prevent crimes before they occur” is Orwellian !
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent-looking non-explicit photos collected as a result of long abusive practices? So, the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or in some other media.
    Have you even bothered to read these articles? Like even bothered? They do NOT evaluate the subject of your photos. They are specific hash matches to *known* child pornography, cataloged in the CSAM database. 

    Seriously fucking educate yourself before clutching your pearls. If you can’t read the article you’re commenting on, try this one:

    https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope
    Apparently you fucking educated yourself enough to still not understand that an innocent looking photo may still point to child abuse but Apple’s scheme will miss it thus it is ineffective. Crime is a very complex setup, it cannot be reduced to a couple of hashes.
    No one is claiming that this system will solve the problem of child abuse. 
    Well, Apple _could_ improve even further by implementing more intelligent, AI based, scanning. Something along the lines of what Google and others are using. Or even just use Google's implementation:

    "How does Google identify CSAM on its platform?

    We invest heavily in fighting child sexual exploitation online and use technology to deter, detect and remove CSAM from our platforms. This includes automated detection and human review, in addition to relying on reports submitted by our users and third parties such as NGOs, to detect, remove and report CSAM on our platforms. We deploy hash matching, including YouTube’s CSAI match, to detect known CSAM.  We also deploy machine-learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams. Using our classifiers, Google created the Content Safety API, which we provide to others to help them prioritise abuse content for human review. 

    Both CSAI match and Content Safety API are available to qualifying entities who wish to fight abuse on their platforms. Please see here for more details."
    (From https://support.google.com/transparencyreport/answer/10330933#zippy=,how-does-google-identify-csam-on-its-platform)

Why settle for known hashes, if this is such a good thing?
    Because people are going crazy enough at comparing known hashes.
killroy, mwhite
  • Reply 112 of 162
    elijahg said:
Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos; that brings the chances I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo, as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
That's actually not a discrepancy. 

    1 in a trillion is not 1 per trillion accounts, it’s one per trillion photos. 

    So, yes, it will affect much more than 1 in a trillion accounts. You can divide the number of TOTAL photos by one trillion and see the rough number of false positives you’ll get, no matter the number of accounts. Why spread inaccurate info that takes away your own privacy?
  • Reply 113 of 162
Mike Wuerthele Posts: 6,861 administrator
    elijahg said:
Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos; that brings the chances I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo, as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
That's actually not a discrepancy. 

    1 in a trillion is not 1 per trillion accounts, it’s one per trillion photos. 

    So, yes, it will affect much more than 1 in a trillion accounts. You can divide the number of TOTAL photos by one trillion and see the rough number of false positives you’ll get, no matter the number of accounts. Why spread inaccurate info that takes away your own privacy?
    One in a trillion is one per trillion accounts, as per Apple. This is in the document embedded in the post.

    From the document:

    "The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated."

    So no, it won't.
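
    If you want to check the arithmetic behind the two readings yourself, here's a quick sketch. The variable names are mine, the 20,000-photo figure comes from the post above, and it assumes photos are statistically independent, which is a simplification:

    # Contrast the per-photo reading with the per-account figure Apple's document states.
    p_per_photo_reading = 1e-12   # hypothetical reading: "1 in a trillion per photo"
    photos_in_library = 20_000

    # Under that (incorrect) reading, the chance that at least one of 20,000 photos
    # is falsely flagged would be roughly:
    p_at_least_one = 1 - (1 - p_per_photo_reading) ** photos_in_library
    print(p_at_least_one)         # ~2e-8, i.e. about 1 in 50 million

    # But the document says the threshold is tuned so the *account* has a
    # 1-in-a-trillion chance of being incorrectly flagged, regardless of library size:
    p_per_account = 1e-12
    print(p_per_account)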
edited August 2021 killroy, watto_cobra
  • Reply 114 of 162
    The more I think about this story, and read about it, the more I’m not okay with what Apple is doing.

The point of end-to-end encryption is so people can’t analyze our data or check through it. This software update does all of that; it just does it right on our phones, before it’s even had a chance to be encrypted. And then it sends out a warning if it flags something, even sending a low-resolution photo of our own content, with zero encryption. Why would Apple think we were any more OK with them scanning our private data before it was encrypted, as opposed to scanning it on their own servers with the same code?
elijahg, baconstang
  • Reply 115 of 162
mjtomlin Posts: 2,673 member
    Just thought I’d chime in after reading so many misguided complaints about this subject. Are all of you context deaf? Did you read what is actually happening or are you just incapable of comprehending it?

    They are not scanning photos for specific images, they’re simply counting the bits and creating a hash… all they see are numbers… this in no way scans photos for offensive images, nor does it in any way violate your privacy.

    It amazes me that so many are complaining about trying to restrict/thwart child pornography?!

    it’s even more ridiculous when you consider every photo you save to the Photos app is automatically scanned through an image recognition engine to identify the contents of the photo.
    watto_cobra
  • Reply 116 of 162
    mjtomlin said:
    Just thought I’d chime in after reading so many misguided complaints about this subject. Are all of you context deaf? Did you read what is actually happening or are you just incapable of comprehending it?

    They are not scanning photos for specific images, they’re simply counting the bits and creating a hash… all they see are numbers… this in no way scans photos for offensive images, nor does it in any way violate your privacy.

    It amazes me that so many are complaining about trying to restrict/thwart child pornography?!

    it’s even more ridiculous when you consider every photo you save to the Photos app is automatically scanned through an image recognition engine to identify the contents of the photo.

    Creating a hash requires scanning the image and absolutely requires the file to be opened.  Do this on a Mac and it's pretty easy to demonstrate.  Also DO NOT RUN COMMANDS YOU DO NOT UNDERSTAND FROM FORUM POSTS.  Ask someone or research the commands.

    Now.... In a terminal....

sudo -i and enter your account password. This elevates your privileges to root. That's a lowercase i, btw.
    echo "foo" > /opt/root-owned-file.txt  This creates a file in /opt called root-owned-file.txt with the word "foo" as the only content.
chmod 0600 /opt/root-owned-file.txt This ensures that only the root user can read and write to the file.
    exit  and hit return

    Now you're running as the user you logged in with. 
shasum -a 256 /opt/root-owned-file.txt should give you a hash (those numbers you were talking about) but it doesn't. You get a permission-denied error because you can't hash a file that you can't open. Apple isn't magic; they have to open the image in order to analyze it. Full stop. No binary or user on a Unix system can hash a file without opening it.

Okay, clean up the file: sudo rm /opt/root-owned-file.txt

Next up... This computer is one that I paid for and I own. Only parties I consent to should have the right to open my files to analyze them. As the example above shows, hashing means opening the file. No one is complaining about stopping CSAM, but these aren't computers that Apple owns, and they aren't asking users if they want to submit to surveillance (and no, scanning a photo to see if a dog is in it is not surveillance). Additionally, Apple is clearly adopting a vigilante role that is extra-judicial. Law enforcement agencies require a warrant to compel someone to surrender access to a computer, and yet Apple presumes powers that the FBI doesn't have.

The article is primarily an ad hominem fallacy without many facts. "Hey, the other guys are doing it too!" is a baseless argument. I do not have a Facebook account so I don't care what they do. I'm suddenly not given a choice with Apple, and I am perfectly justified in getting my ire up when they insist that they have the uninvited right to open files that I create.
edited August 2021 macplusplus, baconstang, muthuk_vanalingam
  • Reply 117 of 162
GeorgeBMac Posts: 11,421 member
    mjtomlin said:
    Just thought I’d chime in after reading so many misguided complaints about this subject. Are all of you context deaf? Did you read what is actually happening or are you just incapable of comprehending it?

    They are not scanning photos for specific images, they’re simply counting the bits and creating a hash… all they see are numbers… this in no way scans photos for offensive images, nor does it in any way violate your privacy.

    It amazes me that so many are complaining about trying to restrict/thwart child pornography?!

    it’s even more ridiculous when you consider every photo you save to the Photos app is automatically scanned through an image recognition engine to identify the contents of the photo.

    That is merely an investigative technique.  But yes, they are investigating to determine if you are storing prohibited images.

    The defense is:   Child pornography is a serious enough issue that we think we need to do it.  But don't worry:  we won't scan for any other issues including:  Terrorism, mass murders, criminal organizations, insurrection, hate speech and the like.
    Rayz2016
  • Reply 118 of 162
mjtomlin Posts: 2,673 member

    The defense is:   Child pornography is a serious enough issue that we think we need to do it.  But don't worry:  we won't scan for any other issues including:  Terrorism, mass murders, criminal organizations, insurrection, hate speech and the like.

    True, but having said that, those are mostly adults making adult decisions. The issue here is to protect children that may not know better. This isn't about monitoring conversation or data mining, looking for foul play... it's about protecting children from being taken advantage of and possibly abused.

    Do all adults need to have a babysitter when left alone at home? No. Why? Because it is assumed by society that they can make informed decisions based off experience and general common knowledge. This has nothing to do with adults (humans over 12), it is all and only about children 12 years of age and younger who may not know that they're being used. This is why there are parental controls and why it is up to the parent to opt-in if they think it is necessary.
    edited August 2021
  • Reply 119 of 162
mjtomlin Posts: 2,673 member
    aguyinatx said:
shasum -a 256 /opt/root-owned-file.txt should give you a hash (those numbers you were talking about) but it doesn't. You get a permission-denied error because you can't hash a file that you can't open. Apple isn't magic; they have to open the image in order to analyze it. Full stop. No binary or user on a Unix system can hash a file without opening it.

Okay, clean up the file: sudo rm /opt/root-owned-file.txt

Next up... This computer is one that I paid for and I own. Only parties I consent to should have the right to open my files to analyze them. As the example above shows, hashing means opening the file. No one is complaining about stopping CSAM, but these aren't computers that Apple owns, and they aren't asking users if they want to submit to surveillance (and no, scanning a photo to see if a dog is in it is not surveillance). Additionally, Apple is clearly adopting a vigilante role that is extra-judicial. Law enforcement agencies require a warrant to compel someone to surrender access to a computer, and yet Apple presumes powers that the FBI doesn't have.

The article is primarily an ad hominem fallacy without many facts. "Hey, the other guys are doing it too!" is a baseless argument. I do not have a Facebook account so I don't care what they do. I'm suddenly not given a choice with Apple, and I am perfectly justified in getting my ire up when they insist that they have the uninvited right to open files that I create.

First of all, data that passes through Apple's services and is stored on Apple's servers is absolutely Apple's responsibility, as they can be held accountable and liable for the transmission and storage of that data. This is exactly why Apple goes to extraordinary lengths to make sure user data is encrypted whenever and wherever possible and only "viewable" by the end user; it removes them from some of that liability. (Remember Napster? They got shut down because people were using their software primarily to share pirated digital media.)

Second, opening a file and adding up the bytes to create a hash is NOT the same thing as scanning an image. Any file can be opened and have a hash created. In fact, any data you transmit (be it to a storage device or over a network) is ALWAYS "looked" at; this is how data corruption is discovered. When you send a file to a server, that file's data is "counted", much in the same way discussed here, to make sure all the data is intact. This isn't about being able to "open" a file; many people are up in arms because they believe this is actually "looking" at the photo to determine if there are any specific images contained within. Also, a user trying to access data from another account is not the same thing as the OS doing so. What do you think stops that user from doing that? It's the security portion of the OS! The OS always has access to your data whether it's locked or not (with the exception of being encrypted).

Third, you own the hardware and your data, but not the software or services; those are licensed. When you purchase a computer from Apple, that license is a contract (actually two: a privacy policy and an EULA) that states that by using their software and services you agree to the terms of that license. Much of it outlines how the operating system and its various services need access to your data. The FBI and other law enforcement agencies are not part of that contract; therefore, being an outside party, they need another legal avenue to access that data, i.e. a warrant.
    edited August 2021 radarthekat
  • Reply 120 of 162
GeorgeBMac Posts: 11,421 member
    mjtomlin said:

    The defense is:   Child pornography is a serious enough issue that we think we need to do it.  But don't worry:  we won't scan for any other issues including:  Terrorism, mass murders, criminal organizations, insurrection, hate speech and the like.

    True, but having said that, those are mostly adults making adult decisions. The issue here is to protect children that may not know better. This isn't about monitoring conversation or data mining, looking for foul play... it's about protecting children from being taken advantage of and possibly abused.

    Do all adults need to have a babysitter when left alone at home? No. Why? Because it is assumed by society that they can make informed decisions based off experience and general common knowledge. This has nothing to do with adults (humans over 12), it is all and only about children 12 years of age and younger who may not know that they're being used. This is why there are parental controls and why it is up to the parent to opt-in if they think it is necessary.

I don't think Mike Pence made a decision to be hanged on January 6th. Nor do I think most of the kids and adults who died at the hands of an AR15-carrying crackpot made the decision to die that day either.