What you need to know: Apple's iCloud Photos and Messages child safety initiatives


Comments

  • Reply 81 of 162
    macplusplusmacplusplus Posts: 2,112member
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

    Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps sensing this, the pressure was growing immense.

    Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, or terrorism may be other beacons that are sent out in the future.

    I know there will be discussions over who makes the decisions, how it is vetted, errors, misuse, hacking, etc. But essentially, to me, Apple is seeking to control the process ahead of what they see as inevitable laws that they would need to comply with, laws with much worse outcomes for users. One-way beacons that are further vetted to ensure no false positives, in an effort to ameliorate law enforcement concerns while still protecting legitimate private information, are perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government-enforced alternatives might be), it may begin to make sense.

    In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

    As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

    There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
    Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database no one is going to see them. While false positives are rare they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

    I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts . Doing nothing, as most posters are suggesting and the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.


    Just a positive and forcedly optimistic description of Apple's delicate situation. Or we can put it the other way: the Government can already monitor everything by deep packet inspection. Instead they want to pass the burden to Tim Apple... 
  • Reply 82 of 162
    macplusplusmacplusplus Posts: 2,112member
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
    This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent-looking non-explicit photos collected as a result of long abusive practices? So, the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or in some other media.
  • Reply 83 of 162
    bulk001bulk001 Posts: 764member
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
    This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent looking non-explicit photos collected as a result of long abusive practices? 
    It is not scanning for photos of a mother taking a picture of her child. It is scanning for known photos of child pornography and abuse. 
  • Reply 84 of 162
    StrangeDaysStrangeDays Posts: 12,877member
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    Er, except it’s literally after the crime occurs. If they’re not hosting child porn on their commercial servers, there is no crime.

    And most criminals are idiots. And if cloud providers like Apple, Google, and Dropbox didn’t take these measures, are you pretending their cloud services wouldn’t be used to host this content? Of course they would. 
  • Reply 85 of 162
    StrangeDaysStrangeDays Posts: 12,877member
    bulk001 said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    Ever watched the TV show Dumbest Criminals? Besides, a pretext to what? What do you have that is so private that the government should not see it? Your affair? Your own dick pic? Nobody cares, and if the NSA looked at them, so what? If you are a terrorist, 1/6 insurrectionist, child pornographer, teen slashing tires in a neighborhood, etc., I want you to be caught. If anything, the pretext to me seems to be that privacy is being used as an excuse to exploit children. 
    You have law enforcement for that. I don't want Apple to become a cop, that's it.

    The pretext is a long-developed, deep issue involving Tim Apple's political engagements that apparently now leaves him in a very delicate situation. Won't discuss that further here...
    Nah that’s just idiotic. Cook doesn’t run Google, Dropbox, Microsoft, Twitter — all doing the same CSAM hash matching on child porn. 
  • Reply 86 of 162
    MarvinMarvin Posts: 15,322moderator
    bulk001 said:
    elijahg said:
    elijahg said:
    Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, which brings the chances that I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo, as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
    Unfortunately it is - 1 in 1 trillion becomes 2 in 1 trillion with two photos. Or 1 in 500 billion. That then halves again with 4 photos, 1 in 250 billion, and so on. It's little more than simplified fractions. Punch 1,000,000,000,000/20,000 into a scientific calculator and it'll be simplified to 50,000,000/1. The odds do decrease because it is more likely you have a matching photo with 2 photos than with 1 photo. And yes, statistically speaking 1 in 1 trillion means that in a trillion-strong library there will be one false match.

    Also, it's massively more likely someone will get their password phished than a hash collision occurring - probably 15-20% of people I know have been "hacked" through phishing. All it takes is a couple of photos to be planted, with a date a few years ago so they aren't at the forefront of someone's library and someone's in very hot water. You claim someone could defend against this in court, but I fail to understand how? "I don't know how they got there" isn't going to wash with too many people. And unfortunately, "good security practices" are practised only by the likes of us anyway, most people use the same password with their date of birth or something equally insecure for everything. 
    1 in 50 million is not the same statistically as one in a trillion tried 20,000 times, no matter how much you want it to be so, I'm afraid. Regardless, your 1 in 50 million is still a very large number.

    One in a trillion tried a trillion times does not guarantee a match, although it is likely, as you're saying. There may even be two or three. You're welcome to believe what you want, and you can research it with statisticians if you are so inclined. This is the last I will address this point here.

    And, in regards to the false positive, somebody will look at the image, and say something like: Oh, this is a palm tree. It just coincidentally collides with the hash. All good. Story over.

    In regards to your latter point, this is addressed in the article.
    Correct! If the lottery odds are 1:150 million, buying two tickets does not increase your odds of winning to 1:75 million. You now have two tickets, each with the odds of 1:150 million of winning.
    That is actually how the lottery works and how the odds are represented. You can extrapolate it to the point that if you buy 150 million tickets, each with a unique number, you will win the lottery. The chances of winning don't stay constant as the number of tickets you buy increases; there would be no point in buying more tickets if that were the case. You can imagine all the numbers (150m) on a table and place a marker on two of them. Split the numbers into two sets, each containing one of your selections. Given that the winning number selector will pick one out of either set, that's equivalent to playing a game with one choice from half the numbers (75m).

    https://lottolibrary.com/lottery-odds-calculator/

    But it's different for odds matching images in a database, because the photos someone has aren't entirely unique/random: someone will have 100 selfies that are all visually similar to each other and all different from what's in the database, and if they take 1 trillion selfies, that will also be the case. The stats are based on purely random images, i.e. the worst-case scenario. False positives are nothing to worry about here because even in the unlikely event that multiple images accidentally match, the human verification will see that.
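    To make the arithmetic being argued over concrete: if each image were an independent trial with a per-image false-positive probability p, the chance of at least one false match in a library of n images would be 1 - (1 - p)^n, which for tiny p is roughly n times p. The sketch below just runs that calculation; the 1e-12 rate and the independence assumption are illustrative only, and note that Apple's stated one-in-a-trillion figure is per account, not per image.

        import math

        # Illustrative only: assumes independent per-image trials with a
        # hypothetical per-image false-positive rate p. Apple's published
        # one-in-a-trillion figure is per account, not per image.
        p = 1e-12
        for n in (20_000, 350_000, 10**12):
            # numerically stable form of 1 - (1 - p)**n
            prob = -math.expm1(n * math.log1p(-p))
            print(f"{n:>16,} photos -> P(at least one false positive) ~ {prob:.3e}")
        # Roughly 2e-8 (about 1 in 50 million) for 20,000 photos, and about 0.63
        # for a trillion photos -- likely, but still not a guaranteed false match.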
  • Reply 87 of 162
    StrangeDaysStrangeDays Posts: 12,877member
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

    Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps sensing this, the pressure was growing immense.

    Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, or terrorism may be other beacons that are sent out in the future.

    I know there will be discussions over who makes the decisions, how it is vetted, errors, misuse, hacking, etc. But essentially, to me, Apple is seeking to control the process ahead of what they see as inevitable laws that they would need to comply with, laws with much worse outcomes for users. One-way beacons that are further vetted to ensure no false positives, in an effort to ameliorate law enforcement concerns while still protecting legitimate private information, are perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government-enforced alternatives might be), it may begin to make sense.

    In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

    As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

    There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
    Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database no one is going to see them. While false positives are rare they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

    I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts . Doing nothing, as most posters are suggesting and the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.
    Nailed it. By doing something “for the children”, Apple is likely positioning itself to add E2E encryption for iCloud hosting and take away the only criticism the government has about it. 
  • Reply 88 of 162
    StrangeDaysStrangeDays Posts: 12,877member
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

    Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps sensing this, the pressure was growing immense.

    Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, or terrorism may be other beacons that are sent out in the future.

    I know there will be discussions over who makes the decisions, how it is vetted, errors, misuse, hacking, etc. But essentially, to me, Apple is seeking to control the process ahead of what they see as inevitable laws that they would need to comply with, laws with much worse outcomes for users. One-way beacons that are further vetted to ensure no false positives, in an effort to ameliorate law enforcement concerns while still protecting legitimate private information, are perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government-enforced alternatives might be), it may begin to make sense.

    In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

    As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

    There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
    Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database no one is going to see them. While false positives are rare they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

    I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts . Doing nothing, as most posters are suggesting and the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.


    Just a positive and forcedly optimistic description of Apple's delicate situation. Or we can put it the other way: the Government can already monitor everything by deep packet inspection. Instead they want to pass the burden to Tim Apple... 
    No, they can’t. E2E encryption prevents man in the middle. That’s the entire reason law enforcement doesn’t want encryption.
  • Reply 89 of 162
    StrangeDaysStrangeDays Posts: 12,877member
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
    This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent-looking non-explicit photos collected as a result of long abusive practices? So, the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or in some other media.
    Have you even bothered to read these articles? Like even bothered? They do NOT evaluate the subject of your photos. They are specific hash matches to *known* child pornography, cataloged in the CSAM database. 

    Seriously fucking educate yourself before clutching your pearls. If you can’t read the article you’re commenting on, try this one:

    https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope
  • Reply 90 of 162
    nrg2nrg2 Posts: 18member
    It is not scanning for photos of a mother taking a picture of her child. It is scanning for known photos of child pornography and abuse. 
    Excellent point, but it goes even further than that. It's comparing HASH to HASH, not PHOTO to PHOTO, of already KNOWN child pornography/abuse. Very different things, and it's delusional to think this is some sort of privacy concern.

    As for the photo message monitoring that is part of the parental controls, this happens on the phone and is NOT SENT TO APPLE, with an alert going to the parent only. Every responsible parent should enable it on their children's phones. This shouldn't be a news flash, but here in the USA, and I'm guessing most other countries, if your child is under 18 and sends nude photos of themselves to a girlfriend/boyfriend etc., they have just committed a crime, as they have become purveyors of child porn. The person they sent that photo to is also potentially criminally liable for possession of child pornography.
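    A rough sketch of the distinction being drawn here, purely for illustration: Apple's real system uses a perceptual hash (NeuralHash) and a private set intersection protocol rather than a plain cryptographic digest, and the database contents and names below are made up. The point it illustrates is that matching is fingerprint-to-fingerprint against already-known images, not an analysis of what your photo depicts.

        import hashlib

        # Toy illustration, NOT Apple's implementation.
        KNOWN_IMAGE_HASHES = {
            # hypothetical digests supplied by a clearinghouse such as NCMEC
            "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
        }

        def fingerprint(image_bytes: bytes) -> str:
            """Stand-in fingerprint: a SHA-256 digest of the raw image bytes."""
            return hashlib.sha256(image_bytes).hexdigest()

        def matches_known_database(image_bytes: bytes) -> bool:
            """True only if this exact fingerprint is already in the known list."""
            return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES

        # A freshly taken family photo produces a digest that is in the list only
        # by astronomical coincidence, so it is never flagged.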
  • Reply 91 of 162
    macplusplusmacplusplus Posts: 2,112member
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!
    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it cannot be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
    This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not AI, I mean human interpretation. You need a context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but if and only if they are explicit enough to provide an abuse context. What about innocent-looking non-explicit photos collected as a result of long abusive practices? So, the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or in some other media.
    Have you even bothered to read these articles? Like even bothered? They do NOT evaluate the subject of your photos. They are specific hash matches to *known* child pornography, cataloged in the CSAM database. 

    Seriously fucking educate yourself before clutching your pearls. If you can’t read the article you’re commenting on, try this one:

    https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope
    Apparently you fucking educated yourself enough to still not understand that an innocent-looking photo may still point to child abuse, but Apple’s scheme will miss it; thus it is ineffective. Crime is a very complex setup; it cannot be reduced to a couple of hashes.
  • Reply 92 of 162
    elijahg said:
    Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, which brings the chances that I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo, as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
    The article was amended: the 1 in a trillion figure is per account, not per image.

    Also, if there was a hit it would be reviewed by a human before being acted upon, so if the false positive turned out not to be kiddie porn it would never escalate.

    Sounds like the only people who should really be concerned are those who really do store kiddie porn in their photo libraries.
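    Sketched out, the per-account flow described in the article and the posts above looks roughly like the snippet below: matches accumulate for an account, and only past a threshold does a human review the flagged material before anything is reported. The threshold value and the function names are illustrative assumptions, not figures or APIs published in the article.

        # Illustrative sketch of the per-account threshold + human review flow.
        # MATCH_THRESHOLD and human_reviewer_confirms are assumed names/values.
        MATCH_THRESHOLD = 30  # illustrative; not a figure given in the article

        def evaluate_account(match_vouchers: list) -> str:
            if len(match_vouchers) < MATCH_THRESHOLD:
                # Below the threshold, matches are not acted on or reported.
                return "no action"
            # Only above the threshold does a human look at the flagged matches.
            if human_reviewer_confirms(match_vouchers):
                return "report to NCMEC"
            return "false positive - dismissed"

        def human_reviewer_confirms(vouchers) -> bool:
            # Placeholder for the manual review step described in the article.
            raise NotImplementedError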
  • Reply 93 of 162
    Mike WuertheleMike Wuerthele Posts: 6,861administrator
    elijahg said:
    Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, which brings the chances that I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo, as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
    The article was amended: the 1 in a trillion figure is per account, not per image.

    Also, if there was a hit it would be reviewed by a human before being acted upon, so if the false positive turned out not to be kiddie porn it would never escalate.

    Sounds like the only people who should really be concerned are those who really do store kiddie porn in their photo libraries.
    Considering I was the guy who amended it, yup.
  • Reply 94 of 162
    GeorgeBMacGeorgeBMac Posts: 11,421member
    crowley said:
    darkvader said:
    1.  Spying on your customers is EVIL, even when it's done for ostensibly noble purposes.
    This isn't spying.  It's verifying against a known dataset of child abuse images.  No match, no problem.
    darkvader said:
    2.  If this technology is implemented, it WILL be used for purposes other than what it originally was intended to do.
    Baseless conjecture, and not even necessarily a negative one.
    darkvader said:
    3.  This can easily be used to target people, both by state and non-state actors, simply by sending your target texts or emails with matchable images.
    Except that wouldn't work, because the check only happens when the image is being sent to iCloud Photos.
    darkvader said:

    4.  This WILL be used by authoritarian governments for things other than its original design purpose.
    Authoritarian governments are not in control, Apple are. 
    darkvader said:

    5.  It must be stopped.
    I disagree, and counter that rampant child abuse must be stopped.
    darkvader said:

    What I'd propose is that if Apple continues down this dark path, the way to defeat it is going to be overwhelming the system.  The hash database exists, therefore it's going to be obtainable.  What needs to happen is the creation of a large number of image files that, while being innocuous cat memes or even just line noise, will match the hashes.  Then those harmless images should be so widely distributed that systems like this are utterly overwhelmed with false positives, making them useless.
    Great, so you're helping out those who share child abuse images.  Good to know whose side you're on.
    It IS spying!   Looking at people's personal stuff without their permission is spying.  The fact that it's for a good purpose doesn't change that.
    And, Apple has been saying for years that any "back doors" WILL be used to spy on people.
    And too, it's not just "authoritarian governments" who do the spying.  The U.S. does more of it than (probably) all other countries combined (with the exception of Israel -- they seem to be in a league of their own).

  • Reply 95 of 162
    macplusplusmacplusplus Posts: 2,112member
    dewme said:
    bulk001 said:
    dewme said:
    thrang said:
    My take is this.

    Apple saw the writing on the wall regarding government-mandated back doors. Doors that would in all likelihood be much more open, with unfettered access to much if not all of your information. Perhaps sensing this, the pressure was growing immense.

    Perhaps they decided to develop content and technology around very narrow and focused one-way beacons (to avoid a mandated backdoor), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murders, rapes, extortion, or terrorism may be other beacons that are sent out in the future.

    I know there will be discussions over who makes the decisions, how it is vetted, errors, misuse, hacking, etc. But essentially, to me, Apple is seeking to control the process ahead of what they see as inevitable laws that they would need to comply with, laws with much worse outcomes for users. One-way beacons that are further vetted to ensure no false positives, in an effort to ameliorate law enforcement concerns while still protecting legitimate private information, are perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government-enforced alternatives might be), it may begin to make sense.

    In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify/legislate/vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology that is designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

    As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

    There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries including the United States continue to radicalize against not only remote outsiders, but their fellow citizens who they now consider outsiders because they don’t embrace reality bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied. 

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
    Child abuse and child pornography is the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. This is not what is going on here. Unless your pictures match those in the database no one is going to see them. While false positives are rare they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around though, what I don’t understand is the support for the very worst of society on this board in the name of their privacy. 
    I’m 100% in favor, as I said in my post, about what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors off the table - by helping to solve the child abuse problem. This is not a deflection, this is the way that people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “we will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”  

    I’m totally behind Apple on this because they are doing the right thing wrt child abuse and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts . Doing nothing, as most posters are suggesting and the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not enter into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.


    Just a positive and forcedly optimistic description of Apple's delicate situation. Or we can put it the other way: the Government can already monitor everything by deep packet inspection. Instead they want to pass the burden to Tim Apple... 
    No, they can’t. E2E encryption prevents man in the middle. That’s the entire reason law enforcement doesn’t want encryption.
    That's for general traffic. If they want to monitor a specific route, they still have a lot of options. The entire reason law enforcement doesn't want encryption is totally different.
  • Reply 96 of 162
    GeorgeBMacGeorgeBMac Posts: 11,421member
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    The main take-away: commercial cloud hosting means using their servers. Should they not take measures to address child pornography on them? If you’re not using their commercial service, there’s no issue. Is that not reasonable? One needn’t use commercial hosting services, especially for illegal purposes.
    And this is exactly what criminals actually do: they are not stupid enough to use iCloud, they have the dark web, they have browsers and file transfer tools tailored to the special protocols developed for the dark web. Apple has long explained very well that iCloud backups are not encrypted. Law enforcement has (or should have) no issue with iCloud, because they can get any person’s unencrypted iCloud data anytime by presenting a court order. And I assure you, this is almost always much faster than Apple’s surveillance, based on the accumulation of some nasty tokens and the following human review.

    So, that child protection pretext stinks. Since law enforcement can access iCloud data anytime, Apple’s attempt to adopt a self-declared law enforcement role to “prevent crimes before they occur” is Orwellian!

    While that's all true (or most of it*), there's a significant difference between police retrieving somebody's data from iCloud with a search warrant versus this: the search warrant supposedly has to have a justifiable cause for the search (we saw that rule get bent when it's Democratic lawmakers being searched), and it searches one person.

    This is mass surveillance more akin to what the NSA does than a search warrant.

    * I doubt that everybody storing prohibited images has access to the dark web.
  • Reply 97 of 162
    thrangthrang Posts: 1,008member
    Again, it is NOT a back door. It is a warning beacon. Very different
  • Reply 98 of 162
    omairomair Posts: 9member
    I am shocked at this.  This is a slippery slope and I disagree with the article that this is a middle ground.  This is essentially a 2-step backdoor.  Back in 2011, India gave blackberry an ultimatum to hand over access to bbm or leave the country.  Blackberry countered for a month and then handed over the keys.

    In apple's case they have been harassed by FBI and have always presented the defence that the design is built with complete privacy with no way to circumvent it.
    Well now there are ways to circumvent this.  Even if you are using CSAM database, why cant at the behest of a foreign government you use another one to help them or get shut out of the country?  Apple is going to say no and leave china? or India? They absolutely wouldnt.  Same with fbi who might even be looking at CSAM itself at this point.

    Also 1 in a trillion is not great odds.  I have 350k photos alone.  And think of other use cases, any government agency can trap you through an app placing photos that triggers an investigation.  A person can trap someone by planting photos on someones phone.

    And then this goes into imessages, browser and search history, emails.  Why stop anywhere?  Why can't apple do realtime surveillance and alert authorities when a child is being abused.  Let algorithms call 911.  Or any other crime.  Dont you wanna save children?  Isnt that a goal above all goals at any price?

    Why do we need search warrants to search houses?  If police suspects someone of a crime, why do they need any warrant, why cant they barge in?  After all they could potentially save children?  They wont be getting into every house right.  And why are you worried if you have nothing to hide?

    Sad part is, people who abuse children, are likely going to leave the platform or find other ways.  While those who dont will be under surveillance of unprecedented historical nature in the history of human civilization.

    Let's not forget, despite all the claims Apple makes, it was Apple devices that resulted in the leak of thousands of nudes of celebrities, and it is Apple that NSO's Pegasus so efficiently targeted so that governments could surveil thousands of politicians, journalists and social activists. With a horrible track record, fanboys just keep drinking the Kool-Aid.

    And now Apple is going all in on creating back doors targeting its own users.

  • Reply 99 of 162
    crowleycrowley Posts: 10,453member
    crowley said:
    crowley said:
    Then I assume you don’t use Dropbox, Gmail, Twitter, Tumblr, etc etc… They all use the CSAM database for the same purpose. 

    I'mma just leave this here:
    U.S. law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.
    From: https://www.nytimes.com/2021/08/05/technology/apple-iphones-privacy.html
    Flagging such cases doesn't mean preventive Orwellian surveillance. Such a law cannot pass. Even if it did, it could not be interpreted in such an Orwellian sense. Citizens will fight, courts will interpret.
    No idea what you're even talking about.  

    You said criminals are "not stupid enough to use iCloud", which is obviously untrue, since they're stupid enough to use Facebook.

    You said Apple are attempting to "prevent crimes before they occur", which doesn't seem to be true or even relevant.  Images of child abuse are definitely crimes that have already occurred.

    Stop using Orwellian like a trump word.  It isn't.
    This is why preventive Orwellian surveillance is not a solution. How will you distinguish a mother's baby shower photo from a child abuse photo? Not with AI; I mean by human interpretation. You need context to qualify it as child abuse. The scheme as described will not provide that context. "Images of child abuse are definitely crimes that have already occurred", agreed, but only if they are explicit enough to provide an abuse context. What about innocent-looking, non-explicit photos collected as a result of long abusive practices? So the number of cases Apple can flag will be extremely limited, since such explicit context will mostly reside elsewhere, on the dark web or some other medium.
    Have you even bothered to read these articles? Like, even bothered? The system does NOT evaluate the subject of your photos. It looks for specific hash matches to *known* child pornography cataloged in the CSAM database.

    Seriously fucking educate yourself before clutching your pearls. If you can’t read the article you’re commenting on, try this one:

    https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope
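    To make that distinction concrete, here is a minimal sketch of what matching against a database of known hashes means: a membership-plus-threshold test, not a judgment about what a photo depicts. This is an illustration only, not Apple's actual NeuralHash/private-set-intersection pipeline; the hash function, the placeholder database entries, and the threshold value are all stand-ins.

    ```python
    # Illustration only (not Apple's NeuralHash/PSI implementation).
    # The check is set membership against hashes of *known* images plus a
    # match threshold, not an evaluation of what a new photo shows.
    import hashlib

    KNOWN_HASHES = {"placeholder_hash_1", "placeholder_hash_2"}  # hypothetical entries
    MATCH_THRESHOLD = 30  # illustrative threshold, not an official figure

    def image_digest(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash. SHA-256 changes on any re-encode;
        # a real perceptual hash is designed to survive resizing/recompression.
        return hashlib.sha256(image_bytes).hexdigest()

    def flag_for_review(library: list[bytes]) -> bool:
        # Only when the number of matches crosses the threshold would anything
        # be surfaced for human review in a scheme like the one described.
        matches = sum(image_digest(img) in KNOWN_HASHES for img in library)
        return matches >= MATCH_THRESHOLD
    ```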
    Apparently you fucking educated yourself enough to still not understand that an innocent-looking photo may still point to child abuse, but Apple's scheme will miss it, and thus it is ineffective. Crime is a very complex setup; it cannot be reduced to a couple of hashes.
    No one is claiming that this system will solve the problem of child abuse. 
  • Reply 100 of 162
    gatorguygatorguy Posts: 24,213member
    dewme said:
    bulk001 said:
    Child abuse and child pornography are the very definition of “what about the children”! And after you buy Apple some time and they don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed with a picture of her child partially naked; she got arrested and possibly flagged as a sex offender. That is not what is going on here. Unless your pictures match those in the database, no one is going to see them. While false positives are rare, they will happen, and if there is a criticism, it would be that Apple should explain better how the image will be reviewed and what action will be taken. To turn your “what about the children” thesis around, though, what I don’t understand is the support on this board for the very worst of society in the name of their privacy.
    I’m 100% in favor, as I said in my post, of what Apple is doing. They are taking an argument that the government has been trying to use to justify forcing Apple to open back doors and removing it from the table, by helping to solve the child abuse problem. This is not a deflection; this is the way people who truly care about solving problems go about solving them. They aren’t just saying, “I don’t buy into your argument” and ending up at a stalemate. They are saying, “We will negate your argument by solving that problem … so now let’s talk about why you still think you need that back door.”

    I’m totally behind Apple on this because they are doing the right thing with regard to child abuse, and they are removing an impediment that’s causing a stalemate on the larger issue of forced government surveillance. The inextricable linkage between the two is very evident in the posts. Doing nothing, as most posters are suggesting and as has been the standard mode of operation, is not a valid option in my opinion. Whether you agree with what Apple is doing or not, these issues need to be discussed openly and not descend into an interminably entrenched ideological stalemate with no progress made on anything. Pragmatism has its time and place, and Apple is saying that time is now.


    Just a positive, forcibly optimistic description of Apple's delicate situation. Or we can put it the other way: the government can already monitor everything by deep packet inspection. Instead, they want to pass the burden to Tim Apple…
    No, they can’t. E2E encryption prevents man-in-the-middle interception. That’s the entire reason law enforcement doesn’t want encryption.
    That depends on the implementation.

    In iMessage, for instance, you can have more than two people in an E2E-encrypted exchange, and that's by design. There's nothing to prevent adding additional participants, even a "man-in-the-middle" ghost observer, someone you would have no idea was involved, with the message session still being E2E encrypted. Unless only two people are allowed in an encrypted message session, there is a danger of some "authority" insisting on being added as an undisclosed third participant, an eavesdropper in practical terms, with Apple barred by national law from ever discussing it. So no, E2E would not necessarily prevent a man-in-the-middle listener.

    Before you brush it off: yes, that is exactly what was proposed by a major intelligence service some time back, and there has been no further public mention of it since.
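    To make the "ghost participant" concern concrete, here is a toy sketch of fan-out group encryption. Assumptions throughout: this is not iMessage's actual protocol, the wrap/unwrap functions are placeholder stand-ins for real public-key wrapping, and each "key pair" is collapsed to a single secret for brevity. The idea is that the sender encrypts one session key for every key the directory lists for the conversation, so a silently added extra key can read along while the session still looks end-to-end encrypted to everyone else.

    ```python
    # Toy model of fan-out group E2E messaging; not real cryptography and not
    # Apple's iMessage protocol. It only illustrates why a silently injected
    # "ghost" recipient defeats end-to-end encryption in practice.
    import secrets

    def wrap_key(session_key: bytes, recipient_key: bytes) -> bytes:
        # Placeholder for real public-key wrapping (e.g. RSA-OAEP or ECIES).
        return bytes(a ^ b for a, b in zip(session_key, recipient_key))

    def unwrap_key(envelope: bytes, recipient_key: bytes) -> bytes:
        # XOR-ing twice with the same key recovers the session key (toy only).
        return bytes(a ^ b for a, b in zip(envelope, recipient_key))

    alice, bob, ghost = (secrets.token_bytes(32) for _ in range(3))

    directory = {"alice": alice, "bob": bob}   # the participants the sender expects
    directory["ghost"] = ghost                 # silently injected by the service

    session_key = secrets.token_bytes(32)
    envelopes = {name: wrap_key(session_key, key) for name, key in directory.items()}

    # Every listed recipient, including the ghost, recovers the same session key,
    # yet the conversation still appears end-to-end encrypted to Alice and Bob.
    assert all(unwrap_key(env, directory[name]) == session_key
               for name, env in envelopes.items())
    ```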
    edited August 2021