New iOS 15.2 beta includes Messages feature that detects nudity sent to kids

Posted in General Discussion, edited November 2021
Apple's latest iOS 15.2 beta has introduced a previously announced opt-in communication safety feature designed to warn children -- and not parents -- when they send or receive photos that contain nudity.

Credit: Andrew O'Hara, AppleInsider


The Messages feature was one part of a suite of child safety initiatives announced back in August. Importantly, the iMessage feature is not the controversial system designed to detect child sexual abuse material (CSAM) in iCloud.

Although not enabled by default, parents or guardians can switch on the Messages feature for child accounts on a Family Sharing plan. The feature will detect nudity in Messages, blur the image, and warn the child.

Apple says children will be given helpful resources and reassured that it's okay if they don't want to view the image. If a child attempts to send photos that contain nudity, similar protections will kick in. In either case, the child will be given the option to message someone they trust for help.

Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.

Apple says that the nudity detection flag never leaves the device, and that the system doesn't encroach upon the end-to-end encryption of iMessage.

It's important to note that the feature is opt-in and only being included in the beta version of iOS, meaning it is not currently public-facing. There's no timeline on when it could reach a final iOS update, and there's a chance that the feature could be pulled from the final release before that happens.

Again, this feature is not the controversial CSAM detection system that Apple announced and then delayed. Back in September, Apple said it would debut the CSAM detection feature later in 2021. Now, the company says it is taking additional time to collect input and make necessary improvements.

There's no indication of when the CSAM system will debut. However, Apple did say that it will be providing additional guidance to children and parents in Siri and Search. In an update to iOS 15 and other operating systems later in 2021, the company will intervene when users perform searches for queries related to child exploitation and explain that the topic is harmful.


Comments

  • Reply 1 of 19
I’m legitimately curious: how can iMessage “detect” sexually explicit photos being sent to or from a phone?

    Does anyone know how this is being done on device?
  • Reply 2 of 19
    elijahg Posts: 2,759 member

    Although not enabled by default, parents or guardians can switch on the Messages feature for child accounts on a Family Sharing plan. The feature will detect nudity in Messages, blur the image, and warn the child. 

    Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.

    Surely that's a bit of a catch-22: Since it has to be enabled by parents but won't notify parents due to potential repercussions, I'd wager the kind of parents that would dish out those repercussions would never turn the feature on anyway?

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".
  • Reply 3 of 19
    elijahg said:

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".
    Photos library scanning is already happening on our devices. Both Apple and Google offer object and face recognition in their apps, so it makes no sense to be worried now, after years of library scanning.
  • Reply 4 of 19
    darkvader Posts: 1,146 member
    elijahg said:

    Although not enabled by default, parents or guardians can switch on the Messages feature for child accounts on a Family Sharing plan. The feature will detect nudity in Messages, blur the image, and warn the child. 

    Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.

    Surely that's a bit of a catch-22: Since it has to be enabled by parents but won't notify parents due to potential repercussions, I'd wager the kind of parents that would dish out those repercussions would never turn the feature on anyway?

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".

    If this contains no code capable of reporting the results of the image analysis to any third party, then it's fine. 

    Unfortunately, we have no way of knowing if that dangerous code is there, and given Apple's previous intent of putting it there, Apple can't be trusted at this point.  Apple needs to have a third-party code audit to confirm that the dangerous code is indeed gone.
  • Reply 5 of 19
    auxio Posts: 2,727 member
    darkvader said:

    If this contains no code capable of reporting the results of the image analysis to any third party, then it's fine. 

    Unfortunately, we have no way of knowing if that dangerous code is there, and given Apple's previous intent of putting it there, Apple can't be trusted at this point.  Apple needs to have a third-party code audit to confirm that the dangerous code is indeed gone.
    The on-device scanning and sending of information could easily be audited by logging everything your phone is sending out via your router (which I'm sure people are already doing).  As for whether information is shared with 3rd parties, that would require an audit on Apple's side, not the on-device code.
  • Reply 6 of 19
    auxio Posts: 2,727 member
    JaiOh81 said:
    I’m legitimately curious, how can iMessage “detect” sexually explicit photos being sent to or from a phone.

    Does anyone know how this is being done on device?
    A simplified explanation is that a "fingerprint" of the image (a unique number which is calculated using the pixels in the image) is created and sent to Apple.  This fingerprint is then compared with fingerprints of known sexually explicit images to see how similar it is.  It won't be done purely on device, but only when an image is sent/received via iMessage.  The fingerprint will be calculated on device, but the comparison will be done on Apple's iMessage servers most likely.
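    For anyone curious what that kind of on-device "fingerprint" comparison could look like in code, here is a minimal sketch using Apple's public Vision framework. It is only an illustration of the general idea described in the comment above, not Apple's actual Messages implementation, and the image URLs are placeholders.

    ```swift
    import Foundation
    import Vision

    // Minimal sketch of the "fingerprint" idea described above, using Apple's
    // public Vision framework. This is NOT Apple's Messages implementation;
    // it only shows how an on-device image "feature print" can be computed
    // and compared for visual similarity.
    func featurePrint(for imageURL: URL) throws -> VNFeaturePrintObservation? {
        let request = VNGenerateImageFeaturePrintRequest()
        let handler = VNImageRequestHandler(url: imageURL, options: [:])
        try handler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    }

    func visualDistance(between a: URL, and b: URL) throws -> Float? {
        guard let printA = try featurePrint(for: a),
              let printB = try featurePrint(for: b) else { return nil }
        var distance: Float = 0
        // Smaller distance means the two images look more alike.
        try printA.computeDistance(&distance, to: printB)
        return distance
    }
    ```

    As later replies point out, this hash-style similarity comparison is closer to how the separate (and delayed) CSAM proposal was described; the Messages feature is reportedly a local classifier instead.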
  • Reply 7 of 19
    mcdave Posts: 1,927 member
    This ML-driven implicit content rating is great as provider content rating could never be trusted.
    Though until this service also alerts me, the parent, as to who’s sending nudity, my daughter won’t be getting iMessage switched on.
  • Reply 8 of 19
    mcdave Posts: 1,927 member
    darkvader said:
    elijahg said:

    Although not enabled by default, parents or guardians can switch on the Messages feature for child accounts on a Family Sharing plan. The feature will detect nudity in Messages, blur the image, and warn the child. 

    Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.

    Surely that's a bit of a catch-22: Since it has to be enabled by parents but won't notify parents due to potential repercussions, I'd wager the kind of parents that would dish out those repercussions would never turn the feature on anyway?

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".

    If this contains no code capable of reporting the results of the image analysis to any third party, then it's fine. 

    Unfortunately, we have no way of knowing if that dangerous code is there, and given Apple's previous intent of putting it there, Apple can't be trusted at this point.  Apple needs to have a third-party code audit to confirm that the dangerous code is indeed gone.
    Apple had no previous plan to report this to anyone but the parents. This is nothing to do with the CSAM scanning.

    In its planned form it’s an irrelevant feature. If anything, stating that a picture may be objectionable will only pique a child’s interest, and with no parental involvement this shouldn’t be considered a parental control. If improving child safety is its intent, it’s likely to achieve the opposite. Worse, it’s a placebo.
  • Reply 9 of 19
    yojimbo007 Posts: 1,165 member
    elijahg said:

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".
    Photos library scanning is already happening on our devices. Both Apple and Google offer object and face recognition in their apps, so it makes no sense to be worried now, after years of library scanning.
    Yes but Only on iCloud … not on ones device …
    No more Updates for me !!! 
    Principally f-ed up …. Its surveillance.. regardless of what they want to call it or Misrepresent it as!!!
    If they can scan photos… they can scan for anything their heart desires.
    F-Them

  • Reply 10 of 19
    auxio Posts: 2,727 member
    elijahg said:

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".
    Photos library scanning is already happening on our devices. Both Apple and Google offer object and face recognition in their apps, so it makes no sense to be worried now, after years of library scanning.
    Yes but Only on iCloud … not on ones device …
    Face scanning has been done on-device ever since it was introduced back in iPhoto in 2009
    edited November 2021
  • Reply 11 of 19
    auxio said:
    elijahg said:

    This does still have the potential to be contentious, since it's still scanning (on device) the photos being sent. That means much like before the tech for further erosion of privacy is already implemented, potentially allowing a country to force Apple to scan for particular pictures as they see fit. Yes it's enabled for child accounts only, but it wouldn't be much trouble to enable scanning for anyone, and not much more of a stretch to force them to send a surreptitious notification to a government minion.

    What happens if Jingping tells Apple it has to scan for any photos of Winnie the Pooh? Will Apple still say no under threat of being removed from China? This question that was never really answered before still exists - and Apple's only response is "we won't bend to government demands", with no answer to "even if it's China?".
    Photos library scanning is already happening on our devices. Both Apple and Google offer object and face recognition in their apps, so it makes no sense to be worried now, after years of library scanning.
    Yes but Only on iCloud … not on ones device …
    Face scanning has been done on-device ever since it was introduced back in iPhoto in 2009
    Not the point for him, contrived outrage is.
  • Reply 12 of 19
    mcdave Posts: 1,927 member
    “The child will be given the option to message someone they trust for help” - I think Apple needs to research paedophile behaviour a little better; the perpetrators are often ‘trusted’, and deferring this decision to a child is the ugly side of empowerment.
  • Reply 13 of 19
    spheric Posts: 2,556 member
    auxio said:
    JaiOh81 said:
    I’m legitimately curious, how can iMessage “detect” sexually explicit photos being sent to or from a phone.

    Does anyone know how this is being done on device?
    A simplified explanation is that a "fingerprint" of the image (a unique number which is calculated using the pixels in the image) is created and sent to Apple.  This fingerprint is then compared with fingerprints of known sexually explicit images to see how similar it is.  It won't be done purely on device, but only when an image is sent/received via iMessage.  The fingerprint will be calculated on device, but the comparison will be done on Apple's iMessage servers most likely.
    You're describing the CSAM scanning, which is entirely separate from and has nothing to do with this feature. 

    This is content analysis, just like Photos has been doing for years now. When it's analysing your photos for content, so that you can search for "cat", it's also analysing for nudity. 
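    As a rough sketch of that kind of local content analysis, Apple's public Vision framework exposes an on-device image classifier. The confidence threshold and label filtering below are illustrative assumptions, not the private model the Messages feature actually uses.

    ```swift
    import Foundation
    import Vision

    // Illustrative only: classify an image entirely on device using Vision's
    // built-in taxonomy, the same general technique Photos uses for search.
    // Apple's Messages feature uses its own private model, not this request,
    // and the confidence threshold below is an arbitrary example value.
    func onDeviceLabels(for imageURL: URL, minimumConfidence: Float = 0.8) throws -> [String] {
        let request = VNClassifyImageRequest()
        let handler = VNImageRequestHandler(url: imageURL, options: [:])
        try handler.perform([request])
        let observations = request.results as? [VNClassificationObservation] ?? []
        return observations
            .filter { $0.confidence >= minimumConfidence }
            .map { $0.identifier }   // e.g. "cat", "beach"; nothing leaves the device
    }
    ```

    The relevant point for the thread is that a request like this runs entirely on the device; whether anything is reported off-device is a separate question, which is what the audit discussion above is about.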
  • Reply 14 of 19
    I find it gross that Apple continues to push the idea that young children and pre-teens are equipped to make good decisions around sexual content.  And now parents have even less role to play in these content restrictions than when first announced.  Like the CSAM feature, Apple continues to view its customers with suspicion that they might be child abusers.
  • Reply 15 of 19
    auxio Posts: 2,727 member
    spheric said:
    auxio said:
    JaiOh81 said:
    I’m legitimately curious, how can iMessage “detect” sexually explicit photos being sent to or from a phone.

    Does anyone know how this is being done on device?
    A simplified explanation is that a "fingerprint" of the image (a unique number which is calculated using the pixels in the image) is created and sent to Apple.  This fingerprint is then compared with fingerprints of known sexually explicit images to see how similar it is.  It won't be done purely on device, but only when an image is sent/received via iMessage.  The fingerprint will be calculated on device, but the comparison will be done on Apple's iMessage servers most likely.
    You're describing the CSAM scanning, which is entirely separate from and has nothing to do with this feature. 

    This is content analysis, just like Photos has been doing for years now. When it's analysing your photos for content, so that you can search for "cat", it's also analysing for nudity. 
    I'll have to look into it a bit more I guess.  I assumed that image hashes could be used to determine visual similarity (not just pure comparisons, as CSAM does).
  • Reply 16 of 19
    I’m not against scanning, as the software necessarily needs to be able to read and write images (wouldn’t be useful if it didn’t).  Scanning is just a subset of reading images - it makes it easier for me to find images of particular things.

    But reporting the results of a scan to a 3rd party without my consent, that’s what I take issue with. As long as the data remains on device, scanning is a useful time-saving tool.

    Also, being a concerned parent, I probably won’t be handing my kids a smartphone or tablet that I don’t have access to, so this feature is not really useful to me. They have an iPad, but I have it locked down so the password can’t be changed (and technically, at this point, they don’t know the password, so they have to ask to use it).

    I guess it could be useful to have images blurred out if an unknown number/account sends illicit ones, just in case.

    When they are old enough to buy/afford a phone and pay the monthly charges, they can do what they want.
  • Reply 17 of 19
    gatorguy Posts: 24,212 member
    auxio said:
    JaiOh81 said:
    I’m legitimately curious, how can iMessage “detect” sexually explicit photos being sent to or from a phone.

    Does anyone know how this is being done on device?
    A simplified explanation is that a "fingerprint" of the image (a unique number which is calculated using the pixels in the image) is created and sent to Apple.  This fingerprint is then compared with fingerprints of known sexually explicit images to see how similar it is.  It won't be done purely on device, but only when an image is sent/received via iMessage.  The fingerprint will be calculated on device, but the comparison will be done on Apple's iMessage servers most likely.
    This isn't the hash-tagged "known child porn images" so many were complaining about. My guess is this particular feature uses some fuzzy algorithm that recognizes too much skin, or certain human features typically considered to be of a private nature. 
  • Reply 18 of 19
    mknelson Posts: 1,124 member
    gatorguy said:
    auxio said:
    JaiOh81 said:
    I’m legitimately curious, how can iMessage “detect” sexually explicit photos being sent to or from a phone.

    Does anyone know how this is being done on device?
    A simplified explanation is that a "fingerprint" of the image (a unique number which is calculated using the pixels in the image) is created and sent to Apple.  This fingerprint is then compared with fingerprints of known sexually explicit images to see how similar it is.  It won't be done purely on device, but only when an image is sent/received via iMessage.  The fingerprint will be calculated on device, but the comparison will be done on Apple's iMessage servers most likely.
    This isn't the hash-tagged "known child porn images" so many were complaining about. My guess is this particular feature uses some fuzzy algorithm that recognizes too much skin, or certain human features typically considered to be of a private nature. 
    In much the same way as you can search your photo library for pictures of "car", "dog", "cat", "cake", etc.

    I find it gross that Apple continues to push the idea that young children and pre-teens are equipped to make good decisions around sexual content.  And now parents have even less role to play in these content restrictions than when first announced.  Like the CSAM feature, Apple continues to view its customers with suspicion that they might be child abusers.
    You have it 100% backwards. The concept was that it would "warn" kids about the image they were about to send and then notify the parents as well if the kids send the image.

    Also - you clearly haven't spent much time on the internet. Example: back when Periscope was still active you could go on and find a kid (clearly under 14 so not following Twitter's terms and conditions) doing something completely innocent and normal and they would be bombarded with "do you have cash app?" and emoji suggestions such as 👚⬆️ and 👖⬇️
  • Reply 19 of 19
    mknelson said:
    I find it gross that Apple continues to push the idea that young children and pre-teens are equipped to make good decisions around sexual content.  And now parents have even less role to play in these content restrictions than when first announced.  Like the CSAM feature, Apple continues to view its customers with suspicion that they might be child abusers.
    You have it 100% backwards. The concept was that it would "warn" kids about the image they were about to send and then notify the parents as well if the kids send the image.

    Also - you clearly haven't spent much time on the internet. Example: back when Periscope was still active you could go on and find a kid (clearly under 14 so not following Twitter's terms and conditions) doing something completely innocent and normal and they would be bombarded with "do you have cash app?" and emoji suggestions such as 👚⬆️ and 👖⬇️
    You should read the article again! 

    Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.

