Apple expands feature that blurs iMessage nudity to UK, Canada, New Zealand, and Australia
Apple is rolling out the communication safety feature that scans iMessages for nudity on devices owned by younger users in the U.K., Canada, New Zealand, and Australia months after debuting it in the U.S.
Apple iMessage
The feature, which is different from the controversial on-device Photos evaluation function, will automatically blur potentially harmful images in received or outgoing messages on devices owned by children.
First reported by The Guardian, Apple is expanding the feature to the U.K. after rolling it out in the U.S. back in iOS 15.2. AppleInsider has learned that the feature is expanding to Canada, New Zealand, and Australia as well.
How the feature works depends on whether a child receives or sends an image with nudity. Received images will be blurred, and the child will be provided with safety resources from child safety groups. Nudity in photos sent by younger users will trigger a warning advising that the image should not be sent.
The feature is privacy-focused and is available only on an opt-in basis; it must be enabled by parents. All detection of nudity is done on-device, meaning that potentially sensitive images never leave an iPhone.
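As a rough illustration of the behavior described above, here is a minimal Swift sketch of how an on-device check might gate received and outgoing images. The classifier, threshold, and type names are hypothetical assumptions for illustration only, not Apple's actual Communication Safety implementation or API.

```swift
import Foundation

// Hypothetical sketch only: the classifier, threshold, and types below are
// illustrative assumptions, not Apple's Communication Safety implementation.

enum MessageDirection {
    case received
    case outgoing
}

struct SafetyDecision {
    let blurImage: Bool          // blur a received image before showing it
    let showResources: Bool      // surface child-safety resources to the child
    let warnBeforeSending: Bool  // warn that an outgoing image should not be sent
}

/// Stand-in for an on-device nudity classifier returning a score in 0...1.
/// In the real feature this analysis stays entirely on the device.
func nudityScore(for imageData: Data) -> Double {
    return 0.0 // placeholder: no real image analysis happens in this sketch
}

func evaluate(imageData: Data,
              direction: MessageDirection,
              threshold: Double = 0.8) -> SafetyDecision {
    let flagged = nudityScore(for: imageData) >= threshold
    switch direction {
    case .received:
        // Received images: blur and offer child-safety resources.
        return SafetyDecision(blurImage: flagged, showResources: flagged, warnBeforeSending: false)
    case .outgoing:
        // Outgoing images: warn the child before the image is sent.
        return SafetyDecision(blurImage: false, showResources: false, warnBeforeSending: flagged)
    }
}
```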
Apple first announced the Communication Safety function alongside a suite of features meant to provide stronger safety mechanisms for children. That suite included a system that scanned Photos for child sexual abuse material (CSAM).
The CSAM scanning had several privacy mechanisms and never looked through a user's images. Instead, it matched potentially abusive material against known hashes provided by child safety organizations. Despite that, Apple faced a backlash and delayed the feature until further notice.
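To make the hash-matching idea concrete, here is a small Swift sketch that compares an image's hash against a set of known hashes without otherwise examining the image. It uses plain SHA-256 purely for simplicity; Apple's delayed system relied on a perceptual NeuralHash plus additional cryptographic safeguards, so this is an assumption-laden illustration, not its actual mechanism.

```swift
import Foundation
import CryptoKit

// Illustrative sketch only: plain SHA-256 stands in for Apple's perceptual
// NeuralHash and the cryptographic protections built around it.

/// Hashes of known material, as would be supplied by child safety
/// organizations (empty placeholder here).
let knownHashes: Set<String> = []

/// Hex-encoded SHA-256 digest of the image data.
func hashHex(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Returns true only when the image's hash matches a known entry;
/// the matching never "looks at" the image content itself.
func matchesKnownMaterial(_ imageData: Data) -> Bool {
    knownHashes.contains(hashHex(of: imageData))
}
```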
Apple's Communication Safety function is completely unrelated to the CSAM scanning mechanism. It first debuted in the U.S. in November as part of iOS 15.2, and its expansion to these regions suggests a wider rollout to other markets may follow.
Comments
On the particular matter, my first thought was: how many kids actually have iPhones anyway? But then, I am an old fogey from The Time Before Mobiles. Every kid has a phone now. And privacy, once important, along with protection against the creation of a Big Brother in all its possible forms, no longer seems to matter.
So go ahead and blur naughty photos, Apple! Just keep the cops and prosecution out of it. Seems like that is the case here, so it doesn't bother me that this could be the precursor to something else. When that something else arrives, we consumers can evaluate it at the time. For now, no objections from me.
My point was that it's a waste of time. They can work on nuclear technology for all I care, but don't waste engineers' time.
That said, at face value this opt-in sounds like a welcome feature for many...
This nudity detection feature, if, as stated, it all happens on device with no reporting to any party outside the original participants, does not violate privacy and is a tool that parents can use to protect their kids (when the kids can afford their own phone/plan, they can decide for themselves to turn it on or off, imo).
I don’t have an issue with scanning; what I take issue with is what is done with the results of a scan (is it reported or not).
The article says "All detection of nudity is done on-device, meaning that potentially sensitive images never leave an iPhone." If only the phone in question is involved then no-one else is involved, so by definition it's private.
I also don't understand what encryption you think is being "circumvented". The data on your phone is encrypted but, within the phone, the encryption keys are available and the data is unlocked: if it wasn't, how would Photos be able to display your photos to you?
It all boils down to trust. Do I trust Apple? More so than other companies. Do I trust that they won’t look at my images? Generally I do, but I’m also not naive. Do I trust their software not to be hacked or leak these images? To some degree, but again, I’m not naive. Do I trust Apple’s claims about encryption? Generally, yes. How about end to end encryption (meaning it stays encrypted once it leaves the device and is only decrypted once it arrives at the final destination)? Enough that I’ll send private details to someone on the other side.
And that’s just part of it. Do I trust the receiver not to re-share the details? Do I trust the receiver to have taken precautions not to inadvertently leak the info, such as using untrusted third-party software, screen lookers, or even hackers? This is why things I want to keep private stay on my phone, and things I send out of it I regard as public knowledge despite promises made about the implementation (which I generally consider trustworthy for now).