exceptionhandler
About
- Username
- exceptionhandler
- Joined
- Visits
- 966
- Last Active
- Roles
- member
- Points
- 344
- Badges
- 0
- Posts
- 381
Reactions
Apple expands feature that blurs iMessage nudity to UK, Canada, New Zealand, and Australia...
command_f said: exceptionhandler said: jdw said: <Snip>.
I also don't understand what encryption you think is being "circumvented". The data on your phone is encrypted but, within the phone, the encryption keys are available and the data unlocked: if it wasn't, how would Photos be able to display your photos to you?

You are correct that the data is encrypted before being stored on the phone, and it has to be decrypted to be displayed on your screen. This is different from the encryption used to send something via Messages, for instance, and is also different from the encryption used to send things via iCloud Photos. When sending images via iCloud Photos, the image has to be decrypted from storage before it is sent. It should then be encrypted in transit (HTTPS) to Apple's servers using a different protocol and a different key. But at that point Apple has access to the contents to check for CSAM, and they've probably been doing this since before the announcement of on-device CSAM detection. I suspect Apple then re-encrypts the data at rest on the server side, so only people with the keys have access to it.

I highly doubt iCloud Photos will ever be fully E2E encrypted, meaning only the original intended participants are privy to the contents, with no one in the middle able to view them, including but not limited to Apple or law enforcement. Threshold encryption is a way to circumvent the original participants and report to another, originally unintended party. Thus, I see little difference between the privacy of on-device vs. server-side CSAM detection. On-device gives a false sense of privacy. Since I treat anything I send off device (already decrypted from local storage) as public knowledge, this is effectively the same.
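To make the distinction concrete, here is a minimal Swift sketch (using CryptoKit; the `uploadOverHTTPS` call is hypothetical and this is not Apple's implementation) contrasting transport-only encryption, where the server ends up holding the plaintext and can scan it, with client-side encryption, where the server only ever sees ciphertext:

```swift
import CryptoKit
import Foundation

let photo = Data("raw photo bytes".utf8)

// 1) Transport-only encryption: TLS protects the bytes on the wire, but the
//    server receives and stores the plaintext, so it can scan or re-encrypt it.
// uploadOverHTTPS(photo)                        // hypothetical call; server sees `photo`

// 2) Client-side ("end-to-end" style) encryption: the device seals the photo
//    with a key the server never receives, so the server stores only ciphertext.
let deviceKey = SymmetricKey(size: .bits256)     // never leaves the device
let sealed = try! AES.GCM.seal(photo, using: deviceKey)
let ciphertext = sealed.combined!                // this is what would be uploaded
// uploadOverHTTPS(ciphertext)                   // hypothetical call; server sees only ciphertext

// Only a holder of deviceKey can recover the photo.
let box = try! AES.GCM.SealedBox(combined: ciphertext)
let recovered = try! AES.GCM.open(box, using: deviceKey)
assert(recovered == photo)
```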
It all boils down to trust. Do I trust Apple? More so than other companies. Do I trust that they won't look at my images? Generally I do, but I'm also not naive. Do I trust their software not to be hacked or leak these images? To some degree, but again, I'm not naive. Do I trust Apple's claims about encryption? Generally, yes. How about end-to-end encryption (meaning it stays encrypted once it leaves the device and is only decrypted once it arrives at the final destination)? Enough that I'll send private details to someone on the other side.
And that’s just part of it. Do I trust the receiver not to re-share the details? Do I trust the receiver to have taken precautions against inadvertently leaking the info, whether through untrusted third-party software, screen lookers, or even hackers? This is why things I want to keep private stay on my phone, and things I send out of it I regard as public knowledge despite promises made about the implementation (which I generally consider trustworthy for now).
Apple expands feature that blurs iMessage nudity to UK, Canada, New Zealand, and Australia...
jdw said: I was against the proposed Apple CSAM features because, even though the risk was low, there was still the chance it could basically "call the cops" on someone, even by accident. However, I am not averse to the blurring of nudity as mentioned by the article because it doesn't secretly phone home or otherwise call the cops on the user. Some naughty people who send those kinds of photos won't be happy, but since I don't do that, it's obviously a non-issue for me. And while I don't engage in other illicit activities regarding kiddie porn either, the fact it potentially could alert authorities to target a given iPhone user keeps me dead set against it.
So go ahead and blur naughty photos, Apple! Just keep the cops and prosecution out of it. Seems like that is the case here, so it doesn't bother me that this could be the precursor to something else. When that something else arrives, we the consumers can evaluate it at the time. For now, no objections from me.
This nudity detection feature, if it all happens on device as stated, with no reporting to any party outside the original participants, does not violate privacy and is a tool parents can use to protect their kids (when the kids can afford their own phone/plan, they can decide for themselves whether to turn it on or off, imo).
I don’t have an issue with scanning; what I take issue with is what is done with the results of a scan (is it reported or not).
Smarthome firm and early HomeKit partner Insteon is dead, with no warning to customers
I abhor devices that won’t work without internet. If I’m on the local network, they should work without needing to phone home, and the only internet communication should go through HomeKit’s APIs to allow remote control access. Apple’s services/APIs aren’t going away anytime soon. Another alternative would be to use an on-site VPN to access these devices. From security and reliability perspectives, each vendor providing their own services is a nightmare.
Wemo Smart Video Doorbell review: The new HomeKit doorbell of choice
Andrew_OSU said: Japhey said: Like I said previously, this will probably be my first smart doorbell. And, like I asked earlier…what material is the outside casing made from? Is it metal? Is it plastic? This information should be included in the review.
I was unaware there was a PoE doorbell… not a fan of the commercial look of the Robin ProLine for a residence, though.
iOS 15 adoption rate appears to be lagging behind past updates
StrangeDays said: Incorrect. Sorry but you're speaking from ignorance here. Deploying 100% E2E is exactly why Apple designed on-device child rape CSAM scanning.
Point is, even if some criminal were using it (probably not, unless they are dumb, because of server-side CSAM scanning), they can encrypt their stash using something like TrueCrypt (and even use the plausible-deniability mode that double encrypts it) and store it in a general file sharing/storage service like Dropbox or Google Drive. Or heck, even just keep it local and sync it to their own devices. They’ll find ways to see and propagate it.
I’m not saying Apple shouldn’t do CSAM detection… they should, but keep it on the server side. It’s encrypted in transit (HTTPS) and is probably encrypted at rest, so only a few who have the keys have access to it at Apple. How does that change if they move to on-device scanning and reporting? As I’ve said before: Apple has used a threshold algorithm to enable review of the contents… remove that, and you would have true E2E encryption; otherwise it’s just “wish it were E2E”. It’s close, but no cigar.
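For illustration, here is a toy Swift sketch of the threshold idea: a t-of-n Shamir secret sharing of a "review key" over a tiny field. This is a deliberately simplified assumption, not Apple's actual construction, but it shows the property being discussed: with fewer than t match vouchers the key is unrecoverable, and the moment the threshold is crossed it can be reconstructed and review becomes possible.

```swift
import Foundation

let p = 257  // small prime field, enough for a toy one-byte secret

func mod(_ x: Int) -> Int { ((x % p) + p) % p }

// Modular inverse via Fermat's little theorem: a^(p-2) mod p.
func inverse(_ a: Int) -> Int {
    var result = 1, base = mod(a), exp = p - 2
    while exp > 0 {
        if exp & 1 == 1 { result = mod(result * base) }
        base = mod(base * base)
        exp >>= 1
    }
    return result
}

// Split `secret` into n shares; any t of them reconstruct it, fewer reveal nothing.
func makeShares(secret: Int, t: Int, n: Int) -> [(x: Int, y: Int)] {
    // Random polynomial of degree t-1 whose constant term is the secret.
    let coeffs = [secret] + (1..<t).map { _ in Int.random(in: 0..<p) }
    return (1...n).map { (x: Int) -> (x: Int, y: Int) in
        var y = 0, xPow = 1
        for c in coeffs {
            y = mod(y + c * xPow)
            xPow = mod(xPow * x)
        }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the secret from any t (or more) shares.
func reconstruct(_ shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var num = 1, den = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = mod(num * -sj.x)
            den = mod(den * (si.x - sj.x))
        }
        secret = mod(secret + si.y * num * inverse(den))
    }
    return secret
}

// The "review key" 42 is split so that any 3 of 10 vouchers unlock it.
let shares = makeShares(secret: 42, t: 3, n: 10)
print(reconstruct(Array(shares.prefix(2))))  // below threshold: a meaningless value
print(reconstruct(Array(shares.prefix(3))))  // threshold met: prints 42
```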
Once a threshold is met, what happens then? Do we trust Apple to handle the data correctly? Do we trust the authorities to handle the data correctly? Do we trust individuals at these places not to share it or leak it? Do we trust that the thresholds won’t change? Do we trust that other types of images won’t be deemed “illicit” in the future? Do we trust that a similar threshold algorithm won’t be applied to text in iMessages, looking for messages of terrorism or “hate” speech? Do we trust that it will only be done for photos uploaded to iCloud?

I see no difference between:
- Someone outside the intended audience having limited access to the data with on-device scanning and reporting…
- Someone outside the intended audience having limited access to the data with server-side scanning and reporting…
The only difference here is where it’s done, and on-device scanning has more serious privacy problems than server-side.
I treat anything I send to iCloud Photos as public knowledge, aka anything I want to keep private stays on my devices. On-device scanning and reporting encroaches on that privacy (even given the current software “limitations” of thresholds and only applying to photos sent to iCloud)… so, given that, I would much rather the CSAM reporting stay server-side.
¯\_(ツ)_/¯