exceptionhandler

About

Username: exceptionhandler
Joined:
Visits: 966
Last Active:
Roles: member
Points: 344
Badges: 0
Posts: 381
  • Apple expands feature that blurs iMessage nudity to UK, Canada, New Zealand, and Australia...

    command_f said:
    jdw said:
    <Snip>.
    I agree.  I was, and still am, against on-device CSAM detection.  I’m OK if CSAM detection happens in the cloud, however… their house, their rules. On-device CSAM detection necessitates circumventing encryption to report data that violates the rules to a 3rd, originally unintended party.
    I don't follow your logic. CSAM detection is OK if it's done on someone else's computer (Apple's Server in the cloud) but not if it's done on your own phone? If it's done on your phone, you then get the warning and no-one else can tell (unless you keep doing it, which is my memory of what Apple proposed). If it's done on someone else's server then the data is present somewhere out of your control.

    I also don't understand what encryption you think is being "circumvented". The data on your phone is encrypted but, within the phone, the encryption keys are available and the data unlocked: if it wasn't, how would Photos be able to display your photos to you? 
    As I’ve said before: I treat anything I send to iCloud Photos as public knowledge (even if it’s meant only for friends and family).  Anything I want to keep private stays on my devices. On-device scanning and reporting encroaches on that privacy (even given the current software “limitations” of thresholds and scanning only photos sent to iCloud)… so, given that, I would much rather the CSAM reporting stay server-side.

    You are correct that the data is encrypted before being stored on the phone, and it has to be decrypted to be displayed on your screen.  This is different from the encryption used to send something via Messages, for instance, and is also different from the encryption used to send things via iCloud Photos.  When sending images via iCloud Photos, the image has to be decrypted from storage before it is sent.  It should then be encrypted in transit (HTTPS) to Apple’s servers using a different protocol and a different key. But at that point Apple has access to the contents to check for CSAM, and they’ve probably been doing this since before the announcement of on-device CSAM detection. I suspect Apple then re-encrypts the data at rest on the server side, so only people with the keys have access to it.
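
    To sketch that flow in Swift (illustrative only; the file path, upload URL, and keys below are placeholders I made up):

    import CryptoKit
    import Foundation

    // 1. The photo sits encrypted at rest on the device, and the device holds the key
    //    (otherwise Photos couldn't display it to you).
    let localKey = SymmetricKey(size: .bits256)
    let storedBlob = try Data(contentsOf: URL(fileURLWithPath: "/tmp/photo.enc"))   // placeholder path
    let photo = try ChaChaPoly.open(ChaChaPoly.SealedBox(combined: storedBlob), using: localKey)

    // 2. Upload: TLS (https) protects the bytes in transit, but TLS terminates at the server,
    //    so whoever runs the endpoint sees the plaintext photo.
    var request = URLRequest(url: URL(string: "https://example.com/upload")!)        // placeholder URL
    request.httpMethod = "POST"
    URLSession.shared.uploadTask(with: request, from: photo) { _, _, error in
        if let error { print("upload failed: \(error)") }
    }.resume()

    // 3. At rest on the server, the provider re-encrypts with keys *it* controls, so
    //    "encrypted at rest" there means "readable by whoever holds the provider's keys".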

    I highly doubt iCloud Photos will ever be fully E2E encrypted, meaning only the original intended participants are privy to the encrypted contents, with no one in the middle able to view them, including but not limited to Apple or law enforcement.  Threshold encryption is a way to circumvent the original participants and report to another, originally unintended party.  Thus, I see little difference between the privacy of on-device vs server-side CSAM detection.  On-device detection gives a false sense of privacy.  Since I treat anything I send off device (already decrypted from local storage) as public knowledge, this is effectively the same.
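
    A toy model of that threshold idea (my own simplification in Swift, not Apple’s actual construction; the threshold value is purely illustrative):

    import CryptoKit
    import Foundation

    // With a threshold scheme the server can't read anything until enough on-device matches
    // accumulate; past the threshold, a party that was never an intended recipient can decrypt.
    // That is why this isn't end-to-end encryption in the strict sense.
    struct ThresholdVault {
        let threshold = 30                                    // illustrative number only
        private let contentKey = SymmetricKey(size: .bits256)
        private(set) var matchCount = 0

        // Each on-device match produces an encrypted "voucher" the server stores but can't open.
        mutating func voucher(for flaggedDerivative: Data) throws -> Data {
            matchCount += 1
            return try ChaChaPoly.seal(flaggedDerivative, using: contentKey).combined
        }

        // In the real scheme the key would be reconstructed cryptographically from the vouchers;
        // here we just model the policy: below the threshold, no key and no access.
        func keyForReviewer() -> SymmetricKey? {
            matchCount >= threshold ? contentKey : nil
        }
    }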

    It all boils down to trust.  Do I trust Apple? More so than other companies.  Do I trust that they won’t look at my images? Generally I do, but I’m also not naive.  Do I trust their software not to be hacked or to leak these images? To some degree, but again, I’m not naive.  Do I trust Apple’s claims about encryption? Generally, yes.  How about end-to-end encryption (meaning it stays encrypted once it leaves the device and is only decrypted once it arrives at the final destination)? Enough that I’ll send private details to someone on the other side.

    And that’s just part of it.  Do I trust the receiver not to re-share the details? Do I trust the receiver to have taken precautions not to inadvertently leak the info, whether through untrusted 3rd-party software, shoulder surfers, or even hackers? This is why things I want to keep private stay on my phone, and things I send out of it I regard as public knowledge, despite promises made about the implementation (which I generally consider trustworthy for now).
  • Apple expands feature that blurs iMessage nudity to UK, Canada, New Zealand, and Australia...

    jdw said:
    I was against the proposed Apple CSAM features because, even though the risk was low, there was still the chance it could basically "call the cops" on someone, even by accident.  However, I am not averse to the blurring of nudity as mentioned by the article, because it doesn't secretly phone home or otherwise call the cops on the user.  Some naughty people who send those kinds of photos won't be happy, but since I don't do that, it's obviously a non-issue for me.  And while I don't engage in other illicit activities regarding kiddie porn either, the fact that it could potentially alert authorities to target a given iPhone user keeps me dead set against it.

    So go ahead and blur naughty photos, Apple!  Just keep the cops and prosecution out of it.  Seems like that is the case here, so it doesn't bother me that this could be a precursor to something else.  When that something else arrives, we the consumers can evaluate it at the time.  For now, no objections from me.
    I agree.  I was, and still am, against on-device CSAM detection.  I’m OK if CSAM detection happens in the cloud, however… their house, their rules. On-device CSAM detection necessitates circumventing encryption to report data that violates the rules to a 3rd, originally unintended party.

    This nudity detection feature, if, as stated, it all happens on device with no reporting to any party outside the original participants, does not violate privacy, and it is a tool parents can use to protect their kids (when the kids can afford their own phone/plan, they can decide for themselves whether to turn it on or off, imo).

    I don’t take issue with scanning; what I take issue with is what is done with the results of a scan (whether or not it is reported).
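
    Roughly, the design I’m fine with looks like this in Swift (the isLikelyNudity closure is a hypothetical stand-in, since Apple’s on-device classifier isn’t an API I can call; the point is that nothing in this path touches the network):

    import UIKit
    import CoreImage

    // Classify and blur entirely on device; the result of the scan never leaves the phone.
    func presentIncomingImage(_ image: UIImage, isLikelyNudity: (UIImage) -> Bool) -> UIImage {
        guard isLikelyNudity(image), let ciImage = CIImage(image: image) else {
            return image                                  // nothing sensitive detected: show as-is
        }
        // Blur locally; the original stays on the device and is only shown if the user opts in.
        let blurred = ciImage.applyingGaussianBlur(sigma: 30)
        guard let cgImage = CIContext().createCGImage(blurred, from: ciImage.extent) else {
            return image
        }
        return UIImage(cgImage: cgImage)
        // No URLSession, no reporting endpoint, no "call the cops" step.
    }
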
  • Smarthome firm and early HomeKit partner Insteon is dead, with no warning to customers

    I abhor devices that won’t work without internet.  If I’m on the local network, a device should work without needing to phone home.  The only internet communication should go through HomeKit’s APIs to allow remote-control access; Apple’s services/APIs aren’t going away anytime soon.  Another alternative would be to use an on-site VPN to access these devices.  From security and reliability perspectives, each vendor providing its own cloud service is a nightmare.
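
    The local-first pattern I want from every vendor is basically what HomeKit already provides. A rough sketch (assumes an accessory that exposes the standard power-state characteristic; error handling kept minimal):

    import HomeKit

    final class LampToggler: NSObject, HMHomeManagerDelegate {
        private let manager = HMHomeManager()

        override init() {
            super.init()
            manager.delegate = self
        }

        // Homes load asynchronously; act once they're available. The write goes over the local
        // network (or via the user's home hub when remote); no vendor cloud is in the path.
        func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
            guard let home = manager.homes.first else { return }
            for accessory in home.accessories {
                for service in accessory.services {
                    for characteristic in service.characteristics
                    where characteristic.characteristicType == HMCharacteristicTypePowerState {
                        characteristic.writeValue(true) { error in
                            if let error { print("write failed: \(error)") }
                        }
                    }
                }
            }
        }
    }
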
  • Wemo Smart Video Doorbell review: The new HomeKit doorbell of choice

    Japhey said:
    Like I said previously, this will probably be my first smart doorbell. And, like I asked earlier…what material is the outside casing made from? Is it metal? Is it plastic? This information should be included in the review. 
    It will always be plastic for connectivity. The only metal one is the Robin ProLine because it uses PoE rather than Wi-Fi. Every other HomeKit doorbell is plastic. The mounting plate is metal though.
    Thanks.  I would actually prefer more PoE HomeKit devices, but I understand why vendors favor Wi-Fi and Bluetooth (convenience).  I don’t mind running my own Ethernet cables and have done so in a 2-story home, so where I can, I like to run them to avoid clogging up the airwaves.

    I was unaware there was a PoE doorbell… not a fan, though, of the commercial look of the Robin ProLine for a residence.
  • iOS 15 adoption rate appears to be lagging behind past updates

    Incorrect. Sorry but you're speaking from ignorance here. Deploying 100% E2E is exactly why Apple designed on-device child rape CSAM scanning.
    Do you work for Apple? Do you know an insider at Apple who has stated so? Has Apple officially released this detail somewhere?  You’re making an assumption here, and it’s no better than what the pundits do that I see bashed all the time here.  Apple may, or may not, be implementing a sort of encryption that would be “wish it were real E2E” (think of the basic security questions banks ask, which are a kind of “wish it were two-factor auth”: not the real thing, since dumpster divers can find that info, and at best they’re just secondary passwords, i.e. “something you know”… real 2FA is something you know plus something you have; 3FA adds something you are).

    The whole point is this: real E2E is when I encrypt a message in secret, so no one else knows its contents, and it stays encrypted until it reaches the target audience, who has the capability to decrypt it (at which point you have to trust that they won’t share it and that someone or something isn’t watching).  The point behind E2E is that only you, the sender, are privy to the contents, and sometime later the target audience is as well.
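
    To make that concrete, here’s a rough CryptoKit sketch of real E2E (illustrative only: both private keys sit in one snippet for the demo, and the salt/info labels are made up):

    import CryptoKit
    import Foundation

    // Each private key exists only on its owner's device; only the public halves are exchanged.
    let senderKey = Curve25519.KeyAgreement.PrivateKey()
    let recipientKey = Curve25519.KeyAgreement.PrivateKey()

    // Sender: derive a shared key and encrypt.
    let senderSecret = try senderKey.sharedSecretFromKeyAgreement(with: recipientKey.publicKey)
    let senderSym = senderSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self, salt: Data(), sharedInfo: Data("demo".utf8), outputByteCount: 32)
    let ciphertext = try ChaChaPoly.seal(Data("private details".utf8), using: senderSym).combined

    // Whoever relays `ciphertext` (Apple's servers, a carrier, anyone in the middle) sees only
    // opaque bytes; without one of the two private keys there is no key to decrypt with.

    // Recipient: derive the same key from their own private key + the sender's public key.
    let recipientSecret = try recipientKey.sharedSecretFromKeyAgreement(with: senderKey.publicKey)
    let recipientSym = recipientSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self, salt: Data(), sharedInfo: Data("demo".utf8), outputByteCount: 32)
    let plaintext = try ChaChaPoly.open(ChaChaPoly.SealedBox(combined: ciphertext), using: recipientSym)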

    Point is, even if some criminals were using it (probably not, unless they are dumb, because of server-side CSAM detection), they could encrypt their stash using something like TrueCrypt (and even use the plausible-deniability mode that double-encrypts it) and store it in a general file-sharing/storage service like Dropbox or Google Drive.  Or heck, even just keep it local and sync it to their own devices.  They’ll find ways to see and propagate it.

    I’m not saying Apple shouldn’t do CSAM detection… they should, but keep it on the server side.  The data is encrypted in transit (HTTPS) and is probably encrypted at rest, so only a few people at Apple who have the keys may have access to it.  How does that change if they move to on-device scanning and reporting?

    As I’ve said before:
    Apple’s design uses a threshold algorithm to enable review of the contents… remove that, and you would have true E2E encryption; otherwise it’s just “wish it were E2E”. It’s close, but no cigar.

    Once a threshold is met, what happens then? Do we trust Apple to handle the data correctly? Do we trust that the authorities will handle the data correctly?  Do we trust individuals at these places not to share it or leak it?  Do we trust that the thresholds won’t change? Do we trust that other types of images won’t be deemed “illicit” in the future? Do we trust that a similar threshold algorithm won’t be applied to text in iMessage, looking for messages of terrorism or “hate” speech?  Do we trust that it will only be done for photos uploaded to iCloud?

    I see no difference:

    - Someone outside the intended audience having limited access to the data with on-device scanning and reporting…

    - Someone outside the intended audience having limited access to the data with server-side scanning and reporting…

    The only difference here is where it’s done, and on-device has more serious privacy problems than the server side.


    I treat anything I send to iCloud Photos as public knowledge, i.e. anything I want to keep private stays on my devices.  On-device scanning and reporting encroaches on that privacy (even given the current software “limitations” of thresholds and scanning only photos sent to iCloud)… so, given that, I would much rather the CSAM reporting stay server-side.

    ¯\_(ツ)_/¯
