bulk001

About

Username
bulk001
Joined
Visits
451
Last Active
Roles
member
Points
1,256
Badges
1
Posts
828
  • WhatsApp latest to pile on Apple over Child Safety tools

    What nefarious purposes? The government is tracking your trips to Target? You have a subscription to a legal porn site? You are having an affair? Nobody cares! They already have access to all your financial information, and to your phone metadata. The ones who would use this against you can get it by simply arresting you and forcing you to hand over the data, or by hacking into it, as Saudi Arabia is doing to murder dissidents.
    watto_cobra
  • Open letter asks Apple not to implement Child Safety measures

    darkvader said:
    crowley said:
    iadlib said:
    This is admirable and I like the intention behind it. But to play devil’s advocate, I bring up that this technique could, maybe, possibly, be adapted for other kinds of data. Say emails, or text messages. Let’s say it becomes an API or a baked-in feature. What if that feature gets hijacked? By a hacker, or a government? To search for anti-state speech in China, or even for industrial espionage. This is a Pandora’s box that could never be shut.
    FUD.

    It's a private method for comparing an image hash against a specific database of image hashes.  

    No indication that it'll become a generic API, or a feature for anyone else to use.
    No indication that there will be any way for it to be hijacked, short of rooting the entire phone, in which case all bets are off anyway.
    No indication that it could be used to search for anything other than images, or more specifically the hashes of those images.
    No indication that it couldn't be shut down any time Apple wanted to shut it down.

    It's not FUD.  There is no indication that it can't be used for scanning any type of data, and in fact it's obvious that it absolutely can be.

    Saying things like "oh, but they're just going after evil kiddy porn, think of the children" is a wilful misunderstanding of the technology.  This is incredibly dangerous, and should absolutely be scrapped before it's ever implemented. 
    There would need to be a database to scan it against. What data are you afraid of them scanning? What do you have that is so private, yet at the same time sits in a database important enough to have been created to catch exactly that information? If you are worried about them simply reading your text messages, those are readily available to authoritarian governments already - they will just force you to unlock your phone or stored data, and jail you, or worse, if you don’t.
    DBSync, killroy, longfang
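
    To make the matching model crowley describes concrete, here is a minimal Python sketch of checking an image's hash against a fixed database. SHA-256 stands in for the hash function purely for illustration; Apple's actual NeuralHash (a perceptual hash) and its database format are not modeled here.

        import hashlib

        # Hypothetical database of known image hashes (hex digests). The one
        # entry below is SHA-256("test"), so the demo lookup finds a match.
        known_hashes = {
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
        }

        def image_matches(image_bytes: bytes) -> bool:
            # A match is nothing more than set membership of the hash; the
            # image content itself is never inspected or transmitted.
            return hashlib.sha256(image_bytes).hexdigest() in known_hashes

        print(image_matches(b"test"))   # True: hash is in the database
        print(image_matches(b"other"))  # False: hash is not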
  • What you need to know: Apple's iCloud Photos and Messages child safety initiatives

    dewme said:
    thrang said:
    My take is this.

    Apple saw the writing on the wall regarding government-mandated back doors. Doors that would, in all likelihood, be far more open, with unfettered access to much if not all of your information. Perhaps, behind the scenes, the pressure was growing immense.

    Perhaps they decided to develop a concept and technology around very narrow, focused one-way beacons (to avoid a mandated back door), initially to identify illegal and abhorrent possession and behavior. Perhaps evidence of murder, rape, extortion, or terrorism may trigger other beacons in the future.

    I know there will be discussions over who makes the decisions, how it is vetted, errors, misuse, hacking, etc. But essentially, to me, Apple is seeking to control the process around what it sees as inevitable laws it would have to comply with - laws with much worse outcomes for users. One-way beacons, further vetted to ensure no false positives, are an effort to ameliorate law enforcement concerns while still protecting legitimate private information, and that is perhaps a very good approach.

    While it feels icky upon initially hearing about it, the more you think about it this way (and what government enforced alternatives might be), it may begin to make sense.

    In terms of what defines when these future beacons will be sent out, Apple will likely ask for pertinent laws to govern such beaconing, leaving it up to elected leaders to clarify, legislate, and vote on what kind of content is considered severely harmful and how it is to be legally obtained, and ultimately leaving it up to voters to support or oust those politicians in future elections. So in this case, there is a well-defined hash database for this particular category. Apple then implements an on-device methodology designed to keep the rest of your data protected and unavailable for unrelated sniffing about, while beaconing when there is a match.

    As to other categories of hash matching, governments will need to figure that out, which would be subject to immense scrutiny and public debate, I'm sure...

    There are caveats of course, but in principle this is how I see what has happened.
    This largely echoes my sentiment expressed in another AI thread on the same topic. Those who wish to have unfettered access to the private information of ordinary citizens have long used the argument “what about the abused children?” to justify their requests that Apple open a back door for intrusive government spying on its own citizens. Apple is trying to take that card off the table, probably in hopes that it will quell the onslaught of requests. 

    I think it’ll buy Apple some time, but not much. As more and more countries, including the United States, continue to radicalize against not only remote outsiders but fellow citizens whom they now consider outsiders because those citizens don’t embrace reality-bending authoritarian despots, the requests for back doors will transform into demands for back doors that cannot be denied.

    I’m very much in favor of what Apple is proposing, but I’m equally concerned that what they are proposing will not be enough to keep the bigger issue of massive government intrusion through mandated back doors at bay. At some point we’ll all have to assume that privacy as we used to know it no longer exists. Nothing Apple is doing will change the eventual outcome if the embrace of authoritarianism and demonization of fellow citizens is allowed to grow. 
    Child abuse and child pornography are the very definition of “what about the children”! And after this buys Apple some time and they still don’t agree that their servers should host this content, then what? Are you going to sign up for Jitterbug service and use an old flip phone? I remember a Walmart or Walgreens reporting a mother who took her film in to be developed because there was a picture of her child partially naked; she was arrested and possibly flagged as a sex offender. That is not what is going on here. Unless your pictures match those in the database, no one is going to see them. While false positives are rare, they will happen, and if there is a criticism, it is that Apple should explain better how a flagged image will be reviewed and what action will be taken. To turn your “what about the children” thesis around, though, what I don’t understand is the support on this board for the very worst of society in the name of their privacy.
    radarthekat, dysamoria
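
    A minimal sketch of the threshold idea behind thrang's one-way beacons, assuming a plain counter: individual matches are not reported one by one, and an account is surfaced for human review only once the match count crosses a threshold. The threshold of 30 echoes the figure Apple later cited publicly, but both the value and the counter logic here are illustrative assumptions; the real design uses cryptographic safety vouchers that cannot even be decrypted below the threshold.

        # Illustrative counter only; Apple's actual system enforces the
        # threshold cryptographically rather than with a plain count.
        MATCH_THRESHOLD = 30  # assumed value

        def should_flag_for_review(match_count: int) -> bool:
            return match_count >= MATCH_THRESHOLD

        print(should_flag_for_review(3))   # False: isolated matches stay invisible
        print(should_flag_for_review(31))  # True: account goes to human review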
  • What you need to know: Apple's iCloud Photos and Messages child safety initiatives

    darkvader said:

    What you REALLY need to know: Apple's iCloud Photos and Messages child safety initiatives

    1.  Spying on your customers is EVIL, even when it's done for ostensibly noble purposes.

    2.  If this technology is implemented, it WILL be used for purposes other than what it originally was intended to do.

    3.  This can easily be used to target people, both by state and non-state actors, simply by sending your target texts or emails with matchable images.

    4.  This WILL be used by authoritarian governments for things other than its original design purpose.

    5.  It must be stopped.

    What I'd propose, if Apple continues down this dark path, is defeating it by overwhelming the system. The hash database exists, therefore it's going to be obtainable. What needs to happen is the creation of a large number of image files that, while being innocuous cat memes or even just line noise, match the hashes. Then those harmless images should be distributed so widely that systems like this are utterly overwhelmed with false positives, making them useless.


    Considering that authoritarian governments can already access your iPhone (Pegasus is one example, and there are no doubt many more out there), this is nonsense. I am sure that democratic governments can too. In China and Saudi Arabia they are not going to politely ask your permission to access your phone or iCloud data and, if you decline, let you go with an apology! Child pornographers would be thrilled to have the database corrupted; your suggestion helps them and no one else.
    GeorgeBMac, killroy, radarthekat, dysamoria
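
    darkvader's flooding idea amounts to a second-preimage search against the hash database. As a toy illustration only - a deliberately tiny 16-bit hash is used so the brute force finishes, whereas a full-length cryptographic hash makes this approach hopeless, and real perceptual hashes are attacked with gradient methods instead - the search looks like this:

        import hashlib
        import itertools

        def tiny_hash(data: bytes) -> int:
            # Toy 16-bit hash: truncated SHA-256. Small on purpose so the
            # brute force below terminates quickly (~65,000 tries expected).
            return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

        target = tiny_hash(b"original image bytes")

        # Brute-force an "innocuous" payload whose tiny hash collides.
        for i in itertools.count():
            candidate = b"cat meme #%d" % i
            if tiny_hash(candidate) == target:
                print("collision after", i, "tries:", candidate)
                break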
  • What you need to know: Apple's iCloud Photos and Messages child safety initiatives

    elijahg said:
    elijahg said:
    Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, which brings the chance that I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
    One in a trillion over 20,000 photos is not 1 in 50 million. It's one in a trillion, 20,000 times. The odds do not decrease per photo as your photo library increases in size. There is not a 1:1 guarantee of a falsely flagged photo in a trillion-strong photo library.

    And even if it was, one in 50 million is still pretty spectacularly against.
    Unfortunately it is - 1 in 1 trillion becomes 2 in 1 trillion with two photos, or 1 in 500 billion. That then halves again with 4 photos, to 1 in 250 billion, and so on. It's little more than simplified fractions. Punch 1,000,000,000,000/20,000 into a scientific calculator and it'll simplify to 50,000,000/1. The odds do shorten because it is more likely that you have a matching photo with 2 photos than with 1. And yes, statistically speaking, 1 in 1 trillion means that in a trillion-strong library there will be one false match.

    Also, it's massively more likely that someone will get their password phished than that a hash collision will occur - probably 15-20% of people I know have been "hacked" through phishing. All it takes is a couple of photos to be planted, dated a few years back so they aren't at the forefront of someone's library, and someone's in very hot water. You claim someone could defend against this in court, but I fail to understand how: "I don't know how they got there" isn't going to wash with many people. And unfortunately, "good security practices" are practised only by the likes of us anyway; most people use the same password, with their date of birth or something equally insecure, for everything.
    1 in 50 million is not the same statistically as one in a trillion tried 20,000 times, no matter how much you want it to be so, I'm afraid. Regardless, your 1 in 50 million is still a very large number.

    One in a trillion tried a trillion times does not guarantee a match, although it is likely, as you're saying. There may even be two or three. You're welcome to believe what you want, and you can research it with statisticians if you are so inclined. This is the last I will address this point here.

    And in regard to the false positive: somebody will look at the image and say something like, "Oh, this is a palm tree. It just coincidentally collides with the hash. All good. Story over."

    In regard to your latter point, this is addressed in the article.
    Correct! If the lottery odds are 1:150 million, buying two tickets does not increase your odds of winning to 1:75 million. You now have two tickets, each with 1:150 million odds of winning. Or, to put it another way: if you have a whole collection of child pornography photos on iCloud, you are going to jail.
    StrangeDays, dewme, watto_cobra, mwhite
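
    For the arithmetic being argued over, the standard independent-trials model is easy to check directly. Assuming, as the thread does, a per-photo false-match probability of 1 in a trillion and independence between photos (both assumptions; Apple framed its published figure per account per year), the chance of at least one false match in an n-photo library is 1 - (1 - p)^n, which is approximately n*p for small p - yet even a trillion photos produce at least one match with probability of only about 63%, not with certainty.

        import math

        p = 1e-12    # assumed per-photo false-match probability (1 in a trillion)
        n = 20_000   # example library size from the thread

        # Probability of at least one false match across n independent photos.
        at_least_one = 1 - (1 - p) ** n
        print(at_least_one)      # ~2.0e-08, i.e. roughly 1 in 50 million
        print(n * p)             # 2e-08: the n*p approximation for small p

        # A trillion independent photos do NOT guarantee a match:
        print(1 - math.exp(-1))  # ~0.632 = P(at least one) when n*p = 1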