Mike Wuerthele
About
- Username: Mike Wuerthele
- Visits: 178
- Roles: administrator
- Points: 23,992
- Badges: 3
- Posts: 7,272
Reactions
Open letter asks Apple not to implement Child Safety measures
coolfactor said: I have no problem with this "service" provided by Apple since:
- It's 100% opt-in, not activated automatically.
- It's not new. One commenter mentioned that "this could be applied to email." Well, newsflash! Our emails have been getting scanned by our email providers for decades for spam. We expect that! Why aren't we protesting to get that stopped?
Open letter asks Apple not to implement Child Safety measures
DoctorQ said: Interesting that this news came out before the weekend. Wonder what the stock will do Monday?
Also, this system doesn’t stop originating CSAM on an iPhone with iCloud Photos turned off. If it’s not in the hash db, it won’t be flagged. One could argue that this encourages production of new CSAM (sarcasm tag here).
The point is, this is a way for Apple to get the FBI and other LEOs off their back in the most non-intrusive way they could think of. Is it flawed? Hell yes. But they hope Congress will look kindly on them come the antitrust hearings.
The stock closed at $146.81 on Thursday, and it's at $145.96 as I write this. No appreciable change on the news; it fell more when Apple announced blockbuster earnings a few weeks ago. The market overall is down about the same as Apple is, though it can be argued that AAPL's pricing had an impact on that.
What you need to know: Apple's iCloud Photos and Messages child safety initiatives
verne arase said:
Mike Wuerthele said:
elijahg said: Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, that brings the chances I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
And even if it was, one in 50 million is still pretty spectacularly against.
Also, if there was a hit it would be reviewed by a human before being acted upon, so if the false positive turned out not to be kiddie porn it would never escalate.
Sounds like the only ones who should really be concerned are those who actually store kiddie porn in their photo libraries.
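For what it's worth, the arithmetic above is easy to check. A minimal sketch in Python, assuming the 1-in-1-trillion figure is an independent per-photo rate (an idealization; Apple hasn't published the underlying model):

```python
from math import expm1, log1p

def library_false_positive_odds(per_photo_rate: float, n_photos: int) -> float:
    """Chance of at least one false positive across a photo library,
    assuming each photo is an independent trial: 1 - (1 - p)^n,
    computed via log1p/expm1 to stay accurate for tiny rates."""
    return -expm1(n_photos * log1p(-per_photo_rate))

p = library_false_positive_odds(1e-12, 20_000)
print(f"1 in {1 / p:,.0f}")  # 1 in 50,000,000
```

For rates this small the answer is essentially n times p, which is where elijahg's 1-in-50-million figure comes from.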
What you need to know: Apple's iCloud Photos and Messages child safety initiatives
22july2013 said:I'm not saying that I'm for or against this system. I am unbiased for now. I don't have any position yet, I just have some technical and procedural questions.
1. Why would NCMEC or Apple use a hash algorithm that has a 1 in 1,000,000,000,000 (a trillion) chance of a mismatch, when the MD5 hashing algorithm, which is already built into macOS, has a 1 in 70,000,000,000,000,000,000,000,000,000 (70 octillion) chance of a mismatch? Yes, I know this is an image hash and not a byte-wise file hash, but even so, why was the image hash algorithm designed with such an amazingly high collision chance? Is it impossible to design an image hash that has an error rate in the octillions? Why did they settle for an error rate as surprisingly large as one in a trillion? I want to know.
2. What happens if the news media releases a picture of a child in a victimized situation but blurs or hides the indecent parts, in order to help get the public to identify the child, and what if this image gets sent to my iCloud? Is that going to trigger a match? The image has lots in common with the original offending image. Sure, a small part of the image was cropped out, but the CSAM algorithm results in matches even when images are, as they say, "slightly cropped." They said this: "an image that has been slightly cropped or resized should be considered identical to its original and have the same hash." This means it could trigger a match. Am I entitled to know exactly how much of a change will typically cause the match to fail? Or is this something the public is not allowed to learn about?
3. When Apple detects a matching photo hash, how does Apple (or the US government) take into account the location of the culprit when a match occurs? Suppose the culprit who owns the offending photo resides in, say, Russia. What happens then? The US can't put that person on trial (although since 2004 the US has been putting a growing number of foreign terrorists on trial after dragging them into the US, and then using tenuous links like accessing an American bank account as justification for charging them in a US Federal court.) About 96% of the people in the world do not live in the US, so does that mean a high percentage of the cases will never go to trial? Does the US government send a list of suspects to Vladimir Putin every week when they find Russians who have these illegal images in their iCloud? Or do Russian culprits get ignored? What about Canadian culprits, since Canada is on good terms with the US? Does the US government notify the Canadian government, or does the US wait until the culprit attempts to cross the border for a vacation? I want to see the list of countries the US government shares its suspect list with. Or is this something the public is not allowed to learn about?
And now for a couple of hypothetical questions of less urgency but similar importance:
4. If a friendly country like Canada were to develop its own database, would Apple report Canadian matches to the Canadian authority, to the US authority, or to both governments? In other words, is Apple treating the US government as the sole arbiter of this data, or will it support other jurisdictions?
5. What happens if an unfriendly country like China builds its own child abuse database? Would Apple support that, and would it then report to Chinese authorities? And how would Apple know that China hasn't included images of the Tiananmen massacre in its database?
And now a final question that comes to my mind:
Since the crimes Apple is trying to fight occur all over the world, shouldn't the ICC (International Criminal Court) be creating a CSAM database? Frankly, I blame the ICC for not tackling this problem. I'm aware that US Republicans generally oppose the ICC, but Democrats sound much more open to using it. Biden has been silent on the ICC since getting elected, but he has said that his administration “will support multilateral institutions and put human rights at the center of our efforts to meet the challenges of the 21st century.” Reading between the lines, that sounds like he supports the ICC. And since 70% of Americans support the ICC, according to a poll, maybe this is a popular issue that Biden can hang his hat on.
My main concern with this whole topic is that there are so many important questions like these that are not being considered.
2. A "slight crop" will be similar as you say. A blur is a noticeable change to the hash. I'm pretty sure there'll be no transparency on this, as was discussed in the piece.
3. Existing law prevails where the user is located. The system is US-only. The NCMEC isn't transparent about data sources for the hash database.
4. No idea.
5. No idea.
As for the ICC: probably, but it doesn't look like they are.
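On the crop-versus-blur behavior in answer 2: Apple's NeuralHash is a neural-network perceptual hash whose internals aren't public, so as a rough stand-in, here's a classic difference hash (dHash) in Python using Pillow. It's a toy, not Apple's algorithm, but it shows why a slight crop or resize barely moves a perceptual hash while a heavy blur moves it a lot:

```python
from PIL import Image, ImageFilter

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: downscale to grayscale, then record whether each
    pixel is brighter than its right-hand neighbor. Small geometric edits
    barely change the coarse brightness gradients, so few bits flip."""
    img = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = Image.open("photo.jpg")  # any test image
cropped = img.crop((5, 5, img.width - 5, img.height - 5))
blurred = img.filter(ImageFilter.GaussianBlur(radius=12))
print(hamming(dhash(img), dhash(cropped)))   # typically a few bits
print(hamming(dhash(img), dhash(blurred)))   # typically many more
```

A blur flattens the brightness differences the hash is built from, so many comparisons flip, while a small crop only shifts them slightly. dHash-style systems compare hashes by Hamming distance; Apple's design instead trains the network so near-duplicates produce the identical hash, which is what the "same hash" language quoted above refers to.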
What you need to know: Apple's iCloud Photos and Messages child safety initiatives
cpsro said:
Mike Wuerthele said:
elijahg said: Remember that 1 in 1 trillion isn't 1 false positive per 1 trillion iCloud accounts - it's 1 per 1 trillion photos. I have 20,000 photos, that brings the chances I have a falsely flagged photo to 1 in 50 million. Not quite such spectacular odds then.
And even if it was, one in 50 million is still pretty spectacularly against.
But Apple might claim the false positive rate is per account, not per photo.
These statistics are, however, under idealized (perfectly random) circumstances, and real data typically doesn't look entirely random.
That's four novemdecillion hashes. Had to look that one up.
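Apple's one-in-a-trillion figure is indeed framed per account, per year. Under the same idealized independence assumption cpsro flags, a per-account rate falls out of a per-image rate plus a match threshold (Apple later indicated an account needs roughly 30 matches before it's flagged). Here's a sketch of that binomial-tail calculation; the per-image rate and threshold below are illustrative assumptions, not Apple's published parameters:

```python
from math import lgamma, log, log1p

def log10_flag_probability(n_photos: int, threshold: int, p_image: float) -> float:
    """log10 of the probability that an account with n_photos produces at
    least `threshold` false positives, treating each image as an independent
    Bernoulli(p_image) trial. For tiny p the k = threshold term dominates
    the binomial tail, so we return just that term, computed in log space
    to avoid underflow."""
    log_comb = lgamma(n_photos + 1) - lgamma(threshold + 1) - lgamma(n_photos - threshold + 1)
    log_term = log_comb + threshold * log(p_image) + (n_photos - threshold) * log1p(-p_image)
    return log_term / log(10)

# Even a generous per-image rate of 1e-6 stays astronomically safe per account:
print(log10_flag_probability(20_000, 30, 1e-6))  # about -83, i.e. roughly 1 in 10^83
```

The threshold does the heavy lifting: it lets a comparatively loose per-image hash still yield a vanishingly small per-account rate, which may be why Apple quotes the number per account rather than per photo.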