> Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

Is that a typo? What use is something that’s unreadable? I’m only halfway through, but it sounds like a bunch of hand-wavy AI/ML bullshit to me.

How do you match similar images without reducing them down to something that’s less than the original?
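For what it's worth, the generic answer is a perceptual hash: reduce the image to a short fingerprint such that visually similar images get identical or nearby fingerprints. Here's a minimal sketch of a "difference hash" (dHash) on a toy grayscale image given as a list of rows; Apple's NeuralHash is far more elaborate (a neural network produces the descriptor), but the reduce-then-compare idea is the same.

```python
# Toy perceptual hash (dHash-style): one bit per adjacent-pixel comparison.
# Similar images produce similar bit strings, so Hamming distance measures
# visual similarity. This is a generic sketch, not Apple's actual algorithm.

def dhash(pixels):
    """Return a bit list: for each row, 1 if the left pixel is brighter."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance means similar images."""
    return sum(x != y for x, y in zip(a, b))

img    = [[10, 20, 30], [30, 20, 10]]
bright = [[p + 5 for p in row] for row in img]  # uniformly brightened copy
print(hamming(dhash(img), dhash(bright)))       # 0: same fingerprint
```

Note the hash only keeps *relative* brightness, which is why the brightened copy hashes identically: it's "less than the original" by design, yet still enough to match.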

> Users can’t identify which images were flagged as CSAM by the system.

> If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

So the user can’t have information about anything, but they can appeal the decision if their account is locked? And locking the account might cause people to do all kinds of stupid things like resetting their device to get the account back or because they panicked. Then the only evidence is some opaque system that claims the user had illegal content. That’s scary.

Many years ago I knew a person who took a perfectly good PC to the garbage dump because of a scareware warning saying it was involved in illegal activity. Poor guy wasn’t very computer savvy. This will open up some excellent phishing opportunities: the accusation is about as serious as they come, and now that everyone knows this kind of scanning is actually done, the scam will look legitimate.

Edit: I finished reading it. If the photos are synced to iCloud, I guess locking the account preserves the data. I still don’t like how cloak-and-dagger the whole thing is. No one’s allowed to know anything, and all the average person is going to understand is that Apple’s system says this person is guilty, so they’re guilty. I think stuff like this would get a lot more scrutiny and less traction if it weren’t “for the children.”

What if there are bugs? No one is going to be allowed to audit the system or see how it works because of the sensitive nature of the content. It’s pretty scary IMO.

> The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account.

How was that calculated? Prove it.

> The neural network is taught to generate descriptors that are close to one another for the original/perturbed pair.

So is that the same kind of crappy AI that (incorrectly) detects suspicious activity on email accounts?
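For reference, what that quoted sentence describes is standard metric learning: train the network so descriptors of an original and its perturbed copy (cropped, recompressed, recolored) end up close, and descriptors of unrelated images end up far apart. A toy triplet-style loss on made-up 3-D descriptors shows the training signal; this is a generic sketch, not Apple's actual loss.

```python
# Triplet-style contrastive loss sketch on toy descriptors.
# The network is rewarded for small original/perturbed distance and
# penalized unless unrelated images are at least `margin` apart.

def sq_dist(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def contrastive_loss(anchor, positive, negative, margin=1.0):
    pull = sq_dist(anchor, positive)                     # want this small
    push = max(0.0, margin - sq_dist(anchor, negative))  # want gap >= margin
    return pull + push

orig      = [0.9, 0.1, 0.2]   # descriptor of the original image
perturbed = [0.8, 0.1, 0.3]   # same image, slightly cropped/recompressed
unrelated = [0.1, 0.9, 0.7]   # descriptor of a different image
print(contrastive_loss(orig, perturbed, unrelated))  # low: pair is close
```

The commenter's worry stands, though: nothing in this setup defines what "the same image" means; the network only learns whatever the chosen perturbations imply.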


This is why AI is complete snake oil. They’re trying to encode a recognizer, but they can’t even define what it is they’re actually extracting features of. All they’ll say is that it’s “typical ML techniques,” and the rest of the industry won’t call bullshit on that, because no one wants to plant the idea that you shouldn’t be allowed to deploy something if you can’t explicitly articulate the function it computes.

There are some applications where it’s fine to say “screw it, why not.” This is not one of them.

> So the user can’t have information about anything, but they can appeal the decision if their account is locked? And locking the account might cause people to do all kinds of stupid things like resetting their device to get the account back or because they panicked. Then the only evidence is some opaque system that claims the user had illegal content. That’s scary.

At that point you have uploaded the pictures to iCloud. The evidence is in iCloud. You can’t destroy the evidence by wiping your device.

Having an appeals process doesn't inspire confidence since we rarely hear stories of those appeals processes working.

What we usually hear instead are stories of people failing to get anywhere with an appeals process, often because the system reveals no information about why they were flagged. The only real remediation seems to be generating enough publicity to attract the attention of an actual human who will look at the case.

Low-resolution versions of the images are available (decryptable once the threshold has been met), so there’s evidence of whether or not it’s actually illegal material.
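The "decryptable once the threshold has been met" mechanism is threshold secret sharing: each match contributes one share of a decryption key, and fewer than t shares reveal nothing. A minimal Shamir-style sketch over a toy prime field (real systems use much larger parameters and bind the shares into encrypted vouchers):

```python
import random

# Shamir-style threshold secret sharing sketch: hide `secret` as the
# constant term of a random degree-(t-1) polynomial; any t evaluations
# reconstruct it, while t-1 or fewer reveal nothing about it.

PRIME = 2**31 - 1  # toy field modulus; production systems use far larger primes

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(secret=123456, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice: prints 123456
```

So below the threshold Apple genuinely can't open the vouchers, but once t matches exist, the low-res derivatives become readable for human review.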