Hacker News

> with a 1-in-a-trillion chance of a false collision

I don't think that's exactly where the "one in a trillion" claim comes from. Rather, it's that a single matching hash isn't enough to trigger reporting; there need to be multiple matches, and only when enough of them accumulate to cross an unspecified threshold is reporting triggered. There's theoretically only a one-in-a-trillion chance of that threshold being crossed without actual CSAM matches.

If I understand the white paper correctly, this goes a step further than that: Apple can't decrypt the signatures of the images corresponding to the matched hashes until the threshold is passed, because those images together form a kind of decryption key.
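To make the "images form a decryption key" idea concrete, here's a toy sketch of a threshold scheme in the spirit of Shamir secret sharing. This is an illustration of the general technique, not Apple's actual construction (their white paper uses a more involved private set intersection protocol); the field, threshold, and share counts here are arbitrary choices for the example.

```python
# Toy threshold secret sharing (Shamir-style), for illustration only.
# The "server" can recover the secret (think: a decryption key) only
# once it holds at least `threshold` shares -- fewer reveal nothing.
import random

PRIME = 2**61 - 1  # a Mersenne prime; we do polynomial math mod this


def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into points on a random degree-(threshold-1)
    polynomial; any `threshold` points reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n_shares + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the polynomial's
    constant term, i.e. the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total


secret = 123456789
shares = make_shares(secret, threshold=3, n_shares=10)
assert reconstruct(shares[:3]) == secret  # 3 shares: key recovered
assert reconstruct(shares[:2]) != secret  # 2 shares: (almost surely) useless
```

The analogy: each matched image contributes one share, and the server's ability to decrypt appears only once the match count reaches the threshold.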

On a technical level, I'm actually pretty impressed. They absolutely could set up E2E encryption and still implement this system, and it largely assuages my worries about false matches of innocent photos (with the extremely big caveat that a false match has a very high potential of ruining someone's life). As the linked article points out, though, the real privacy concern here comes from having this matching capability on-device at all, because once it's there, limiting the data set to just this one provided by NCMEC becomes a matter of company policy. If an agency of any government demands Apple add their data set, Apple can no longer say, "we can't do that without drastically compromising the way our devices and services work," because it will be public knowledge that this is in fact how their devices and services work already.


