
No, the terrible misfeature that this group wants is “government provides a bunch of opaque hashes that are ‘CSAM’, all images are compared with those hashes, and if the hashes match then the user details are given to police”

Note that by design the hashes cannot be audited (though in the legitimate case I don’t imagine auditing them would be pleasant), so there’s nothing stopping a malicious party from inserting hashes of anything they want - and then the news report will be “person X brought in for questioning after CSAM detector flagged them”.

That’s before countries just pass explicit laws saying that the filter must include LGBT content (several US states consider books with LGBT characters to be sexual content, so images of an LGBT teenager would be de facto CSAM); in the UK the IPA is used to catch people not collecting dog poop, so trusting them not to expand scope is laughable; in Iran a picture of a woman without a hijab would obviously be reportable; etc.

What Apple has done is add the ability to filter content (e.g. block dick pics) and, for child accounts, to place extra steps in the way (incl. providing contact numbers, I think?) if a child attempts to send pics with nudity, etc.



>in the UK the IPA is used to catch people not collecting dog poop

What does this mean? What is IPA? I tried Googling for it but I’m not finding much. I would love to learn more about that


The Investigatory Powers Act.

It was passed to stop terrorism, because previously they had found that having multiple people (friends, family, etc.) report that someone was planning a terrorist attack still failed to stop the attack.


How exactly would you be able to "filter" LGBT content? I don't think you understand how this system would've worked.


Hypothetically you have hashes of images of two people of gender X (let's be honest, based on the relative popularity of different types of porn: two men). That is not meaningfully different from an opaque hash of "CSAM".

But you're missing the point:

Step 1. Generate some opaque hash of the "semantics" of an image.

Step 2. Compare those hashes to some list of hashes of "CSAM", which again fundamentally cannot be audited.

Step 3. Report any hits to law enforcement.

Step 4. Person X is being investigated due to reported violations of laws against child abuse.

Basically: how do you design a system in which the state provides "semantic" hashes of "CSAM" that cannot be trivially abused, either by quiet inclusion of non-CSAM as "CSAM" or by laws mandating inclusion of things that are objectively not CSAM? Hypothetically: hashes that match Christian crosses, the Star of David, the Muslim star and/or crescent, etc. Or, in the US, DNC, RNC, pride, etc. flags. Recall that by definition no one can audit the hashes that would trigger notifying law enforcement.
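To make the shape of this concrete, here is a minimal sketch of steps 1-3, using a dHash-style perceptual hash as a stand-in for whatever hash function such a system would actually use (Apple's proposed scanner used NeuralHash, but the structure is the same). OPAQUE_HASH_LIST, its values, the threshold, and report_to_authority are all hypothetical; the point is that the device only ever sees opaque numbers and has no way to tell whether they came from CSAM or from anything else.

    from PIL import Image

    def dhash(path, hash_size=8):
        # Difference hash: downscale to grayscale, compare adjacent pixels.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        px = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = px[row * (hash_size + 1) + col]
                right = px[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits  # Step 1: an opaque 64-bit "semantic-ish" hash of the image

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Step 2: the list is just integers. The device (and its owner) cannot
    # tell what images they correspond to, so a non-CSAM entry is
    # indistinguishable from a CSAM entry. These values are made up.
    OPAQUE_HASH_LIST = {0x8F3A6C1E99D0B427, 0x1B2C3D4E5F607182}

    def report_to_authority(path, h):
        # Hypothetical reporting hook; in the real proposals this goes to the
        # vendor and onward to law enforcement (steps 3 and 4).
        print(f"match: {path} -> {h:016x}")

    def scan(path, threshold=4):
        h = dhash(path)                       # Step 1
        for target in OPAQUE_HASH_LIST:       # Step 2
            if hamming(h, target) <= threshold:
                report_to_authority(path, h)  # Step 3
                return True
        return False

Nothing in scan() changes if someone quietly adds hashes of non-CSAM images to the list; the code and the user experience are identical right up until the report lands.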


Except this system wouldn't have looked at "semantics". You can't simply match a hash of a cross or star or flag; you have to match a specific photograph. Which photograph do you use?
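A rough illustration of the "specific photograph" point, reusing dhash() and hamming() from the sketch in the comment above (the file names are hypothetical, and the exact bit distances depend on the images):

    h_original     = dhash("cross_photo_a.jpg")          # one specific photograph
    h_recompressed = dhash("cross_photo_a_resized.jpg")  # same photo, resized/re-encoded
    h_other_cross  = dhash("cross_photo_b.jpg")          # a different photo of a cross

    print(hamming(h_original, h_recompressed))  # small - near-duplicates still match
    print(hamming(h_original, h_other_cross))   # large (roughly half of 64 bits on
                                                # average) - a different photo of the
                                                # "same thing" does not match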



