
Brilliant guerrilla marketing move, really: anyone who chooses Android over iPhone now implicitly has some very unsavoury reasons for doing so. And if it was really a broader surveillance initiative pushed by the government, the same implication neutralises any protest.


As long as false positives are a thing, there are very practical reasons for switching, especially for those who have kids.

I wouldn't be surprised if a completely innocent false positive gets you put on a list indefinitely, with little recourse.


False positives of the kind you're thinking of aren't possible--it's checking for hashes that match known bad images, not running machine learning/image detection to decide whether the photo you just took contains bad content. The issue is that there's nothing stopping Apple/the government from marking anything it finds objectionable--like anti-government free speech--as a Bad Image, not just CSAM.
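To make the distinction concrete: matching against known images is just a set lookup, with no model inspecting what the photo depicts. A minimal sketch (using a plain SHA-256 file hash and made-up byte strings for illustration; the real system uses a perceptual hash, not a file hash):

```python
import hashlib

# Hypothetical database of hashes of known images (contents made up here)
known_bad = {hashlib.sha256(b"known image A").hexdigest(),
             hashlib.sha256(b"known image B").hexdigest()}

def flagged(photo_bytes: bytes) -> bool:
    # Pure membership test: nothing ever analyses what the photo shows
    return hashlib.sha256(photo_bytes).hexdigest() in known_bad

print(flagged(b"known image A"))   # True: exact match against the database
print(flagged(b"your new photo"))  # False: anything not in the database passes
```

Which is also exactly why the database itself is the pressure point: whoever controls the set of hashes controls what gets flagged.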


The thing is, Apple uses a custom perceptual hash (NeuralHash) whose parameters were generated by a neural network. As another article showed, you can get colliding hashes when certain colour patterns and shadows match. Also, the match threshold they mention is secret, so it could be 1 or 2, or it could change in the future.


Once the policy decision is made that the device can run some kind of scanning, it opens the door to any kind of scanning. Today it's this "neural hash"; tomorrow it's going to be something even more invasive.


Are the hashes really sufficiently unique? The objections I saw to this news were that random images could be manipulated to have hashes matching the flagged cases, in a way undetectable to the naked eye.


Doesn't the hash change when you export a photo as a new file type, or change a few pixels in Photoshop?

If this were the FINAL solution to catch every last child pornographer in one glorious roundup, MAYBE it would be worth the massive risk of authoritarian abuse. But this algorithm sounds stupidly easy for the deviants to get around, while still throwing our collective privacy under the bus.
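On the first question above: for an ordinary cryptographic file hash, yes--any change at all produces a completely unrelated hash, which is why plain file hashes would be trivial to evade. A quick sketch of that avalanche effect, with dummy bytes standing in for an image file:

```python
import hashlib

original = b"fake image bytes" * 100   # stand-in for an image file's bytes
tweaked = bytearray(original)
tweaked[0] ^= 1                        # flip a single bit

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()
print(h1 == h2)  # False: a one-bit change yields an unrelated digest
```

This is precisely the weakness perceptual hashes like NeuralHash and PhotoDNA are designed to avoid.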


This is a PhotoDNA hash, not a file-content hash. It is a bit more powerful than a normal hash:

> In the same way that PhotoDNA can match an image that has been altered to avoid detection, PhotoDNA for Video can find child sexual exploitation content that’s been edited or spliced into a video that might otherwise appear harmless

https://en.wikipedia.org/wiki/PhotoDNA
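PhotoDNA's actual algorithm is proprietary, but a toy "average hash" illustrates how a perceptual hash can survive small edits that would completely change a cryptographic hash. The 4x4 pixel grid below is a made-up stand-in for a downscaled greyscale image:

```python
def average_hash(pixels):
    # pixels: 2D grid of greyscale values (stand-in for a downscaled image).
    # Each bit records whether a pixel is brighter than the image's mean.
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    # Number of differing bits; small distance means "same image"
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200, 30, 180],
       [220, 40, 190, 20],
       [15, 210, 25, 170],
       [230, 35, 200, 10]]
# Perturb every pixel slightly, e.g. re-encoding noise
noisy = [[p + 3 for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(noisy)
print(hamming(h1, h2))  # 0: the perturbed image still matches
```

Because matching is by distance under a threshold rather than exact equality, re-encoding or tweaking a few pixels doesn't break it--but it's also exactly this fuzziness that makes engineered collisions possible.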


False positives are possible. Apple states that the chance of its system incorrectly flagging a given account is about 1 in 1 trillion per year.
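That per-account number would come from requiring many matches before flagging, not from any single hash comparison being that reliable. With made-up inputs (Apple hasn't published a per-image false-match rate, and the threshold was not public), a rough Poisson-tail sketch:

```python
from math import exp, factorial

# All numbers are assumptions for illustration, not Apple's published figures.
p = 1e-6          # assumed per-image false-match probability
n = 10_000        # assumed photos uploaded by one account in a year
lam = n * p       # expected false matches = 0.01 (Poisson approximation)
threshold = 30    # assumed number of matches required before flagging

# P(at least `threshold` false matches); the first term dominates the sum
tail = sum(lam**k * exp(-lam) / factorial(k)
           for k in range(threshold, threshold + 20))
print(tail)       # astronomically small, far below 1e-12
```

The point: even a mediocre per-image hash can yield a tiny per-account rate once many independent matches are required, which is presumably how a 1-in-1-trillion figure is reached.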


> it's checking for hashes that match known bad images

Many of the hashes provided by the NCMEC are MD5. There are going to be false positives left and right.


It is not a surveillance initiative yet, but it will be trivial to expand upon the scanning capabilities once established.

You have more trust in Apple than I do, so we see this change in a different light.


They're all a bunch of commies who want to repair their stuff and... own? it.


Private property is the opposite of communism.



