
> possible solutions focused on the device, the server and the encryption protocol

Looks like they're going to find ways to read our messages before they are encrypted and sent. Why would anyone continue to use a communications application that's known to do this?



> Why would anyone continue to use a communications application that's known to do this?

Network effect. Most people are not using Whatsapp because it is E2EE, they are using it because all their friends are.


Not sure terrorists and organized crime are influenced by the "network effect"... and they're the stated justification for this, right?


Even if true it seems it could still create a smaller haystack.


Of freedom activists or terrorists? Or is that a viewpoint thing?


My guess: Client-side scan for certain keywords to identify grooming and some kind of signature-based identification of known child-porn media. Basically what I assume Messenger does today, but on the local devices instead.
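The keyword half of that guess could look something like this minimal sketch. The wordlist and function names are hypothetical; a real grooming detector would almost certainly use an ML classifier rather than exact word matching, which this does not attempt to show.

```python
# Hypothetical vendor-supplied wordlist; exact matching is a stand-in for
# whatever classifier a real client-side scanner would ship.
FLAGGED_TERMS = {"example_term_a", "example_term_b"}

def flags(message: str) -> set:
    """Return the subset of flagged terms appearing in the message."""
    words = set(message.lower().split())
    return words & FLAGGED_TERMS
```

Even this toy version shows the policy question: the list lives on the device and the user can't see or audit it.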

The general public won't care until we're halfway down a slippery slope, and then people will just switch to whatever platform is perceived as more secure/popular at that particular moment in time.


> Why would anyone continue to use a communications application that's known to do this?

Are you kidding? Almost nobody will care about that. This isn't even a new threat. It's common practice already.


What if hashes of known-bad content are stored locally on the device, and sending content that matches against those hashes is not allowed? The user could appeal if they think there's a false positive. This could be used for CP but also for known-bad fake news or inflammatory content. Clearly, the content hash DB needs to be scoped down, and what goes into it should be chosen with democratic principles and stand up to scrutiny in the courts. If done thoughtfully, it seems like a feasible solution.
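The exact-hash version of that idea is a few lines. This is a sketch, assuming a vendor-distributed blocklist of SHA-256 digests (the blocklist contents and function name here are made up); as the reply below notes, exact hashes are trivially evaded.

```python
import hashlib

# Hypothetical local blocklist of SHA-256 digests of known-bad files.
# In practice the list would be distributed and updated by the vendor.
BLOCKED_HASHES = {
    # sha256(b"test"), standing in for a real entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_send_allowed(payload: bytes) -> bool:
    """Block sending if the payload's hash matches the local list."""
    return hashlib.sha256(payload).hexdigest() not in BLOCKED_HASHES
```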


Changing a hash is incredibly easy: you could just change some metadata and the hash would change. And any perceptual hashing algorithm would naturally lead to false positives.

Also this would likely be quickly commandeered for copyrighted work (honestly pretty surprised it hasn't happened already).



Yes, it would have to be a perceptual hash. False positives will occur, so there needs to be a way to appeal or remediate the algorithmic decision. We already apply this approach in a bunch of places. I believe the major personal cloud storage providers (OneDrive, etc.) already do such scanning.
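For anyone unfamiliar with the distinction being drawn here: a toy "average hash" (aHash) illustrates why perceptual hashes tolerate small edits where cryptographic hashes don't. This is an illustrative sketch, not what production systems like PhotoDNA actually compute; they use far more robust transforms.

```python
def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (an 8x8 'image').
    Returns a 64-bit hash: 1 where the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = [i * 4 for i in range(64)]        # a synthetic gradient "image"
brighter = [p + 1 for p in img]         # small uniform brightness shift
assert hamming(average_hash(img), average_hash(brighter)) == 0
```

A metadata edit wouldn't touch the pixels at all, so the perceptual hash is unchanged; the flip side is that genuinely different images can land within the match threshold, which is where the false positives come from.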


>This can be used for CP but also for known-bad fake news or inflammatory content.

It worries me that anyone thinks it would be a good idea to have "fake news" and "inflammatory content" blocked at the device level. Obviously cloud providers can do whatever they want (though I doubt it catches more than the lowest-hanging fruit; encrypting before uploading would be uncatchable). But the idea that my device will carry a list of disapproved content, and I'll have to appeal to the government to be allowed to view it in case of false positives? The day that becomes a reality, freedom will truly be dead.


I didn't say it would be at the system level. I'd expect this to happen per app. It's similar to how photo manipulation software can detect currency. I doubt every such app complies, and certainly the system screenshot tool does not.



