
Then you'll have one of these "fancy" modern websites that load their content entirely through js and it'll fail because the HTML is the same on all pages.

A URL match is probably good enough the vast majority of the time. Maybe it could also support a bit of fuzzing, such as matching with and without the leading www and both http and https. Beyond that it's probably asking for trouble.
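A minimal sketch of that kind of URL fuzzing, assuming Python; the function name and the exact rules (folding http/https, dropping a leading www, trimming a trailing slash) are illustrative choices, not HN's actual dupe logic:

  from urllib.parse import urlsplit, urlunsplit

  def normalize_url(url: str) -> str:
      """Canonicalize a URL into a dupe-matching key (illustrative rules)."""
      parts = urlsplit(url.strip())
      # Treat http and https as the same scheme.
      scheme = "https" if parts.scheme in ("http", "https") else parts.scheme
      # Lowercase the host and drop a leading "www.".
      host = parts.netloc.lower()
      if host.startswith("www."):
          host = host[4:]
      # Ignore a trailing slash and any fragment.
      path = parts.path.rstrip("/") or "/"
      return urlunsplit((scheme, host, path, parts.query, ""))

  # Two submissions of the same article map to one key:
  assert normalize_url("http://www.example.com/post/") == \
         normalize_url("https://example.com/post")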



> one of these "fancy" modern websites that load their content entirely through js [will] fail because the HTML is the same on all pages.

That's a feature, not a bug. (Although it would admittedly be better to block those explicitly rather than relying on coincidental interactions with something that doesn't seem directly related.)


> Then you'll have one of these "fancy" modern websites that load their content entirely through js and it'll fail because the HTML is the same on all pages.

What are you talking about? It would be done entirely on the backend.

Mind you, I've brought it up with dang multiple times and he says it would be a hassle and too brittle to be effective (fair enough), but nothing about it would require javascript.


I believe the parent is talking about submitting links to websites that render their content via clientside JavaScript and how that would break the hash dupe detection. They aren't suggesting that the functionality would need to be implemented in JS by HN.

Regardless, hashing the content to detect dupes is just an idea that wouldn't work for a lot of reasons.
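For illustration, a rough sketch of the naive content-hashing idea being dismissed here, and of where it breaks; the names and the choice of SHA-256 are assumptions, not anyone's actual implementation:

  import hashlib
  import urllib.request

  def content_fingerprint(url: str) -> str:
      """Naive dupe key: SHA-256 of the raw HTML the server returns."""
      with urllib.request.urlopen(url, timeout=10) as resp:
          return hashlib.sha256(resp.read()).hexdigest()

  # For a client-rendered site, every article URL serves the same HTML shell,
  # so distinct pages collapse to one fingerprint and dupe detection misfires.

That collision is exactly the failure mode the parent comment describes, and it's only one of the problems (dynamic ads, timestamps, and per-request tokens would also change the hash between identical submissions).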



