
In the AI-driven future we're heading into, being able to tell an AI bot from a human might become a valuable commodity.


Why? And how would that even work? Just because an online account is tied to a verified real human doesn't guarantee that the content isn't coming from an AI bot.


It's a blockchain, so you can keep a permanent record of what a person is doing and when and where they got caught violating the rules. It won't stop infractions from happening at first, but it will make it very easy to keep them from happening again. And if this gets widespread, people might think twice before risking their blockchain personhood certificate.
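A minimal sketch of that idea, assuming each verified human gets a stable personhood_id from the scan (all names here are hypothetical, not Worldcoin's actual API, and a plain dict stands in for the ledger):

    # Hypothetical append-only infraction registry keyed by a
    # proof-of-personhood ID; a real system would anchor these
    # records on-chain instead of in a dict.
    from collections import defaultdict
    from datetime import datetime, timezone

    BAN_THRESHOLD = 3  # assumed policy: three strikes

    infractions: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def record_infraction(personhood_id: str, rule: str) -> None:
        # Append-only: entries are never removed, mirroring a
        # permanent on-chain record.
        infractions[personhood_id].append(
            (datetime.now(timezone.utc).isoformat(), rule)
        )

    def is_banned(personhood_id: str) -> bool:
        # Any service trusting the registry can refuse repeat
        # offenders no matter which username they show up with.
        return len(infractions[personhood_id]) >= BAN_THRESHOLD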


You're missing the point. How would they get caught violating the rules in the first place? You (and the HN admins) have no way of knowing whether I typed this comment myself or whether an AI bot used my account to do it.


However, it will be trivial to hold you liable for the content, no matter its source, when it is tied to your IRL identity instead of a pseudonym.


The “person” could just be copying and pasting AI output. Eye scanning can’t stop that at all.


Maybe it would allow you to rate-limit and/or ban by the human, which is probably more effective than banning by IP address (rough sketch below).

(Obviously Worldcoin is shady as shit, I'm not defending it.)
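For what it's worth, rate-limiting by human rather than by IP could be as simple as a sliding window keyed on the personhood ID. A toy sketch, with personhood_id assumed to come from the verification step (nothing here is a real API):

    # Hypothetical per-human rate limiter: a bot farm rotating IPs
    # still shares one personhood_id, so one limit covers all of it.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_POSTS_PER_WINDOW = 5  # assumed policy

    recent_posts: dict[str, deque[float]] = defaultdict(deque)

    def allow_post(personhood_id: str) -> bool:
        now = time.monotonic()
        window = recent_posts[personhood_id]
        # Evict timestamps that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_POSTS_PER_WINDOW:
            return False
        window.append(now)
        return True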


So they're going to get rich from introducing both the problem and its nightmare of a solution.


Or they just want to sell us Minority Report-style ads.



