Hacker News | mrdomino-'s comments

What if a human had done this?

They’d likely be held culpable and prosecuted. People have encouraged others to commit crimes before and they have been convicted for it. It’s not new.

What’s new is a company releasing a product that does the same and then claiming they can’t be held accountable for what their product does.

Wait, that’s not new either.


Encouraging someone to commit a crime is aiding and abetting, and is also a crime in itself.

Then they’d get prosecuted?

Maybe, but they would likely offer an insanity defense.

And this has famously worked many times.

Charles Manson died in prison.

Human therapists are trained to intervene when there are clear clues that a person is suicidal or threatening to murder someone. LLMs are not.

checks notes

Nothing. Terry A. Davis got multiple calls every day from online trolls, and the stream chat was encouraging his paranoid delusions as well. Nothing ever happened to these people.


Well, LLMs aren't human so that's not relevant.

Hm, I don't know. If a self-driving car runs over a person, the company is held liable. And you can't just publish arbitrary text in books or on the internet. If the writing is automated, the company producing it still has to check that everything is okay.

Yeah, you could use forward error correction too, so receiving any n of the transmitted symbols would be enough to reconstruct the input.

Of course then you get into needing software to decode the more advanced encodings; maybe start with a voice transmission explaining in plain language how to decode the first layer, which gives you a program that can decode the second layer, or something.

Starting to sound like an interesting project.
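The "any n of the transmitted symbols reconstruct the input" idea can be sketched with the simplest possible erasure code: split the data into k blocks and add one XOR parity block, so any k of the k+1 blocks are enough to rebuild the original. Real FEC schemes like Reed-Solomon generalize this to surviving multiple losses; the function names below are just illustrative. A minimal Python sketch:

```python
import functools
import operator

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal blocks plus one XOR parity block."""
    size = -(-len(data) // k)                 # ceil division: bytes per block
    padded = data.ljust(k * size, b"\x00")    # zero-pad to a multiple of k
    blocks = [padded[i * size:(i + 1) * size] for i in range(k)]
    # Parity: byte-wise XOR across all data blocks.
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*blocks))
    return blocks + [parity]

def decode(blocks: list, length: int) -> bytes:
    """Reconstruct the input from k+1 blocks, at most one of them missing (None)."""
    missing = [i for i, b in enumerate(blocks) if b is None]
    assert len(missing) <= 1, "this toy code survives only one lost block"
    if missing:
        # XOR of all k+1 blocks is zero per byte column, so the lost
        # block equals the XOR of every block that is still present.
        present = [b for b in blocks if b is not None]
        blocks[missing[0]] = bytes(
            functools.reduce(operator.xor, col) for col in zip(*present)
        )
    return b"".join(blocks[:-1])[:length]     # drop parity, strip padding
```

To tolerate the loss of any k out of n blocks you would swap the XOR parity for a proper Reed-Solomon code, but the encode/decode shape stays the same.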


