Let's say I start an AI program with the initial prompt "Copy these files to this other computer", and then 100 iterations down the agentic loop the AI decides to hack into Tesla's FSD and ship an update that kills 500 people.
Obviously this is up to courts and juries to hammer out but...
- Your agentic loop hacked something? You're liable.
- FSD crashes? The guy in the driver's seat is liable. He (or his insurance) can sue Tesla to spread the liability...
Nowhere along the line will anyone go "Oh, the AI did it... whoops"
I'm not aware of any counter-example, but I also don't know of any reason why it must stay true that a human is always on the hook. If anything, I'd expect counter-examples to become more likely over time.
It's possible, in theory, that an AI could establish a crypto wallet, but what would it do with it? AI doesn't have desires. It doesn't do what it isn't told to do (although those instructions can be broad and vague). Even if an AI did somehow do something bad without being told, that AI would still have been set up by a human, running on some human's hardware, and using a human's internet connection.
Maybe in the distant sci-fi future we'll have actual AI (not just glorified chatbots) that can decide for itself what it wants to do with its time, and we'll be letting AI sign leases on property and set up accounts with utility companies. If that day comes, we're going to have a lot of problems if we're not ready for it. But until then, it's AI on a human's hardware, at a human's property, running up a human's electric bill.
I think it's just a gap in definitions. The labs say models don't act on their own initiative. What counts as initiative? I guess an API call in a for loop would count.
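For what it's worth, here's a minimal sketch of what "an API call in a for loop" means in practice - the same shape as the agentic loop in the scenario above. Everything in it (`call_model`, `run_tool`) is a hypothetical stand-in, not any specific lab's API:

```python
# A minimal sketch of "an API call in a for loop" - the agentic-loop
# pattern described above. call_model and run_tool are hypothetical
# stand-ins, not any real lab's API.

def call_model(history: list[str]) -> str:
    # Stand-in for a real chat-completion API call; a real client would
    # send `history` to the model and get back its chosen next action.
    return f"action chosen by the model at step {len(history)}"

def run_tool(action: str) -> str:
    # Stand-in for whatever executes the model's chosen action
    # (shell command, file copy, browser, ...).
    return f"result of: {action}"

history = ["Copy these files to this other computer."]  # the only human input

for step in range(100):              # 100 iterations down the loop
    action = call_model(history)     # the model decides the next action
    result = run_tool(action)        # the harness executes it, unattended
    history += [action, result]      # its own output feeds the next call
```

After the first line, every action is chosen by the model conditioned on its own prior outputs; whether step 100 of that still counts as "doing what it was told" is exactly the definitional gap.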
Historically, a lot of laws haven't been easy to change, especially when they regulate zillion-dollar industries.
The human making the decision is always liable.
What if the human couldn't reasonably know better? Doesn't matter - if they would have made the same decision without AI, or with outdated files, it's still on them.
What if there's no single human decision? Someone is in charge, and that someone is responsible. "I was just following orders" isn't a defense.
Does liability without power make sense? The people executing a decision have the power to execute it, so they carry liability. If they're executing without the authority to do so, that's a different kind of liability, but liability all the same.
It may let the powerful off the hook - that's already a recurring theme, and AI doesn't change it; in fact, AI will just be used as another scapegoat.
God told me to do it - Watertight! Right?