Let's say I start an AI program and my initial prompt is "Copy these files to this other computer", and then 100 iterations down the agentic loop the AI decides to hack into Tesla's FSD and ship an update that kills 500 people. Who is liable?

Obviously this is up to courts and juries to hammer out, but...

- Your agentic loop hacked something? You're liable.
- FSD crashes? The guy in the driver's seat is liable. He (or his insurance) can sue Tesla to spread the liability...

Nowhere along the line will anyone go "Oh, the AI did it... whoops."