There is no inconsistency here. Whoever generates and posts the slander is accountable.
If individuals post it on Facebook, the individuals are responsible; Section 230 of US law shields Facebook from liability for what its users post.
But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.
Once corps start being held legally liable for their AI generated slop, I wouldn't be surprised if they start banning this "new technology" over liability concerns.
LLM-based AI is inherently flawed and unreliable, and anyone with half a brain knows it. Using technology that is widely known to be flawed for any sort of "serious" work is a textbook example of negligence. And slander can be "serious". Lawyers live for this sort of thing.
> But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.
> Once corps start being held legally liable for their AI generated slop...
While I personally agree with your ideal, in the current legal, regulatory, and political environment I see precious little chance of any such corporation actually being held responsible for the output of its AI.