> Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender.
> The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19.
Cool, so we’ve reached the brain-rot stage where people are not only taking AI summaries as fact (we’ve been here for a while, even before LLMs, with Google’s quick-answer stuff) but are _citing_ them as proof. Fuck me. I know that’s a little much for HN, but still: it’s insane that at no point did anyone think to check a more primary source before canceling the show.
The complete abdication of thinking and even the most minor research is depressing. I use LLMs daily but always make sure to check the sources, verify the claims. They are great for surfacing info but that’s just the first step. I’ve lost track of how many times an LLM has confidently stated something using sources and I check the sources and they say nothing of the sort.
> The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name.
"AI" makes for a clickier story, but you don't need it to have that kinda screw-up.
Actually, you don't even need the web. Back in the '90s, a young coworker of mine was denied a mortgage. He requested his credit report - and learned that he'd already bought a house. In another city. At age 5. Based on income from the full-time job at Ford Motor he'd held since age 4. And several other laughable-in-retrospect hallucinations.
If, instead of the AI summary, the First Nation had come across some little online forum where angry users were denouncing "Ashley MacIsaac" as a sex offender, and (just as with the AI) the First Nation had neglected to verify which Ashley MacIsaac it was - then who would be facing consequences for that?
OTOH - yes, I get that "the AI said" is the new "dog ate my homework" excuse, for ignoble humans trying to dodge any responsibility for their own lazy incompetence.
"Some little on-line forum" with a few angry users is not really comparable to a mega-corp with billions of users.
Lawyers could but are unlikely to go after a few misguided individual users for slander. As they say, you can't get blood out of a rock. Mega-corp is a much more tempting target.
Legal liability for bad AI is just getting started but I expect lawyers are giddy with anticipation.
Okay, let's say the misguided individual users are posting on Facebook - a mega-corp with billions of users. And 13-digit market cap, to tempt the lawyers.
How does that play out? IANAL, but I'm thinking Facebook says "Sorry, but Section 230 covers our ass" - and that's about it. Still no consequences.
Does Section 230 cover the AI in this case? As for misguided individuals, I doubt you're going to get a concert cancelled off of some rando troll, to be honest - but IANAL either lol
Section 230 allows websites to host user-generated content without being treated as the publisher of that content.
But AI slop is not "user generated content" --- it is content that the web site itself is generating with AI and publishing. As such, they become wholly responsible for the content (in my opinion).
There is no inconsistency here. Whoever generates and posts the slander is accountable.
If individuals on Facebook post it, the individuals are responsible - under US law, Section 230 shields the platform, not the poster.
But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.
Once corps start being held legally liable for their AI generated slop, I wouldn't be surprised if they start banning this "new technology" over liability concerns.
LLM-based AI is inherently flawed and unreliable, and anyone with half a brain knows it. Making use of technology that is widely known to be flawed for any sort of "serious" work is a textbook example of negligence. And slander can be "serious". Lawyers live for this sort of thing.
> But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.
> Once corps start being held legally liable for their AI generated slop...
While I personally agree with your ideal - in the current legal, regulatory, and political environment, I see precious little chance of any such corporation actually being held responsible for the output of its AI.
This won't end well in my judgment.