
> The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name.

"AI" makes for a clickier story, but you don't need it to have that kinda screw-up.

Actually, you don't even need the web. Back in the '90s, a young coworker of mine was denied a mortgage. He requested his credit report - and learned that he'd already bought a house. In another city. At age 5. Based on income from the full-time job at Ford Motor he'd held since age 4. And several other laughable-in-retrospect hallucinations.





The difference is that "AI can have errors" absolves everybody of consequences for this sort of thing.

If, instead of the AI summary, the First Nation had come across some little on-line forum where angry users were denouncing "Ashley MacIsaac" as a sex offender, and (just as with the AI) the First Nation had neglected to verify which Ashley MacIsaac it was - then who would be facing consequences for that?

OTOH - yes, I get that "the AI said" is the new "dog ate my homework" excuse, for ignoble humans trying to dodge any responsibility for their own lazy incompetence.


> then who would be facing consequences for that?

Your analogy is bad.

"Some little on-line forum" with a few angry users is not really comparable to a mega-corp with billions of users.

Lawyers could but are unlikely to go after a few misguided individual users for slander. As they say, you can't get blood out of a rock. Mega-corp is a much more tempting target.

Legal liability for bad AI is just getting started but I expect lawyers are giddy with anticipation.


Okay, let's say the misguided individual users are posting on Facebook - a mega-corp with billions of users. And 13-digit market cap, to tempt the lawyers.

How does that play out? IANAL, but I'm thinking Facebook says "Sorry, but Section 230 covers our ass" - and that's about it. Still no consequences.


Does Section 230 cover the AI in this case? As for the misguided individuals - I doubt you're going to get a concert cancelled off of some rando troll, to be honest. But IANAL either lol

Section 230 allows websites to host user-generated content without being treated as the publisher of that content.

But AI slop is not "user generated content" - it is content that the website itself is generating with AI and publishing. As such, they become wholly responsible for the content (in my opinion).


There is no inconsistency here. Whoever generates and posts the slander is accountable.

If individuals on Facebook post it, the individuals are responsible - under Section 230 of US law, Facebook is not treated as the publisher.

But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.

Once corps start being held legally liable for their AI generated slop, I wouldn't be surprised if they start banning this "new technology" over liability concerns.

LLM-based AI is inherently flawed and unreliable, and anyone with even half a brain knows it. Making use of technology that is widely known to be flawed for any sort of "serious" work is a textbook example of negligence. And slander can be "serious". Lawyers live for this sort of thing.


> But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.

> Once corps start being held legally liable for their AI generated slop...

While I personally agree with your ideal - in the current legal, regulatory, and political environment, I see precious little chance of any such corporation actually being held responsible for the output of its AI.


For corporations to avoid responsibility, liability laws and centuries of legal precedent would have to be struck down on a societal scale.

For example, is medical and legal malpractice going to be voided just so incompetent AI can be applied?

I doubt it. The USA is ruled by lawyers.



