Hacker News

Most of what you read online is written by insane people.

https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...



Frankly, this is a big part of why I believe LLMs are so inept at solving mundane problems. Mundane people do not write about their experiences en masse.


Or if they do, it's anecdotal or wrong. Worse, they state it with confidence, which the AI models do too.

Like, I'm sure the models have been trained and tweaked in such a way that they don't lean into the bigger conspiracy theories or quack medicine, but there's a lot of subtle quackery going on that isn't immediately flagged. Think "carrots improve your eyesight"-level quackery: it's harmless but incorrect, and if not countered it will fester.



