Hacker News

What verbiage would you deem acceptable, if "claims" and "hallucinates" are both out?


“Weighted random text generator generates random text that doesn’t make logical sense,” maybe. That seems both accurate and a better description of what’s actually going on. The AI can’t claim or hallucinate anything, because either would require it to have core beliefs and senses that can be fooled by incorrect inputs.
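The "weighted random" part can be made concrete with a toy sketch (my own illustration, nothing like a real transformer in scale or mechanism): count how often each word follows another in a corpus, then pick each next word at random in proportion to those weights. The output is locally plausible text with no beliefs behind it.

```python
import random
from collections import defaultdict

# Toy weighted random text generator: bigram counts as weights.
# Hypothetical mini-corpus; a real LLM trains on vastly more data
# and uses a neural network, not a count table.
corpus = "the cat sat on the mat the dog sat on the log".split()

# counts[prev][next] = how often `next` follows `prev` in the corpus.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Sample up to n_words, each drawn with probability
    proportional to its bigram count after the previous word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n_words - 1):
        followers = counts.get(word)
        if not followers:
            break  # dead end: no observed follower
        words, weights = zip(*followers.items())
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Every word it emits is statistically likely given the one before it, yet nothing in the loop represents what any sentence means, which is the commenter's point.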


It's right in the most famous system's name: "Generative Pre-trained Transformer". LLMs generate text (and other media) which imitates their pre-training corpus. That's all.

At the end of the day, it's turning a mathematical crank. LLMs have no more intentionality than a jack-in-the-box.




