Is it not physically impossible for LLMs to be anything but "plausible text completion"?

Neural networks, as I understand them, are universal function approximators.

In terms of text, that means they're trained to output what they "believe" to be the most probable continuation of the text.

An LLM has no idea that it is "conversing" or "answering" -- it just maps a series of symbolic inputs to a probability distribution over symbolic outputs, aye?
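
To make "probabilistic symbolic outputs" concrete, here's a toy sketch of the generation loop -- the vocabulary, the logits function, and the sampling are all made up for illustration, not any real model's internals:

    import numpy as np

    # Toy "language model" over a tiny made-up vocabulary (purely illustrative).
    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def toy_logits(context_ids):
        # A real LLM computes these scores with a trained transformer;
        # here we just derive deterministic pseudo-random numbers from the context.
        rng = np.random.default_rng(seed=sum(context_ids) + 7 * len(context_ids))
        return rng.normal(size=len(VOCAB))

    def next_token_probs(context_ids):
        logits = toy_logits(context_ids)
        exp = np.exp(logits - logits.max())   # numerically stable softmax
        return exp / exp.sum()

    def generate(prompt, steps=5, seed=0):
        ids = [VOCAB.index(tok) for tok in prompt]
        rng = np.random.default_rng(seed)
        for _ in range(steps):
            probs = next_token_probs(ids)
            # The model never "answers"; it just samples a plausible next symbol.
            ids.append(int(rng.choice(len(VOCAB), p=probs)))
        return " ".join(VOCAB[i] for i in ids)

    print(generate(["the", "cat"]))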



At this point you need to actually define what it means for an LLM to "have an idea".



