Is it not physically impossible for LLMs to be anything but "plausible text completion"?
Neural networks, as I understand them, are universal function approximators.
In terms of text, that means they're trained to output what is statistically the most probable continuation of a given sequence -- the likeliest next token given the preceding tokens -- not the "most correct" one in any deeper sense.
An LLM has no idea that it is "conversing" or "answering" -- it just maps a series of symbolic inputs to a probability distribution over symbolic outputs, aye?
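To make concrete what I mean by "probabilistic symbolic outputs", here's a minimal sketch in Python. The vocabulary and the scores are hard-coded toys standing in for what a trained network would compute; the point is only that the model turns a context into a probability distribution over next tokens and samples from it, nothing more.

```python
import math
import random

# Toy vocabulary; a real LLM has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "mat", "."]

def toy_logits(context: str) -> list[float]:
    # Hypothetical scores for each candidate next token.
    # In a real model these would be computed by the trained network
    # as a function of the context; here they are made up for illustration.
    return [0.2, 1.5, 0.7, 1.1, 0.1]

def softmax(logits: list[float]) -> list[float]:
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(context: str) -> str:
    # The "output" is just a sample from a conditional distribution
    # over symbols given symbols -- no notion of conversing or answering.
    probs = softmax(toy_logits(context))
    return random.choices(VOCAB, weights=probs, k=1)[0]

print(next_token("the cat sat on the"))
```

Whether you pick the single most probable token or sample from the distribution is just a decoding choice; either way, the model's job begins and ends at that distribution.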