> What's the mechanistic model of "intention" that you're using to claim that there is no intention in the model's operation?
You can’t prove intention, but I can show examples of LLMs lacking intent, such as when one repeats the same solution even after being told it was incorrect.
> Generating text is the trace of an internal process in an LLM.
I’m not sure precisely what you mean by “trace,” but the output of an LLM (as with any statistical model) is the result of its calculations, not a representation of some emergent internal state.
> You can’t prove intention, but I can show examples of LLMs lacking intent, such as when one repeats the same solution even after being told it was incorrect.
I don't think that shows a lack of intent, any more than someone with dementia forgetting why they entered a room shows that they lack intent.