True, LLMs definitely have something that is "thought-like".
But today's networks lack the recursion (feedback where the output can go directly back to the input) that is needed for the kind of internalized thought humans have. I guess this is one of the things you are pointing at by mentioning the continuousness of the internals of LLMs.
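To make the distinction concrete, here's a minimal sketch in plain NumPy (the names `feedforward`/`recurrent` and the toy dimensions are my own, just for illustration). A transformer layer computes one pass with no internal loop; any "feedback" happens only through emitted tokens re-entering as input. A recurrent network routes its hidden state straight back into the computation, so it can iterate on its own internal representation:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 4))   # input -> hidden weights (toy sizes)
W_rec = rng.normal(size=(8, 8))  # hidden -> hidden feedback weights

def feedforward(x):
    # One pass through, no internal loop: activations never
    # return to an earlier point in the computation.
    return np.tanh(W_in @ x)

def recurrent(x, steps=5):
    # The hidden state h is fed back as input on every step,
    # which is the internal recursion described above.
    h = np.zeros(8)
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

x = rng.normal(size=4)
print(feedforward(x))   # single pass
print(recurrent(x))     # state after 5 feedback iterations
```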