> But the claim that LLMs are next-token predictors is the SAME mischaracterization. LLMs are clearly more than next-token predictors. Don’t get me wrong, LLMs aren’t human… but they are clearly more than just a next-token predictor.
It's simply not. I find this argument by analogy very lazy. You need to do the work to show what that "and more" is and how it's the same for humans and LLMs. You can't just hand-wave that it feels the same and leave it at that.