AIUI, they generally do all of that at the beginning.
Another approach, I suppose, could be to have it generate a second pass? Though that would probably ~double the inference cost.
If you didn't have the luxury of a delete button, such as when you're just talking directly to someone IRL, you would probably say something like "no, wait, that doesn't make any sense, I think I'm confusing myself" and then either give it another go or just stop there.
I wish LLMs would do this rather than just bluster on ahead.
What I'd like to hear from the AI about seahorse emojis is "my dataset leads me to believe that seahorse emojis exist... but when I go look for one I can't actually find one."
There have been attempts to give LLMs backspace tokens. Since no frontier model uses them, I can only guess it doesn't scale as well as just letting the model correct itself in CoT.
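For anyone wondering what a backspace token would even mean mechanically, here's a toy sketch at the decoding level: when the model emits the special token, the decoder pops the previous output token instead of appending. This is purely my own illustration, not how any actual model or paper implements it, and the `<BSP>` name and token stream are made up.

```python
BACKSPACE = "<BSP>"  # hypothetical special token

def decode_with_backspace(token_stream):
    """Apply backspace semantics to a stream of sampled tokens."""
    out = []
    for tok in token_stream:
        if tok == BACKSPACE:
            if out:          # nothing to undo at the very start
                out.pop()    # erase the previous token
        else:
            out.append(tok)
    return out

# The model starts to assert the emoji exists, then retracts mid-stream:
tokens = ["There", "is", "a", BACKSPACE, "no", "seahorse", "emoji"]
print(decode_with_backspace(tokens))
# → ['There', 'is', 'no', 'seahorse', 'emoji']
```

The training-side question (how the model learns *when* to emit it) is the hard part; the decoding side above is trivial, which is probably why CoT-style in-text correction, needing no special machinery, won out.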
I have the luxury of a delete button - the LLM doesn't get that privilege.