It also happened to me in the gemini-cli. It tried to think but somehow failed, put all its thoughts into the output, and tried again and again to switch back to "user output". It was practically stuck in an infinite loop.
So far they don't seem to be doing anything about it, but Gemini models have a serious repetition bug.
I don't think it's tied to a specific prompt, like a "prompt logic issue" the model misunderstands; instead, it looks like it sometimes generates something that makes it go off the rails.
My best intuition is that it sometimes forgets all the context and only looks at the last X tokens before the repetition, and so starts repeating as if those last generated tokens were the only thing you gave it.
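FWIW, that kind of tail-looping is easy to spot client-side, whatever the underlying cause. A minimal sketch (hypothetical helper, nothing from gemini-cli itself) that flags when the end of a token stream is cycling with some period:

```python
def detect_tail_loop(tokens, max_period=50, min_repeats=3):
    """Return the cycle length if the tail of `tokens` repeats
    the same block at least `min_repeats` times, else None."""
    n = len(tokens)
    for period in range(1, max_period + 1):
        if n < period * min_repeats:
            break
        tail = tokens[-period:]
        # Compare each of the last `min_repeats` blocks against the tail.
        if all(tokens[n - period * (i + 1): n - period * i] == tail
               for i in range(min_repeats)):
            return period
    return None

# A client could run this on the growing output and abort the request
# (or resend with a higher repetition penalty) once it returns non-None.
print(detect_tail_loop(list("abcabcabc")))   # cycles with period 3
print(detect_tail_loop(list("hello world")))  # no loop
```

This is character-level for simplicity; a real guard would run over the model's token IDs instead.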
It was a consistent weak point for Gemini compared to other major AIs. Reportedly, it still is.
The progress is undeniable, the performance only ever goes up, but I'm not sure they ever did anything to address this particular deficiency specifically, as opposed to it being carried upwards by spillover from other improvements.
I thought it was a specific prompt that breaks it, something they simply never tested against, but when I saw it happen in Antigravity, which presumably was tested with a very specific use case, I was really surprised.