So far Google doesn't seem to be doing anything about it, but Gemini models have a serious repetition bug.
I don't think it's tied to a specific prompt, some kind of "prompt logic issue" the model misunderstands. Instead, it looks like the model sometimes generates something that sends it off the rails.
My best guess is that it sometimes loses all of the earlier context and attends only to the last X tokens before the repetition started, so it begins looping as if those last generated tokens were the only input it was ever given.
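Whatever the root cause, this failure mode is easy to catch at inference time by scanning the output for a suffix that keeps repeating verbatim. Here's a minimal sketch of such a detector; it is not Gemini-specific, and the window size and repeat threshold are arbitrary choices of mine:

```python
def find_repetition(tokens, max_period=50, min_repeats=3):
    """Detect a degenerate loop: a suffix of `tokens` that is the same
    block of up to `max_period` tokens repeated `min_repeats`+ times
    in a row. Returns the repeating block, or None if there is no loop."""
    n = len(tokens)
    for period in range(1, max_period + 1):
        # Count how many times the final `period`-token block repeats
        # contiguously at the end of the sequence.
        repeats = 1
        while ((repeats + 1) * period <= n
               and tokens[n - (repeats + 1) * period : n - repeats * period]
                   == tokens[n - period:]):
            repeats += 1
        if repeats >= min_repeats:
            return tokens[n - period:]
    return None

# Example: a completion that collapsed into a loop
out = "the answer is".split() + ["again,"] * 7
print(find_repetition(out))  # ['again,']
```

In a streaming setup you would run this on the tail of the output every few tokens and abort (or retry with a different seed/temperature) as soon as it fires, which is cheaper than waiting for the model to burn the whole output budget on a loop.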