
Gemini is very prone to going into an infinite loop. Sometimes it even happens with Google's own vibe coding IDE (Antigravity): https://bsky.app/profile/egeozcan.bsky.social/post/3maxzi4gs...




It also happened to me in the gemini-cli. It tried to think but somehow failed, put all its thoughts into the output, and tried again and again to switch to "user output". It was practically stuck in an infinite loop.

Yep. It happens all the time. Happened to me about 5 minutes ago. It does detect this and offers you the option to stop the loop or to let it continue.

> "A potential loop was detected. This can happen due to repetitive tool calls or other model behavior. The request has been halted."


So far they don't seem to be doing anything about it, but Gemini models have a serious repetition bug.

I don't think it's tied to a specific prompt, like a "prompt logic issue" the model misunderstands; instead, it looks like it sometimes generates something that makes it go nuts.

My best intuition is that it sometimes forgets all the context and just looks at the last X tokens before the repetition, and so starts repeating as if those last generated tokens were the only thing you gave it.
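That hypothesis is at least mechanically plausible: if decoding is deterministic and effectively conditioned only on a short trailing window, then any window state that recurs forces a cycle. A toy illustration (nothing Gemini-specific; next_token is a made-up stand-in for greedy decoding):

```python
# Toy illustration, not Gemini: a deterministic generator whose next token
# depends only on the last CONTEXT tokens. Once a window state recurs,
# the output is trapped in a cycle forever.
CONTEXT = 3

def next_token(window):
    # Stand-in for greedy decoding: any fixed function of the window works.
    return (sum(window) * 31 + 7) % 10

tokens = [1, 2, 3]
seen = {}
for step in range(1001):  # only 10**CONTEXT windows exist, so a repeat is guaranteed
    window = tuple(tokens[-CONTEXT:])
    if window in seen:
        print(f"window {window} recurred (first seen at step {seen[window]}); "
              f"output now cycles with period {step - seen[window]}")
        break
    seen[window] = step
    tokens.append(next_token(window))
```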


All LLMs are; it's an innate thing. Google just sucks at the kind of long-context training you need to do to mitigate it.

I would bet they won't suck at it for much longer; Gemini's progress is undeniable.

It was a consistent weak point for Gemini compared to other major AIs. Reportedly, it still is.

The progress is undeniable, and the performance only ever goes up, but I'm not sure they ever did anything to address this deficiency specifically, as opposed to being carried upward by spillover from other interventions.


> sometimes, it even happens with [Antigravity]

Isn't this a problem with the agent loop / structure rather than the LLM, in that case?

The IDE doesn't affect the model's results, just what is done with those results?
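Either way, the harness can bound the damage regardless of which layer is at fault. A hypothetical guard in the agent loop (made-up names and interfaces, not Antigravity's or gemini-cli's actual API):

```python
# Hypothetical agent-loop guard; illustrative only. The harness, not the
# model, decides when to stop: it caps iterations and halts when the model
# issues the same tool call over and over.
MAX_STEPS = 25   # hard cap on agent iterations (assumed value)

def run_agent(model, tools, task):
    """Drive an assumed model.step() interface; halt on suspected loops."""
    history = [task]
    last_call, repeats = None, 0
    for _ in range(MAX_STEPS):
        action = model.step(history)          # assumed model interface
        if action.kind == "final_answer":     # assumed action shape
            return action.text
        call = (action.tool, action.args)
        repeats = repeats + 1 if call == last_call else 0
        last_call = call
        if repeats >= 3:                      # same tool call 4x in a row
            raise RuntimeError("Potential loop detected: halting the agent.")
        history.append(tools[action.tool](**action.args))
    raise RuntimeError("Step budget exhausted without a final answer.")
```

If the model itself is looping, a guard like this only stops the bleeding; it doesn't fix the underlying repetition.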


I thought it was a specific prompt that breaks it, something they just never tested against. But when I saw it happen in Antigravity, which presumably was tested against exactly this use case, I was very surprised.

The problem happens across tools that use Gemini.


