That's because "intelligent beings" have memory. If you ask an LLM the same question within the same chat session, you'll get a consistent answer.
I disagree. If you could take a snapshot of someone's knowledge and memory and restore it over and over, that person would give the same answer to the same question every time. The same is not true of an LLM.
Heck, I can't even get LLMs to be consistent about *their own capabilities*.
Bias disclaimer: I work at Google, but not on Gemini. If I ask Gemini to produce an SVG file, it will sometimes do so and sometimes say "sorry, I can't, I can only produce raster images". I cannot deterministically trigger either behavior; it truly seems to vary randomly.
We often explicitly add randomness to the results, so it feels weird to then accuse LLMs of not being intelligent after we've deliberately forced them off the deterministic path.
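For anyone curious what "explicitly adding randomness" means here: a minimal sketch of temperature sampling, which is the standard mechanism. The logits below are made-up toy values, not real model outputs. At temperature 0 decoding is greedy and deterministic; above 0, the same input can yield different tokens on different runs.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from logits after temperature scaling.

    temperature -> 0 approaches greedy (deterministic) decoding;
    higher temperatures flatten the distribution, adding randomness.
    """
    if temperature <= 0:
        # Greedy decoding: always pick the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the softmax probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy logits for illustration only.
logits = [2.0, 1.5, 0.3]
print([sample_with_temperature(logits, temperature=1.0) for _ in range(10)])  # varies run to run
print([sample_with_temperature(logits, temperature=0.0) for _ in range(10)])  # always index 0
```

So identical prompts diverging isn't some deep failure of the model; it's a sampling knob we chose to turn up.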