Users don't care about the technical details of what model is being used for what type of output. What matters to them is that they asked ChatGPT to draw a map, and it spat out nonsense.
The same issue exists with a bunch of other types of image output from ChatGPT - graphs, schematics, organizational charts, etc. It's been getting better at generating images which look like the type of image you requested, but the accuracy of the contents hasn't kept up.
"The customer is always right", maybe so, but CNN has a duty to fact-check the accuracy of its central claims.
ChatGPT's image generation was not introduced as part of the GPT-5 model release (except SVG generation).
The article leads with "The latest ChatGPT [...] can’t even label a map".
Yes, ChatGPT's image gen has uncanny valley issues, but OpenAI's GPT-5 product release post says nothing about image generation, it only mentions analysis [1].
As far as I can tell, GPT-Image-1 [2], released around March, is what powers ChatGPT's image generation. It was introduced as "4o Image Generation" [3], which suggests to me that GPT-Image-1 is a version of the older GPT-4o.
The GPT-5 System card also only mentions image analysis, not generation. [4]
In the OpenAI live stream they said as much.
CNN could have checked and made it clear the features are from the much earlier release, but instead they lead with a misleading headline.
It's very true that OpenAI doesn't make it obvious how the image generation works though.
Even if the image generation isn't handled by GPT-5 itself, GPT-5 is still in the driver's seat. It's responsible for the choice to generate an image, and for writing the prompt which drives the image model.
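That division of labor can be sketched roughly like this (a minimal illustration with invented function names, not OpenAI's actual API or internals): the orchestrating model decides whether an image is needed and authors the prompt the separate image model receives, so a mislabeled map can originate in the orchestrator's prompt even if the image model renders it faithfully.

```python
# Hypothetical sketch of an orchestrator model delegating to an image tool.
# All names are invented for illustration; none reflect OpenAI's real internals.

def orchestrator_reply(user_message: str) -> dict:
    """Decide whether the request needs an image, and if so,
    write the prompt that the separate image model will receive."""
    wants_image = any(w in user_message.lower()
                      for w in ("draw", "map", "diagram", "chart"))
    if not wants_image:
        return {"type": "text", "content": f"(text answer to: {user_message})"}
    # The orchestrator, not the image model, authors this prompt --
    # so errors can be introduced here before the image model ever runs.
    image_prompt = f"A clearly labeled illustration of: {user_message}"
    return {"type": "image_tool_call", "prompt": image_prompt}

reply = orchestrator_reply("Draw a map of the United States")
print(reply["type"])  # image_tool_call
```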
As an aside, ChatGPT has always been "overconfident" in the capabilities of its associated image model. It'll frequently offer to generate images which exceed its ability to execute, or which would need to be based on information which it doesn't know. Perhaps OpenAI developers need to place more emphasis on knowing when to refuse unrealistic image generation requests?
Yeah, OpenAI do have harmfulness classifiers in ChatGPT that can flag problems in a response before it finishes. Learning how confident GPT-5 or the image-generation tool call should be about meeting the brief, once the visual concept has been described, might be a task OpenAI could train a classifier on.
But reliably predicting ahead of time can be a really hard problem: knowing how well a complicated tool will do before it even starts, let alone finishes, the task is tricky.
Another helpful intervention point is after gpt-image-1 has produced an image: the model can do a better self-review for detecting problems at that stage, but it's still not very thorough.
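The generate-then-review idea above amounts to a simple loop. Here's a toy sketch (function names and the retry policy are invented; the real stack doesn't necessarily work this way, and the hard part, the `review` step, is stubbed out):

```python
# Toy generate/review/retry loop. All names are hypothetical stand-ins,
# not a real OpenAI mechanism.

def generate_image(prompt: str) -> str:
    """Stand-in for the image model (e.g. gpt-image-1)."""
    return f"<image for '{prompt}'>"

def review(prompt: str, image: str) -> bool:
    """Stand-in self-review: in reality a model would inspect the image
    for label and content errors; here we simply approve everything."""
    return True

def generate_with_review(prompt: str, max_attempts: int = 2) -> str:
    """Generate, critique, and retry up to max_attempts times."""
    for _ in range(max_attempts):
        image = generate_image(prompt)
        if review(prompt, image):
            return image
    return image  # give up and return the last attempt

print(generate_with_review("US state map with labels"))
```

The weak link is exactly the one the comment identifies: the loop is only as good as the reviewer, and today's post-hoc review is "still not very thorough."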
That said, OpenAI keeps its teams small and really focused, and everything is changing fast; they'll probably ship gpt-image-2 or something else soon anyway.
In a way, reliable prediction is the main job OpenAI has to solve, and always has been.
Some researchers say the main way models are trained causes "entangled representations", which makes them unreliable.
They also suffer from the "Reversal Curse". Maybe when these issues are fixed, it'll be real AGI and ASI all in one go?