I get the idea that different AIs have different characters, and different people either 'get along with them' or not.
To wit: I have absolutely no problems with Claude Code, but anytime I try to do anything useful with ChatGPT it turns into (effectively) a shouting match; there's just no way that particular AI and I can see eye to eye. (I think there are underlying procedural reasons.)
The author of this piece has the exact opposite experience. Apparently they hate Claude with a passion, but love ChatGPT. Weird!
This is a fascinating aspect. I also get much more use out of Claude Code, and its willingness to go along with my ideas and be "steerable" seems to be part of it. With Codex, it many times kept overwriting changes I told it to make. It's a clear kind of disagreeableness trait in action. Makes me think!
I find Claude models often use "tricks" like bash one-liners, essentially excelling at surgical fixes. It does what I want more reliably on smaller tasks.
GPT-5 can often be better at larger architectural changes, but I find that comes at the cost of instability and broken PRs. It more often fails to capture intent, argues back, or just completely spirals out of control.
GPT-5 Codex seemed to refuse valid requests like "make a change to break a test so we can test CI" (it over-indexed on our agents.md and other instructions and then refused on the basis of "ethics" or some such).