Hacker News

My understanding is that Codex takes more time to gather context before making changes; with more context, it may give a more precise response than Claude Code.


With Claude Code, I can 'tune' the prompt by gauging how much context the model needs to perform its task. I can mention more files or tell it to read more code as needed.

With one hour of experience with Codex CLI, every single prompt, even the simplest, means 5+ minutes of investigation before anything gets done. Unbearable and totally unnecessary.


It has the same context. It does not take that long to ingest a bunch of files. OpenAI is just not offering the same level of performance, probably due to oversubscription.



