Hacker News

I've been caught by this. Even Claude/ChatGPT will suggest it as an optimisation. Every time I've measured it, doing this caused a performance drop, sometimes a significant one.


Is that weird? LLMs just repeat what is in their training corpus. If most of the internet recommends something wrong (like this conditional-move "optimization"), then that is what they will recommend too.


Not weird, but important to note.


> Even Claude/ChatGPT will suggest it as an optimisation.

LLMs just repeat what people on the internet say, and people are often wrong.




