The author’s central point is that an LLM answer “is optimized for arrival, not for becoming” (to paraphrase the Google “I’m Feeling Lucky” section).

So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test.

That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.
