
Indeed. Even if an LLM tells you its “reasoning” process step by step, it’s not actually an exposition of the model’s internal decision process. It’s just more text that, when generated, improves the chances of a good final output.
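A toy sketch of the idea in Python (the "model" here is a fake stand-in and every name is illustrative, not any real API): the step-by-step "reasoning" tokens are produced by exactly the same next-token loop as the answer tokens, and their only effect is to extend the context that later tokens condition on.

    import random

    # Fake stand-in for an LLM's next-token distribution. A real model
    # would actually condition on `context`; the sampling mechanism is
    # the same either way.
    def next_token(context: str) -> str:
        return random.choice(["First,", "double", "6", "is", "42.", "<eos>"])

    def generate(prompt: str, max_tokens: int = 30) -> str:
        context = prompt
        for _ in range(max_tokens):
            tok = next_token(context)
            if tok == "<eos>":
                break
            # "Reasoning" tokens and answer tokens alike are simply
            # appended to the context; nothing internal is exposed.
            context += " " + tok
        return context

    print(generate("Q: What is 6 * 7? Let's think step by step."))

Everything between the prompt and the final answer is output, not a trace of the forward pass, which is why the stated steps can read plausibly even when they don't match how the answer was actually computed.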



