
That doesn't really make sense with how LLMs work. I think this is exactly why it's risky to use words like "thinking" and "reasoning".

If by "the visible reasoning is just for show" they meant that these models don't actually think and reason, then yes, that's correct.

But if they meant that the visible reasoning is not literally part of the inference process... that's entirely incorrect.

R1 is open source. We don't have to guess at how it works.
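To make the point concrete: "visible reasoning" tokens are ordinary tokens in the same autoregressive sequence, so everything the model emits between its thinking delimiters is fed back as context and conditions the final answer. Here's a toy sketch of that loop. The "model" is a trivial lookup table standing in for a real forward pass, and the token strings (`<think>`, `</think>`, etc.) are illustrative, not R1's actual tokenizer vocabulary:

```python
# Toy autoregressive decoding loop. The point: reasoning tokens are not
# a separate display channel; each one re-enters the context and
# conditions every token generated after it.

def toy_model(context):
    # Stand-in for a real model's forward pass, which would attend over
    # the ENTIRE context, reasoning tokens included. This hypothetical
    # rule just looks at the last two tokens.
    table = {
        ("Q:", "2+2?"): "<think>",
        ("2+2?", "<think>"): "2+2=4",
        ("<think>", "2+2=4"): "</think>",
        ("2+2=4", "</think>"): "4",
    }
    return table.get(tuple(context[-2:]), "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = toy_model(tokens)   # conditioned on all prior tokens
        if nxt == "<eos>":
            break
        tokens.append(nxt)        # reasoning tokens re-enter the context
    return tokens

out = generate(["Q:", "2+2?"])
print(out)
```

The final "4" is generated after, and conditioned on, the "2+2=4" reasoning token; delete the reasoning from the context and the continuation changes. That's the sense in which the visible reasoning is literally part of inference.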



