
Isn't that because all "reasoning" approaches are very much fake? The model cannot internalise the concepts it has to reason about. For instance, if you ask it why water feels wet, it is unable to grasp the concept of feeling or the sensation of wetness, but it will certainly "decompress" learned knowledge of people describing what it is like to feel water.


Everything about LLMs is fake. The "reasoning" trick is still demonstrably useful - benchmarks consistently show that models using it perform better on harder coding challenges, for example.
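For what it's worth, here is a rough sketch of what "the reasoning trick" usually means in practice: the same question is asked either directly or with an instruction to write out intermediate steps first, and only the extracted final answer is scored. This is only an illustration, not anyone's actual benchmark harness; the prompt wording and the "Answer:" marker are made up, and no real model API is called.

    # Minimal sketch of "direct" vs. "reasoning" prompting.
    # No model is called here; these helpers only build the two prompt
    # variants and pull the final answer out of a model's text output.

    def direct_prompt(question: str) -> str:
        # Ask for the answer with no intermediate steps.
        return f"Question: {question}\nAnswer with just the final result."

    def reasoning_prompt(question: str) -> str:
        # Ask the model to emit step-by-step working before the answer.
        # The extra tokens are the "reasoning" the benchmarks reward.
        return (
            f"Question: {question}\n"
            "Think step by step: list the intermediate steps first, "
            "then give the final answer on its own line prefixed with 'Answer:'."
        )

    def extract_answer(model_output: str) -> str:
        # Keep only the last line marked 'Answer:'; the reasoning text
        # is scaffolding, not part of the scored answer.
        for line in reversed(model_output.splitlines()):
            if line.startswith("Answer:"):
                return line.removeprefix("Answer:").strip()
        return model_output.strip()

Whether or not the intermediate text reflects anything like understanding, this is roughly the comparison being made: same question, with and without the step-by-step scaffolding, scoring only the final answer.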


I'd argue that what's generally considered "reasoning" isn't actually rooted in understanding either. It's just the process you apply to reach a conclusion. Expressed more abstractly, reasoning is about drawing logical connections between points and extrapolating from them.

To quote the definition: "the action of thinking about something in a logical, sensible way."

I believe it's rooted in mathematics, not physics. That's probably why there is such a focus on the process rather than the result.



