
True, LLMs definitely have something that is "thought-like".

But today's networks lack the recursion (feedback where the output can go directly back into the input) that is needed for the kind of internalized thought humans have. I guess this is one of the things you are pointing at by mentioning the continuity of the internals of LLMs.
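To make the distinction concrete, here is a minimal PyTorch sketch (names and shapes are my own, purely illustrative): an RNN cell carries a hidden state back into itself at every step, whereas an autoregressive transformer's only feedback path is the sampled token being appended to the input before the next full forward pass.

    import torch
    import torch.nn as nn

    # Recurrent feedback: the hidden state h is fed back into the cell at
    # each step, giving the network an internal loop that exists
    # independently of any emitted output.
    class TinyRNN(nn.Module):
        def __init__(self, dim=16):
            super().__init__()
            self.cell = nn.RNNCell(dim, dim)

        def forward(self, x_seq):
            # x_seq: (seq_len, batch, dim)
            h = torch.zeros(x_seq.size(1), self.cell.hidden_size)
            for x in x_seq:
                h = self.cell(x, h)  # internal state persists across steps
            return h

    # Transformer-style autoregressive loop (model is a hypothetical
    # stand-in): no hidden state flows between steps; the only "feedback"
    # is the sampled token re-entering the input sequence.
    def generate(model, tokens, steps):
        for _ in range(steps):
            logits = model(tokens)  # fresh full forward pass each step
            next_tok = logits[:, -1].argmax(-1, keepdim=True)
            tokens = torch.cat([tokens, next_tok], dim=1)
        return tokens

In the second loop, everything the model "thinks" must be serialized into discrete output tokens before it can influence the next step; there is no continuous internal channel like the RNN's hidden state.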


