
The core thesis seems valid: "AI bots seem intelligent, because they’ve achieved the ability to seem coherent in their use of language. But that’s different from cognition."

As it happens, LLMs work comparatively well with code. Is this because code doesn't refer (much) to the outside world and fits well with the workings of a statistical machine? If so, LLM output can also be verified more easily: by expert inspection, compiling, type-checking, linting, and running it. Although there might still be hidden bugs that only show up later.



> large language models work comparatively well with (programming) languages

What else would they be good at?



