
What is funny is that when asked, the current LLMs/AIs do not believe in an AGI. Here are some of the readings you can do about the AGI fantasy:

- Gödel-style incompleteness and the “stability paradox”

- Wolfram's Principle of Computational Equivalence (PCE)

One of the red flags is the human brain itself. We have far more neurons than we are currently using. The limit to intelligence may well be mathematical, in which case adding more neurons/transistors will not yield more intelligence.

The current LLMs will prove useful, but since the models are already out there, if this is a maximum, the ROI will be exactly zero.



The fact that the human brain exists is proof that "Gödel-style incompleteness" and "Wolfram's principle" are not barriers to AGI.



