Hacker News

> We would need a new hardware paradigm.

It's not even that. The architectures behind LLMs are nowhere near close to that of a brain. The brain has multiple entry points for different signals and uses different signaling mechanisms across different regions. Even a rodent's brain is far more complex than any LLM.



LLM 'neurons' are not single-input/single-output functions. Most 'neurons' are matrix-vector (mat-vec) computations that combine the products of dozens or hundreds of prior-layer activations and weights.
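To make that concrete, here is a minimal sketch (toy sizes and random weights, not any specific model's parameters) of a mat-vec layer: each output "neuron" sums the products of every prior activation with its own weight row, so it has many inputs, not one.

```python
import random

random.seed(0)
d_in, d_out = 8, 4  # hypothetical toy layer sizes
W = [[random.uniform(-1, 1) for _ in range(d_in)] for _ in range(d_out)]
b = [0.0] * d_out                              # bias per output neuron
x = [random.uniform(-1, 1) for _ in range(d_in)]  # prior-layer activations

def layer(W, b, x):
    """Mat-vec plus ReLU: each 'neuron' combines all d_in weighted inputs."""
    return [max(0.0, sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

h = layer(W, b, x)  # one layer's worth of 'neuron' outputs
```

In a real transformer the same pattern appears at much larger scale (thousands of inputs per neuron), but the structure is the same: a weighted sum over many prior outputs, then a nonlinearity.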

In our lane, the only important question to ask is "Of what value are the tokens these models output?", not "How closely can we emulate an organic brain?"

Regarding the article, I disagree with the thesis that AGI research is a waste. AGI is the moonshot goal. It's what motivated the fairly expensive experiment that produced the GPT models, and we can point to all sorts of other harebrained goals that ended up producing revolutionary changes.


> "Of what value are the tokens these models output?" not "How closely can we emulate an organic brain?"

Then you build something that is static and does not learn. That is about as far from AI as you can get; you're just building a goofy search engine.



