An LLM is simply a model which, given a sequence, predicts the rest of the sequence.
You can accurately describe any AGI or reasoning problem as an open domain sequence modeling problem. It is not an unreasonable hypothesis that brains evolved to solve a similar sequence modeling problem.
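To make "predicts the rest of the sequence" concrete, here's a minimal sketch (not how any real LLM is implemented): the "model" is just bigram counts over a toy corpus, standing in for the learned next-token distribution, but the decoding loop is the same idea of repeatedly predicting the next token given everything so far.

```python
# Minimal sketch of "given a sequence, predict the rest of the sequence".
# The "model" is bigram counts over a toy corpus -- a stand-in for the
# learned next-token distribution an actual LLM would provide.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which token tends to follow which token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_rest(prompt, max_new_tokens=8):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        candidates = following.get(tokens[-1])
        if not candidates:
            break
        # Greedy decoding: always append the most likely next token.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(predict_rest("the dog"))  # continues the prompt one token at a time
```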
> It is not an unreasonable hypothesis that brains evolved to solve a similar sequence modeling problem.
The real world is random and requires making decisions on incomplete information in situations that have never happened before. The real world is not a sequence of tokens.
Consciousness requires instincts in order to prioritize the endless streams of information. One thing people don't want to accept about any AI is that humans always have to tell it WHAT to think about. Our base reptilian brains are the core driver behind all behavior. AI cannot learn that.
How do our base reptilian brains reason? We don't know the specifics, but unless it's magic, it's determined by some kind of logic. I doubt that logic is so unique that it can't eventually be reproduced in computers.
Reptiles didn't use language tokens, that's for sure. We don't have reptilian brains anyway; it's just that part of our brain architecture evolved from a common ancestor. The stuff that might function somewhat similarly to an LLM is most likely in the neocortex. But that's for neuroscientists to figure out, not computer scientists. Whatever the case is, it had to have evolved. LLMs are intelligently designed by us, so we should be a little cautious in making that analogy.
"Consciousness requires instincts in order to prioritize the endless streams of information. "
What if "instinct" is also just (pretrained) model weight?
The human brain is very complex and far from understood, and it definitely does NOT work like an LLM. But it likely shares some core concepts. Neural networks were inspired by brain synapses, after all.
> What if "instinct" is also just (pretrained) model weight?
Sure - then it will take the same amount of energy to train as our reptilian and higher brains took. That means trillions of real-life experiences over millions of years.
Not at all: it took life hundreds of millions of years to develop brains that could work with language, and it took us tens of thousands of years to develop languages, writing, and universal literacy. Now computers can print it, visually read it, speech-to-text transcribe it, write/create/generate it coherently, text-to-speech output it, translate between languages, rewrite in different styles, and explain other writings, and that only took - well, roughly one human lifetime since computers became a thing.
Information is a loaded word. Sure, you can say that based on our physical theories you can think of the world that way, but information is what's meaningful to us amongst all the noise of the world - meaningful for goals like survival and reproduction inherited from our ancestors. Nervous systems evolved to help animals decide what's important to focus on. It's not a premade data set; the brain makes it meaningful in the context of its environment.
It depends on the goal: epicycles don't tell you about the nature of heavenly bodies - but they do let you keep an accurate calendar, for a reasonable definition of accurate. I'm not sure whether I need a deep understanding of intelligence to gain economic benefit from AI.