But it’s also based on neurons with far more complex behavior than artificial neurons, and it involves other dynamic systems as well: neurochemicals, effects across the nervous system and the rest of the body (with the gut seeming more and more relevant), various EEG patterns, and most likely quantum effects.
I personally wouldn’t rule out that it could be emulated in a different substrate, but I think calling it “an algorithm” definitely stretches and misapplies the term.
If it performs a computation, it is by definition running some algorithm regardless of how it's implemented in hardware / wetware. How is it a stretch?
The only way our brains could be non-algorithmic is if something like a soul is a real thing that actually drives our intelligence.
Why? Rain is not algorithmic, clouds are not algorithmic, waves in the sea are not algorithmic, yet they are entirely physical processes that have nothing to do with souls.
Heaven forbid. I'd go to jail for such a blasphemous transgression of common law, wouldn't I? Thank you, kind stranger, for reminding me of the legislation.
Ah, ok. Here you use the word “explain”, which implies a descriptive, reductive action rather than an extrapolative, constructive one. As in, it can explain what it has “read” (and it has obviously “read” far more than any human), but it can’t necessarily extrapolate beyond that or use it to find new truths. To me, reasoning is more about the extrapolative, truth-finding process, i.e. “wisdom” from knowledge rather than just knowledge. But maybe my definition of “reasoning” isn’t quite right.
Edit: I should probably define reasoning solely as “deductive reasoning”, in which case perhaps it is better than humans, though that still seems like a premature claim. Non-deductive reasoning, on the other hand, I have yet to see from it. I personally can’t imagine how it could do so reliably (from a human perspective) without real-world experiences and perceptions. I’m the sort who believes a true AGI would require a highly perceptual, space-occupying organ. In other words, it would have to be, and “feel”, embodied in time and space in order to perform other forms of reasoning.
(In case it was missed, I’ve added a relevant addendum to my previous comment.)
Not sure an example is needed because I agree it “explains” better than pretty much everyone. (From my mostly lay perspective) It essentially uses the prompt as an argument in a probabilistic analysis of its incredibly vast store of prior inputs, transforming them into an output that at least superficially satisfies the prompter’s goals (see the sketch after this comment). This is cool and useful, to say the least. But this is only one kind of reasoning.
A machine without embodied perceptual experiences simply cannot reason to the full extent of a human.
(It’s also worth remembering that the prompter (very likely) has far less knowledge of the domain of interest and far less skill with the language of communication, so the prompter is generally quite easily impressed regardless of the truth of the output. Nothing wrong with that necessarily, especially if it is usually accurate. But again, worth remembering.)
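To make that description a little more concrete, here is a minimal, purely illustrative sketch of a probabilistic next-token loop in Python. The toy bigram table and the generate function are made up for illustration, not how GPT-4 or any real product is implemented; real models condition on far more context and a vastly larger vocabulary, but the shape of the process is similar: condition on the text so far, sample the next token, repeat.

    import random

    # Hypothetical toy "model": learned probabilities P(next word | current word).
    # A real LLM learns a far richer distribution over tokens.
    BIGRAMS = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.4, "ran": 0.6},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(prompt_words, max_new=3):
        words = list(prompt_words)
        for _ in range(max_new):
            dist = BIGRAMS.get(words[-1])
            if not dist:  # no statistics for this context, stop generating
                break
            candidates, weights = zip(*dist.items())
            words.append(random.choices(candidates, weights=weights)[0])
        return " ".join(words)

    print(generate(["the"]))  # e.g. "the cat sat down"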
I have no idea what happened. I don’t even know what you expect me to describe. Someone feels great about something? And I don’t know what it has to do with reasoning.
That’s the point. You don’t know exactly what happened. So you have to reason your way to an answer, right or wrong.
I’m sure it elicited ideas in your head based on your own experiences. You could then use those ideas to ask questions and get further information. Or you could simply pick an answer and then delve into all the details and sensations involved, creating a story based on what you know about the world and the feelings you’ve had.
I could have created a more involved “prompt story”, one with more details but still somewhat vague. You would probably have either jumped straight to a conclusion about what happened or asked further questions.
Something like “He kicked a ball at my face and hit me in the nose. I laughed. He cried.”
Again, vague. But if you’ve been in such a situation you might have a good guess as to what happened and how it felt to the participants. ChatGPT would have no idea whatsoever as it has no feelings of its own with which to begin a guess.
Consider poetry. How can ChatGPT reason about poetry? Poetry is about creating feeling; the content is often beside the point. Many humans “fail” at understanding poetry, especially children, but there are of course many humans that “get it”, especially after building up enough life experience. ChatGPT could never get it.
Likewise for psychedelic or spiritual experiences. One can’t explain such an experience to someone who has never had it, and ChatGPT will never have it.
You're talking about describing your memories of your inner experiences. Memories transform with time; sometimes I'm not sure whether what I think I remember actually happened to me, or whether it's something I read or saw in a movie, or something someone else described to me. Fake memories like that might feel exactly the same as the things I actually experienced.
GPT-4 has a lot of such fake memories. It knows a lot about the world, and about feelings, because it has "experienced" a lot of detailed descriptions of all kinds of sensations. Far more than any human has actually experienced in their lifetime. If you can express it in words, be it poetry or otherwise, GPT-4 can understand it and reason about it just as well as most humans. Its training data is equivalent to millions of life experiences, and it is already at a scale where it might be capable of absorbing more of these experiences than any individual human.
GPT-4 does not "get" poetry in the same way a human does, but it can describe very well the feelings a human is likely to feel when reading any particular piece of poetry. You don't need to explain such things to GPT-4 - it already knows, probably a lot more than you do. At least in any testable way.
Imagine a world without words. No need to imagine really. It exists. It’s everywhere. It’s the core. It’s what words represent, but words can only represent it to an entity that has experienced it to some degree. ChatGPT “knows” nothing about it. You do. Whether you recognize it or not.
ChatGPT is a machine, an algorithm, a recombinator of symbols. It doesn’t know what the symbols refer to, because each symbol necessarily refers to another symbol until you finally reach a symbol that refers to a shared, real experience…perhaps (Hello Wittgenstein!). And ChatGPT has no experience. Just symbols. It can’t intuit anything. It can’t feel anything. Even if you put quotes around “feel”, what does that even mean for a software algorithm running on hardware that does not feed continuous, variable electrical sensations to the algorithm? It only feeds discrete symbols. Do you feel the number 739? Or do you “feel” it? Um, what? Whatever inner experience 739 happens to produce in you is grounded in some real experience in the past. Likewise, any fake memories you have that somehow seem real are still grounded in real feelings at some point. You could trace this back ad infinitum. If you are alive, you have experience. But ChatGPT has no experience, no grounding.
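For what it’s worth, the “discrete symbols” point can be illustrated with a tiny, hypothetical sketch: before a model processes any text, the text is mapped to integer token IDs, and those integers are all it ever receives. The vocabulary below is made up for illustration; real tokenizers do the same thing at a much larger scale.

    # Hypothetical toy vocabulary mapping words to integer IDs.
    VOCAB = {"i": 1, "feel": 2, "the": 3, "rain": 4, "<unk>": 0}

    def encode(text):
        # Unknown words fall back to the <unk> ID.
        return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

    def decode(ids):
        reverse = {i: w for w, i in VOCAB.items()}
        return " ".join(reverse[i] for i in ids)

    ids = encode("I feel the rain")
    print(ids)          # [1, 2, 3, 4]  the model only ever sees these integers
    print(decode(ids))  # "i feel the rain"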
Problem here might be that we are trying to use words and logic to describe something that cannot be described by either.