What if you do? LLMs don't have reflexive output or internal streams of thought; they are simply (complex) processes that produce a stream of tokens from an input stream of tokens. They don't have a special response to tokens that indicate higher-level thinking to humans.
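Concretely, the whole mechanism is just an autoregressive sampling loop. Here's a minimal sketch in Python of that "tokens in, tokens out" process; the toy vocabulary and `next_token_distribution` are made-up stand-ins for the real trained network, not any actual API:

```python
import random

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_distribution(tokens):
    # Hypothetical stand-in for the trained network: given the token
    # sequence so far, return a probability for each vocabulary token.
    # In a real LLM this is a transformer forward pass; here it's a
    # uniform distribution just to make the loop runnable.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt_tokens, max_new_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        # Sample the next token and append it. There is no separate
        # "thinking" channel: one token after another is the entire output.
        tok = random.choices(choices, weights=weights, k=1)[0]
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens

print(generate(["the", "cat"]))
```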
LLMs seem to me to be the "internal streams of thought". That is, it's not that LLMs are missing an internal process that humans have, but rather that humans have an entire process of conscious thinking built on top of something akin to an LLM.
I agree completely, and I think this is where a lot of people get tripped up. There's no reason to think an AGI needs to be an LLM alone; the LLM might just be a key building block.
Well put, and I agree. My belief is that if a typical person were drugged or otherwise induced to blurt out their unfiltered thoughts as they crossed their mind, the incoherence and false confidence on display would look a lot like an LLM hallucinating.
The way I phrased it isn't exactly structured to admit any kind of evidence, so let me destructure it. My observation is that:
- The argument that LLMs are missing introspection / inner voice is based on attempting to compare LLMs directly with human minds.
- Human minds have conscious and unconscious parts; for many people, part of the boundary between the unconscious and conscious mind manifests as the "inner voice", the one that makes verbalized thoughts "appear" in their head (or, perhaps more precisely, become consciously observed).
- Based entirely on my own experience using GPT-3.5 and GPT-4, and on my own introspection, I feel that GPT-4 bears a lot of resemblance to my inner voice in how it functions.
- Therefore, I propose that comparing LLMs directly to human minds is unproductive, and that it's much more interesting/useful to compare them to the inner voice in human minds: the part of the boundary between the unconscious and the conscious that uses natural language for I/O.