You don’t know this. I don’t feel like I generate thoughts in sequence, for me it feels hierarchical.
> can't arbitrarily mutate what came before
Uhh… what?
Do you remember your memories as a child?
Or what you ate for breakfast 3 weeks ago?
Have you ever misremembered an event or half remembered a solution to a problem?
The information in human minds is entirely mutable. Minds are not like computers…
> It's not a bag full of NULLs that just pulls next words from a look-up table.
Funny enough, the attention mechanism that’s popular right now is effectively lots and lots of stacked soft lookup tables. That’s how it’s taught as well (the Q, K, and V literally stand for query, key, and value).
Tho I don’t think that’s a requirement for LLMs in general.
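To make the “stacked lookup tables” point concrete, here’s a minimal sketch of scaled dot-product attention in NumPy (my own illustration, not anyone’s production code): each query scores every key, and the softmaxed scores weight a blend of the stored values. A hard lookup table is just the limiting case where the scores are one-hot.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Soft lookup: score each query against every key,
    # then return a score-weighted mix of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# With near-one-hot scores this degenerates into an ordinary table lookup:
keys = np.eye(3)                      # 3 "addresses"
values = np.array([[10.], [20.], [30.]])
query = np.array([[0., 100., 0.]])    # strongly matches key 1
out = attention(query, keys, values)  # ~[[20.]]
```

The interesting part is the soft middle ground: a query that partially matches several keys retrieves a blend of their values, which is not something a classic lookup table does.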
I find that a lot of people who half understand cognition and understand computing look at LLMs and work backwards to convince themselves that the models are “thinking” or performing cognitive functions the way we humans do. It’s personification bias.
> Do you remember your memories as a child? Or what you ate for breakfast 3 weeks ago?
For me, conjuring up a childhood event feels like pulling whatever comes out of my nebulous 'memory' fresh into context at the moment I’m thinking about it, along with whatever thoughts I have about it (how embarrassed I was, how proud I felt because of X, etc.). As that context fades into the past, some of those thoughts may get mixed back into the region of my 'memory' associated with that event.