> in a way that is often identifiable (by humans) as not having been written by humans.
You should check out reddit sometime. It's been nearly twenty years (not hyperbole) of everyone accusing everyone else of being a bot/shill. Humans are utterly incapable of detecting such things. They're not even capable of detecting Nigerian prince emails as scams.
> not because AI writing is reliably passable.
"Newspaper editor" used to be a job because human writing isn't reliably passable. I say this not to be glib, but rather because sometimes it's easy for me to forget that. I have to keep reminding myself.
Also, has it not occurred to anyone that deep down in the brainmeat, humans might actually be employing some sort of organic LLM when they engage in writing? That technology actually managed to imitate that faculty at some low level? So even when a human really writes something, it's still an LLM doing so? When you type in the replies to me, are you not trying to figure out what the next word or sentence should be? If you screw it up and rearrange phrases and sentences, are you not doing what the LLM does in some way?
> Also, has it not occurred to anyone that deep down in the brainmeat, humans might actually be employing some sort of organic LLM when they engage in writing?
This is a fairly common take, along with the idea that AI image generators are just doing what humans do when they "learn from examples". But I strongly believe it's a fallacy. What generative AI does is analogous to what humans do, but it's still just an analogy. If you want to see this in action, it's better to look at the way generative AI fails than the way it succeeds: when it makes mistakes in text or images, the mistakes are very much not the kind of mistakes that humans make, because the process behind the scenes is very different.
Yes, obviously when humans write, they take into account context and awareness of what words naturally follow other words, but it seems unlikely we've learned to write by subconsciously arranging all the words we've encountered into multidimensional vector space and performing vector math operations to arrive at the next word based on the context window we're subconsciously constructing. We learn to write in a very different way.
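For contrast, the mechanical process being described can be sketched in a few lines. This is a deliberately crude toy: the three-dimensional embeddings are made up for illustration, a plain average stands in for attention, and real models operate at vastly higher dimension over tens of thousands of tokens.

```python
# Toy next-word prediction as literal vector math over made-up embeddings.
EMB = {
    "the": [0.1, 0.0, 0.2],
    "cat": [0.9, 0.1, 0.0],
    "sat": [0.85, 0.3, 0.1],
    "mat": [0.8, 0.2, 0.1],
    "on":  [0.1, 0.1, 0.9],
}

def next_token(context):
    # Crude "context window": average the embeddings of the words so far.
    ctx = [sum(EMB[w][i] for w in context) / len(context) for i in range(3)]
    # Score every candidate word by dot product with the context vector.
    scores = {w: sum(a * b for a, b in zip(ctx, v))
              for w, v in EMB.items() if w not in context}
    # Greedy decoding: emit the highest-scoring word.
    return max(scores, key=scores.get)

print(next_token(["the", "cat"]))  # -> "sat"
```

The point of the sketch is only that the machinery really is arithmetic over vectors end to end; whether brains implement anything equivalent is exactly the question in dispute.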
It's truly amazing that generative AI writes as well as it does, but we reason about concepts and generative AI reasons about words. Personally, I'm skeptical that the problems LLMs have with "hallucinations" and with creating definitionally median text* can be solved by making LLMs bigger and faster.
*I did see the comment complaining that it's not mathematically accurate to say that LLMs produce average text, but from my understanding of how generative AI works as well as my recent misadventures testing an AI "novel writer," it's a decent approximation of what's going on. Yes, you can say "write X in the style of Y," but "write X but make it way above average" is not actually going to work.
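One mechanical reason the output trends toward the typical: decoding samples from a softmax over per-token scores, and common settings (low temperature, or greedy decoding) concentrate probability on the most statistically expected continuation. A toy sketch with made-up scores for three candidate words:

```python
import math

def softmax(logits, temperature=1.0):
    # Dividing by temperature before exponentiating sharpens (T < 1) or
    # flattens (T > 1) the distribution over candidate next tokens.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # made-up model scores for three candidate words

print(softmax(scores, temperature=1.0))  # roughly [0.63, 0.23, 0.14]
print(softmax(scores, temperature=0.3))  # roughly [0.96, 0.03, 0.01]
```

Prompting "make it way above average" doesn't change this underlying distribution; it only shifts which region of training-typical text gets sampled from.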
Either the LLM is the most efficient way to generate text, or there's some magic algorithm out there that evolution stumbled upon a million years ago and of which we haven't seen even a hint. In which case, you'd be right, this is a fallacy.
Or, brainmeat can't do it better or more efficiently, and either uses the same techniques or something even worse. The latter seems unlikely; humans still do pretty well at generating text (gold standard, even).
> it's better to look at the way generative AI fails than the way it succeeds: when it makes mistakes in text or images, the mistakes are very much not the kind of mistakes that humans make, because the process behind the scenes is very different.
But are you looking at "mistakes" that are just little faux pas, or the ones where people with dementia, bizarre brain damage, or a head full of hallucinogens incorrectly compute the next word? The former offer little insight; they amount to poor taste in word choice, lack of eloquence, and vulgar inclinations.
> but it seems unlikely we've learned to write by subconsciously arranging all the words we've encountered into multidimensional vector space and performing vector math operations to arrive at the next word
You think I meant that someone learns to do that at 2 years old, rather than that the brain has already evolved with the ability to do vector math operations or some true equivalent? I'm not talking about some pop psych level "subconscious" thing, but an actual honest to god neurological level faculty.
> but we reason about concepts and
Wander into Walmart next time, close your eyes briefly and extend your psychic powers out to the whole building, and tell me if you truly believe, deep down in your heart, that the humans in that store are reasoning about concepts even once a week. That many, if not most, reason about concepts even once a month. I dare you, just go some place like that, soak it all in.
Human reason exists, from time to time, here and there. But most human behavior can be adequately simulated without any reason at all.
> Or, brainmeat can't do it better or more efficiently, and either uses the same techniques or something even worse. The latter seems unlikely; humans still do pretty well at generating text (gold standard, even).
Considering we use something like a thousand times the compute, "something even worse" seems plausible enough.
I think we have plenty of evidence that humans have the ability to understand, while chatbots lack such an ability. Therefore, I'm inclined to think that we don't employ some sort of organic LLM but something completely different.