Calling LLMs "pattern matching text machines" is a catchy thought-terminating cliché, which amounts to calling a human brain a "blob of fats, salts, and chemicals". It is technically true, but it misses the forest for the trees, and ignores the fact that this mere pattern matching text machine is doing things people said were impossible a few years ago. The simplicity and seeming mundanity of a technology have no bearing on its potential or emergent properties. A single termite, observed by itself, could never reveal what its kind could build when assembled together with its brethren.
I agree that there are lots of limitations to current LLMs, but it seems somewhat naive to ignore the rapid pace of improvement over the last five years and the emergent properties of AI at scale, especially in doing things claimed to be impossible only years prior (remember when people said LLMs could never do math, or that image models could never get hands or text right?).
Nobody understands the limitations of current LLMs with greater clarity or specificity than the people working in labs right now to make them better. The AGI prognostications aren't suppositions pulled out of the realm of wishful thinking; they exist because of fundamental revelations that have occurred in the development of AI as it has scaled up over the past decade.
I know I claimed that HN's hatred of AI was an emotional one, but there is an element of their reasoning, too, that leads them down the wrong path. By seeing more flaws in these AI systems than the average person does, and seeing the tact with which companies describe their AI offerings to make them seem more impressive than they (currently) are, you extrapolate that sense of "figuring things out" into a robust model of how AI really is and must be. In doing so, you pattern match AI hype to web3 hype and assume that since the hype is similar in certain ways, it must also be a bubble/scam just waiting to pop so that all the lies are revealed. This is the same pattern-matching trap people accuse AI of falling into: seeing through the flaws of an LLM's output even as it claims to have solved a problem correctly.
No, it's really not - it's exactly what they are. Multi-dimensional pattern matching machines, using massive databases put together from resources like Stack Overflow and Chegg (every cheater's go-to for assignment answers), massive copyright theft, etc. If that weren't the case, there wouldn't be jobs right now writing answers to feed into the databases.
And that's actually quite useful - given that most of this material is paywalled or blocked from search engines. It's less useful when you look at code examples that mix different versions of Python and have comments referring to figures on "the previous page". I'm afraid it becomes very obvious, when you look under the hood at the training sets themselves, just how all of this is being achieved.
Look into every human’s brain and you’d see the same thing. How many humans can come up with novel, useful patents? How many novel useful patents themselves are just variations of existing tech?
All intelligence is pattern matching, just at different scales. AI is doing the same thing human brains do.
> Look into every human’s brain and you’d see the same thing.
Hard not to respond to that sarcastically. If you take the time to learn anything about neuroscience, you'll realise what a profoundly ignorant statement it is.
If that is the case, where are the LLM-controlled robots, where an LLM is simply given access to a bunch of sensors and servos and learns to control them on its own? And why are jailbreaks a thing?
If tomorrow, all human beings ceased to exist, barring any in-progress operations, LLMs would go silent, and the machinery they run on would eventually stop functioning.
If tomorrow, all LLMs ceased to exist, humans would carry on just fine, and likely build LLMs all over again, next time even better.