No, it wasn't; language itself didn't even exist at one point. It couldn't have been inferred into existence from training data, because no such examples existed beforehand. Now we have a dictionary of tens of thousands of words describing high-level ideas, abstractions, and concepts that someone, somewhere along the line, had to invent.
And I'm not talking about imitation, nor am I interested in semantic games; I'm talking about raw inventiveness. Not a stochastic parrot looping through a large corpus of information and a table of weights on word pairings.
Has AI ever managed to learn something humans didn't already know? It has all the physics textbooks in its training data. Can it make novel inferences from that? How about in math?
> No, it wasn't, language itself didn't even exist at one point.
Language took dozens of millennia to form, and animals have long had vocalizations. It seems like a natural layer built on top of existing features.
> Has AI ever managed to learn something humans didn't already know?
AlphaZero invented whole new categories of strategy for games like Go, where we had previously thought almost all viable tactics had been discovered. AIs are finding protein structures we never thought about, which will blow up the fields of medicine and disease research in a few years, once the first trials are completed.
Sure, but in a simulated evolutionary algorithm, you can hit a few hundred generations in a matter of seconds.
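To make that "hundreds of generations in seconds" claim concrete, here is a toy genetic algorithm in Python. Everything in it (the all-ones bit-string target, the population size, the mutation rate) is an illustrative choice of mine, not anything from this thread; it's just a minimal sketch showing that simulated evolution over a trivial fitness landscape is cheap.

```python
import random
import time

# Toy genetic algorithm: evolve a bit string toward all ones.
# All parameters below are illustrative choices, not from the thread.
TARGET_LEN = 50
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 300

def fitness(genome):
    return sum(genome)  # count of 1-bits; maximum is TARGET_LEN

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover at a random cut.
    cut = random.randrange(TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

start = time.perf_counter()
for gen in range(GENERATIONS):
    # Keep the fitter half as parents, refill the rest with offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
elapsed = time.perf_counter() - start

best = max(population, key=fitness)
print(f"{GENERATIONS} generations in {elapsed:.3f}s, "
      f"best fitness {fitness(best)}/{TARGET_LEN}")
```

On any modern machine the 300 generations finish in a small fraction of a second, which is the point: wall-clock time is not the bottleneck for simulated evolution the way it was for biological evolution.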
Indeed, the identification of an abstraction, followed by a definition of that abstraction and an enshrinement of the concept in the form of a word or phrase, in and of itself, shortcuts the evolutionary path altogether. AI isn't starting from scratch: it's starting from a dictionary larger than any human alive knows and in-memory examples of humans conversing on nearly every topic imaginable.
We never thought "all possible tactics" had been discovered with Go. We quite literally understood that Go had a more complex search space than Chess, with far more possible moves and outcomes. And I don't think anyone has any kind of serious theorem that "all possible tactics" have been discovered in either game, to this day.
That being said, Go and Chess are enumerable games with deterministic, bounded complexity and state space.
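To illustrate what "enumerable with deterministic, bounded state space" means in practice, here is a minimax search that exhaustively solves tic-tac-toe, a game small enough to actually enumerate. This is my own toy sketch, not anything claimed in the thread; Go and Chess have the same structure in principle, just with astronomically larger trees.

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board, indexed 0..8 row-major.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # full board, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            values.append(minimax(child, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

# Exhaustive search confirms tic-tac-toe is a draw under perfect play.
print(minimax("." * 9, "X"))  # 0
```

The same exhaustive procedure is intractable for Go only because of the size of the tree, not because anything about the game is open-ended or nondeterministic; that bounded structure is exactly what made it amenable to self-play search.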
The protein folding example is a neat one, and I definitely think it's interesting to see what develops there. However, protein folding has been modeled with Markov state models for decades. The AlphaFold breakthrough is fantastic, but it was already known how to generate structural models of proteins: it was just computationally expensive.
It was also carefully crafted by humans to achieve what it did: https://www.youtube.com/watch?v=gg7WjuFs8F4. So this is an example of humans using neural network technology that humans invented to achieve a desired solution to a known problem that they themselves conceived. The AI didn't tell us something we didn't already know. It was an expert system built with teams of researchers in the loop the whole way through.
Coming up with new language is rarely coming up with concepts that didn't exist until the word did. We come up with high-level abstractions because there already exists a material system to be described and modeled. Language that doesn't describe anything that already exists is more like babbling.
Not really, no. There are plenty of intangible abstractions that don't describe material systems. Take imaginary numbers, for example. Or the concept of infinity or even zero, neither of which exists in the physical world.
The reason why "naming things" is the other hard problem in computer science, after cache invalidation, is that the process of identifying, creating, and describing ideal abstractions is itself inherently difficult.