>This scientific field is called universal artificial intelligence
How many times do we need to re-define the same concept? Artificial Intelligence, Human Level Artificial Intelligence, Artificial General Intelligence, Strong Artificial Intelligence etc....
Let's pick one as a community and stick with it. I thought AGI was in the lead recently, what with the conference, the journal, and the volume of web searches, but apparently that isn't enough.
Beyond that, I would like to see how they specialize their algorithm for the narrow application of the Pac-Man game while keeping it generalizable. My guess is that they don't, and this algorithm ends up being a "starting point" for narrow AI applications to rest on. In that case it's fine and interesting, but it doesn't pass the test of integrating specificity within a general learning model.
Because they are not the same concept. Universal Artificial Intelligence is not the same as Human-Level Artificial Intelligence.
There is no reason to assume that human intelligence is universal. Our intelligence is biased towards picking berries and avoiding bears in a three-dimensional world.
> How many times do we need to re-define the same concept? Artificial Intelligence, Human Level Artificial Intelligence, Artificial General Intelligence, Strong Artificial Intelligence etc....
Every time the name gets co-opted to mean something else. These days "artificial intelligence" == "machine learning and natural language processing" which is most definitely not what TFA is about.
No. There is a perfectly good term which has been around for a very long time: Strong AI. In no context does this mean "ML + NLP" (which, by the way, is hardly what AI itself means). I think terms like AGI are just rebranding.
I don't think Strong AI has quite the history you think it does. It's one of those later coinages. Also, it has a separate, quasi-related meaning in philosophy of mind...
Strong AI is one of those hand-wavey things that makes you sound like you're talking about something well defined, when you actually aren't. Has always struck me as counter-productive.
I like computational intelligence since we still don’t really know what intelligence is and whether our kind of intelligence can be reproduced artificially (although there are some hints that it can be done).
OK, there's no conclusive proof, since no one has actually done it yet, but as far as I know no one seriously debates this anymore. It would be incredibly surprising, to say the least, if the brain turned out to work completely outside the known laws of physics.
We don't know whether physical reactions in our brains allow for and depend on calculation using real numbers, which can only be approximated using our current computing technology.
> How many times do we need to re-define the same concept?
I sometimes think everybody's still afraid there'll be another AI Winter (and/or that the first one never ended), and so they're anxious to come up with a new name for what they're doing every so often as a sort of pre-emptive dodge.
I have been following this for a while and have taken the time to understand it in a little detail. While it does not provide the 'answers', it does frame the question of 'what is an intelligent machine?' in a very precise manner. It's interesting work; what it needs now is for someone to work out how to build much better models and plug them into the framework provided by AIXI.
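For anyone who hasn't seen it: the precision comes from AIXI being written down as a single expectimax expression over all computable environments, weighted by a simplicity (Solomonoff-style) prior. Roughly, following Hutter's formulation (where m is the planning horizon, U a universal Turing machine, q a candidate environment program, and \ell(q) its length):

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\big[\, r_t + \cdots + r_m \,\big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs q is what makes it "universal" (and incomputable in general), which is exactly why practical work has to substitute tractable approximate models into this slot.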
And when our assumptions turn out to be wrong, should we continue working on it even though we know it can never work? The problem is that every new term is forged with assumptions, because the very question of what intelligence is remains the subject of ongoing debate. (I don't even like the one presented here, even though it's pretty good as far as these things go.)
I sympathize though, the nomenclature is polluted with the corpses of would-be giants and makes it hard to talk about what we can all easily recognize as the same broad-strokes effort.