Is Go-Space understood well enough to know either way?
The OMG-AI people claim that AGI would be dangerous because it would reliably innovate in new spaces and out-predict humans.
So a true super-AGI would make Go moves that were unexpected and incomprehensible, with some percentage of misleading fake-outs, but it would still win most or all of the time.
If human exploration of Go-Space is already close to the God's-hand bound, this can't be true.
My intuition (and it's really just that) is that Go-Space is large enough that AI would be able to outplay humans while still not even beginning to approach "perfect" play. If so, then I would also expect that humans should be able to follow the lead of AI into new areas of Go-Space and outplay the AI (at least until the AI has a chance to learn and catch up).
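For a rough sense of that scale, a naive upper bound on the number of Go board positions can be computed directly (this is just the three-states-per-point bound; the true count of legal positions is smaller, though still astronomically large):

```python
# Naive upper bound on Go-Space: each of the 19x19 = 361 points can be
# empty, black, or white, giving at most 3**361 board positions.
# (Legality constraints shrink this, but not by enough to matter here.)
positions_upper_bound = 3 ** 361

# Print how many decimal digits that number has.
print(len(str(positions_upper_bound)))  # 173 digits, i.e. ~10^172
```

Even exploring a vanishingly small fraction of that space leaves room for play far short of perfection.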
We'll know whether this is the case in a couple of years: does the competition between human and AI go back and forth (unlike Chess, where, once AI was good enough to beat humans, it could do so reliably)?
Either way, it's interesting to note that AlphaGo had literally thousands of games to learn from to find weaknesses in human play, but Lee Sedol seems to have only needed 3 before he was able to find weaknesses in AlphaGo's play.
> Either way, it's interesting to note that AlphaGo had literally thousands of games to learn from to find weaknesses in human play, but Lee Sedol seems to have only needed 3 before he was able to find weaknesses in AlphaGo's play.
To be fair, we can't know how many games Sedol played in his own head to figure this out.