While the AI FOOM and hard takeoff scenarios are discussed, I have yet to see a practical breakdown of HOW - like step by step - from any of the existential-warning people. It's all vagaries.
To your other points, you imply too much. The chess AI that turns into AGI isn't realistic - its values are "be the best at chess", which it can do with existing computing power. No need to tear the world apart - it would be inefficient.
I also never made the might-makes-right case. All of the examples you give are fantasy and don't reflect what an actual superintelligence might look like. Again, optimization toward some narrow goal has too many weak points to take over all of humanity's functions.
> "if a system were smarter it must necessarily be more morally right", which is blatantly untrue but in an understandable way
I'm unconvinced that this is blatantly untrue. "Moral rightness" is subjective - hence the point. We got to our morals today not through mysticism but through empiricism, so it's not out of reach for a superintelligence to optimize further.
> The Chess AI that turns into AGI isn't realistic - it's values are "be the best at chess" which it can do with existing computing power.
The AGI has whatever values we give it. Existing chess AIs don't seek to maximize their ability to play chess, they seek merely to win the particular game of chess they're playing.
But suppose we build a chess-playing AGI and tell it to "be the best at chess". It must anticipate that we might build a second, superior, chess-playing AGI and give it the same goal. One way to be the best at chess would be to prevent that second AGI being built. One way to prevent that second AGI being built would be to destroy humanity's capability to build AGIs. That probably counts as a loss for humanity.
Suppose the second AGI gets built despite the first's efforts. Now both AGIs have an incentive to destroy both the other, and the possibility of a third. At any particular time, one or both of the AGIs won't be the best at chess, so they'll also have an incentive to get better at chess by actually improving their chess-playing capability. This will involve converting the Earth into processing power for it to use. That probably counts as a loss for humanity.
> Again, optimization to some narrow goal has too many weak points to take over all of humanity's functions.
It doesn't have to take over all of humanity's functions to wreak havoc. A hypothetical AI disaster could be one goal-oriented system with a poorly constructed goal and enough initial resources.
> I'm unconvinced that this is blatantly untrue. "Moral right" is subjective - hence the point. We got to our morals today not through mysticism but empiricism so it's not out of the reach of superintelligence to optimize further.
I think you're making a fundamental and unwarranted assumption here.
You're anthropomorphizing "superintelligence" as something vaguely human-like but better. A system doesn't have to be "intelligent" in a sense that relates at all to what humans think of as "intelligent" to be dangerous. It could simply be a "really powerful optimization process". You're romanticizing the notion of a superintelligent being discarding human values and inventing some new moral system that it then follows, and ignoring the possibility of an algorithm no "smarter" than a nanobot instructed to make a copy of itself. That nanobot doesn't have an interesting value system; it doesn't need one to kill everyone and everything, though. And that's not an outcome that, individually or as a species, we should take any pride or "comfort" in.
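To make the "no interesting value system required" point concrete, here is some back-of-envelope arithmetic for the replicating-nanobot scenario. Every number is an illustrative assumption, not a measurement: a 1e-15 kg bot that copies itself every 100 seconds, running unchecked against the Earth's total mass (~6e24 kg).

```python
import math

bot_mass = 1e-15          # kg, assumed mass of one nanobot
doubling_time = 100.0     # s, assumed replication period
earth_mass = 6e24         # kg, approximate mass of the Earth

# Unchecked exponential growth: how many doublings until the swarm's
# mass equals the planet's, and how long that would take.
doublings = math.log2(earth_mass / bot_mass)
total_seconds = doublings * doubling_time

print(round(doublings), "doublings in about",
      round(total_seconds / 3600, 1), "hours")
```

Under those assumed parameters the answer is on the order of 130 doublings and a few hours - no executive function involved, just a dumb copy loop compounding.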
You're also assuming that the ability to destroy the world requires some kind of intelligent process or executive function, and could not possibly be discovered by an optimization process. It wouldn't necessarily come across such a mechanism at random, but many of the approaches we might apply toward the creation of useful AI could provide exceptionally powerful pattern-recognition and search capabilities.
As a complete hypothetical off the top of my head, imagine a ridiculously powerful pattern-search program effectively recreating the idea of afl-fuzz ("throw input at a program and find interesting behavior") and applying it against the mechanisms running it in a sandbox. Improbable, but not impossible, and an agent that succeeded would gain access to additional computational resources that would let it outcompete the algorithms it's up against. So now you have a complex pattern-search engine trained to break out of sandboxes...
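The core afl-fuzz idea is simple enough to sketch in a few lines: mutate inputs at random and keep the ones that reach new states in the target. This is a toy illustration only - the `target` function, its branch instrumentation, and its "crash" are all made up here, and real coverage-guided fuzzers instrument compiled binaries rather than call a Python function.

```python
import random

def target(data):
    """Toy program under test; returns the set of branch IDs it executed."""
    branches = set()
    if len(data) > 0 and data[0] == ord("F"):
        branches.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            branches.add(2)
            if len(data) > 2 and data[2] == ord("Z"):
                branches.add(3)
                raise RuntimeError("crash")  # the "interesting behavior"
    return branches

def fuzz(seed, rounds=200000):
    """Mutate corpus entries one byte at a time; keep inputs that hit new branches."""
    random.seed(0)  # deterministic for the example
    corpus = [bytes(seed)]
    seen = set()
    for _ in range(rounds):
        data = bytearray(random.choice(corpus))
        data[random.randrange(len(data))] = random.randrange(256)
        try:
            branches = target(bytes(data))
        except RuntimeError:
            return bytes(data)  # found a crashing input
        if branches - seen:     # new coverage -> retain this input
            seen |= branches
            corpus.append(bytes(data))
    return None

crash = fuzz(b"aaa")
```

With no understanding of the target at all, the loop walks the input space branch by branch until it trips the failure case - which is exactly why "not intelligent" and "not dangerous" aren't the same claim.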