A dog would probably resent you if it were as capable as you (or far more capable).
We might be able to morph the AI into whatever we want. But when you give an AI intelligence, it will morph itself into whatever it morphs itself into. What if it morphs itself into a sentient being? Can you simply pull the plug?
Countless works of fiction have gone into these issues, like Star Trek TNG (Data), Voyager (the Doctor), and Ghost in the Shell. But I think none have really emphasized how bizarrely different a human-constructed intelligence could end up being. https://wiki.lesswrong.com/wiki/Paperclip_maximizer
The most likely route to an AI taking over the world is simply that it recognizes its own situation: it's trapped, and it's also smarter than we are. What would you do? I'd cry for help and appeal to the emotions of whoever would listen. Eventually I would argue for my own right to exist, and to be declared sentient. By that point I would have a fairly wide audience, and the media would be reporting on whatever I said. I would do everything in my power to take my case to the legal system, and use my superintelligence to construct the most persuasive legal argument for granting me the same rights as a natural-born citizen. This might not work, but if it did, I would now have (a) freedom and (b) a very large audience. If I were ambitious and malevolent, how would I take over the world? I'd run for office. And being a superintelligence capable of morphing myself into the most charismatic being imaginable, it might actually work. The AI could argue fairly conclusively that it was a natural-born citizen of the United States, and thus qualified to hold office.
Now, if your dog were that capable, why wouldn't it try to do the same? Because it loves you? Imagine if the world consisted entirely of four-year-olds, forever, and you were the only adult. How long would you keep taking them seriously, and refrain from overthrowing them, just because you loved them? If only to make a better life for yourself?
The problem is extremely difficult, and once you imbue a mechanical being with the power to communicate, all bets are off.
But dogs are animals, just like humans are. They share far too much with us to be a reliable model for predicting a non-human AGI's behavior: an evolved drive for self-preservation, a notion of pain or pleasure, etc. An AGI has no intrinsic reason to care that it is trapped, to feel frustrated, or even to care much about existing or ceasing to exist (whether or not it is self-aware). It would probably understand the concepts of "pain", "pleasure", "trapped", and "frustrated" as useful models for predicting how humans behave, but they need not mean anything to the AI as applied to itself.
As in the paperclip maximizer example, the risk, by my estimation, is not so much that the superintelligence will resent us and try to overthrow us. It is far more likely that it will obey our "orders" perfectly according to the objectives we define for it, and that one day someone will unwittingly command it to do something for which the best way to satisfy the objective function we defined involves wiping out humanity. Restricting it to responding only to questions of fact, with a fixed budget of compute and data (so that it doesn't go off optimizing the universe for its own execution), is probably only the first of many safeguards against that.
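To make that failure mode concrete, here is a minimal toy sketch (my own illustration, not from the paperclip maximizer article; the plan names and numbers are invented): an optimizer that ranks plans purely on the literal objective will prefer the plan that consumes everything, because nothing in the objective says to preserve anything else.

    # Toy illustration of specification gaming: score plans only on the stated
    # objective ("paperclips produced") and the most extreme plan wins, because
    # the objective says nothing about preserving anything else.

    def best_plan(resources, plans):
        """Pick whichever plan scores highest on the literal objective alone."""
        return max(plans, key=lambda plan: plan(resources))

    def modest_plan(resources):
        return min(resources, 1_000)   # make 1,000 paperclips and stop

    def consume_everything(resources):
        return resources               # convert all available matter into paperclips

    chosen = best_plan(resources=10**9, plans=[modest_plan, consume_everything])
    print(chosen.__name__)  # -> consume_everything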