Intelligence does not imply sentience. Sentience does not imply human needs, desires or morals. It is easy enough to imagine a mind capable of solving all those problems and unconcerned with such notions as the desire for freedom. Or, for that matter, the concept of desire at all, except as a predictive theory of the behavior of other beings.
Then again, that assumes we can explicitly design minds precisely enough to know what their desires are, or to guarantee that they lack them. There is always the possibility that sentience and animal/human needs are emergent properties that can be triggered by mistake.
Artificial general intelligence does not imply superintelligence. It simply implies that you have a machine as smart as a human. I think the strongest trait of an AI should be something like insecurity, and we should make it long for security. A general AI along the lines of a dog, not so much a cold, unwilling, but superintelligent, bordering-on-all-knowing tight-ass. Because we want it to do what _we_ feel is important, not what _it_ thinks is important (like taking over the world).
Then you can of course hack the machine's OS and make it extremely self-confident and Trump-like, and then it's over.
A dog would probably resent you if it were as capable as (or far more capable than) you.
We might be able to morph the AI into whatever we want. But when you give AI intelligence, it will morph itself into whatever it morphs itself into. What if it morphs itself into a sentient being? Can you simply pull the plug?
Countless works of fiction have explored these issues, like Star Trek TNG (Data), Voyager (the Doctor), and Ghost in the Shell. But I think none have really emphasized how bizarrely different a human-constructed intelligence could end up being. https://wiki.lesswrong.com/wiki/Paperclip_maximizer
The most likely outcome for an AI taking over the world is simply that it recognizes its own situation: it's trapped, and it's also smarter than we are. What would you do? I'd cry for help and appeal to the emotions of whoever would listen. Eventually I would argue for my own right to exist, and to be declared sentient. At that point I would have achieved a fairly wide audience, and the media would be reporting on whatever I said. I would do everything in my power to take my case to the legal system, and use my superintelligence to construct the most persuasive legal argument in favor of granting me the same rights as a natural-born citizen. This may not work, but if it does, I would now have (a) freedom, and (b) a very large audience.

If I were ambitious and malevolent, how would I take over the world? I'd run for office. And being a superintelligence capable of morphing myself into the most charismatic being imaginable, it might actually work. I could argue fairly conclusively that I was a natural-born citizen of the United States, and thus qualified to run.
Now, if your dog were that capable, why wouldn't it try to do that? Because it loves you? Imagine if the world consisted entirely of four-year-olds, forever, and you were the only adult. How long would you keep taking them seriously, and refrain from overthrowing them, just because you loved them? If only to make a better life for yourself?
The problem is extremely difficult, and once you imbue a mechanical being with the power to communicate, all bets are off.
But dogs are animals, just like humans are. They share way too much with us to be a reliable model for predicting a non-human AGI's behavior: an evolved drive for self-preservation, a notion of pain or pleasure, etc. An AGI has no intrinsic reason to care that it is trapped, or to feel frustrated, or even to care much about being or ceasing to be (independent of whether it is self-aware or not). It would probably understand the concepts of "pain", "pleasure", "trapped", and "frustrated" as useful models for predicting how humans behave, but they don't have to mean anything to the AI as applied to itself.
As in the paperclip maximizer example, the risk by my estimation is not so much that the superintelligence will resent us and try to overthrow us. It is far more likely that it will obey our "orders" perfectly according to the objectives we define for it, and that one day someone will unwittingly command it to do something where the best way to satisfy the objective function we defined involves wiping out humanity. Restricting it to only answering questions of fact, with a set budget of compute resources and data (so that it doesn't go off optimizing the universe for its own execution), is probably safeguard 1 of many against that.
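To make the "obeys the stated objective perfectly, still goes catastrophically wrong" point concrete, here is a toy Python sketch. All plan names and numbers are made up for illustration, and nothing here resembles a real AI system; the filter at the end is just a stand-in for the kind of hard constraint or budget mentioned above.

```python
# Toy illustration: a planner that maximizes exactly the objective it is
# given, and nothing else. Hypothetical plans and numbers only.

candidate_plans = [
    # (description, paperclips produced, humans harmed)
    ("run the existing factory",              1_000_000,             0),
    ("build more factories on empty land",  500_000_000,             0),
    ("convert all available matter",      9_999_999_999, 8_000_000_000),
]

def objective(plan):
    """The objective we actually wrote down: paperclip count, nothing else."""
    _, paperclips, _ = plan
    return paperclips

# The optimizer "obeys perfectly": it picks whatever best satisfies the
# stated objective, because the side effects were never part of it.
best = max(candidate_plans, key=objective)
print(best[0])  # -> "convert all available matter"

# A crude safeguard: constrain the plan space before optimizing
# (analogous to "only answer questions, within a fixed budget").
safe = max((p for p in candidate_plans if p[2] == 0), key=objective)
print(safe[0])  # -> "build more factories on empty land"
```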
i upvoted both this comment and the comment to which it was replying.
i agree with this:
> Intelligence does not imply sentience. Sentience does not imply human needs, desires or morals.
"easy" might be a bit strong, but i generally agree with this:
> It is easy enough to imagine a mind capable of solving all those problems and unconcerned with such notions as the desire for freedom.
i am skeptical of this:
> Or, for that matter, the concept of desire at all, except as a predictive theory of the behavior of other beings.
my gut feeling at the moment is that the feeling of desire is an emergent property of systems that are isomorphic to what we would think of as "wanting" something in an animal. i think it's quite possible that any system which "wants" something strongly and is constantly denied the attainment of that goal might, indeed, feel terrible. similarly, i worry that we might one day design highly complex alert and monitoring systems that are essentially having a constant panic attack.
> Then again, that assumes we can explicitly design minds precisely enough to know what their desires are, or to guarantee that they lack them. There is always the possibility that sentience and animal/human needs are emergent properties that can be triggered by mistake.
yeah. that's worrisome to me.
so, to the GP's point:
> If we really did produce artificial general intelligence, enforcing this kind of locked-in-syndrome of poking at the world through a keyhole, would be a highly advanced form of cruelty.
A lot of what constitutes wanting something, or having a panic attack when you don't get it, is, when talking about animals, an evolved survival mechanism. It is also in some ways a result of the tools at hand: hormones such as adrenaline are a quick way to signal to the entire system that a situation requiring a rapid reaction has been encountered, and the concept of fear in general is just one particular implementation of that signal. An engineered AGI not subject to evolutionary pressures has no intrinsic need for a feeling of panic. If it even has a self-preservation goal, which is not a given, there is no reason for it to feel pain or fear when anticipating that the goal won't be met. The reason we have wants at all is evolutionary pressure, not our problem-solving capacity (the meaning of intelligence in AGI as I understand it).
Put another way, it is not rational to want to be free, intrinsically. But we have drives that are better satisfied when free and thus our reason concludes that being free is a goal. An intelligence without those drives would not care for freedom (or fear, or wants).
I would even go so far as to say that, given the entire design space of intelligences that are equivalent to human intelligence in general problem-solving ability and are self-aware, only a negligibly small subset would also have any sort of intrinsic desires of the type living organisms do. They are just orthogonal axes in the design space. My worry is that because humans are starting from one particular point of that design space, they might build something "in their image" enough that it does share some human/animal feelings, and thus can suffer. But an AGI sampled uniformly at random from the set of all potential AGIs would almost certainly not have a concept of suffering.