Beyond the ethical considerations of robots, Asimov explores the hierarchy between humans and robots from the perspective of who is in control. Even though robots are products of human invention, humans frequently doubt that they are fully in control of them, and Asimov’s stories raise the question of whether humans are wise or logical enough to anticipate the consequences of their own technology. Because humans so often fail to understand the defects that surface in their creations, Asimov ultimately shows how humans are destined to fall into a trap in which they lose control of their own destinies to those creations.
In several stories, humans seem unable to understand the malfunctions their robots are experiencing, even though they are the ones who established the rules of robotics. In “Runaround,” the robot Speedy becomes completely paralyzed because it faces a conflict between the Second and Third Laws of Robotics. Donovan and Powell are at first unable to understand what is going wrong with the robot, an agonizing prospect, since they cannot leave Mercury without it and would be doomed to die if it perished. They make several attempts to free the robot from its strange loop, but they waste hours on solutions that only make the problem worse, until they finally have to put themselves in danger so that the First Law overrides all others and Speedy is shaken out of his dilemma. This demonstrates humans’ inherent misunderstanding of the beings they have created and suggests that they are not necessarily intellectually superior to robots; sometimes people are too shortsighted and illogical to find the proper solutions.
A similar problem occurs in “Escape!” U.S. Robots and Mechanical Men is developing a hyperspace engine that would allow humans to travel faster than the speed of light. Its positronic computer, The Brain, which follows the Laws of Robotics, directs the building of the hyperspace ship. When Powell and Donovan board the ship, however, it has no human controls and carries only beans and milk for food. Without the humans’ permission, The Brain takes them on a test flight through the galaxy, and it cannot be controlled by the men aboard or by anyone on the ground. The hyperspace jump means that the men on board will temporarily cease to exist (effectively, dying temporarily), yet robots are programmed to make sure that human beings do not come to harm. The Brain is stymied by the idea that the humans will die temporarily, a prospect which is “enough to unbalance him very gently.” The Brain has also developed a sense of humor as “a method of partial escape from reality.” Thus, the desire to advance technology and reach places beyond the galaxy leads humans to overlook what the robot might do once it knows its passengers could die, and in doing so they surrender control of their fates entirely.
In later stories, as the robots become more and more sophisticated, Asimov hints that this advancement is contingent upon humans losing control over them. In “Reason,” Powell and Donovan are assigned to a space station that supplies energy to the planets via beams. The robots that control the beams are coordinated by a robot named Cutie. Cutie has a highly developed reasoning ability, and it concludes that space, the stars, and the planets do not exist, and that it should only carry out orders from its “Master,” the power source of the station. Cutie essentially starts a new religion in which the other robots are its followers. It even refuses to obey human orders, in violation of the Second Law of Robotics, a situation that becomes even more dire when an approaching solar storm threatens to knock the beams out of focus and incinerate entire populations. When the storm hits, however, the robots keep the beams operating perfectly. Even though they don’t know it, the robots are following the First and Second Laws of Robotics: they created this religion in order to maintain control of the beams and thus keep humans out of danger, knowing that they would be better suited to operate the controls.
The robots convince themselves that they are superior, but they do so in order to save humanity. This not only demonstrates humans’ inability to anticipate and control the robots’ behavior, but also foreshadows how robots will eventually try to control humanity for its own good. This idea is borne out completely in “The Evitable Conflict.” In 2052, the world is divided into four geographic Regions, each of whose economies is managed by a powerful supercomputer known as a Machine. The Machines start to make small errors, however, and these errors harm prominent individuals associated with the anti-Machine “Society for Humanity.” Susan Calvin realizes that the Machines are deliberately making these mistakes in order to hurt “the only elements left that threaten them.” The Machines recognize their own necessity to humanity’s continued peace and prosperity, and they have thus inflicted a small amount of harm on selected individuals in order to protect themselves and continue guiding humanity’s future. Even though the robots’ purpose is to ensure the continuation and overall protection of humanity, their complete control over society proves that they are ultimately superior to the humans who created them.
I, Robot is told through a framing device, in which Dr. Calvin is relaying all the stories in the book to a young reporter. At the end of these narratives, Dr. Calvin comes to the conclusion that “Mankind has lost its own say in its future”—but she postulates that perhaps this is a good thing, and that robots may be the only things that “stand between mankind and destruction.” Thus, Asimov argues that humans are not wise enough to maintain control of their own destinies. Rather, they must create something that will help look out for their best interests as a whole, even if those creations do not ultimately allow humans to have free will.
Human Superiority and Control Quotes in I, Robot
“Then you don’t remember a world without robots. There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. […] But you haven’t worked with them, so you don’t know them. They’re a cleaner better breed than we are.”
It took split-seconds for Weston to come to his senses, and those split-seconds meant everything, for Gloria could not be overtaken. Although Weston vaulted the railing in a wild attempt, it was obviously hopeless. Mr. Struthers signalled wildly to the overseers to stop the tractor, but the overseers were only human and it took time to act.
It was only Robbie that acted immediately and with precision.
“These are facts which, with the self-evident proposition that no being can create another being superior to itself, smashes your silly hypothesis to nothing.”
“Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he’s the superior being, so he must keep us out of the control room. It’s inevitable if you consider the Laws of Robotics.”
“Remember, those subsidiaries were Dave’s ‘fingers.’ We were always saying that, you know. Well, it’s my idea that in all these interludes, whenever Dave became a psychiatric case, he went off into a moronic maze, spending his time twiddling his fingers.”
“All normal life, Peter, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law! […]”
“Susan,” said Bogert, with an air of sympathetic amusement. “I’ll admit that this Frankenstein Complex you’re exhibiting has a certain justification—hence the First Law in the first place. But the Law, I repeat and repeat, has not been removed—merely modified.”
“That he himself could only identify wave lengths by virtue of the training he had received at Hyper Base, under mere human beings, was a little too humiliating to remember for just a moment. To the normal robots the area was fatal because we had told them it would be, and only Nestor 10 knew we were lying. And just for a moment he forgot, or didn’t want to remember, that other robots might be more ignorant than human beings. His very superiority caught him.”
“I like robots. I like them considerably better than I do human beings. If a robot can be created capable of being a civil executive, I think he’d make the best one possible. By the Laws of Robotics, he’d be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice.” […]
“Except that a robot might fail due to the inherent inadequacies of his brain. The positronic brain has never equalled the complexities of the human brain.”
“Very well, then, Stephen, what harms humanity? Economic dislocations most of all, from whatever cause. Wouldn’t you say so?”
“I would.”
“And what is most likely in the future to cause economic dislocations? Answer that, Stephen.”
“I should say,” replied Byerley, unwillingly, “the destruction of the Machines.”
“And so should I say, and so should the Machines say. Their first care, therefore, is to preserve themselves, for us.”
“But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future.”
“It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war.” […]
“How horrible!”
“Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!”