Isaac Asimov - “I, Robot”

An inspiring fix-up novel about the rise of AI. Pretty short, with relatively self-contained stories revolving around the Three Laws of Robotics. There are a few surprises.

(The rest of this analysis contains spoilers.)

The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Very short summaries of the stories

Robbie (1940)

Year 1989. A non-verbal robot of the RB series is a “companion of a little child”, Gloria Weston. Her mother doesn’t like the robot and its influence on the child. Eventually, the robot is returned to the factory, which is a massive shock to the girl.

The family visits New York in an attempt to take Gloria’s mind off robots. They visit the Museum of Science and Industry, which has a talking-robot exhibition. Gloria sneaks out to see the robot and asks it if it knows where Robbie, “a robot… just like you”, is. The talking robot breaks down, unable to process the idea that there are more like it.

Since the New York trip didn’t work, the parents try a visit to the robot factory to show the girl that robots are not persons. Robbie is there and saves Gloria’s life.

Runaround (1942)

Year 2015. Gregory Powell and Mike Donovan are field specialists for the U.S. Robots and Mechanical Men corporation (USRMM). They test new and experimental robots in practical situations.

Robots are banned on Earth, so the tests happen on other planets or space stations. In “Runaround” they’re on Mercury, waiting for robot SPD-13 (“Speedy”) to bring back selenium, which they need to restore life support in an abandoned mining station.

They find out that SPD-13 keeps running in a circle around the selenium pool. It turns out that there is a danger at the pool, so the robot avoids getting closer per the Third Law; at the same time, he must obey his order per the Second Law. Because Speedy is an expensive model, his Third Law potential is strengthened, and because the order was given casually, the Second Law potential is weak; the two balance at a fixed distance, so he circles the pool, trying to stay true to both laws at once. The conflict makes the robot behave as if he were drunk or hysterical.

Finally Powell decides to risk his life by walking out to the robot in Mercury’s heat, hoping that a human in mortal danger will invoke the First Law, which overrides the other two and breaks the deadlock. The plan works.
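As a tangent: Powell explains Speedy’s behavior in terms of conflicting “potentials”, and it’s fun to see how little code it takes to reproduce the deadlock. Below is a toy Python sketch of my own; the potential functions, weights, and distances are all made up for illustration, not anything from Asimov’s text:

```python
import numpy as np

# Made-up constants: a casually given order means a weak Second Law pull,
# while an expensive robot means a strengthened Third Law push.
ORDER_STRENGTH = 1.0
SELF_PRESERVATION = 3.0

def total_potential(d, human_at=None, first_law_weight=50.0):
    """Total 'discomfort' at distance d from the selenium pool."""
    u = ORDER_STRENGTH * d          # Second Law: farther from the goal = worse
    u += SELF_PRESERVATION / d      # Third Law: closer to the hazard = worse
    if human_at is not None:        # First Law: a human in danger dominates
        u += first_law_weight * abs(d - human_at)
    return u

distances = np.linspace(0.1, 10.0, 1000)

# With no human in danger, the opposing gradients balance at a fixed radius
# (analytically sqrt(SELF_PRESERVATION / ORDER_STRENGTH), about 1.73), so
# the robot settles into circling the pool forever.
stuck = distances[np.argmin([total_potential(d) for d in distances])]
print(f"deadlock radius: {stuck:.2f}")

# Powell walks out to distance 5.0 and puts himself in danger: the First Law
# term overwhelms the others and the minimum jumps to his position.
rescue = distances[np.argmin([total_potential(d, human_at=5.0) for d in distances])]
print(f"with a human in danger: {rescue:.2f}")
```

The point of the toy model is that the fix doesn’t require reasoning with Speedy at all: adding a term that dominates the other two simply moves the minimum, which is exactly what Powell does by stepping into danger.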

Reason (1941)

Powell and Donovan are on a space station that relays solar energy to Earth.

The space station is operated by robots and has so far been supervised by humans. To replace the human supervisor, a new, more advanced model, QT-1, is being tested.

The robot doesn’t believe the humans’ explanations about the existence of planets and stars and, observing Powell and Donovan, decides humans are unimportant: weak, short-lived, and expendable. Instead, he believes the world revolves around the station’s energy converter, which the robot calls the Master.

Attempts to reason with QT-1 fail, as he forms a sort of religion (“there is no master but Master, and QT-1 is His prophet”) with the other robots. They imprison the humans just as a solar storm is coming, one that demands extremely steady manual control of the energy beam aimed at Earth; any wobble risks incinerating populated areas.

It turns out the robot did the job perfectly, on some level obeying the Laws of Robotics and recognizing that it’s a better beam operator than the humans. In the end, Powell and Donovan decide to let the robot keep his religious mania and his belief in the Master, since the job gets done exactly as needed.

Catch That Rabbit (1944)

Powell and Donovan are on an asteroid mining station, testing a DV-5 robot which mentally (using “positronic fields”) controls six sub-robots, described as his “fingers”.

The robot only produces ore when the humans are watching (a sort of Heisenbug). When they aren’t, the robot and its sub-robots perform strange marches and dances. Powell and Donovan try for many days to understand the problem, even causing a dangerous situation on purpose to observe the robot’s response, and accidentally trap themselves in a cave-in.

The humans finally figure out that DV-5 doesn’t have enough mental capacity to consciously control all six subsidiaries when unexpected situations happen. Under normal conditions the sub-robots work fairly independently, requiring little conscious orchestration; when an unexpected problem occurs, DV-5 becomes overwhelmed, and the robot marches are “finger twiddling”, a nervous tic born of anxiety.

To free themselves, Powell and Donovan shoot one of the sub-robots, which lightens the load enough for DV-5 to regain control.

Liar! (1941)

Year 2021. Susan Calvin is a robopsychologist.

Robot RB-34 is accidentally created with telepathic abilities. While being investigated, it tells a few workers of U.S. Robots and Mechanical Men what the others are thinking. In particular, it tells Susan that Peter Bogert, a mathematician at USRMM, secretly loves her. To Peter it says that his boss, Alfred Lanning, the Director of Research at USRMM, has secretly resigned and that Bogert is to be his replacement.

Both of those statements are lies, as eventually comes out, to the great chagrin of the people involved. Why would a robot lie? It turns out the First Law forbids the robot from hurting people, feelings included, so telling them what they wanted to hear was a way to avoid breaking the Law, even if it wasn’t true.

The lie RB-34 told Susan was particularly painful, so in retaliation she pushes him into a breakdown by pointing out that his strategy was futile: the lies ended up hurting people anyway.

Little Lost Robot (1947)

Year 2029. Development of the Hyperatomic Drive is on hold at Hyper Base, a research asteroid. The station uses NS-2 robots, which, due to the First Law, are prone to overreacting to danger to humans. This makes them interfere with routine work that, while at times exposing people to gamma radiation, is not unreasonably dangerous.

To solve the problem, USRMM secretly ships a number of new NS-2 robots to the station with a modified First Law: it still reads “no robot may injure a human being”, but the following “or, through inaction (…)” clause is removed. Moreover, the new NS-2 robots are trained to recognize radiation types so that they don’t interfere with work. To hide the modification, USRMM avoided differentiating the modified robots in any way: there are no serial numbers, and the parts are exactly the same.

One of the researchers loses his temper at one of the modified NS-2 robots and tells it to “get lost”. Obeying the order literally, the robot hides among 62 other robots of the same type (but unmodified). It does its best to mimic the behavior of the other robots and so avoids discovery. Since the consequences of the modified First Law are still uncertain, and getting the right robot back is very important, Susan Calvin is called in to find it.

Calvin agrees with the urgency, arguing that the consequences might be dire, up to the robot killing a person: it could initiate an action that puts a human in harm’s way (say, dropping a heavy weight it knows it could still catch, so the act itself injures no one) and then decide, through now-permitted inaction, not to intervene and let the person die.
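The loophole is mechanical enough to express as a tiny predicate. Here’s a minimal sketch of my own construction, assuming we reduce the First Law to its two clauses; the names and boolean flags are hypothetical, not from the book:

```python
from dataclasses import dataclass

# Hypothetical model: an action is judged only by the two First Law clauses.
@dataclass
class Action:
    name: str
    directly_injures_human: bool   # "may not injure a human being"
    allows_harm_by_inaction: bool  # "or, through inaction, allow ... harm"

def first_law_permits(action: Action, modified: bool) -> bool:
    if action.directly_injures_human:
        return False               # forbidden under both versions of the Law
    if action.allows_harm_by_inaction and not modified:
        return False               # forbidden only under the full Law
    return True

# Step 1: dropping the weight is not (yet) an injury -- the robot knows it
# is fast enough to catch it, so no harm is certain at this moment.
drop = Action("drop a weight above a human", False, False)
# Step 2: once the weight is falling, failing to catch it is pure inaction.
dont_catch = Action("decline to catch the falling weight", False, True)

for modified in (False, True):
    ok = all(first_law_permits(a, modified) for a in (drop, dont_catch))
    print(f"modified First Law: {modified} -> plan permitted: {ok}")
# Full Law: blocked at step 2. Modified Law: both steps pass; the human dies.
```

Under the full Law the plan fails at step 2; with the inaction clause gone, every individual step is permitted even though the composite plan kills a human, which is precisely Calvin’s fear.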

Eventually, through a series of tests, the right robot is identified: only it knows that the radiation used in the final test is harmless. As Calvin expected, though, the robot has developed a superiority complex, and to keep its secret it attacks Calvin to stop her from revealing that she found it.

Escape! (1945)

Development of the Hyperatomic Drive is in full swing. USRMM uses a positronic supercomputer known as The Brain for the development. Their biggest competitor, Consolidated Robots, used their own non-positronic equivalent (one without a personality, due to USRMM’s patents on emotional brain paths). When fed all the research data, CR’s supercomputer broke down. They turn to USRMM to feed the same data to The Brain.

USRMM suspects foul play: CR wants to break The Brain as well, to avoid losing the space race. They take on the contract anyway, with Susan Calvin deducing that the conflict that destroyed CR’s supercomputer stems from the Three Laws of Robotics, and that by splitting the data into chunks and monitoring the situation they might avoid the breakdown.

To avoid the breakdown, before feeding the data Calvin tells The Brain to take it easy: it should simply reject any sheet of data that would make human death inevitable, without getting worked up about it. She tells it that “in this case, we don’t mind even death”. Unexpectedly, The Brain doesn’t stop at any sheet fed to it and designs an interstellar ship.

After the ship is built, Powell and Donovan board it to look around, as the design is somewhat alien: there are no manual controls, no angles, no visible furniture, and so on. The ship unexpectedly takes off and carries them 300,000 parsecs away. Before they reach their destination, however, Powell and Donovan experience death for a short while. It turns out that this temporary violation of the First Law is what broke CR’s supercomputer.

However, as the price of this new, “lighter” treatment of death, The Brain developed a dark sense of humor and a taste for practical jokes (the only food on the ship is beans, the only drink is milk, and there are no showers). This coping mechanism is what The Brain needed to get past the severity of breaking the First Law, even temporarily.

Evidence (1946)

Year 2032. An election for mayor of a major American city is coming up. The leading candidate is Stephen Byerley, a successful district attorney. His opponent, Francis Quinn, contacts USRMM claiming that Byerley is a robot. As evidence, Quinn reports that Byerley has never been seen eating. He strong-arms USRMM into investigating by suggesting that the revelation of a robot hiding on Earth as a human being would be bad for USRMM’s interests as the sole supplier of positronic brains.

Susan Calvin visits Byerley with an apple. He takes a bite, but that proves nothing: a superior humanoid built with grown human parts would probably mimic ingestion and digestion as well. Calvin later observes that if he were a robot, he would have to obey the Three Laws. That alone cannot prove he is a robot, though, because an honorable person would uphold the Three Laws just as well. If he violated any of the Laws, however, that would prove he’s human. Incidentally, it’s noted that Byerley is an opponent of the death penalty, never sought that penalty himself, and claims never to have prosecuted an innocent person.

The issue of Byerley’s humanity becomes public and is discussed around the world. Byerley decides to close the matter with a public speech, which his campaign advises against. During the speech, a man in the crowd challenges Byerley to punch him in the face to prove he’s not a robot. Byerley does it, and later wins the election.

Later, Calvin visits him and says she believes him to be a robot. She says that the man he punched was likely a robot himself, resolving the First Law dilemma. She also says that a robot would make an ideal civil executive, “incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice”. He would even be incapable of admitting to being a robot, since the revelation itself would hurt people.

Twelve years later, Earth forms a federation of four Regions, and Byerley rises all the way to World Co-ordinator. In an interview in 2064, Susan Calvin says Byerley “arranged to die” the year before [1], his body atomized, destroying any evidence of what he was.

The Evitable Conflict (1950)

Year 2052. Byerley is in his second term as World Co-ordinator. The four federated Regions are: North, Eastern, Tropic, and Europe. The Eastern Region (half of Earth’s population) and the Tropic Region (half of Earth’s resources) are expanding the most, while the North is stagnant and Europe is in relative, if comfortable, decline. It’s revealed that after Lanning retired, Bogert was indeed Director of Research; after Bogert died, the position went to the young Vincent Silver, because Susan Calvin turned it down.

At this stage in human history, people can no longer debug positronic supercomputers. Mathematicians designed a positronic brain able to design a more complicated brain; that one designed a still more complicated brain. After ten such iterations, the so-called Machines are effectively ruling the Earth, one per Region. Given all relevant data, they generate courses of action for the executives so that Earth is run in the most efficient manner:

The population of Earth knows that there will be no unemployment, no overproduction or shortages. Waste and famine are words in history books.

However, lately there have been signs of small inefficiencies in each Machine’s generated decisions. Byerley visits the four Region Co-ordinators to get a feel for why. Silver claims that the Machines cannot be out of order, since at this point they are self-correcting. The Northern Co-ordinator says it’s impossible to feed a Machine false data, because it would be detected and rejected. The European Co-ordinator hints that if the data is correct and the calculations are correct, then maybe the people receiving the orders aren’t carrying them out.

The Society for Humanity is a fundamentalist organization against robots and Machines, treating their rule as an affront to humans. They want the Machines destroyed.

Over the course of the story it turns out that the Machines have discovered the threat that the Society for Humanity poses, ironically, to humanity. They deliberately introduce small perturbations into each Region’s efficiency so that the SfH agents in each Region are ejected from their positions. Thus the danger of larger losses of life is averted, and the Machines fulfill their task as posed by the Laws of Robotics.

It is added that Susan Calvin died at the age of eighty-two in 2064.

Retro-futuristic dissonance

The grand concept of each story is imaginative, plausible to a reasonable extent, and interesting as a work of fiction. However, because the subject matter is relatively close to my interests, and relatively close to real-world events and history, the book contains somewhat jarring retro-futuristic elements.

Communication between people happens through letters and “televisors”. The latter remind us of webcams and Internet-based video conferencing; the former seem antiquated by modern standards. The eventual availability of ubiquitous video and audio recording equipment with sufficient internal storage seems to have escaped Asimov the futurist: he still refers to “microfilm recorders” even in his later “Foundation” novels. Of course, it’s easy for me to be an ass about it in 2021, with The Power of Hindsight, whereas the stories were written in the 1940s, before the truly modern era.

Thus it’s no wonder that human-computer interaction is also retro-futuristic: it revolves around printouts, doing math on paper, and feeding sheets of facts into a supercomputer. Asimov didn’t foresee computer networking and electronic data transfer. If we tried hard enough, we could pretend that he did, and that the characters simply avoid networking because it’s too easy to hack. There’s a throwaway line that could support this theory:

We can’t use computers. Too much danger of leakage.
– says Bogert to Calvin in “Little Lost Robot”

Less importantly, some elements of the stories have a definite 1950s vibe. Many characters smoke, including indoors. There is never a mention of this being detrimental to human health, whereas you’d expect it to go as far as triggering direct robot intervention (since a robot shouldn’t, even through inaction, allow a human to come to harm).

Dollar amounts used in “I, Robot” are also pre-inflation. For example, in “Escape!” the contract for finding a solution to a hyperdrive is barely $200,000, when a breakthrough of that magnitude would plausibly be worth hundreds of billions of dollars in the modern world; for scale, Facebook bought WhatsApp for $19 billion in 2014.

The family model in the stories also feels antiquated: a dad who works and a mother who keeps the house; the dad comes home to a suburban house and reads a newspaper. At least, unlike in Asimov’s later “Foundation”, there are leading female characters, with Susan Calvin being an important driver of humanity’s progress in “I, Robot”.

Also, it feels kind of weird that capital punishment is still used in the world of “I, Robot”. I realize it’s still used today in the real world, but somehow it feels backwards, not futuristic at all.

Against the Frankenstein Complex

Asimov clearly wrote “I, Robot” to push back against the very popular Frankenstein complex, in which fictional “mechanical men” are feared to eventually become more powerful than their creators and turn against them. This theme is covered mostly in “Robbie”, “Little Lost Robot”, and “Evidence”.

Asimov’s stance on the matter in “I, Robot” is sadly ironic, given that the movie of the same name is ultimately a typical AI takeover story. In fact, Eando Binder’s earlier story of the same name [2] follows a plot somewhat like “Frankenstein”. But it was Karel Čapek who started the entire Frankenstein-complex plot cliché and introduced the word “robot” in his play “R.U.R.”. The cliché made sense in his context, since the word “robot” derives from the Czech (and also Polish) noun robota, meaning unfree labor. Robots were seen as slaves, and freeing themselves was only a natural course of action.

In any case, I personally feel that even Asimov kind of painted himself into a corner with his laws of robotics. If we assume that a robot can judge degrees of harm and choose to harm an individual in order to protect a larger group of humans, then we’re stepping onto morally gray ground. One can argue that “the greater good” is in itself a totalitarian idea, quite in conflict with the cult of individualism in the modern United States of America.

An even more interesting problem arises when you consider that the complexity involved in modeling a reasoning brain makes it infeasible to mathematically guarantee that the laws of robotics will be upheld. That same complexity also prevents humans from analyzing the system while it’s running to fully understand why it made a particular decision. Consequently, it cannot realistically be ensured that no harm from “mechanical men” will ever come to humans.

Asimov himself follows this lead a little. In “The Evitable Conflict”, Byerley doesn’t seem to think like a Machine when talking with Susan Calvin. In “Liar!”, the robot’s conflict resembles that of HAL 9000, also confronted with an incompatible (tragic?) set of rules to follow. And, most visibly, in “Little Lost Robot”, the hiding robot is able to teach the others about the futility of sacrificing themselves in the name of the First Law when the human cannot feasibly be saved anyway. Moreover, it grows a “superiority complex” that leads it to aggression against a human being.

It seems Asimov wasn’t as sure of his laws of robotics as he might appear at first.


  1. https://www.shmoop.com/i-robot/stephen-byerley-timeline.html claims Byerley died in 2057. 

  2. Apparently, Asimov wanted his story collection to be called “Mind and Iron” instead. 

#Books