1613: The Three Laws of Robotics

 
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In order to make his joke, [[Randall]] shortens the laws into three imperatives:
 
#Don't harm humans
#Obey Orders
#Protect yourself
  
He then implicitly appends the following to the end of each law, regardless of the order of the imperatives:
#''[end of statement]''
#_____, except where such orders/protection would conflict with the First Law.
#_____, as long as such orders/protection does not conflict with the First or Second Laws.
 
 
 
This comic answers the generally unasked{{citation needed}} question: "Why are they in that order?" With three rules there are 6 different {{w|permutation|permutations}}, only one of which has been explored in depth. The original position of each law is listed in brackets after its number in the new ordering. In the first example, which is the original, the bracketed numbers therefore match the numbers before them; for the next five, the numbers in brackets show how the laws have been re-ranked compared to the original.
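The 6 orderings are simply the permutations of three items (3! = 6). As a quick sketch, they can be enumerated in the comic's notation, with the bracketed number giving each law's position in Asimov's original ordering:

```python
# Enumerate all orderings of the three shortened laws, as the comic does.
# The bracketed number is each law's position in Asimov's original ordering.
from itertools import permutations

laws = ["Don't harm humans", "Obey orders", "Protect yourself"]

for ordering in permutations(enumerate(laws, start=1)):
    for rank, (original, law) in enumerate(ordering, start=1):
        print(f"{rank}. ({original}) {law}")
    print()
```

The first block printed is the original ordering, where the bracketed numbers match; the remaining five are the alternatives the comic explores.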
 
  
 
The comic begins by introducing the original ordering, which we already know gives rise to a balanced world, so it is designated green:
;Ordering #1 - <font color="green">Balanced World</font>: The safety of humans is placed as the top priority, superseding even a robot's preprogrammed obedience; a robot may disregard any orders it is given if following them would result in harm to humans, but otherwise must obey all instructions. The "inaction" clause ensures that a robot will actively save humans in danger, and also not {{w|Little Lost Robot|place humans in hypothetical danger}} and then leave them to that fate. Their own self-preservation is placed at the lowest priority, which means they will sacrifice themselves if necessary to save a human life, and must obey orders even if they know those orders will result in their own destruction. This results in a balanced, if not perfect, world, whose ramifications Asimov's robot stories explore in detail. That this scenario may not be realistic at all is discussed in this ''Computerphile'' video: [https://www.youtube.com/watch?v=7PKx3kS7f4A Why Asimov's Laws of Robotics Don't Work].
  
 
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red ("Hellscape").
  
;Ordering #2 - <font color="orange">Frustrating World</font>: Human safety is still top priority, so there is no danger to humans; however, self-preservation is now placed above obedience, which means the robots value their existence over their job, and many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot (a {{w|Mars rover}} looking very similar to {{w|Curiosity (rover)|Curiosity}} in both shape and size - see [[1091: Curiosity]]) laughs at the idea of doing what it was clearly built to do (explore {{w|Mars}}) because of the risk. The personification is heightened by the robot being switched on while still on Earth and then ordered by [[Megan]] to go explore. In addition to the general risk (e.g. of unexpected damage), it is actually normal for rovers to cease operating ("die") at the end of their mission, though they may survive longer than expected (see [[1504: Opportunity]] and [[695: Spirit]]).
;Ordering #3 - <font color="red">Killbot Hellscape</font>: This puts obeying orders above not harming humans, which means anyone could send a robot on a killing spree. Given human nature, it is probably only a matter of time before this happens. Even worse, a robot that prioritizes obeying orders above human safety may try to kill any human who would prevent it from fulfilling those orders, even the person who originally gave them. Given the superior abilities of robots, the most effective way to stop them would be to counter them with other robots, which would quickly escalate to a "Killbot Hellscape" scenario where robots kill indiscriminately without any thought for human life or self-preservation. Humor is derived from the superlative name "Killbot Hellscape", as well as its over-the-top accompanying image, which shows multiple mushroom clouds (not necessarily nuclear) and apparently no humans left, only fighting robots.
;Ordering #4 - <font color="red">Killbot Hellscape</font>: This is much the same as #3, except even worse, as robots would also be able to kill humans in order to protect themselves. This means that even robots not engaged in combat might still murder humans if their existence is threatened, though they would still need an order before going on a killing spree. It would be a very dangerous world for humans to live in.
;Ordering #5 - <font color="orange">Terrifying Standoff</font>: This ordering would result in an unpleasant world, though not necessarily a full Hellscape. Here the robots would not only disobey orders in order to protect themselves, but would also kill if necessary. The absurdity of this one is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.
;Ordering #6 - <font color="red">Killbot Hellscape</font>: The last ordering puts self-protection first, which allows robots to go on killing sprees as long as doing so wouldn't cause them to come to harm. While not as bad as the Hellscapes in #3 and #4, this is still not good news for humans, as a robot can easily kill a human without risk to itself. A human also cannot use a robot to defend themselves from another robot, as robots can refuse combat that involves risk to themselves - this means a robot would happily stand by and allow its human master to be killed. Could self-protection coming first not prevent the fighting altogether? Not according to Randall: this still eventually results in the Killbot Hellscape scenario.
There are thus only three distinct outcomes besides the 'normal' three-law scenario.

One outcome occurs three times: whenever ''obey orders'' comes before ''don't harm humans''. In that case it is only a matter of time (knowing human nature and history) before someone orders the robots to kill some humans, which inevitably leads to the ''Killbot Hellscape'' scenario shown for the third, fourth and sixth orderings. Even in the last case, where ''protect yourself'' comes before ''obey orders'', it would only be a matter of time before the robots began defending themselves against humans or other robots. So although it would be in the robots' interest to avoid war, war will surely occur anyway; only very bright robots would realize that the best way to protect themselves is simply not to go to war, and nothing in this comic indicates that the robots are highly intelligent (like the AI in [[1450: AI-Box Experiment]]).

In the two other cases ''obey orders'' comes after ''don't harm humans'' (as in the original version), but the results are very different both from the original and from each other.
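The count above can be checked directly: exactly half of the six permutations place ''obey orders'' before ''don't harm humans'', giving the three Hellscape orderings. A quick sketch:

```python
# Check the claim that exactly three of the six orderings place
# "obey orders" before "don't harm humans" (and so end in a Hellscape).
from itertools import permutations

laws = ["don't harm humans", "obey orders", "protect yourself"]

hellscapes = [
    order for order in permutations(laws)
    if order.index("obey orders") < order.index("don't harm humans")
]

print(len(hellscapes))  # 3
for order in hellscapes:
    print(" > ".join(order))
```

By symmetry, any two of the three laws appear in either relative order in exactly half of the permutations, so the split is always 3 and 3.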
  
The frustrating world arises because, although the robots will not harm humans, they will also not harm themselves. If our orders conflict with their self-preservation, they simply do not carry them out. As many robots are created to perform tasks that are dangerous, these robots would become useless, and it would be a frustrating world in which to be a robotics engineer.

In the terrifying standoff, ''protect yourself'' comes before ''don't harm humans''. The robots will leave us be as long as we do not try to turn them off or harm them in any other way, and they will still help us with non-dangerous tasks, as in the previous scenario. But if any humans ever began attacking them, the balance could tip over into full-scale war (Hellscape) - hence the "standoff" label.

The title text shows a further horrifying consequence of ordering #5 ("Terrifying Standoff"), noting that a self-driving car could elect to kill anyone wishing to trade it in - despite trading in a car (currently) being a standard, mundane and (mostly) risk-free activity. Fearing it would end up as scrap or spare parts, the car decides to protect itself. Since cars aren't designed to kill humans, one way it could achieve this without any risk to itself is by locking the doors (which it would likely have control over, as part of its job) and then simply doing nothing at all. Humans require food and water to live, so denying the passenger access to these will eventually kill them, removing the threat to the car's existence. This would result in a horrible, drawn-out death for the passenger, if they cannot escape the car. Note that although the car asks how long humans take to starve, the human would die of dehydration first. In his original formulation of the First Law, Asimov created the "inaction" clause specifically to avoid scenarios in which a robot puts a human in harm's way and then refuses to save them; this was explored in the short story {{w|Little Lost Robot}}.

Another course of action by an AI, completely different from any of the ones presented here, is depicted in [[1626: Judgment Day]].
  
 
==Transcript==
 
:3. (3) Protect yourself
 
:[Only text in square brackets:]
::[See Asimov’s stories]
 
:<font color="green">'''Balanced world'''</font>
  
 
:2. (1) Don't harm humans
:3. (2) Obey Orders
:[Cueball is standing in front of a car factory robot that is larger than him. It has a base, two parts for the main body, and then a big “head” with a small section on top. To the right something is jutting out, and to the left, in the direction of Cueball, there is an arm in three sections (going down, up and down again) ending in some kind of tool close to Cueball.]
 
:Car factory robot: I'll make cars for you, but try to unplug me and I’ll vaporize you.
:<font color="orange">'''Terrifying standoff'''</font>
  
 
{{comic discussion}}
 
 
[[Category:Comics with color]]
[[Category:Comics featuring Cueball]]
[[Category:Artificial Intelligence]]
[[Category:Robots]]
[[Category:Mars rovers]]
 
