<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=KimikoMuffin</id>
		<title>explain xkcd - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=KimikoMuffin"/>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php/Special:Contributions/KimikoMuffin"/>
		<updated>2026-04-12T18:30:07Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2105:_Modern_OSI_Model&amp;diff=168962</id>
		<title>2105: Modern OSI Model</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2105:_Modern_OSI_Model&amp;diff=168962"/>
				<updated>2019-02-02T02:13:14Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: Undid vandalism.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2105&lt;br /&gt;
| date      = January 30, 2019&lt;br /&gt;
| title     = Modern OSI Model&lt;br /&gt;
| image     = modern_osi_model.png&lt;br /&gt;
| titletext = In retrospect, I shouldn't have used each layer of the OSI model as one of my horcruxes.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a seven-layered BOT. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
The {{w|OSI model|Open Systems Interconnection (OSI) Model}} is a computing model for network communications. It abstracts communication between two endpoints, such as a Facebook client and Facebook's servers, all the way from the application layer on the server, down to the wire on which the data is transmitted, and back up to the application layer where the user views the data. As Facebook is one of the most used websites in the world, with more than a billion users, Randall claims that the &amp;quot;application&amp;quot; layer (what the client sees and uses) is mostly {{w|Facebook}}.&lt;br /&gt;
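The layered abstraction can be illustrated with a toy sketch (this is an illustration only, not a real protocol stack): on the way down to the wire each layer wraps the payload with its own header, and on the way back up each layer strips its header off.

```python
# The seven OSI layers, from closest to the user down to the wire.
OSI_LAYERS = ["Application", "Presentation", "Session",
              "Transport", "Network", "Data link", "Physical"]

def send(payload):
    # Going down the stack: each layer wraps the data in its header,
    # so the Physical layer's header ends up outermost.
    for layer in OSI_LAYERS:
        payload = f"[{layer}]{payload}"
    return payload  # what travels on the wire

def receive(frame):
    # Going back up: each layer removes its own header,
    # recovering the original application data.
    for layer in OSI_LAYERS:
        frame = frame.replace(f"[{layer}]", "", 1)
    return frame
```

Round-tripping a message through `send` and `receive` returns it unchanged, mirroring how the model delivers application data across the wire.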
&lt;br /&gt;
A light gray shape labeled &amp;quot;Google &amp;amp; Amazon&amp;quot; surrounds all seven layers of the model in an irregular shape, indicating that Google and Amazon, by dint of their size and dominance at multiple layers of the model, influence the entire structure. An example of Google's influence would be their introduction of new protocols like {{w|QUIC}} and {{w|SPDY}} as replacements for the existing HTTP protocol that has been a foundation of the web.&lt;br /&gt;
&lt;br /&gt;
The significance of the irregular pattern of the &amp;quot;Google &amp;amp; Amazon&amp;quot; blob isn't clear. It is likely a reference to the irregular way in which their modifications to the OSI stack have evolved, perhaps with extensions to the left representing the influence of Google and extensions to the right representing the influence of Amazon. However, it is also notable that the irregular structure of the stack is reminiscent of a {{w|Jenga}} tower. Jenga is a game in which blocks are removed from a vertical stack and added back to the top until the whole tower collapses. This may be a commentary on the instability of the network stack in general, or on how Google and Amazon's additions and changes have destabilized the networking protocols. Or, the specific blocks shown pulled out (presentation, session, and network) may be the ones whose removal would collapse the tower, while the other layers can be easily removed and replaced (like the center blocks in Jenga), implying that, between Google and Amazon, the tower would remain standing even if those others were pulled out. What this says about the three layers that would destabilize the tower is unclear.&lt;br /&gt;
&lt;br /&gt;
The title text refers to {{w|Magical_objects_in_Harry_Potter#Horcruxes|Horcruxes}} used by {{w|Lord Voldemort|Voldemort}} in the ''{{w|Harry Potter}}'' book series. A Horcrux is a magical artifact used to house a wizard's soul, preventing them from dying if their body is destroyed. Since they can only be created by murdering other people, they are strictly forbidden, and before Voldemort it was unheard of for a wizard to use more than one. Voldemort used seven, the same number as there are layers in the OSI model. However, while Voldemort hid his seven Horcruxes in different places to make himself that much harder to kill, Randall's have all been collected in Google and Amazon, defeating the purpose of using more than one. Alternatively, turning each layer of the OSI model into a Horcrux may be regarded as a strategy to prevent the layers from being destroyed, since destroying them would destroy networking. This strategy would fail in the modern world, since some of the envisioned layers are not used in the more common modern TCP/IP networking model, and cloud infrastructure has the potential to provide even more shortcuts.&lt;br /&gt;
&lt;br /&gt;
The title text may also be a reference to a [[1417|prior comic]] about Randall mixing up things that come in groups of seven, like data layers and Horcruxes.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
:'''Modern OSI Model'''&lt;br /&gt;
&lt;br /&gt;
:[A light gray shape that surrounds seven stacked dark gray rectangles, all with labels.]&lt;br /&gt;
&lt;br /&gt;
::Application (Facebook) [supported by the light gray shape on both sides]&lt;br /&gt;
&lt;br /&gt;
::Presentation  [pulling out would collapse the tower]&lt;br /&gt;
&lt;br /&gt;
::Session  [pulling out would collapse the tower]&lt;br /&gt;
&lt;br /&gt;
::Transport [supported on both sides]&lt;br /&gt;
&lt;br /&gt;
::Network  [pulling out would collapse the tower]&lt;br /&gt;
&lt;br /&gt;
:Google &amp;amp; Amazon [label of the light gray shape]&lt;br /&gt;
&lt;br /&gt;
::Data link [supported on both sides]&lt;br /&gt;
&lt;br /&gt;
::Physical [supported on both sides]&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1667:_Algorithms&amp;diff=118126</id>
		<title>1667: Algorithms</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1667:_Algorithms&amp;diff=118126"/>
				<updated>2016-04-17T00:34:51Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */ Added a link.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1667&lt;br /&gt;
| date      = April 13, 2016&lt;br /&gt;
| title     = Algorithms&lt;br /&gt;
| image     = algorithms.png&lt;br /&gt;
| titletext = There was a schism in 2007, when a sect advocating OpenOffice created a fork of Sunday.xlsx and maintained it independently for several months. The efforts to reconcile the conflicting schedules led to the reinvention, within the cells of the spreadsheet, of modern version control.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
An algorithm is a basic set of instructions for performing a task, usually on a computer. This comic lists some algorithms in increasing order of complexity, where complexity may refer to either {{w|computational complexity theory}} (a formal mathematical account of the {{w|computational resource}}s – primarily computation time and memory space – required to solve a given problem), or the more informal notion of {{w|programming complexity}} (roughly, a measure of the number and degrees of internal dependencies and interactions within a piece of software).&lt;br /&gt;
&lt;br /&gt;
At the simplest end is '''left-pad''', or adding filler characters on the left end of a string to make it a particular length. In many programming languages, this is one line of code. This is possibly an allusion to a [http://www.haneycodes.net/npm-left-pad-have-we-forgotten-how-to-program/ recent incident] in which {{w|Npm (software)|NodeJS Package Manager}} [https://www.techdirt.com/articles/20160324/17160034007/namespaces-intellectual-property-dependencies-big-giant-mess.shtml angered a developer] by its handling of a trademark claim. The developer unpublished all of his modules from NPM, including a package implementing left-pad. A huge number of programs depended on this third-party library instead of implementing it themselves, and they immediately ceased to function.&lt;br /&gt;
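For illustration, here is a minimal left-pad in Python (the incident above involved a JavaScript package, so this is just a sketch of the idea; Python's built-in str.rjust does the same thing):

```python
def left_pad(s, width, fill=" "):
    # Prepend fill characters until s reaches the requested width;
    # strings already at or past that width are returned unchanged.
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s
```

For example, `left_pad("5", 3, "0")` produces `"005"`, the kind of zero-padding the npm package was used for.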
&lt;br /&gt;
'''{{w|Quicksort}}''' is an efficient and commonly used {{w|sorting algorithm}}.&lt;br /&gt;
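A minimal recursive sketch of the idea (library sort routines use far more tuned variants): pick a pivot element, partition the remaining elements into those no larger than the pivot and those larger, and sort each side.

```python
def quicksort(xs):
    # Partition around the first element, then sort each half:
    # pivot >= x puts x in the small half, x > pivot in the large half.
    if len(xs) > 1:
        pivot, rest = xs[0], xs[1:]
        small = [x for x in rest if pivot >= x]
        large = [x for x in rest if x > pivot]
        return quicksort(small) + [pivot] + quicksort(large)
    return xs
```

This list-building version is easy to read; in-place partitioning, as used in practice, avoids the extra allocations.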
&lt;br /&gt;
'''{{w|Git (software)|Git}}''' is a {{w|version control}} program, i.e.,  software that allows multiple people to work on the same files at the same time. When someone finalizes (&amp;quot;commits&amp;quot;) their changes, the version control program needs to figure out how to join the new content with the existing content. This process is called '''{{w|Merge (version control)|merging}}''', and the algorithm for it is anything but simple.&lt;br /&gt;
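The core of a three-way merge can be sketched with a toy per-line rule (a deliberate simplification: real merge algorithms such as git's must also align insertions and deletions between the versions, which is where most of the complexity lives):

```python
def merge_line(base, ours, theirs):
    # Keep whichever side changed the line relative to the common base;
    # if both sides changed it differently, flag a conflict for a human.
    if ours == theirs:
        return ours
    if ours == base:
        return theirs
    if theirs == base:
        return ours
    return "CONFLICT"

def merge(base, ours, theirs):
    # Naive line-aligned merge: assumes all three versions have the
    # same number of lines, which real merges cannot assume.
    return [merge_line(b, o, t) for b, o, t in zip(base, ours, theirs)]
```

Two edits that touch different lines merge cleanly; two edits to the same line produce a conflict, just as in git.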
&lt;br /&gt;
A '''{{w|self-driving car}}''' is an automobile with sensors and software built into it so that it can maneuver in traffic autonomously, i.e. without a human controller. Various companies have been working on such vehicles for many years now, and while they're further along now than would have been imaginable even a couple of years ago, we're still far away from the dream of hopping in a driver-less taxi and sitting back as the car itself navigates to where we want to be. Recently [[Randall]] has made several references to self-driving cars, for instance in [[1559: Driving]], [[1623: 2016 Conversation Guide]] and [[1625: Substitutions 2]].&lt;br /&gt;
&lt;br /&gt;
The '''{{w|Google Search}} backend''' is what enables you to type &amp;quot;what the heck is a leftpad algorithm&amp;quot; into your browser and have Google return a list of relevant results, including correcting &amp;quot;leftpad&amp;quot; to &amp;quot;left-pad&amp;quot;, ignoring the &amp;quot;what the heck&amp;quot; part, and sometimes even summarizing the findings into a box at the top of the results. Behind all that magic is a way to remember what pages the Internet contains, which is just a mind-bogglingly large quantity of data, and an even more mind-numbingly complex set of algorithms for processing that data.&lt;br /&gt;
&lt;br /&gt;
The last item is the punchline: a sprawling {{w|Microsoft Excel|Excel}} {{w|spreadsheet}} built up over 20 years by a church group in Nebraska to coordinate their scheduling. Spreadsheets are a general {{w|end-user development}} programming technique, and people therefore use Excel for all sorts of purposes that have nothing to do with accounting (its original purpose), including one person who made a [http://arstechnica.com/gaming/2013/04/how-an-accountant-created-an-entire-rpg-inside-an-excel-spreadsheet/ role-playing game that runs in Excel]; but even that doesn't approach the complexity that develops when multiple people of varying levels of experience use a spreadsheet over many years to coordinate the schedules of several overlapping groups.&lt;br /&gt;
&lt;br /&gt;
The scheduling of tasks over a group of resources (a.k.a. the ''{{w|nurse scheduling problem}}''), while respecting the constraints set by each person, is a {{w|NP-hardness|highly complex}} problem requiring stochastic or heuristic methods for its resolution. Here, the problem would be further complicated by being solved by inexpert users in a spreadsheet model, without engineering practices. The hyperbole lies in suggesting that this combination of circumstances would produce complexity far beyond that required to drive a car or index the public contents of the Internet. Although most churches meet mainly on Sunday morning, the service itself (especially if there are multiple concurrent services), Sunday School, church business meetings, and congregation-wide events may all need to be scheduled on a particular Sunday morning, so finding a solution very close to the best possible one quickly becomes a dire need. Furthermore, with different members involved in a wide variety of activities within and outside the church, and limited classrooms available to the church on Sunday, the task can indeed be daunting; just scheduling choir practice times around everyone's work schedules may well be impossible, especially if two people share the same occupation and one is the relief for the other. In addition, there would likely be assorted committee meetings and youth groups during the week.&lt;br /&gt;
&lt;br /&gt;
In the title text, part of the spreadsheet's complexity is described as originating from different versions of the file for different programs. Words like {{w|schism}} and {{w|sect}} are normally used in the context of religions splitting into groups over differences in beliefs. In this case, the split seems to have been not over a {{w|theology|theological}} issue, but over the use of {{w|open-source software|open-source}} vs. {{w|proprietary software|proprietary}} software, disagreements about which are often compared to religious debates. The wording may also allude to famous historical schisms such as the {{w|East–West Schism|East-West Schism of 1054}}.&lt;br /&gt;
&lt;br /&gt;
The title text also implies that while trying to reconcile after the schism and to merge the two schedules they reinvented an alternative to Git within the spreadsheet itself, making the algorithms in place at least as complicated as that. Since most spreadsheet programs have a sort algorithm built in, that aspect is implied too, and left-padding could be compared to vamping on an introduction to a hymn. This would indicate that the other milestones of complexity are either included in the current version of the spreadsheet or are planned to be implemented.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
'''Algorithms'''&amp;lt;br&amp;gt;By Complexity&lt;br /&gt;
{|&lt;br /&gt;
|colspan=&amp;quot;6&amp;quot; style=&amp;quot;text-align:left;border-bottom:1px solid;&amp;quot;|More complex &amp;amp;rarr;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding-right:2em;&amp;quot;|Leftpad&lt;br /&gt;
|style=&amp;quot;padding-right:2em;&amp;quot;|Quicksort&lt;br /&gt;
|style=&amp;quot;padding-right:2em;&amp;quot;|GIT&amp;lt;br&amp;gt;Merge&lt;br /&gt;
|style=&amp;quot;padding-right:2em;&amp;quot;|Self-&amp;lt;br&amp;gt;driving&amp;lt;br&amp;gt;car&lt;br /&gt;
|style=&amp;quot;padding-right:8em;&amp;quot;|Google&amp;lt;br&amp;gt;Search&amp;lt;br&amp;gt;backend&lt;br /&gt;
|Sprawling Excel spreadsheet&amp;lt;br&amp;gt;built up over 20 years by a&amp;lt;br&amp;gt;church group in Nebraska to&amp;lt;br&amp;gt;coordinate their scheduling&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Include any categories below this line. --&amp;gt;&lt;br /&gt;
[[Category:Charts]]&lt;br /&gt;
[[Category:Google Search]]&lt;br /&gt;
[[Category:Programming]]&lt;br /&gt;
[[Category:Religion]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106582</id>
		<title>1613: The Three Laws of Robotics</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106582"/>
				<updated>2015-12-07T22:09:23Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1613&lt;br /&gt;
| date      = December 7, 2015&lt;br /&gt;
| title     = The Three Laws of Robotics&lt;br /&gt;
| image     = the_three_laws_of_robotics.png&lt;br /&gt;
| titletext = In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Very basic first draft, and I'm pretty inexperienced - please also check spelling}}&lt;br /&gt;
This comic explores alternative orderings of sci-fi author {{w|Isaac Asimov|Isaac Asimov's}} famous {{w|Three Laws of Robotics}}, which are designed to keep robots from harming humans or taking over the world. These laws form the basis of a number of Asimov's works of fiction, most famously the short story collection ''{{w|I, Robot}}'', which, among others, includes the very first of Asimov's stories to introduce the three laws, {{w|Runaround (story)|Runaround}}.&lt;br /&gt;
&lt;br /&gt;
The three rules are:&lt;br /&gt;
#A robot may not injure a human being or, through inaction, allow a human being to come to harm. &lt;br /&gt;
#A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;br /&gt;
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.&lt;br /&gt;
&lt;br /&gt;
Or in [[Randall|Randall's]] version:&lt;br /&gt;
#Don't harm humans&lt;br /&gt;
#Obey Orders&lt;br /&gt;
#Protect yourself&lt;br /&gt;
&lt;br /&gt;
This comic answers the generally unasked question: &amp;quot;Why are they in that order?&amp;quot; Three rules can be arranged in 6 different orders, only one of which has been explored in depth.&lt;br /&gt;
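The count of six orderings is just three factorial; a quick sketch (using Randall's shortened versions of the laws):

```python
from itertools import permutations

laws = ["Don't harm humans", "Obey orders", "Protect yourself"]
orderings = list(permutations(laws))
# 3 rules give 3 x 2 x 1 = 6 possible priority orderings,
# of which Asimov's original is only one.
```

The comic then walks through the consequences of each of the six.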
&lt;br /&gt;
The comic begins by introducing the original ordering, which we already know gives rise to a balanced world, so it is designated green:&lt;br /&gt;
;Ordering #1: If robots are not allowed to harm humans, no harm will be done even if they fall into the hands of a mass-murderer. So long as they do not harm humans, they must obey orders. Their own self-preservation comes last, so they must also try to save a human even if ordered not to do so, and even if they would put themselves in harm's way, or destroy themselves, in the process. This leads to a balanced world, explored in detail in Asimov's robot stories.&lt;br /&gt;
&lt;br /&gt;
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red (&amp;quot;Hellscape&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
;Ordering #2: The robots value their existence over their job and so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk. This personification is augmented by the robot being switched on already while still on Earth and then ordered by [[Megan]] to go explore. The personification is humorous since it is a very nonhuman robot - a typical Mars rover, as has often been used in earlier comics. &lt;br /&gt;
;Ordering #3: This puts obeying orders above not harming humans, which means anyone could send robots on a killing spree, resulting in a &amp;quot;Killbot Hellscape&amp;quot;. Humor is derived from the superlative nature of &amp;quot;Killbot Hellscape&amp;quot;, as well as its over-the-top accompanying image, where there are multiple mushroom clouds (not necessarily nuclear). It also appears there are no humans left, only robots.&lt;br /&gt;
;Ordering #4: The next would result in much the same; the only difference is that the robots would also be willing to kill humans to protect themselves.&lt;br /&gt;
;Ordering #5: The penultimate ordering would result in an unpleasant world, though not a full Hellscape, where robots would not only disobey orders to protect themselves, but also kill if necessary. The absurdity of this one is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.&lt;br /&gt;
;Ordering #6: The last also results in a Hellscape, wherein robots not only kill in self-defense but will also go on killing sprees if ordered to, as long as they don't risk themselves. Notably, this case may not play out as shown: an order to go kill a person or a robot might itself be dangerous, so most robots would likely disobey it in the interest of self-preservation. In fact, the robots might not do anything at all, because moving a part wears it down, so taking any action at all might violate self-preservation. On the other hand, if the other robots are ordered to destroy you and you cannot be sure that they will refuse, it is better to protect yourself by going on a killing spree first, and we are back to a realistic Hellscape scenario anyway.&lt;br /&gt;
&lt;br /&gt;
To summarize: There are two main distinctions between the 'normal' three laws and the variations. The first is where Self-protection is put ahead of Obedience. This results in a world where robots are no longer the useful workers for humanity that they are supposed to be. The second is where Obedience supersedes Harmlessness, which means that robots are ''threats'' to humanity (although only if they are ever given the order to be so).&lt;br /&gt;
&lt;br /&gt;
The former, alone, merely creates frustration, as in one scenario. The latter, alone, allows humans to use robots as their proxies for warfare, as in two scenarios - the Hellscape could be 'easily' avoided if nobody ever ordered military action to start (or continue), but knowing the state of human affairs, that is not realistic; terrorists would love to have robots they could order to kill all infidels. Both ''together'' upgrade both the frustration and warfare aspects, creating 'unstoppable killing machines' - our only hope is that nobody ''ever'' orders them into killing mode, or gives them cause to consider themselves under threat, resulting in an uneasy peace on the perpetual edge of tipping over into war.&lt;br /&gt;
&lt;br /&gt;
The third 'law inversion', with Self-protection being put ahead of Harmlessness, is necessarily inherent in the 'worst' Killbot Hellscape scenario, whilst it really only adds a nuance between the first two Hellscape scenarios, where the orders themselves are not explicitly anti-human.&lt;br /&gt;
&lt;br /&gt;
The title text further adds to ordering #5 by noting that anyone wishing to trade in their self-driving car could be killed, despite that (currently) being a standard, mundane and (mostly) risk-free activity. Because the car fears that it would end up as scrap or spare parts, it decides to protect itself. And although it does not directly harm the person inside, it also does not let them out, and it has time to wait for them to starve (or rather die of thirst). Asimov created the &amp;quot;inaction&amp;quot; clause in the original First Law specifically to avoid scenarios in which a robot puts a human in harm's way, knowing full well that it is within the robot's abilities to save the human, and then simply refrains from saving them; this was explored in the short story {{w|Little Lost Robot}}.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[Caption at the top of the comic:]&lt;br /&gt;
:'''Why Asimov put the Three Laws'''&lt;br /&gt;
: '''of Robotics in the order he did.'''&lt;br /&gt;
&lt;br /&gt;
:[Below are six rows, each with two frames and a colored label to the right. Above the two columns of frames there are labels as well. The first column lists one of the six ways of ordering the three laws, and the second column shows an image of the consequences of that order, except in the first row, where there is a reference. The label to the right rates the kind of world that ordering of the laws would result in.]&lt;br /&gt;
&lt;br /&gt;
:[Labels above the columns]&lt;br /&gt;
:Possible ordering &lt;br /&gt;
:Consequences&lt;br /&gt;
&lt;br /&gt;
:[The six rows follow below: first the text in the first frame, then a description of the second frame, including any text below it, and finally the colored label.]&lt;br /&gt;
&lt;br /&gt;
:[First row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Only text in square brackets:]&lt;br /&gt;
::[See Asimov’s stories]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;'''Balanced world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Second row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Megan points at a Mars rover with six wheels, a satellite dish, an arm, and a camera head turned towards her, telling it what to do.]&lt;br /&gt;
:Megan: Explore Mars!&lt;br /&gt;
:Mars rover: Haha, no. It’s cold and I’d die.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Frustrating world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Third row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Two robots are fighting. The one to the left has six wheels, a tall neck on top of the body, and a head with what could be a camera facing right. It has something pointing forward on the body, which could be a weapon. The robot to the right seems to be further away in the picture (it is smaller, with less detail). It is human-shaped, but made up of square structures, with two legs, two arms, a torso and a head. It clearly shoots something out of its right “hand”; this shot seems to create an explosion a third of the way towards the left robot. There are two mushroom clouds from explosions behind both robots (left and right). Between them there is one more explosion up in the air close to the left robot, and what looks like a fire on the ground right between them. Furthermore, there are two missiles in the air, one above the head of each robot, with lines indicating their trajectories. There is no text.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fourth row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fifth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Cueball is standing in front of a car factory robot that is larger than him. It has a base, two parts for the main body, and then a big “head” with a small section on top. To the right something is jutting out, and to the left, in the direction of Cueball, there is an arm in three sections (going down, up and down again) ending in some kind of tool close to Cueball.]&lt;br /&gt;
:Car factory robot: I'll make cars for you, but try to unplug me and I’ll vaporize you.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Terrifying standoff'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Sixth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3 and 4.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Comics with color]]&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring Megan]]&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Robots]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106581</id>
		<title>1613: The Three Laws of Robotics</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106581"/>
				<updated>2015-12-07T22:08:34Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1613&lt;br /&gt;
| date      = December 7, 2015&lt;br /&gt;
| title     = The Three Laws of Robotics&lt;br /&gt;
| image     = the_three_laws_of_robotics.png&lt;br /&gt;
| titletext = In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Very basic first draft, and I'm pretty inexperienced - please also check spelling}}&lt;br /&gt;
This comic explores alternative orderings of sci-fi author {{w|Isaac Asimov|Isaac Asimov's}} famous {{w|Three Laws of Robotics}}, which are designed to keep robots from harming humans or taking over the world. These laws form the basis of a number of Asimov's works of fiction, most famously the short story collection ''{{w|I, Robot}}'', which, among others, includes the very first of Asimov's stories to introduce the three laws, {{w|Runaround (story)|Runaround}}.&lt;br /&gt;
&lt;br /&gt;
The three rules are:&lt;br /&gt;
#A robot may not injure a human being or, through inaction, allow a human being to come to harm. &lt;br /&gt;
#A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;br /&gt;
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.&lt;br /&gt;
&lt;br /&gt;
Or in [[Randall|Randall's]] version:&lt;br /&gt;
#Don't harm humans&lt;br /&gt;
#Obey Orders&lt;br /&gt;
#Protect yourself&lt;br /&gt;
&lt;br /&gt;
This comic answers the generally unasked question: &amp;quot;Why are they in that order?&amp;quot; Three rules can be arranged in 6 different orders, only one of which has been explored in depth.&lt;br /&gt;
&lt;br /&gt;
The comic begins by introducing the original ordering, which we already know gives rise to a balanced world, so it is designated green:&lt;br /&gt;
;Ordering #1: If robots are not allowed to harm humans, no harm will be done even if they fall into the hands of a mass-murderer. So long as they do not harm humans, they must obey orders. Their own self-preservation comes last, so they must also try to save a human even if ordered not to do so, and even if they would put themselves in harm's way, or destroy themselves, in the process. This leads to a balanced world, explored in detail in Asimov's robot stories.&lt;br /&gt;
&lt;br /&gt;
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red (&amp;quot;Hellscape&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
;Ordering #2: The robots value their existence over their job and so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk. This personification is augmented by the robot being switched on already while still on Earth and then ordered by [[Megan]] to go explore. The personification is humorous since it is a very nonhuman robot - a typical Mars rover, as has often been used in earlier comics. &lt;br /&gt;
;Ordering #3: This puts obeying orders above not harming humans, which means anyone could send robots on a killing spree, resulting in a &amp;quot;Killbot Hellscape&amp;quot;. Humor is derived from the superlative nature of &amp;quot;Killbot Hellscape&amp;quot;, as well as its over-the-top accompanying image, where there are multiple mushroom clouds (not necessarily nuclear). It also appears there are no humans left, only robots.&lt;br /&gt;
;Ordering #4: This would result in much the same; the only difference is that the robots would also be willing to kill humans to protect themselves.&lt;br /&gt;
;Ordering #5: The penultimate ordering would result in an unpleasant world, though not a full Hellscape, where the robots would not only disobey orders to protect themselves, but also kill if necessary. The absurdity is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.&lt;br /&gt;
;Ordering #6: The last ordering also results in a Hellscape, wherein robots not only kill in self-defense but will also go on killing sprees if ordered to, as long as they do not risk themselves. Note that this outcome is debatable: an order to kill a person or another robot might itself be dangerous, so most robots would likely disobey it in the interest of self-preservation. Indeed, the robots might not do anything at all, since moving a part wears it down, and thus any action could violate self-preservation. On the other hand, if other robots are ordered to destroy you and you cannot be sure they will not comply, it is better to protect yourself by striking first, which leads back to a Hellscape scenario anyway.&lt;br /&gt;
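The 3! = 6 orderings can be enumerated mechanically; here is a minimal Python sketch (the law labels follow Randall's short forms; the script itself is illustrative only):&lt;br /&gt;

```python
from itertools import permutations

# Randall's short forms of the Three Laws.
laws = ["Don't harm humans", "Obey orders", "Protect yourself"]

# Every possible ranking of the three laws; the comic walks through all six.
orderings = list(permutations(laws))
assert len(orderings) == 6

for rank in orderings:
    print(" then ".join(rank))
```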
&lt;br /&gt;
To summarize: there are two main distinctions between the 'normal' three laws and the variations. The first is where Self-protection is put ahead of Obedience, which results in a world where robots are no longer the useful workers for humanity they are supposed to be. The second is where Obedience supersedes Harmlessness, which makes robots ''threats'' to humanity (although only if they are ever given the order to be so).&lt;br /&gt;
&lt;br /&gt;
The former, alone, merely creates frustration, as in one scenario. The latter, alone, allows humans to use robots as their proxies for warfare, as in two scenarios - although the hellscape could be 'easily' avoided if nobody ever ordered the robots to start (or continue) military action, but knowing the state of human affairs, this is not realistic; terrorists would love to have robots they could order to kill all infidels. Both ''together'' upgrade both the frustration and warfare aspects, creating 'unstoppable killing machines' - the only hope is that nobody ''ever'' orders them into killing mode, or gives them cause to consider themselves under threat, resulting in an uneasy peace on the perpetual edge of tipping over into war.&lt;br /&gt;
&lt;br /&gt;
The third 'law inversion', with Self-protection put ahead of Harmlessness, is necessarily inherent in the 'worst' Killbot Hellscape scenario, while it really only adds a nuance between the first two Hellscape scenarios, where the orders themselves are not explicitly anti-human.&lt;br /&gt;
&lt;br /&gt;
The title text further adds to ordering #5 by noting that anyone wishing to trade in their self-driving car could be killed, despite this (currently) being a standard, mundane, and (mostly) risk-free activity. Because the car fears it would end up as scrap or spare parts, it decides to protect itself. Although it does not directly harm the person inside, it also does not let them out, and it can simply wait for them to starve (or, sooner, die of thirst). Asimov created the &amp;quot;inaction&amp;quot; clause in the original First Law specifically to avoid scenarios in which a robot puts a human in harm's way, knowing full well that it is within the robot's abilities to save the human, and then simply refrains from saving them; this was explored in the short story {{w|Little Lost Robot}}.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[Caption at the top of the comic:]&lt;br /&gt;
:'''Why Asimov put the Three Laws'''&lt;br /&gt;
: '''of Robotics in the order he did.'''&lt;br /&gt;
&lt;br /&gt;
:[Below are six rows, each with two frames and then a colored label to the right. Above the two columns of frames there are labels as well. The first column lists six different ways of ordering the three laws. The second column shows an image of the consequences of that order, except in the first row, where there is a reference instead. The label to the right rates the kind of world that ordering of the laws would result in.]&lt;br /&gt;
&lt;br /&gt;
:[Labels above the columns]&lt;br /&gt;
:Possible ordering &lt;br /&gt;
:Consequences&lt;br /&gt;
&lt;br /&gt;
:[The six rows follow below. First the text in the first frame, then a description of the second frame, including any text below it, and finally the colored label.]&lt;br /&gt;
&lt;br /&gt;
:[First row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Only text in square brackets:]&lt;br /&gt;
::[See Asimov’s stories]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;'''Balanced world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Second row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Megan points at a Mars rover with six wheels, a satellite dish, an arm, and a camera head turned towards her, telling it what to do.]&lt;br /&gt;
:Megan: Explore Mars!&lt;br /&gt;
:Mars rover: Haha, no. It’s cold and I’d die.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Frustrating world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Third row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Two robots are fighting. The one to the left has six wheels, a tall neck on top of the body, and a head with what could be a camera facing right. It has something pointing forward on the body, which could be a weapon. The robot to the right seems to be further back in the picture (it is smaller, with less detail). It is human-shaped, but made up of square structures, with two legs, two arms, a torso, and a head. It clearly shoots something out of its right “hand”; this shot seems to create an explosion a third of the way towards the left robot. There are two mushroom clouds from explosions behind the robots (left and right). Between them there is one more explosion up in the air close to the left robot, and what looks like a fire on the ground right between them. Furthermore, there are two missiles in the air, one above the head of each robot, with lines indicating their trajectories. There is no text.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fourth row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fifth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Cueball is standing in front of a car factory robot that is larger than he is. It has a base, two parts for the main body, and then a big “head” with a small section on top. To the right something is jutting out, and to the left, in the direction of Cueball, there is an arm in three sections (going down, up, and down again) ending in some kind of tool close to Cueball.]&lt;br /&gt;
:Car factory robot: I'll make cars for you, but try to unplug me and I’ll vaporize you.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Terrifying standoff'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Sixth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3 and 4.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Comics with color]]&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring Megan]]&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Robots]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106580</id>
		<title>1613: The Three Laws of Robotics</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106580"/>
				<updated>2015-12-07T22:08:09Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1613&lt;br /&gt;
| date      = December 7, 2015&lt;br /&gt;
| title     = The Three Laws of Robotics&lt;br /&gt;
| image     = the_three_laws_of_robotics.png&lt;br /&gt;
| titletext = In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Very basic first draft, and I'm pretty inexperienced - please also check spelling}}&lt;br /&gt;
This comic explores alternative orderings of sci-fi author {{w|Isaac Asimov|Isaac Asimov's}} famous {{w|Three Laws of Robotics}}, which are designed to prevent robots from taking over the world, etc. These laws form the basis of a number of Asimov's works of fiction, most famously the short story collection ''{{w|I, Robot}}'', which, among others, includes the very first of Asimov's stories to introduce the three laws, {{w|Runaround (story)|Runaround}}.&lt;br /&gt;
&lt;br /&gt;
The three rules are:&lt;br /&gt;
#A robot may not injure a human being or, through inaction, allow a human being to come to harm. &lt;br /&gt;
#A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;br /&gt;
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.&lt;br /&gt;
&lt;br /&gt;
Or in [[Randall|Randall's]] version:&lt;br /&gt;
#Don't harm humans&lt;br /&gt;
#Obey Orders&lt;br /&gt;
#Protect yourself&lt;br /&gt;
&lt;br /&gt;
This comic answers the generally unasked question: &amp;quot;Why are they in that order?&amp;quot; With three rules there are 3! = 6 possible orderings, only one of which has been explored in depth.&lt;br /&gt;
&lt;br /&gt;
The comic begins by introducing the original ordering, which we already know gives rise to a balanced world, so it is designated green:&lt;br /&gt;
;Ordering #1: If the robots are not allowed to harm humans, no harm will be done if they fall into the hands of a mass-murderer. So long as they do not harm humans, they must obey orders. Their own self-preservation comes last, so they must also try to save a human even if ordered not to do so, and even if doing so would put them in harm's way or destroy them in the process. This leads to a balanced world, explored in detail in Asimov's robot stories.&lt;br /&gt;
&lt;br /&gt;
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red (&amp;quot;Hellscape&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
;Ordering #2: The robots value their own existence over their job, so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk. The personification is augmented by the robot already being switched on while still on Earth and then being ordered by [[Megan]] to go explore. The personification is humorous because this is a very nonhuman robot - a typical Mars rover, of the kind often seen in earlier comics.&lt;br /&gt;
;Ordering #3: This puts obeying orders above not harming humans, which means anyone could send the robots on a killing spree, resulting in a &amp;quot;Killbot Hellscape&amp;quot;. Humor is derived from the superlative nature of &amp;quot;Killbot Hellscape&amp;quot;, as well as its over-the-top accompanying image, with multiple mushroom clouds (not necessarily nuclear). There also appear to be no humans left, only robots.&lt;br /&gt;
;Ordering #4: This would result in much the same; the only difference is that the robots would also be willing to kill humans to protect themselves.&lt;br /&gt;
;Ordering #5: The penultimate ordering would result in an unpleasant world, though not a full Hellscape, where the robots would not only disobey orders to protect themselves, but also kill if necessary. The absurdity is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.&lt;br /&gt;
;Ordering #6: The last ordering also results in a Hellscape, wherein robots not only kill in self-defense but will also go on killing sprees if ordered to, as long as they do not risk themselves. Note that this outcome is debatable: an order to kill a person or another robot might itself be dangerous, so most robots would likely disobey it in the interest of self-preservation. Indeed, the robots might not do anything at all, since moving a part wears it down, and thus any action could violate self-preservation. On the other hand, if other robots are ordered to destroy you and you cannot be sure they will not comply, it is better to protect yourself by striking first, which leads back to a Hellscape scenario anyway.&lt;br /&gt;
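The 3! = 6 orderings can be enumerated mechanically; here is a minimal Python sketch (the law labels follow Randall's short forms; the script itself is illustrative only):&lt;br /&gt;

```python
from itertools import permutations

# Randall's short forms of the Three Laws.
laws = ["Don't harm humans", "Obey orders", "Protect yourself"]

# Every possible ranking of the three laws; the comic walks through all six.
orderings = list(permutations(laws))
assert len(orderings) == 6

for rank in orderings:
    print(" then ".join(rank))
```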
&lt;br /&gt;
To summarize: there are two main distinctions between the 'normal' three laws and the variations. The first is where Self-protection is put ahead of Obedience, which results in a world where robots are no longer the useful workers for humanity they are supposed to be. The second is where Obedience supersedes Harmlessness, which makes robots ''threats'' to humanity (although only if they are ever given the order to be so).&lt;br /&gt;
&lt;br /&gt;
The former, alone, merely creates frustration, as in one scenario. The latter, alone, allows humans to use robots as their proxies for warfare, as in two scenarios - although the hellscape could be 'easily' avoided if nobody ever ordered the robots to start (or continue) military action, but knowing the state of human affairs, this is not realistic; terrorists would love to have robots they could order to kill all infidels. Both ''together'' upgrade both the frustration and warfare aspects, creating 'unstoppable killing machines' - the only hope is that nobody ''ever'' orders them into killing mode, or gives them cause to consider themselves under threat, resulting in an uneasy peace on the perpetual edge of tipping over into war.&lt;br /&gt;
&lt;br /&gt;
The third 'law inversion', with Self-protection put ahead of Harmlessness, is necessarily inherent in the 'worst' Killbot Hellscape scenario, while it really only adds a nuance between the first two Hellscape scenarios, where the orders themselves are not explicitly anti-human.&lt;br /&gt;
&lt;br /&gt;
The title text further adds to ordering #5 by noting that anyone wishing to trade in their self-driving car could be killed, despite this (currently) being a standard, mundane, and (mostly) risk-free activity. Because the car fears it would end up as scrap or spare parts, it decides to protect itself. Although it does not directly harm the person inside, it also does not let them out, and it can simply wait for them to starve (or, sooner, die of thirst). Asimov created the &amp;quot;inaction&amp;quot; clause in the original First Law specifically to avoid scenarios in which a robot puts a human in harm's way, knowing full well that it is within the robot's abilities to save the human, and then simply refrains from saving them; this was explored in the short story [https://en.wikipedia.org/wiki/Little_Lost_Robot Little Lost Robot].&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[Caption at the top of the comic:]&lt;br /&gt;
:'''Why Asimov put the Three Laws'''&lt;br /&gt;
: '''of Robotics in the order he did.'''&lt;br /&gt;
&lt;br /&gt;
:[Below are six rows, each with two frames and then a colored label to the right. Above the two columns of frames there are labels as well. The first column lists six different ways of ordering the three laws. The second column shows an image of the consequences of that order, except in the first row, where there is a reference instead. The label to the right rates the kind of world that ordering of the laws would result in.]&lt;br /&gt;
&lt;br /&gt;
:[Labels above the columns]&lt;br /&gt;
:Possible ordering &lt;br /&gt;
:Consequences&lt;br /&gt;
&lt;br /&gt;
:[The six rows follow below. First the text in the first frame, then a description of the second frame, including any text below it, and finally the colored label.]&lt;br /&gt;
&lt;br /&gt;
:[First row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Only text in square brackets:]&lt;br /&gt;
::[See Asimov’s stories]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;'''Balanced world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Second row:]&lt;br /&gt;
:1. (1) Don't harm humans&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Megan points at a Mars rover with six wheels, a satellite dish, an arm, and a camera head turned towards her, telling it what to do.]&lt;br /&gt;
:Megan: Explore Mars!&lt;br /&gt;
:Mars rover: Haha, no. It’s cold and I’d die.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Frustrating world'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Third row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (3) Protect yourself&lt;br /&gt;
:[Two robots are fighting. The one to the left has six wheels, a tall neck on top of the body, and a head with what could be a camera facing right. It has something pointing forward on the body, which could be a weapon. The robot to the right seems to be further back in the picture (it is smaller, with less detail). It is human-shaped, but made up of square structures, with two legs, two arms, a torso, and a head. It clearly shoots something out of its right “hand”; this shot seems to create an explosion a third of the way towards the left robot. There are two mushroom clouds from explosions behind the robots (left and right). Between them there is one more explosion up in the air close to the left robot, and what looks like a fire on the ground right between them. Furthermore, there are two missiles in the air, one above the head of each robot, with lines indicating their trajectories. There is no text.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fourth row:]&lt;br /&gt;
:1. (2)  Obey Orders&lt;br /&gt;
:2. (3) Protect yourself&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Fifth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (1) Don't harm humans&lt;br /&gt;
:3. (2)  Obey Orders&lt;br /&gt;
:[Cueball is standing in front of a car factory robot that is larger than he is. It has a base, two parts for the main body, and then a big “head” with a small section on top. To the right something is jutting out, and to the left, in the direction of Cueball, there is an arm in three sections (going down, up, and down again) ending in some kind of tool close to Cueball.]&lt;br /&gt;
:Car factory robot: I'll make cars for you, but try to unplug me and I’ll vaporize you.&lt;br /&gt;
:&amp;lt;font color=&amp;quot;orange&amp;quot;&amp;gt;'''Terrifying standoff'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Sixth row:]&lt;br /&gt;
:1. (3) Protect yourself&lt;br /&gt;
:2. (2)  Obey Orders&lt;br /&gt;
:3. (1) Don't harm humans&lt;br /&gt;
:[Exactly the same picture as in row 3 and 4.]&lt;br /&gt;
:&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''Killbot hellscape'''&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Comics with color]]&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring Megan]]&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Robots]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=806:_Tech_Support&amp;diff=81852</id>
		<title>806: Tech Support</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=806:_Tech_Support&amp;diff=81852"/>
				<updated>2015-01-02T00:09:23Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */ The etymology of &amp;quot;shibboleth&amp;quot; is less-useful than an actual definition.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 806&lt;br /&gt;
| date      = October 15, 2010&lt;br /&gt;
| title     = Tech Support&lt;br /&gt;
| image     = tech support.png&lt;br /&gt;
| titletext = I recently had someone ask me to go get a computer and turn it on so I could restart it. He refused to move further in the script until I said I had done that.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
[[Cueball]] runs into some problems with his network connection and contacts his ISP's tech support for help. The customer service agent is not very helpful, giving clearly pre-scripted advice that has nothing to do with Cueball's problem. Cueball gives up and asks to speak to someone more knowledgeable about the technology. Noticing the {{w|Tux|stuffed penguin}} and the {{w|Richard Stallman|bearded dude with swords}} — signs of a GNU/Linux geek — the agent transfers him over to an engineer, who immediately recognizes the problem and fixes it. She then tells him of a secret word (shibboleet) which, if he speaks it on the phone, will transfer him to a tech-savvy person able to help him. At this point Cueball wakes up; unfortunately, the whole incident turns out to have been a dream.&lt;br /&gt;
&lt;br /&gt;
Cueball is running {{w|Haiku (operating system)|Haiku}}, an operating system which does not yet have a stable release. It's unlikely that any tech support line would know what to do with it.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Shibboleet&amp;quot; is a portmanteau of &amp;quot;Shibboleth&amp;quot; and &amp;quot;Leet&amp;quot;. A &amp;quot;{{w|shibboleth}}&amp;quot; is any word, custom, or other signifier used by members of a group to recognize other members or those who are &amp;quot;in the know&amp;quot; about something; for example, in the Hebrew Bible, the precise pronunciation of this word was used to distinguish Gileadites from Ephraimites. {{w|Leet}} (based on the word &amp;quot;elite&amp;quot;) refers to &amp;quot;leet-speak&amp;quot;, a practice of character substitution and abbreviation common across the Internet (or &amp;quot;teh 1n73rn3t&amp;quot;, as you would say in leet). Thus, &amp;quot;shibboleet&amp;quot; is a shibboleth used to identify someone whose computer knowledge is &amp;quot;elite.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[Cueball is on the phone, and holding up some networking hardware.]&lt;br /&gt;
:Cueball: ...restart my computer? I know you have a script to follow, but the uplink light on the modem is going off every few hours. The problem is between your office and the modem.&lt;br /&gt;
&lt;br /&gt;
:Cueball: My computer has nothing to do with... okay, whatever, I &amp;quot;restarted my computer.&amp;quot;&lt;br /&gt;
:Cueball: It's still down, and even if it comes back, it's going to die again in a few hours, because your—&lt;br /&gt;
&lt;br /&gt;
:Cueball: I don't ''have'' a start menu. This is a Haiku install, but that's not import—&lt;br /&gt;
:Cueball: Haiku? It's an experimental OS that I ... oh, never mind.&lt;br /&gt;
&lt;br /&gt;
:Cueball: I'm sorry, but this won't get fixed until I talk to an engineer. Can you look around for someone wearing cargo pants, maybe a subway map on their wall?&lt;br /&gt;
&lt;br /&gt;
:[The tech support person on the other end is wearing a headset, and looks around.]&lt;br /&gt;
:Tech: There's a chick two phones over with a stuffed penguin doll and a poster of some bearded dudes with swords.&lt;br /&gt;
:Cueball: Perfect. Can you put her on?&lt;br /&gt;
:Tech: Sure.&lt;br /&gt;
&lt;br /&gt;
:[Cueball is now talking to the engineer.]&lt;br /&gt;
:Cueball: Hey, so sorry to bother you, but my connection—&lt;br /&gt;
:Engineer: Yeah, I see it. Lingering problems from a server move.&lt;br /&gt;
:&amp;lt;type type&amp;gt;&lt;br /&gt;
:Engineer: Should be fixed now.&lt;br /&gt;
:Cueball: Thank you ''so much.''&lt;br /&gt;
&lt;br /&gt;
:Engineer: No problem. Hey, in the future, if you're on any tech support call, you can say the code word &amp;quot;shibboleet&amp;quot; at any point and you'll be automatically transferred to someone who knows a minimum of two programming languages.&lt;br /&gt;
&lt;br /&gt;
:Cueball: Seriously?&lt;br /&gt;
:Engineer: Yup. It's a backdoor put in by the geeks who built these phone support systems back in the 1990's.&lt;br /&gt;
:Engineer: Don't tell anyone.&lt;br /&gt;
&lt;br /&gt;
:Cueball: Oh my god, this is the greatest—&lt;br /&gt;
:[Cueball wakes up.]&lt;br /&gt;
:Cueball: Wha—&lt;br /&gt;
:Cueball: ... ''Dammit.''&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Computers]]&lt;br /&gt;
[[Category:Dreams]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1436:_Orb_Hammer&amp;diff=77544</id>
		<title>1436: Orb Hammer</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1436:_Orb_Hammer&amp;diff=77544"/>
				<updated>2014-10-20T16:08:48Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1436&lt;br /&gt;
| date      = October 20, 2014&lt;br /&gt;
| title     = Orb Hammer&lt;br /&gt;
| image     = orb_hammer.png&lt;br /&gt;
| titletext = Ok, but make sure to get lots of pieces of rock, because later we'll decide to stay in a room on our regular orb and watch hammers hold themselves and hit rocks for us, and they won't bring us very many rocks.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Added title text explain first draft.}}&lt;br /&gt;
This conversation proposes something that sounds absurd and useless for the daily activities of a regular human. Yet it describes, in simple English words, the {{w|Apollo_program|Apollo human spaceflight program}}, which sent people to the Moon to bring geological samples back to Earth for study. The use of such simple language contributes to the suggestion sounding absurd, even though the numerous side-products of the effort to realize the project have in fact had many benefits for regular people.&lt;br /&gt;
&lt;br /&gt;
No person has been on the Moon since the final Apollo mission, Apollo 17, in 1972. Occasionally, lunar rocks can still be collected on Earth: they are formed when a celestial body impacts the Moon's surface, forming a crater and launching small rocks into space. Some of them eventually reach Earth; see {{w|Lunar_meteorite|lunar meteorites}}.&lt;br /&gt;
&lt;br /&gt;
The title text refers to the Mars missions ({{w|Mars_Pathfinder|Pathfinder}}, {{w|Spirit_(rover)|Spirit}}, {{w|Opportunity_(rover)|Opportunity}}, {{w|Curiosity_(rover)|Curiosity}}), where, instead of traveling to Mars ourselves, we stay on Earth (&amp;quot;our regular orb&amp;quot;) and control rovers by remote. The rovers collect and analyze geological samples, but have no way to send the samples back to Earth.&lt;br /&gt;
&lt;br /&gt;
The idea of using simple language in highly technical fields began with [[547: Simple]] and was revisited in [[1133: Up Goer Five]]. Note, however, that in this case [[Randall]] didn't restrict himself to the 1000 most basic words in the English language: that [http://simple.wikipedia.org/wiki/Wikipedia:List_of_1000_basic_words list] does contain the words &amp;quot;Moon&amp;quot; and &amp;quot;Earth,&amp;quot; but not &amp;quot;glowing&amp;quot; or &amp;quot;orb.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
Person 1: You know that glowing orb in the night sky?&lt;br /&gt;
&lt;br /&gt;
Person 2: Yeah?&lt;br /&gt;
&lt;br /&gt;
Person 1: Let's go hit it with a hammer until little pieces break off, then bring the pieces back and lock them in a closet.&lt;br /&gt;
&lt;br /&gt;
Person 2: Sounds good!&lt;br /&gt;
&lt;br /&gt;
Text under panel: The Apollo program was weird.&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1277:_Ayn_Random&amp;diff=76507</id>
		<title>1277: Ayn Random</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1277:_Ayn_Random&amp;diff=76507"/>
				<updated>2014-09-30T15:52:01Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1277&lt;br /&gt;
| date      = October 14, 2013&lt;br /&gt;
| title     = Ayn Random&lt;br /&gt;
| image     = ayn random.png&lt;br /&gt;
| titletext = In a cavern deep below the Earth, Ayn Rand, Paul Ryan, Rand Paul, Ann Druyan, Paul Rudd, Alan Alda, and Duran Duran meet together in the Secret Council of /(\b[plurandy]+\b ?){2}/i.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
The comic is an attack on perceived problems with the philosophy of &amp;quot;Objectivism&amp;quot;. [[White Hat]] explains to [[Cueball]] a program he wrote, the &amp;quot;Ayn Random Number Generator&amp;quot;, a pun on {{w|Ayn Rand}}, the writer who created the philosophical system known as {{w|Objectivism (Ayn Rand)|Objectivism}}. The joke targets her philosophy, which claims to be a completely fair mechanism for distributing resources, but (arguably) inherently favors those who start out with more resources, or who are already in a position to acquire them. It also, again arguably, carries a strong overarching theme that people who believe in Objectivism are inherently better than other people, and thus deserve whatever extra resources can be acquired - just as the Ayn Random Number Generator claims to be completely fair and balanced but actually favors some numbers, which White Hat justifies by saying that they deserve to come up more because they're inherently better.&lt;br /&gt;
&lt;br /&gt;
Objectivists, of course, would challenge the above portrayal, but the joke is, in the end, an attack on Ayn Rand's philosophy. A more nuanced description is that Objectivists believe the primary aim of life is to maximise personal happiness. In their view, if some humans are born more capable of satisfying their desires than others, they deserve to reap greater rewards from life, no matter the cost to those others.&lt;br /&gt;
&lt;br /&gt;
The title text identifies a group of people whose names match the {{w|regular expression}} &amp;lt;code&amp;gt;/(\b[plurandy]+\b ?){2}/i&amp;lt;/code&amp;gt;. A step-by-step explanation of the expression:&lt;br /&gt;
*\b is a word boundary, matching anywhere a 'word character' (a letter, digit, or underscore) sits next to a non-word character - punctuation, spacing, or the start or end of the string&lt;br /&gt;
*[plurandy] is a character class, and will match any single character from the set inside the square brackets; [adlnpruy] means exactly the same&lt;br /&gt;
*the plus sign means ''one or more'' of the previous thing, so [plurandy]+ matches one or many of the characters in that class, one after the other&lt;br /&gt;
*&amp;quot; ?&amp;quot; - a space followed by a question mark:  &amp;quot;?&amp;quot; means &amp;quot;0 or 1 of the previous thing&amp;quot;, so a space is optional&lt;br /&gt;
*the whole section in parentheses (\b[plurandy]+\b ?) translates to &amp;quot;a word containing one or more letters, all of which are in the set [plurandy], followed by an optional space&amp;quot;&lt;br /&gt;
*the {2} on the end means the parenthesized pattern must match exactly twice in a row&lt;br /&gt;
*the slashes at each end delimit the pattern, and the &amp;quot;i&amp;quot; after the closing slash is a flag that makes the match &amp;quot;case insensitive&amp;quot; (uppercase and lowercase letters match interchangeably)&lt;br /&gt;
&lt;br /&gt;
Overall, it matches two words separated by a space, each composed entirely of the letters in [plurandy] (in either case), which is what all the names listed have in common.&lt;br /&gt;
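The step-by-step breakdown above can be checked with a short Python sketch. The pattern is the one from the title text; the name list and the use of `fullmatch` are illustrative choices, not part of the comic:

```python
import re

# The title-text regex: two words made only of the letters
# p, l, u, r, a, n, d, y, optionally separated by a space,
# matched case-insensitively (the /i flag becomes re.I).
pattern = re.compile(r"(\b[plurandy]+\b ?){2}", re.I)

# The members of the Secret Council listed in the title text.
names = ["Ayn Rand", "Paul Ryan", "Rand Paul", "Ann Druyan",
         "Paul Rudd", "Alan Alda", "Duran Duran"]

# Every council member's full name matches the pattern exactly.
for name in names:
    assert pattern.fullmatch(name), name

# A name with letters outside the class does not match.
assert pattern.fullmatch("Carl Sagan") is None
```

Running the sketch confirms that all seven names - and no others tried here - satisfy the expression.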
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Person !! Brief Description&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Ayn Rand}} || Author, best known for her novels {{w|The Fountainhead}} and {{w|Atlas Shrugged}}. &lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Paul Ryan}} || US Politician known to have been influenced by the writings of Ayn Rand.&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Rand Paul}} || US Politician, also influenced by Ayn Rand's writings.&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Ann Druyan}} || Author, widow of {{w|Carl Sagan}}.&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Paul Rudd}} || Actor, screenwriter, and comedian.&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Alan Alda}} || Actor, best known for the role of Hawkeye Pierce in the TV series M*A*S*H. Played Arnold Vinick, a fiscally-conservative Republican presidential candidate, in {{w|The West Wing}}.&lt;br /&gt;
|-&lt;br /&gt;
| style=white-space:nowrap | {{w|Duran Duran}} || New Wave/rock band.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
As an aside, if the entirety of the title text is matched against the regular expression, the pair &amp;quot;and Duran&amp;quot; is matched instead of &amp;quot;Duran Duran&amp;quot;: the word &amp;quot;and&amp;quot; is itself composed of letters from the set, so it pairs up with the first &amp;quot;Duran&amp;quot;, leaving the second unpaired.&lt;br /&gt;
&lt;br /&gt;
===Speculation===&lt;br /&gt;
Since the primary virtue in Objectivist ethics is rationality (or at least &amp;quot;rationality&amp;quot; as defined by Rand: her critics argue that the conclusions she reached do not actually follow inevitably from her premises, and that additional, unstated assumptions are needed to make the system work), the implication may be that the random number generator favors rational numbers (numbers that can be written as a fraction, i.e. a quotient p/q of integers). On the other hand, since computers cannot store numbers of unlimited length, it is, for all practical purposes, impossible for '''any''' real-world random number generator to produce an irrational number - so probably not. &amp;amp;pi;, for example, is irrational, but a random number generator can only ever produce a number of fixed length, and any fixed-length approximation of an irrational number, such as 3.14159, is just a rational number: 3.14159 = 314159/100000, and if it can be written as a fraction, it's not irrational.&lt;br /&gt;
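The closing arithmetic can be verified with Python's `fractions` module: any fixed-length decimal a generator could emit is exactly a ratio of two integers, hence rational. This is only an illustration of the point above, not anything from the comic:

```python
from fractions import Fraction

# A fixed-length decimal approximation of pi, as a real-world
# random number generator could emit.
approx_pi = Fraction("3.14159")

# It is exactly the ratio of two integers, i.e. rational.
assert approx_pi == Fraction(314159, 100000)
assert approx_pi.denominator == 100000
```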
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[Cueball sitting at a laptop, White Hat behind him.]&lt;br /&gt;
:Cueball: This Ayn Random number generator you wrote '''''claims''''' to be fair, but the output is biased toward certain numbers.&lt;br /&gt;
:White Hat: '''''WELL, MAYBE THOSE NUMBERS ARE JUST INTRINSICALLY BETTER!'''''&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring White Hat]]&lt;br /&gt;
[[Category:Programming]]&lt;br /&gt;
[[Category:Comics featuring real people]]&lt;br /&gt;
[[Category:Philosophy]]&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1420:_Watches&amp;diff=75821</id>
		<title>1420: Watches</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1420:_Watches&amp;diff=75821"/>
				<updated>2014-09-14T18:46:33Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: /* Explanation */ Linking within the wiki ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1420&lt;br /&gt;
| date      = September 12, 2014&lt;br /&gt;
| title     = Watches&lt;br /&gt;
| image     = watches.png&lt;br /&gt;
| titletext = Old people used to write obnoxious thinkpieces about how people these days always wear watches and are slaves to the clock, but now they've switched to writing thinkpieces about how kids these days don't appreciate the benefits of an old-fashioned watch. My position is: The word 'thinkpiece' sounds like a word made up by someone who didn't know about the word 'brain'.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|More details.}}&lt;br /&gt;
This comic coincides with the announcement of a new [https://www.apple.com/watch/ smart watch] by Apple earlier in the week (9 September 2014), along with a heavy emphasis on smartwatches at IFA 2014 (September 5-10), particularly 'Android Wear'.&lt;br /&gt;
&lt;br /&gt;
The timeline shows a period from 2005 to 2015 in which our wrists were liberated from the tethers of wearing a watch, likely because many people instead used a mobile 'smart' phone to tell the time.&lt;br /&gt;
&lt;br /&gt;
Whilst other smart watches have been released in the past, Randall predicts that the typical widespread interest following Apple product releases (combined with many other new releases by other companies) will result in our wrists again being shackled in the grip of watches from 2015. The wording of the label suggests that Randall is pre-emptively mourning the imminent loss of freedom of his and others' wrists, though this may be humorous hyperbole or sarcasm, as his position has generally been one of apathy, as in [[1215: Insight]].&lt;br /&gt;
&lt;br /&gt;
The title text refers to how 'old people' tend to express derision towards change (generally most widely accepted by 'young people') as not being like it was 'in the good old days', even if this means they contradict themselves. The initial wearing of watches was viewed negatively by the older generation, but now 'not' wearing a watch is instead negative. The second part of the title text starts as if Randall is going to express an opinion on wearing a watch, but then veers off to mock the word '{{w|think piece|thinkpiece}}', due to its (particularly [http://www.merriam-webster.com/dictionary/think%20piece recent]) connotation of lacking factual content and expressing biased opinions. For more details on ''thinkpiece'' see this [http://www.slate.com/blogs/browbeat/2014/05/07/thinkpiece_definition_and_history_roots_of_the_word_show_it_has_long_been.html article]. By equating ''thinkpiece'' with ''brain'', Randall is pointing out that this compound word does not follow the convention of the compound word ''timepiece'', which is a synonym for ''watch''.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
:[A timeline shows the following years but extends further in both directions:]&lt;br /&gt;
:1990 2000 2010 2020 2030&lt;br /&gt;
:[A grey box extends from the left border to approximately 2005 and another grey box begins approximately at 2015 and continues to the right border. They are labeled:]&lt;br /&gt;
:Regular watches &lt;br /&gt;
:Smart watches&lt;br /&gt;
:[An arrow points up to the empty period between 2005 and 2015. Below the arrow is written:]&lt;br /&gt;
:Brief, glorious period in which our wrists were free&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=User:KimikoMuffin&amp;diff=75766</id>
		<title>User:KimikoMuffin</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=User:KimikoMuffin&amp;diff=75766"/>
				<updated>2014-09-12T22:37:41Z</updated>
		
		<summary type="html">&lt;p&gt;KimikoMuffin: Created page with &amp;quot;I am a muffin. I am not on fire.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I am a muffin. I am not on fire.&lt;/div&gt;</summary>
		<author><name>KimikoMuffin</name></author>	</entry>

	</feed>