2635: Superintelligent AIs

*{{w|AI box}} -- A thought experiment in which an AI is confined to a computer system that is fully isolated from any external networks, with no access to the world outside the computer other than communication with its handlers. In theory this would keep the AI under total control, but the argument is that a sufficiently intelligent AI would inevitably convince or trick its human handlers into giving it access to external networks, allowing it to grow out of control (see [[1450: AI-Box Experiment]]). Part of the joke is that the AIs in the comic aren't 'in boxes': they appear to be free to travel and interact, yet one of them is still discussing the thought experiment, implying that it is not thinking about its own situation at all but about a separate experiment that it has decided to study. The AI box thought experiment is based in part on {{w|John Searle}}'s much earlier {{w|Chinese room}} argument.
 
*{{w|Turing test}} -- An experiment in which a human converses with either an AI or another human (presumably over text) and attempts to distinguish between the two. Various AIs have been claimed to have 'passed' the test, which has provoked controversy over whether the test is rigorous or even meaningful. The AI in the center is proposing to educate the listener(s) on its understanding of Turing's intentions, which may demonstrate a degree of intelligence and comprehension indistinguishable from, or even superior to, that of a human; a toy sketch of the test's structure follows below. See also [[329: Turing Test]] and [[2556: Turing Complete]] (the latter's title is mentioned in [[505: A Bunch of Rocks]]). Turing is also mentioned in [[205: Candy Button Paper]], [[1678: Recent Searches]], [[1707: xkcd Phone 4]], [[1833: Code Quality 3]], [[2453: Excel Lambda]] and the title text of [[1223: Dwarf Fortress]].
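
The structure of the test can be illustrated with a short, purely hypothetical Python sketch (the canned replies and function names here are invented for illustration; a real test would pit a human judge against a genuine conversational AI and a hidden human):

<pre>
import random

def machine_reply(prompt):
    # Stand-in for the AI under test; a real run would query a chatbot.
    return "Hard to say. What makes you ask?"

def human_reply(prompt):
    # Stand-in for the hidden human participant's typed answer.
    return "Funny you should ask; I was thinking about that this morning."

def run_round(questions):
    """One round of the imitation game: the judge interrogates a hidden
    witness, then must decide whether it was the machine or the human."""
    witness_is_machine = random.choice([True, False])
    reply = machine_reply if witness_is_machine else human_reply
    for q in questions:
        print(f"Judge:   {q}")
        print(f"Witness: {reply(q)}")
    return witness_is_machine

was_machine = run_round(["Do you ever dream?", "Describe your day."])
# The machine 'passes' to the extent that, over many such rounds, the
# judges' guesses are no better than chance.
print("The witness was", "the machine" if was_machine else "the human")
</pre>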
 
*{{w|Trolley problem}} -- A thought experiment intended to explore how humans judge the moral value of actions and consequences. The classic formulation is that a runaway trolley is about to hit five people on a track, and the only way to save them is to divert the trolley onto another track, where it will hit one person; the subject is asked whether it would be morally right to divert the trolley. There are many variants of this problem, adjusting the circumstances, the number and nature of the people at risk, the responsibility of the subject, and so on, in order to fully explore ''why'' you would make the decision you make. This problem is frequently discussed in connection with AI, both to investigate AIs' capacity for moral reasoning and for practical reasons (for example, an autonomous car might have to choose between a collision that threatens its occupants and one that puts pedestrians in harm's way); a toy version of such a decision rule is sketched below. The AI on the right is not just trying to answer the question but to develop a new variant (one with three tracks, apparently), presumably to test others with. This problem is mentioned in [[1455: Trolley Problem]], [[1938: Meltdown and Spectre]] and [[1925: Self-Driving Car Milestones]]. It is also referenced, though not directly mentioned, in [[2175: Flag Interpretation]] and [[2348: Boat Puzzle]].
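
The utilitarian framing often discussed for autonomous vehicles can be reduced to a deliberately crude Python sketch (purely illustrative; real systems do not reduce ethics to a casualty count, and the function name is invented):

<pre>
def divert(tracks):
    """Pick the track with the fewest people in harm's way.
    Ties fall to whichever track comes first -- itself a moral choice."""
    return min(tracks, key=tracks.get)

# The classic two-track dilemma: do nothing (5 at risk) or divert (1).
classic = {"straight": 5, "diverted": 1}
print(divert(classic))        # 'diverted' -- the utilitarian answer

# The three-track variant the rightmost AI appears to be constructing.
three_track = {"straight": 5, "left": 1, "right": 2}
print(divert(three_track))    # 'left'
</pre>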
 