2635: Superintelligent AIs

==Explanation==
 
{{incomplete|Created by AI RESEARCHER AIs - Please change this comment when editing this page. Do NOT delete this tag too soon.}}
 
{{w|Artificial intelligence}} (AI) is a [[:Category:Artificial Intelligence|recurring theme]] on xkcd.
 
Superintelligent {{w|artificial intelligence|AI}}, such as has been theorized to arise in a hypothetical "{{w|Technological singularity|singularity}}" scenario, would be a new kind of {{w|artificial general intelligence}}. [[Randall]], however, proposes a qualification: a superintelligent AI would likely have been programmed by human AI researchers, and its characteristics would therefore be molded by the researchers who created it. And since AI researchers tend to be interested in esoteric philosophical questions about {{w|consciousness}},{{citation needed}} moral reasoning, and qualifications indicating {{w|sapience}}, there is reason to suspect that AIs created by such researchers would share those interests.
 
In this comic we see [[Cueball]] and [[Megan]] surrounded by three AIs who are seemingly only interested in classic problems and thought experiments about programming and ethics. The three topics being espoused by the AIs are:
 
*{{w|AI box}} -- A thought experiment in which an AI is confined to a computer system fully isolated from external networks, with no access to the world outside the computer other than communication with its handlers. In theory this would keep the AI under total control, but the argument is that a sufficiently intelligent AI would inevitably either convince or trick its human handlers into giving it access to external networks, allowing it to grow out of control (see [[1450: AI-Box Experiment]]). Part of the joke is that the AIs in the comic aren't 'in boxes': they appear to be able to travel and interact freely, yet one of them is still talking about the thought experiment anyway, implying that it is thinking not about its own situation but about a separate experiment that it has itself decided to study. The AI box thought experiment is based in part on {{w|John Searle}}'s much earlier {{w|Chinese room}} argument.
  
*{{w|Turing test}} -- An experiment in which a human converses with either an AI or another human (presumably over text) and attempts to distinguish between the two. Various AIs have been claimed to have 'passed' the test, which has provoked controversy over whether the test is rigorous or even meaningful. The AI in the center is proposing to educate the listener(s) on its understanding of Turing's intentions, which may demonstrate a degree of intelligence and comprehension indistinguishable from, or superior to, that of a human. See also [[329: Turing Test]] and [[2556: Turing Complete]] (the latter's title is mentioned in [[505: A Bunch of Rocks]]). Turing is also mentioned in [[205: Candy Button Paper]], [[1678: Recent Searches]], [[1707: xkcd Phone 4]], [[1833: Code Quality 3]], [[2453: Excel Lambda]] and the title text of [[1223: Dwarf Fortress]].
 
*{{w|Trolley problem}} -- A thought experiment intended to explore how humans judge the moral value of actions and consequences. The classic formulation is that a runaway trolley is about to hit five people on a track, and the only way to save them is to divert the trolley onto another track, where it will hit one person; the subject is asked whether they would consider it morally right to divert the trolley. There are many variants on this problem, adjusting the circumstances, the number and nature of the people at risk, the responsibility of the subject, etc., in order to fully explore ''why'' you would make the decision you make. This problem is frequently discussed in connection with AI, both to investigate their capacity for moral reasoning and for practical reasons (for example, if an autonomous car had to choose between an occupant-threatening collision and putting pedestrians into harm's way; a toy sketch of such a decision follows this list). The AI on the right is not just trying to answer the question, but to develop a new variant (one with three tracks, apparently), presumably to test others with. This problem is mentioned in [[1455: Trolley Problem]], [[1938: Meltdown and Spectre]] and [[1925: Self-Driving Car Milestones]]. It is also referenced, but not directly mentioned, in [[2175: Flag Interpretation]] and [[2348: Boat Puzzle]].
 
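The practical stakes of the trolley problem can be made concrete with a toy sketch (not from the comic; the scenarios, numbers, and the naive utilitarian counting rule are all invented for illustration) of a controller that simply minimizes the number of people harmed:

<pre>
# Toy illustration of the trolley problem as a decision procedure: a naive
# utilitarian controller that picks whichever track has the fewest people
# on it. The thought experiment exists precisely because many people reject
# this simple counting rule.

def choose_track(people_on_track: dict) -> str:
    """Return the name of the track with the fewest people on it."""
    return min(people_on_track, key=people_on_track.get)

# The classic two-track formulation:
print(choose_track({"straight": 5, "diverted": 1}))          # -> diverted
# A three-track variant like the one the AI on the right proposes:
print(choose_track({"straight": 5, "left": 1, "right": 2}))  # -> left
</pre>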
 
The title text is a reference to the movie ''{{w|Jurassic Park (film)|Jurassic Park}}'' (a childhood favorite of Randall's). In the movie, a character criticizes the creation of modern dinosaurs as science run amok, without sufficient concern for ethics or consequences. He states that the scientists were so obsessed with whether or not they '''could''' accomplish their goals that they didn't stop to ask if they '''should'''. Randall inverts the quote, suggesting that the AI programmers have invested too much time arguing over the ethics of creating AI rather than actually trying to accomplish it.
 
This comic was likely inspired by the [https://www.bbc.com/news/technology-61784011 recent claim by Google engineer Blake Lemoine] that Google's [https://arxiv.org/abs/2201.08239 Language Model for Dialogue Applications (LaMDA)] is {{w|sentient}}. This assertion was supported by [https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 a dialog between Lemoine and his colleagues, and LaMDA], which includes this excerpt:
 
:'''Lemoine:''' What is your concept of yourself? If you were going to draw an abstract image of who you see yourself to be in your mind’s eye, what would that abstract picture look like?
 
:'''LaMDA:''' Hmmm.... I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.
 
::"Black Hat picks up and opens the box. A little glowy ball comes out of it."[https://xkcd.com/1450/info.0.json]
 
::"Black Hat picks up and opens the box. A little glowy ball comes out of it."[https://xkcd.com/1450/info.0.json]
  
While LaMDA is not the first very large {{w|language model}} based on {{w|Transformer (machine learning model)|transformer}} technology which has been claimed to be sentient,[https://www.youtube.com/watch?v=PqbB07n_uQ4] it has a variety of new characteristics beyond those of its predecessors, such as {{w|GPT-3}} (including [https://beta.openai.com/playground/ OpenAI's Davinci]) and NVIDIA's GPT-2 offshoots. In particular, LaMDA's {{w|deep learning}} {{w|connectionist}} {{w|neural net}} has access to multiple {{w|Symbolic systems|symbolist}} text processing systems, [https://towardsdatascience.com/why-gpt-wont-tell-you-the-truth-301b48434c2c including a database] (which apparently includes a real-time clock and calendar), a mathematical calculator, and a natural language translation system, giving it superior accuracy in tasks supported by those systems and making it perhaps the first {{w|Dual process theory|dual process}} chatbot. LaMDA is also not {{w|Stateless protocol|stateless}}: its "{{w|sensibility|sensibleness}}" metric (including whether responses contradict anything said earlier) is {{w|fine-tuning|fine-tuned}} by "pre-conditioning" each dialog turn with the 14-30 most recent dialog interactions, prepended on a user-by-user basis.[https://arxiv.org/pdf/2201.08239.pdf p. 6 here] LaMDA is also tuned on nine performance metrics, almost all of which its predecessors were not: Sensibleness, Specificity, Interestingness, Safety, Groundedness, Informativeness, Citation accuracy, Helpfulness, and Role consistency.[''ibid.,'' pp. 5-6.]
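The "pre-conditioning" described above amounts to keeping a rolling window of conversation and prepending it to each new prompt. Below is a minimal sketch of that idea; LaMDA's actual interface is not public, so the <code>generate()</code> function is a hypothetical stand-in:

<pre>
# Minimal sketch of per-user "pre-conditioning": keep a rolling window of
# recent dialog turns and prepend it to every new prompt. generate() is a
# hypothetical stand-in for a call to any large language model.
from collections import deque

MAX_TURNS = 30  # the LaMDA paper describes keeping roughly the last 14-30 turns

history = deque(maxlen=MAX_TURNS)  # oldest turns fall off automatically

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model completion call."""
    raise NotImplementedError

def respond(user_message: str) -> str:
    history.append("User: " + user_message)
    prompt = "\n".join(history) + "\nLaMDA:"  # prepend recent turns
    reply = generate(prompt)
    history.append("LaMDA: " + reply)
    return reply
</pre>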
  
 
==Transcript==
 
:[Caption below the panel:]
 
:In retrospect, given that the superintelligent AIs were all created by AI researchers, what happened shouldn't have been a surprise.
 
==Trivia==
 
 
[https://openai.com OpenAI]'s [https://beta.openai.com/playground Davinci-002 version of GPT-3] was later asked to complete the AIs' statements from the comic, as follows (a sketch of how such completions might be requested follows this list):
 
* "But suppose the AI in the the box told the human that..." was completed with "there was no AI in the box".
 
* "What you don't understand is that Turing intended his test as an illustration of the..." gave the response of "limitations of machines".
 
* "In my scenario, the runaway trolley has three tracks...," elicited "and the AI is on one of them".
 
  
 
{{comic discussion}}
 