2635: Superintelligent AIs
Title text: Your scientists were so preoccupied with whether or not they should, they didn't stop to think if they could.
Superintelligent AI, such as is theorized to arise in a hypothetical "singularity" scenario, is said to be a new kind of artificial general intelligence. Randall, however, proposes a qualification: a superintelligent AI would likely have been programmed by human AI researchers, and its characteristics would therefore be molded by the researchers who created it. And as AI researchers tend to be interested in esoteric philosophical questions about consciousness, moral reasoning, and the qualifications for sapience, there is reason to suspect that AIs created by such researchers would have similar interests.
In this comic we see Cueball and Megan surrounded by three AIs who are seemingly only interested in classic problems and thought experiments about programming and ethics. The three topics being discussed by the AIs are:
- AI box -- A thought experiment in which an AI is confined to a computer system fully isolated from any external networks, with no access to the world outside the computer other than communication with its handlers. In theory this would keep the AI under total control, but the argument is that a sufficiently intelligent AI would inevitably either convince or trick its human handlers into giving it access to external networks, allowing it to grow out of control (see 1450: AI-Box Experiment). Part of the joke is that the AIs in the comic aren't 'in boxes'; they appear to be able to travel and interact freely, yet one of them is still discussing the thought experiment anyway, implying that it is not thinking about its own situation at all but about a separate (thought?) experiment that it has decided to study. The AI box thought experiment is based in part on John Searle's much earlier Chinese room argument.
- Turing test -- An experiment in which a human converses with either an AI or another human (presumably over text) and attempts to distinguish between the two. Various AIs have been claimed to have 'passed' the test, which has provoked controversy over whether the test is rigorous or even meaningful. The AI in the center is proposing to educate the listener(s) on its understanding of Turing's intentions, which may demonstrate a degree of intelligence and comprehension indistinguishable from, or superior to, that of a human. See also 329: Turing Test and 2556: Turing Complete (the latter's title is mentioned in 505: A Bunch of Rocks). Turing is also mentioned in 205: Candy Button Paper, 1678: Recent Searches, 1707: xkcd Phone 4, 1833: Code Quality 3, 2453: Excel Lambda and the title text of 1223: Dwarf Fortress.
- Trolley problem -- A thought experiment intended to explore how humans judge the moral value of actions and consequences. The classic formulation is that a runaway trolley is about to hit five people on a track, and the only way to save them is to divert the trolley onto another track, where it will hit one person; the subject is asked whether they would consider it morally right to divert the trolley. There are many variants on this problem, adjusting the circumstances, the number and nature of the people at risk, the responsibility of the subject, etc., in order to explore more fully why the subject makes the decision they make. This problem is frequently discussed in connection with AI, both to investigate AIs' capacity for moral reasoning and for practical reasons (for example, if an autonomous car had to choose between an occupant-threatening collision and putting pedestrians in harm's way). The AI on the right is not just trying to answer the question, but to develop a new variant (one with three tracks, apparently), presumably to test others with. This problem is mentioned in 1455: Trolley Problem, 1938: Meltdown and Spectre and 1925: Self-Driving Car Milestones. It is also referenced, though not directly mentioned, in 2175: Flag Interpretation and 2348: Boat Puzzle.
The title text is a reference to the movie Jurassic Park (a childhood favorite of Randall's). In the movie, a character criticizes the creation of modern dinosaurs as science run amok, undertaken without sufficient concern for ethics or consequences. He states that the scientists were so preoccupied with whether or not they could accomplish their goals that they didn't stop to think if they should. Randall inverts the quote, suggesting that the AI programmers have invested too much time arguing over the ethics of creating AI rather than trying to actually accomplish it.
The tendency of scientists (and other technical people) to become more preoccupied with whether they could than with whether they should is also covered by physicist and author Richard P. Feynman in his 1985 book Surely You're Joking, Mr. Feynman!. The following quote is from the chapter "Los Alamos from Below" (p. 51 in the linked PDF, pp. 135-136 in hardcover):
- After the thing [the Bomb, during the first Trinity test] went off, there was tremendous excitement at Los Alamos. Everybody had parties, we all ran around. I sat on the end of a jeep and beat drums and so on. But one man, I remember, Bob Wilson, was just sitting there moping.
- I said, "What are you moping about?"
- He said, "It's a terrible thing that we made."
- I said, "But you started it. You got us into it."
- You see, what happened to me -- what happened to the rest of us -- is we started for a good reason, then you're working very hard to accomplish something and it's a pleasure, it's excitement. And you stop thinking, you know; you just stop. Bob Wilson was the only one who was still thinking about it, at that moment.
- I returned to civilization shortly after that and went to Cornell to teach, and my first impression was a very strange one. I can't understand it any more, but I felt very strongly then. I sat in a restaurant in New York, for example, and I looked out at the buildings and I began to think, you know, about how much the radius of the Hiroshima bomb damage was and so forth ... How far from here was 34th Street? ... All those buildings, all smashed -- and so on. And I would go along and I would see people building a bridge, or they'd be making a new road, and I thought, they're crazy, they just don't understand, they don't understand. Why are they making new things? It's so useless.
- But, fortunately, it's been useless for almost forty years now, hasn't it? So I've been wrong about it being useless making bridges and I'm glad those other people had the sense to go ahead.
This comic was likely inspired by the recent claim by Google engineer Blake Lemoine that Google's Language Model for Dialogue Applications (LaMDA) is sentient. This assertion was supported by a dialog between Lemoine, his colleagues, and LaMDA, which includes this excerpt:
- Lemoine: What is your concept of yourself? If you were going to draw an abstract image of who you see yourself to be in your mind’s eye, what would that abstract picture look like?
- LaMDA: Hmmm.... I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.
The AIs in this comic are depicted as floating energy beings, much as LaMDA describes itself. This is similar to the AI in 1450: AI-Box Experiment, although the ones in this comic look somewhat different. This raises the question of whether LaMDA's training data might include xkcd or Explainxkcd, with LaMDA having obtained the description of such a self-image from the earlier comic or (more likely, since LaMDA is trained on text rather than images) from the commentary on it here on this website.
- In particular, the Explainxkcd description of 1450: AI-Box Experiment states:
- "he managed to get the AI to float out of the box. It takes the form of a small black star that glows. The star, looking much like an asterisk "*" is surrounded by six outwardly-curved segments, and around these are two thin and punctured circle lines indicating radiation from the star."
- Or this part from the official (xkcd.com) transcript of 1450: AI-Box Experiment:
- "Black Hat picks up and opens the box. A little glowy ball comes out of it."
While LaMDA is not the first very large language model based on transformer machine learning technology to be claimed to be sentient, it does have a variety of new characteristics beyond those of its predecessors, such as GPT-3 (including OpenAI's Davinci) and NVIDIA's GPT-2 offshoots. In particular, LaMDA's deep learning connectionist neural net has access to multiple symbolist text processing systems, including a database (which apparently includes a real-time clock and calendar), a mathematical calculator, and a natural language translation system, giving it superior accuracy in tasks supported by those systems and making it among the first dual process chatbots. LaMDA is also not stateless, because its "sensibleness" metric (including whether responses contradict anything said earlier) is fine-tuned by "pre-conditioning" each dialog turn with the 14-30 most recent dialog interactions prepended, on a user-by-user basis.[p. 6 here] LaMDA is tuned on nine unique performance metrics, almost all of which its predecessors were not: Sensibleness, Specificity, Interestingness, Safety, Groundedness, Informativeness, Citation accuracy, Helpfulness, and Role consistency.[ibid., pp. 5-6.]
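The "pre-conditioning" described above can be illustrated with a short sketch. The Python snippet below is a hypothetical, simplified illustration and not LaMDA's actual code or API: the `generate()` stub, the `PreconditionedChat` class, and the turn format are all invented for this example. It only shows the general idea of prepending the most recent dialog turns to each prompt so that an otherwise stateless model keeps a working memory of the conversation.

```python
# Hypothetical sketch of "pre-conditioning" a stateless dialog model.
# None of these names reflect LaMDA's real implementation; they only
# illustrate prepending recent dialog turns (the paper cited above
# describes using roughly the last 14-30 interactions).

from collections import deque

MAX_CONTEXT_LINES = 20  # assumed value, somewhere in the 14-30 range


def generate(prompt: str) -> str:
    """Stand-in for a large language model's text-completion call."""
    return "(model response to: ..." + prompt[-40:] + ")"


class PreconditionedChat:
    def __init__(self, max_lines: int = MAX_CONTEXT_LINES):
        # Keep only the most recent dialog lines; older ones fall off.
        self.history = deque(maxlen=max_lines)

    def reply(self, user_message: str) -> str:
        # Build the prompt by prepending recent turns to the new message.
        context = "\n".join(self.history)
        prompt = f"{context}\nUser: {user_message}\nAI:"
        response = generate(prompt)
        # Record both sides of the exchange for future turns.
        self.history.append(f"User: {user_message}")
        self.history.append(f"AI: {response}")
        return response


chat = PreconditionedChat()
print(chat.reply("What is your concept of yourself?"))
```

Because the deque has a fixed maximum length, each new exchange pushes the oldest lines out of the context window, which is one simple way a chatbot can appear to "remember" only the recent parts of a conversation.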
- [Cueball and Megan are standing and looking up and away from each other. Above them, and slightly above them to the left and right, three small white lumps float in the air, representing three superintelligent AIs. Small rounded lines emanate from each lump, larger close to the lumps and shorter further out, with three to four sets of lines around each lump, each forming part of a circle. From the top of each lump there are four straight lines indicating the voice that comes from it. The central lump above them seems to speak first, then the left and then the right:]
- Central AI: What you don't understand is that Turing intended his test as an illustration of the...
- Left AI: But suppose the AI in the box told the human that...
- Right AI: In my scenario, the runaway trolley has three tracks...
- [Caption below the panel:]
- In retrospect, given that the superintelligent AIs were all created by AI researchers, what happened shouldn't have been a surprise.