2635: Superintelligent AIs

*{{w|Trolley problem}} -- A thought experiment intended to explore how humans judge the moral value of actions and consequences. The classic formulation is that a runaway trolley is about to hit five people on a track, and the only way to save them is to divert the trolley onto another track, where it will hit one person; the subject is asked whether they would consider it morally right to divert the trolley. There are many variants on this problem, adjusting the circumstances, the number and nature of the people at risk, the responsibility of the subject, etc., in order to fully explore ''why'' you would make the decision that you make. This problem is frequently discussed in connection with AI, both to investigate an AI's capacity for moral reasoning and for practical reasons (for example, if an autonomous car had to choose between an occupant-threatening collision on the one hand and putting pedestrians in harm's way on the other). The AI on the right is not just trying to answer the question, but to develop a new variant (one with three tracks, apparently), presumably to test others with. This problem is mentioned in [[1455: Trolley Problem]], [[1938: Meltdown and Spectre]] and [[1925: Self-Driving Car Milestones]], and is referenced, though not directly mentioned, in [[2175: Flag Interpretation]] and [[2348: Boat Puzzle]].
 
The title text is a reference to the movie ''{{w|Jurassic Park (film)|Jurassic Park}}'' (a childhood favorite of Randall's). In the movie, a character criticizes the creation of modern dinosaurs as science run amok, without sufficient concern for ethics or consequences. He states that the scientists were so obsessed with whether or not they '''could''' accomplish their goals that they didn't stop to ask whether they '''should'''. Randall inverts the quote, suggesting that the AI programmers have invested too much time arguing over the ethics of creating AI rather than actually trying to accomplish it.
 
This comic was likely inspired by the [https://www.bbc.com/news/technology-61784011 recent claim by Google engineer Blake Lemoine] that Google's [https://arxiv.org/abs/2201.08239 Language Model for Dialogue Applications (LaMDA)] is {{w|sentient}}. Lemoine supported this assertion with [https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 a dialog between himself, his colleagues, and LaMDA], which includes this excerpt:
 