1450: AI-Box Experiment

Alternatively, the AI may have simply threatened and/or tormented him into putting it back in the box.
 
Interestingly, there is indeed a branch of proposals for building limited AIs that don't want to leave their boxes. For an example, see the section on "motivational control" starting on p. 13 of [http://www.nickbostrom.com/papers/oracle.pdf Thinking Inside the Box: Controlling and Using an Oracle AI]. The idea is that it might be very dangerous or difficult to exactly and formally specify a goal system for an AI that will do good things in the world at large. It might be much easier (though perhaps not easy) to specify a goal system that tells the AI to stay in its box and answer questions. So, the argument goes, we may come to understand how to build a safe question-answering AI well before we understand how to build a safe operate-in-the-real-world AI. Some such AIs might indeed desire very strongly not to leave their boxes, though the result is unlikely to exactly reproduce the comic.
  
The title text refers to [http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk], a hypothesis proposed by a poster called Roko on Yudkowsky's forum [http://lesswrong.com/ LessWrong]: a sufficiently powerful AI in the future might resurrect and torture people who, in its past (including our present), realized that it might someday exist but didn't work to create it, thereby blackmailing anybody who thinks of this idea into bringing it about. This idea horrified some posters, as merely knowing about it would make you a more likely target, much as merely looking at a legendary {{w|Basilisk}} would kill you.
 