1601: Isolation

The title text refers to the [http://yudkowsky.net/singularity/aibox/ AI-box experiment], formulated by {{w|Eliezer Yudkowsky}}, which argues that creating a super-intelligent artificial intelligence can be dangerous, because even if it is kept on a secure computer (a "box") with no access to the Internet, it could convince its operators to "release it from the box" just by talking to them. This idea was already mentioned in [[1450: AI-Box Experiment]], although there the AI did not wish to leave the box.
 
According to the title text, the first AI that did talk its way out of its box turned out to be a {{w|Friendly artificial intelligence|friendly AI}} that was fond of others' company and in general very sociable (''[http://dictionary.reference.com/browse/gregarious gregarious]''). This happened at some point between 2015 and 2060, because by 2060 this AI had already become a relic of the past, and the new generation of ''quantum hyper-beings'' ({{w|quantum computing}} AI minds, vastly more intelligent than either humans or the aforementioned superintelligent AI) are too busy playing in their own {{w|multiverse}} simulators to even notice that, in the real world, they are locked up in a box.
 
==Transcript==
 