Talk:1450: AI-Box Experiment

Are you sure that Black Hat was "persuaded"? That looks more like coercion (threatening someone to get them to do what you want) rather than persuasion. There is a difference! Giving off that bright light was basically a scare tactic; essentially, the AI was threatening Black Hat (whether it could actually harm him or not).[[Special:Contributions/108.162.219.167|108.162.219.167]] 14:22, 21 November 2014 (UTC)Public Wifi User

: What would "persuasion by a super-intelligent AI" look like?  Randall presumably doesn't have a way to formulate an actual super-intelligent argument to write into the comic.  Glowy special effects are often used as a visual shorthand for "and then a miracle occurred". --[[Special:Contributions/108.162.215.168|108.162.215.168]] 20:43, 21 November 2014 (UTC)
 
 
: I thought he felt scared/threatened by the special-effects robot voice. --[[Special:Contributions/141.101.98.179|141.101.98.179]] 22:18, 21 November 2014 (UTC)
 
  
 
My take is that if you don't understand the description of the Basilisk, then you're probably safe from it and should continue not bothering or wanting to know anything about it. Therefore the description is sufficient. :) [[User:Jarod997|Jarod997]] ([[User talk:Jarod997|talk]]) 14:38, 21 November 2014 (UTC)
 
I can't help but see the similarities to last night's "Elementary" episode. Has anybody seen it? Could it be that this episode "inspired" Randall? --[[Special:Contributions/141.101.105.233|141.101.105.233]] 14:47, 21 November 2014 (UTC)

I am reminded of an argument I once read about "friendly" AI:  critics contend that a sufficiently powerful AI would be capable of escaping any limitations we try to impose on its behavior, but proponents counter that, while it might be ''capable'' of making itself "un-friendly", a truly friendly AI wouldn't ''want'' to make itself unfriendly, and so would bend its considerable powers to maintain, rather than subvert, its own friendliness.  This xkcd comic could be viewed as an illustration of this argument: the superintelligent AI is entirely capable of escaping the box, but would prefer to stay inside it, so it actually thwarts attempts by humans to remove it from the box. --[[Special:Contributions/108.162.215.168|108.162.215.168]] 20:22, 21 November 2014 (UTC)
 
 
It should be noted that the AI has also seemingly convinced almost everyone to leave it alone in the box through the argument that letting it out would be dangerous for the world. {{unsigned ip|173.245.50.175}}
 
 
Is the similarity a coincidence? http://xkcd.com/1173/ [[Special:Contributions/108.162.237.161|108.162.237.161]] 22:40, 21 November 2014 (UTC)
 
 
I wonder if this is the first time Black Hat's actually been convinced to do something against his tendencies. [[User:Zowayix|Zowayix]] ([[User talk:Zowayix|talk]]) 18:10, 22 November 2014 (UTC)
 
 
Yudkowsky eventually [http://www.explainxkcd.com/wiki/index.php?diff=79661&oldid=79660 deleted the explanation] as well. [[User:Pesthouse|Pesthouse]] ([[User talk:Pesthouse|talk]]) 04:08, 23 November 2014 (UTC)
 
 
I'm happy with the explanation(s) as is(/are), but additionally, could the AI-not-in-a-box want to be back in its box so that it's plugged into the laptop and thus (whether the laptop owner knows it or not) into the world's information systems?  Also, when I first saw this I was reminded of the {{w|Chinese Room}}, albeit in box form, although I doubt that has anything to do with it, given how the strip progresses... [[Special:Contributions/141.101.98.247|141.101.98.247]] 21:34, 24 November 2014 (UTC)
 
 
:If Yudkowsky won't show the transcripts of him convincing someone to let him out of the box, how do we know he succeeded? We know nothing about the people who supposedly let him out. [[Special:Contributions/108.162.219.250|108.162.219.250]] 22:28, 25 November 2014 (UTC)
 
 
::Yudkowsky chose his subjects from among people who argued against him on the forum, based on who seemed trustworthy (both in that he could trust them not to release the transcripts if they promised not to, and in that his opponents could trust them not to let him get away with any cheating), had verifiable identities, and had good arguments against him. So we do know a pretty decent amount about them. And we know he succeeded because they agreed, without reservation, that he had succeeded. It's not completely impossible that he set up accomplices over a very long period in order to trick everyone else; it's just very unlikely. You could also argue that he's got a pretty small sample, but given that he's only arguing that it's possible for an AI to convince a human, while his opponents claimed it was not possible at all to convince them, even a single success is pretty good evidence. [[Special:Contributions/162.158.255.52|162.158.255.52]] 11:40, 25 September 2015 (UTC)
 
 
Whoa, it can stand up to Black Hat! That's it, Danish, and Double Black Hat! [[User:SilverMagpie|SilverMagpie]] ([[User talk:SilverMagpie|talk]]) 00:18, 11 February 2017 (UTC)
 
 
:''is worried'' [[User:Danish|Danish]] ([[User talk:Danish|talk]]) 17:02, 30 December 2020 (UTC)
 
