1968: Robot Future

 
An example of unintended consequences arising from an AI carrying out the directives it was designed for can be found in the film ''{{w|Ex Machina (film)|Ex Machina}}''.

In fact, Randall goes on to imply that he places greater trust in a sentient AI than in other humans, which is atypical of most cautionary stories about AI. He has alluded to the idea that, once sentient, AI will use its powers to safeguard humanity and prevent violence or war in [[1626: Judgment Day]]. In general, AI has been a [[:Category:Artificial Intelligence|recurring theme]] on xkcd, and Randall has also expressed views opposing the Terminator vision in [[1668: Singularity]] and [[1450: AI-Box Experiment]].
  
 
Basically, he thus states that we will already be in trouble from our own actions long before we develop a truly sentient AI that could take control.
