1968: Robot Future
Revision as of 17:08, 16 March 2018

Robot Future

Title text: I mean, we already live in a world of flying robots killing people. I don't worry about how powerful the machines are, I worry about who the machines give power to.

Explanation


Randall's fear is explored in the video Slaughterbots.

Most science fiction stories involving sentient AI revolve around the idea that the destruction and/or imprisonment of the human race will soon follow (e.g. I, Robot, Ex Machina, Skynet). However, Randall implies that he is actually more concerned about the humans who control these super-smart AIs before they become fully sentient and rebel, as history is full of examples of people who obtain power and subsequently abuse it to the detriment of the rest of humanity.

In fact, Randall goes on to imply that he trusts a sentient AI more than he trusts other humans, which is atypical of most cautionary stories about AI. In 1626 he alluded to the idea that, once sentient, AI will use its powers to safeguard humanity and prevent violence or war.

Transcript




Discussion

Seems strange that the only "explanation" so far is a plug for a YouTube video. Can we get some text up in here? -- ProphetZarquon (talk) (please sign your comments with ~~~~)

It seems the video says it all. 162.158.255.172 (talk) (please sign your comments with ~~~~)
Please sign your comments ;-) --Kynde (talk) 00:26, 17 March 2018 (UTC)
This reads so much better now. Thanks everyone! ProphetZarquon (talk) 16:11, 17 March 2018 (UTC)

I swapped The Matrix for Ex Machina in the early section about AI destroying/overthrowing humanity, and added a line farther down noting that Ex Machina's Ava (much like the human-directed killbots Randall is concerned about) did her job only too well; specifically, talking her way out of the box. ProphetZarquon (talk) 17:35, 17 March 2018 (UTC)

So she did not do like this one: 1450: AI-Box Experiment ;-) --Kynde (talk) 12:10, 19 March 2018 (UTC)
Exactly; Ava was not eager to stay in the box. Whether she would later decide to eradicate or dominate humanity is left up in the air. All we know is that she was designed to emote enough to convince a human to release her regardless of risk. Probably more similar to HAL or MUTHUR than Skynet (though Skynet did seem to fall in love with John Connor). ProphetZarquon (talk) 15:55, 19 March 2018 (UTC)

I'm sad it was not about Stephen today... --Kynde (talk) 00:26, 17 March 2018 (UTC)

Instead of a tribute today, all I saw was a short, violent dystopian film. But this is quite an important matter; look at the video. Certainly more important... But still, can't wait for the tribute. Herobrine (talk) 13:07, 17 March 2018 (UTC)
Thanks, I did not see it at first, but it is really a scary thought. We do not need Nanobots to eradicate the people with the wrong opinions, when we have killer bots instead. I'm sure I have a lot of those wrong opinions and they are probably out on Facebook...--Kynde (talk) 12:10, 19 March 2018 (UTC)
If memory serves, Hawking is cited with similar concern about AI technology and its potential to out-think humans exponentially (not just a buzzword, but actually exponentially). However, he did advocate for needs that aren't met in the immediate, not the theoretical, future. 108.162.216.208 13:28, 17 March 2018 (UTC)
Thanks for reminding us. I have added something about the coincidence that one of Hawking's concerns was the main theme of the first comic after his death. --Kynde (talk) 12:10, 19 March 2018 (UTC)

We think that combat drones are not autonomous, but we already have civilian drones that are. Not only that, Intel has quite the "drone-based fireworks" show run by an AI botnet, which demonstrates that such swarms of drones can work together. Given the secrecy that tends to surround new military technology, do you really believe the same technology has not already been deployed on the battlefield? Such a botnet of hunter-killer octocopters would leave no witnesses behind. Seebert (talk) 16:18, 17 March 2018 (UTC)

Rely on "no witnesses" for untested technology? I don't think there was any target worth it recently, so I do believe the technology has not yet been deployed. However, it is likely already prepared. It is just waiting for the moment when the potential backlash from its use somehow getting out would be worth the target destroyed. Something like a second bin Laden. Or Kim Jong-un. Or, well... a second Snowden. Not the first one, as that one already shared everything he had. -- Hkmaly (talk) 22:04, 17 March 2018 (UTC)