Title text: I mean, we already live in a world of flying robots killing people. I don't worry about how powerful the machines are, I worry about who the machines give power to.
Randall's fear is explored in the video Slaughterbots.
Seems strange that the only "explanation" so far is a plug for a YouTube video. Can we get some text up in here? -- ProphetZarquon (talk) (please sign your comments with ~~~~)
- It seems the video says it all. 126.96.36.199 (talk) (please sign your comments with ~~~~)
- Please sign your comments ;-) --Kynde (talk) 00:26, 17 March 2018 (UTC)
- This reads so much better now. Thanks everyone! ProphetZarquon (talk) 16:11, 17 March 2018 (UTC)
I swapped The Matrix for Ex Machina in the early section about AI destroying/overthrowing humanity, & added a line farther down noting that Ex Machina's Ava (much like the human-directed killbots Randall is concerned about) did her job only too well: specifically, talking her way out of the box. ProphetZarquon (talk) 17:35, 17 March 2018 (UTC)
- So she did not do like this one: 1450: AI-Box Experiment ;-) --Kynde (talk) 12:10, 19 March 2018 (UTC)
- Exactly; Ava was not eager to stay in the box. Whether she would later decide to eradicate or dominate humanity is left up in the air. All we know is that she was designed to emote well enough to convince a human to release her regardless of risk. Probably more similar to HAL or MUTHUR than Skynet (though Skynet did seem to fall in love with John Connor). ProphetZarquon (talk) 15:55, 19 March 2018 (UTC)
I'm sad it was not about Stephen today... --Kynde (talk) 00:26, 17 March 2018 (UTC)
- Instead of a tribute today, all I saw was a short, violent dystopian film. But this is quite an important matter; watch the video. Certainly more important... But still, can't wait for the tribute. Herobrine (talk) 13:07, 17 March 2018 (UTC)
- Thanks, I did not see it at first, but it is really a scary thought. We do not need nanobots to eradicate the people with the wrong opinions when we have killer bots instead. I'm sure I have a lot of those wrong opinions, and they are probably out on Facebook... --Kynde (talk) 12:10, 19 March 2018 (UTC)
- If memory serves, Hawking is cited with similar concern about AI technology and its potential to outthink humans exponentially (not just a buzzword, but actually exponentially). However, he did advocate for addressing needs that aren't met in the immediate future, rather than the theoretical one. 188.8.131.52 13:28, 17 March 2018 (UTC)
- Thanks for reminding us. I have added something about the coincidence that one of Hawking's concerns was the main theme of the first comic after his death. --Kynde (talk) 12:10, 19 March 2018 (UTC)
We think of combat drones as not being autonomous, but we already have civilian drones that are. Not only that, Intel has quite the "drone-based fireworks" show built on an AI botnet, which demonstrates that such swarms of drones can work together. Given the tendency toward military secrecy surrounding new technology, do you really believe the same technology has not already been deployed on the battlefield? Such a botnet of hunter-killer octocopters would leave no witnesses behind. Seebert (talk) 16:18, 17 March 2018 (UTC)
- Rely on "no witnesses" for untested technology? I don't think there was any target worth it recently. So I do believe the technology has not yet been deployed. However, it is likely already prepared. It is just waiting for the moment when the potential backlash, should its use somehow get out, would be worth the target destroyed. Something like a second bin Laden. Or Kim Jong-un. Or, well... a second Snowden. Not the first one, as that one already shared everything he had. -- Hkmaly (talk) 22:04, 17 March 2018 (UTC)