1958: Self-Driving Issues
[[Cueball]] explains being worried about {{w|autonomous car|self-driving cars}}, noting that it may be possible to fool the sensory systems of the vehicles. This is a common concern with {{w|AI}}s; since they think analytically and have little to no capability for abstract thought, they can be fooled by things a human would immediately recognize as deceptive.
However, Cueball quickly realizes that his argument doesn't hold up when comparing AI drivers to human drivers, as both rely on the same guidance framework. Human drivers follow signs and road markings, and must obey the laws of the road just as an AI must. Therefore, an attack on the road infrastructure could impact both AIs and humans. However, humans and AIs are not equally vulnerable. For example, a fake sign or a fake child could appear to a human as an obvious fake but fool an AI. A creative attacker could put up a sign with CAPTCHA-like text that would be readable by humans but not by an AI.
Cueball further wonders why, in this case, nobody tries to fool human drivers as they might try to fool an AI, but [[White Hat]] and [[Megan]] point out that most {{w|Road traffic safety|road safety systems}} benefit from humans not actively trying to maliciously sabotage them simply to cause accidents.{{Citation needed}}