Difference between revisions of "Talk:2635: Superintelligent AIs"

Revision as of 07:03, 22 June 2022


my balls hert 172.70.230.53 05:49, 21 June 2022 (UTC)

:Uh, thanks for sharing, I guess? 172.70.211.52 20:43, 21 June 2022 (UTC)
::no problem, anytime 172.70.230.53 07:02, 22 June 2022 (UTC)

I think "Nerdy fixations" is too wide a definition. The AIs in the comic are fixated on hypothetical ethics and AI problems (the Chinese Room experiment, the Turing Test, and the Trolley Problem), presumably because those are the problems that bother AI programmers. --Eitheladar 172.68.50.119 06:33, 21 June 2022 (UTC)

It's probably about https://www.analyticsinsight.net/googles-ai-chatbot-is-claimed-to-be-sentient-but-the-company-is-silencing-claims/ 172.70.178.115 09:22, 21 June 2022 (UTC)

I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.

This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected instead of completely disjoint like most chatbots. They (non-LaMDA chatbots) never remember what was being discussed 5 seconds ago, let alone a few minutes to tens of minutes prior.--172.70.134.141 14:53, 21 June 2022 (UTC)

What is “What you don't understand is that Turing intended his test as an illustration of the...” likely to end with? 172.70.230.75 13:23, 21 June 2022 (UTC)

In response to the above: I believe the original "Turing Test" wasn't supposed to be a proof that an AI could think or was conscious (something people associate with it now), but rather just a demonstration that a sufficiently advanced AI could imitate humans in certain intelligent behaviors (such as conversation), which was a novel thought for the time. Now that AIs are routinely having conversations and creating art that seems to rival casual attempts by humans, this limited scope of the test doesn't seem all that impressive. "Turing Test" has therefore become a modern shorthand for determining whether computers can think, even though Turing himself didn't think that such a question was well-formed. Dextrous Fred (talk) 13:37, 21 June 2022 (UTC)

I thought the trolley problem, in its original form, was not about the relative value of lives, but about people's perception of the relative moral implications (or the psychological impact) of letting someone die by doing nothing versus taking affirmative action that causes a death. People would say they were unwilling to do something that would cause an originally safe person to die in order to save multiple other people who would die if they did nothing. But then people kept coming up with variations that changed the responses or added complications (for example, they found more people were willing to pull a lever to change the track, killing one person, than to do something like pushing a very fat man off an overpass above the track to stop the trolley; other variants specify something about what kind of people are on the track). Btw, I saw a while ago a party card game called "murder by trolley" based on the concept, with playing cards for which people are on the tracks and a judge deciding which track to send the trolley down each round.--172.70.130.5 22:12, 21 June 2022 (UTC)