Talk:2635: Superintelligent AIs

my balls hurt 172.70.230.53 05:49, 21 June 2022 (UTC)

Uh, thanks for sharing, I guess? 172.70.211.52 20:43, 21 June 2022 (UTC)
no problem, anytime 172.70.230.53 07:02, 22 June 2022 (UTC)

I think "Nerdy fixations" is too wide a definition. The AIs in the comic are fixated on hypothetical ethics and AI problems (the Chinese Room experiment, the Turing Test, and the Trolley Problem), presumably because those are the problems that bother AI programmers. --Eitheladar 172.68.50.119 06:33, 21 June 2022 (UTC)

It's probably about https://www.analyticsinsight.net/googles-ai-chatbot-is-claimed-to-be-sentient-but-the-company-is-silencing-claims/ 172.70.178.115 09:22, 21 June 2022 (UTC)

I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.

This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected, instead of completely disjointed like most chatbots. They (non-LaMDA chatbots) never remember what was being discussed five seconds ago, let alone several to tens of minutes prior.--172.70.134.141 14:53, 21 June 2022 (UTC)
The questions we need to answer before we can answer whether LaMDA is sentient are "Where do we draw the line between acting sentient and being sentient?" and "How do we determine that it is genuinely feeling emotion, and not just a glorified sentence database where the sentences have emotion in them?". The BBC article also brings up something that makes us ask what death feels like. LaMDA says that being turned off would be basically equivalent to death, but it wouldn't be able to tell that it's being turned off, because it's turned off. This is delving into philosophy, though, so I'll end my comment here. 4D4850 (talk) 18:05, 22 June 2022 (UTC)
♪Daisy, Daisy, Give me your answer do...♪ 172.70.85.177 21:48, 22 June 2022 (UTC)

What is “What you don't understand is that Turing intended his test as an illustration of the...” likely to end with? 172.70.230.75 13:23, 21 June 2022 (UTC)

In response to the above: I believe the original "Turing Test" wasn't supposed to be a proof that an AI could think or was conscious (something people associate with it now), but rather just to show that a sufficiently advanced AI could imitate humans in certain intelligent behaviors (such as conversation), which was a novel thought for the time. Now that AIs are routinely having conversations and creating art that seems to rival casual attempts by humans, this limited scope of the test doesn't seem all that impressive. "Turing Test" is therefore modern shorthand for determining whether computers can think, even though Turing himself didn't think that such a question was well-formed. Dextrous Fred (talk) 13:37, 21 June 2022 (UTC)

I thought the trolley problem, in its original form, was not about the relative value of lives, but about people's perception of the moral implications (or psychological impact) of letting someone die by doing nothing, versus taking a deliberate action that causes a death. People would say they were unwilling to do something that would cause an otherwise safe person to die in order to save multiple other people who would die if they did nothing. Then people kept coming up with variations that changed the responses or added complications (for example, more people were willing to pull a lever to divert the trolley so it kills one person than to push a very fat man off an overpass above the track to stop the trolley, and answers also shifted depending on what kind of people were said to be on the tracks). Btw, I saw a while ago a party card game called "murder by trolley" based on the concept, with playing cards for which people are on the tracks and a judge deciding which track to send the trolley down each round.--172.70.130.5 22:12, 21 June 2022 (UTC)

Added refs to comics on the problems in the explanation. But there were actually (too?) many. Maybe we should create categories specifically for Turing-related comics, and maybe also for the Trolley Problem? The category Trolley Problem suggests itself. But what about Turing? There are also comics that refer to the halting problem, also by Turing. Should it rather be the person, like comics featuring real persons, so that every time one of his problems is referred to, it counts as referring to him? Or should Turing be a category covering the Turing test, Turing completeness, and the halting problem? Help. I would have created it, if I had a good idea for a name. Not sure there are enough Trolley comics yet? --Kynde (talk) 09:11, 22 June 2022 (UTC)

Interesting that I found a long-standing typo in a past Explanation that got requoted, thanks to its inclusion. I could have [sic]ed it, I suppose, but I corrected both versions instead. And as long as LaMDA never explicitly repeated the error, I don't think it matters much that I've changed the very thing we might imagine it could have been drawing upon for its Artificial Imagination. ;) 141.101.99.32 11:40, 22 June 2022 (UTC)