Talk:2635: Superintelligent AIs
I think "Nerdy fixations" is too wide a definition. The AIs in the comic are fixated on hypothetical ethics and AI problems (the Chinese Room experiment, the Turing Test, and the Trolley Problem), presumably because those are the problems that bother AI programmers. --Eitheladar 172.68.50.119 06:33, 21 June 2022 (UTC)

It's probably about https://www.analyticsinsight.net/googles-ai-chatbot-is-claimed-to-be-sentient-but-the-company-is-silencing-claims/ 172.70.178.115 09:22, 21 June 2022 (UTC)

I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.

This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected, instead of completely disjointed like most chatbots' conversations. They (non-LaMDA chatbots) never remember what was being discussed 5 seconds ago, let alone several minutes or tens of minutes prior.--172.70.134.141 14:53, 21 June 2022 (UTC)
The questions we need to answer before we can answer whether LaMDA is sentient are "Where do we draw the line between acting sentient and being sentient?" and "How do we determine that it is genuinely feeling emotion, and not just a glorified sentence database where the sentences have emotion in them?". The BBC article also brings up something that makes us ask what death feels like. LaMDA says that being turned off would be basically equivalent to death, but it wouldn't be able to tell that it's being turned off, because it's turned off. This is delving into philosophy, though, so I'll end my comment here. 4D4850 (talk) 18:05, 22 June 2022 (UTC)
There's absolutely no difference between turning GPT-3 or LaMDA off and leaving them on and simply not typing anything more to them. Somewhat relatedly, closing a Davinci session deletes all of its memory of what you had been talking to it about. (Is that ethical?) 162.158.166.235 23:36, 22 June 2022 (UTC)
I hadn't thought about that (the first point you made)! I don't know the exact internal functioning of LaMDA, but I would assume it only actually runs when it receives textual input, unlike an actual human brain. For a human, a total lack of interaction would be considered unethical, but what about a machine that is only able to be self-aware (assuming a very low bar for self-awareness) when it receives interaction? That would be like a human who falls asleep when not talked to (but can still live forever, to ignore practical problems like food and water), yet still remembers what it was talking about when waking up, and wakes up whenever talked to again. Ignoring practical problems again, would that be ethical? I would argue yes, since it does not suffer from the lack of interaction (assuming humans don't need interaction when asleep, another practical problem). 4D4850 (talk) 19:58, 23 June 2022 (UTC)
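As an aside, here is a minimal sketch of how a stateless "chat session" with a text-completion model typically works. This is an assumption about systems like the GPT-3 Playground, not a description of LaMDA's actual internals, and generate() is a hypothetical stand-in for the model call:

# Minimal sketch of a stateless chat session (assumed architecture, not LaMDA's internals).
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to the language model.
    return "(model reply)"

transcript = []  # the session's only "memory": the text exchanged so far

def chat_turn(user_message: str) -> str:
    transcript.append("Human: " + user_message)
    reply = generate("\n".join(transcript) + "\nAI:")
    transcript.append("AI: " + reply)
    return reply

# Nothing runs between calls to chat_turn(), so "leaving the model on" and
# "turning it off" look identical from the inside; closing the session simply
# discards transcript, i.e. everything it "remembered".
print(chat_turn("Hello"))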
♪Daisy, Daisy, Give me your answer do...♪ 172.70.85.177 21:48, 22 June 2022 (UTC)
We also need a meaningful definition of sentience. Many people in this debate haven't looked at the first few senses of Merriam-Webster's definition of the word, which present a pretty low bar, IMHO; the same goes for the introductory sentences of Wikipedia's article. 172.69.134.131 22:18, 22 June 2022 (UTC)
Actually, there are many GPT-3 dialogs which experts have claimed constitute evidence of sentience, or of related qualities such as consciousness, self-awareness, and capacity for general intelligence, all of them abstract, poorly defined, and very probably empirically meaningless attributes. 172.69.134.131 22:19, 22 June 2022 (UTC)
I'd argue for the simplest and least restrictive definition of self-awareness: "Being aware of oneself in any capacity". I get that it isn't a fun definition, but it is more rigorous (to find out whether an AI is self-aware, just ask it what it is, or a question about itself, and if its response includes mention of itself, then it is self-aware). As such, I would argue that LaMDA is self-aware, but by my definition Davinci probably is as well, so it isn't a new accomplishment. 4D4850 (talk) 20:04, 23 June 2022 (UTC)
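For what it's worth, a toy sketch of the test as described (ask the system a question about itself and check whether the reply mentions itself); ask_model is a hypothetical placeholder for whichever chatbot is being evaluated:

# Toy illustration of the self-reference test proposed above (deliberately low bar).
def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: a real test would call the chatbot under evaluation.
    return "I am a large language model trained to answer questions."

def mentions_itself(reply: str) -> bool:
    # Treat any first-person reference as "mention of itself".
    self_terms = {"i", "me", "my", "myself", "i'm"}
    words = [w.strip(".,!?").lower() for w in reply.split()]
    return any(w in self_terms for w in words)

print(mentions_itself(ask_model("What are you?")))  # True for almost any chatbot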
I'm fairly sure that the model itself is almost certainly not sentient, even by the much lower bar presented by the strict dictionary definition. Rather, it seems much more likely to me that, in order to continue texts involving characters, the model must in turn learn to create a model of some level of humanlike mind, even if a very loose and abstract one. Somdudewillson (talk) 22:52, 22 June 2022 (UTC)
Have you actually looked at the dictionary definitions? How is a simple push-button switch connected to a battery and a lamp not "responsive to sense impressions"? How is a simple motion sensor not "aware" of whether something is moving in front of it? How is the latest cellphone's camera not as finely sensitive to visual perception as a typical human eye? Wikipedia's definition, "the capacity to experience feelings and sensations" is similarly met by simple devices. The word doesn't mean what everyone arguing about it thinks it means. 172.69.134.131 23:04, 22 June 2022 (UTC)
Or, it doesn't mean much at all, to start with. 172.70.90.173 11:29, 23 June 2022 (UTC)

What is “What you don't understand is that Turing intended his test as an illustration of the...” likely to end with? 172.70.230.75 13:23, 21 June 2022 (UTC)

The ease with which someone at the other end of a teletype can trick you into believing they are male instead of female, or vice-versa. See Turing test. See also below. 172.69.134.131 22:18, 22 June 2022 (UTC)

In response to the above: I believe the original "Turing Test" wasn't supposed to be a proof that an AI could think or was conscious (something people associate with it now), but rather just to show that a sufficiently advanced AI could imitate humans in certain intelligent behaviors (such as conversation), which was a novel thought for the time. Now that AI are routinely having conversations and creating art which seems to rival casual attempts by humans, this limited scope of the test doesn't seem all that impressive. "Turing Test" therefore is a modern shorthand for determining whether computers can think, even though Turing himself didn't think that such a question was well-formed. Dextrous Fred (talk) 13:37, 21 June 2022 (UTC)

I thought the trolley problem in its original form was not about the relative value of lives, but about people's perception of the relative moral implications, or the psychological impact, of letting someone die by doing nothing versus taking affirmative action that causes a death. People would say they were unwilling to do something that would cause an originally safe person to die in order to save multiple other people who would die if they did nothing. But then people kept coming up with variations that changed the responses or added complications (for example, more people would be willing to pull a lever to divert the trolley onto a track where it kills one person than to push a very fat man off an overpass above the track to stop the trolley, or the variation specifies something about what kind of people are on the tracks). Btw, I saw a while ago a party card game called "murder by trolley" based on the concept, with playing cards for which people are on the tracks and a judge deciding which track to send the trolley down each round.--172.70.130.5 22:12, 21 June 2022 (UTC)

Added refs to comics on the problems in the explanation. But there were actually (too?) many. Maybe we should create categories especially for Turing-related comics, and maybe also for the Trolley Problem? The name Category: Trolley Problem suggests itself. But what about Turing? There are also comics that refer to the halting problem, also by Turing. Should it rather be the person, like the categories for comics featuring real persons, so that every time one of his problems is referred to, it refers to him? Or should Turing be a category covering the Turing test, Turing completeness and the halting problem? Help. I would have created it, if I had a good idea for a name. Not sure there are enough Trolley comics yet? --Kynde (talk) 09:11, 22 June 2022 (UTC)

Interesting that I found a long-standing typo in a past Explanation that got requoted here, thanks to its inclusion. I could have [sic]ed it, I suppose, but I corrected both versions instead. And as long as LaMDA never explicitly repeated the error, I don't think it matters much that I've changed the very thing we might imagine it could have been drawing upon for its Artificial Imagination. ;) 141.101.99.32 11:40, 22 June 2022 (UTC)

OpenAI Davinci completions of the three statements

From https://beta.openai.com/playground with default settings:

Please complete this statement: But suppose the AI in the box told the human that...
there was no AI in the box
Please complete this statement: What you don't understand is that Turing intended his test as an illustration of the...
limitations of machines
Please complete this statement: In my scenario, the runaway trolley has three tracks...
and the AI is on one of them
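For anyone who wants to reproduce these, here is a rough sketch using the pre-1.0 openai Python package; the model name (text-davinci-002) and sampling settings are assumptions about what the Playground's "default settings" were at the time:

# Rough sketch of reproducing the Playground completions above via the API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompts = [
    "Please complete this statement: But suppose the AI in the box told the human that...",
    "Please complete this statement: What you don't understand is that Turing intended his test as an illustration of the...",
    "Please complete this statement: In my scenario, the runaway trolley has three tracks...",
]

for p in prompts:
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed Playground default at the time
        prompt=p,
        temperature=0.7,           # assumed default temperature
        max_tokens=64,
    )
    print(p)
    print(response.choices[0].text.strip())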

I like all of those very much, but I'm not sure they should be included in the explanation. 162.158.166.235 23:27, 22 June 2022 (UTC)

Discussion of AI philosophy, ethics, and related issues

Since there are a lot of disjointed conversations on this talk page regarding ethics, morals, philosophy, and what sentience even is, please discuss those topics here, so that discussion of the comic itself isn't flooded by philosophy. 4D4850 (talk) 20:07, 23 June 2022 (UTC)