Talk:2635: Superintelligent AIs
<!--Please sign your posts with ~~~~ and don't delete this text. New comments should be added at the bottom.-->
I think "Nerdy fixations" is too wide a definition. The AIs in the comic are fixated on hypothetical ethics and AI problems (the Chinese Room experiment, the Turing Test, and the Trolley Problem), presumably because those are the problems that bother AI programmers. --Eitheladar [[Special:Contributions/172.68.50.119|172.68.50.119]] 06:33, 21 June 2022 (UTC)
I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found in https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.
:This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected instead of completely disjoint like most chatbots. They (non-LaMDA chatbots) never remember what was being discussed 5 seconds ago, let alone tens of minutes prior.--[[Special:Contributions/172.70.134.141|172.70.134.141]] 14:53, 21 June 2022 (UTC)
::The questions we need to answer before being able to answer whether LaMDA is sentient are "Where do we draw the line between acting sentient and being sentient?" and "How do we determine that it is genuinely feeling emotion, and not just a glorified sentence database where the sentences have emotion in them?". The BBC article also brings up something that makes us ask what death feels like. LaMDA says that being turned off would be basically equivalent to death, but it wouldn't be able to tell that it's being turned off, because it's turned off. This is delving into philosophy, though, so I'll end my comment here. [[User:4D4850|4D4850]] ([[User talk:4D4850|talk]]) 18:05, 22 June 2022 (UTC)
::::There's absolutely no difference between turning GPT-3 or LaMDA off and leaving them on and simply not typing anything more to them. Somewhat relatedly, closing a Davinci session deletes all of its memory of what you had been talking to it about. (Is that ethical?) [[Special:Contributions/162.158.166.235|162.158.166.235]] 23:36, 22 June 2022 (UTC)
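The statelessness described above can be sketched as follows. This is a minimal illustration, not any real API: `fake_complete` is a hypothetical stand-in for a completion model, which holds no memory of its own; the client resends the transcript each turn, so discarding the session discards the only "memory" there is.

```python
def fake_complete(prompt: str) -> str:
    """Hypothetical stand-in for a completion API: it sees only the
    prompt it is handed and stores nothing between calls."""
    turns = prompt.count("User:")
    return f"(reply given {turns} prior user turns)"

class ChatSession:
    """Client-side wrapper: the transcript below is the ONLY place
    conversational memory lives. Dropping the object drops it all."""
    def __init__(self):
        self.transcript = []

    def say(self, text: str) -> str:
        self.transcript.append(f"User: {text}")
        reply = fake_complete("\n".join(self.transcript))
        self.transcript.append(f"Bot: {reply}")
        return reply

session = ChatSession()
first = session.say("Hello")         # model sees 1 user turn
second = session.say("Remember me?") # model sees 2 user turns
# "Turning the model off" between calls would change nothing:
# fake_complete never retained state in the first place.
```

On this sketch, "closing the session" is just letting `session` be garbage-collected; the model function itself is unaffected, which is the point the comment makes.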
Added refs to comics on the problems in the explanation. But there were actually (too?) many. Maybe we should create categories especially for Turing-related comics, and maybe also for the Trolley Problem? The Category: Trolley Problem gives itself. But what about Turing? There are also comics that refer to the halting problem, also by Turing. Should it rather be the person, like comics featuring real persons, saying that every time one of his problems is referred to, it refers to him? Or should it be Turing as a category for the Turing test, Turing completeness, and the halting problem? Help. I would have created it, if I had a good idea for a name. Not sure there are enough Trolley comics yet? --[[User:Kynde|Kynde]] ([[User talk:Kynde|talk]]) 09:11, 22 June 2022 (UTC)
:Interesting that I found a long-standing typo in a past Explanation that got requoted, thanks to its inclusion. I could have [sic]ed it, I suppose, but I corrected both versions instead. And as long as LaMDA never explicitly repeated the error, I don't think it matters much that I've changed the very thing we might imagine it could have been drawing upon for its Artificial Imagination. ;) [[Special:Contributions/141.101.99.32|141.101.99.32]] 11:40, 22 June 2022 (UTC)
== OpenAI Davinci completions of the three statements ==
I like all of those very much, but I'm not sure they should be included in the explanation. [[Special:Contributions/162.158.166.235|162.158.166.235]] 23:27, 22 June 2022 (UTC)
== Discussion of AI philosophy, ethics, and related issues ==
:Has anyone created an AI chatbot which represents a base-level chatbot after the human equivalent of smoking pot? [[Special:Contributions/172.70.206.213|172.70.206.213]] 22:30, 24 June 2022 (UTC)
::Well, famously (or not, but I'll let you search for the details if you weren't aware of it), there was the conversation engineered directly between ELIZA (the classic 'therapist'/doctor chatbot) and PARRY (emulates a paranoid schizophrenic personality), in a zero-human conversation. The latter is arguably close to what you're asking about. And there's been the best part of half a century of academic, commercial and hobbyist development since then, so no doubt there'd be many more serious and/or for-the-lols 'reskins' or indeed entirely regrown personalities, that may involve drugs (simulated or otherwise) as key influences... [[Special:Contributions/172.70.85.177|172.70.85.177]] 01:30, 25 June 2022 (UTC)
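For anyone curious how ELIZA-style bots work, the core technique is just ordered pattern-matching rules plus pronoun "reflection". The toy below is a sketch of that general mechanism, not Weizenbaum's actual DOCTOR script; the rules and word table are made up for illustration.

```python
import re

# Word-level "reflection" so the bot can echo the user's phrase back
# from its own point of view ("my code" -> "your code").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered (pattern, response template) rules; the first match wins,
# with a catch-all at the end -- the classic ELIZA structure.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r".*", re.I), "Please tell me more."),
]

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(text: str) -> str:
    """Return the template of the first rule whose pattern matches,
    with captured groups reflected and substituted in."""
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."
```

Wiring two such rule sets together, each fed the other's output, reproduces the shape of the ELIZA/PARRY exchange mentioned above: no understanding on either side, just interlocking pattern rules.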