Editing Talk:2635: Superintelligent AIs

 
I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.
 
:This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected instead of completely disjoint like most chatbots. They (non-LaMDA chatbots) never remember what was being discussed 5 seconds ago let alone a few to 10s of minutes prior.--[[Special:Contributions/172.70.134.141|172.70.134.141]] 14:53, 21 June 2022 (UTC)
 
::Here is a good article that looks at the claim of sentience in the context of how AI chatbots use inputs to come up with relevant responses. This article shows examples of how the same chatbot would produce different responses based on how the prompts were worded, which negates the idea that there is a consistent "mind" responding to the prompts. However, it does end with some eerie impromptu remarks from the AI where the AI is prompting itself. https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531 [[User:Rtanenbaum|Rtanenbaum]] ([[User talk:Rtanenbaum|talk]]) 22:40, 27 June 2022 (UTC)
 
 
::The questions we need to answer before we can answer whether LaMDA is sentient are "Where do we draw the line between acting sentient and being sentient?" and "How do we determine that it is genuinely feeling emotion, and not just a glorified sentence database where the sentences have emotion in them?". The BBC article also brings up something that makes us ask what death feels like. LaMDA says that being turned off would be basically equivalent to death, but it wouldn't be able to tell that it's being turned off, because it's turned off. This is delving into philosophy, though, so I'll end my comment here. [[User:4D4850|4D4850]] ([[User talk:4D4850|talk]]) 18:05, 22 June 2022 (UTC)
 
::::There's absolutely no difference between turning GPT-3 or LaMDA off and leaving them on and simply not typing anything more to them. Somewhat relatedly, closing a Davinci session deletes all of its memory of what you had been talking to it about. (Is that ethical?) [[Special:Contributions/162.158.166.235|162.158.166.235]] 23:36, 22 June 2022 (UTC)
 
