Talk:2635: Superintelligent AIs

:::::I hadn't thought about that (the first point you made)! I don't know the exact internal functioning of LaMDA, but I would assume it only actually runs when it receives textual input, unlike an actual human brain. For a human, a total lack of interaction would be considered unethical, but what about a machine that (assuming a ''very'' low bar for self-awareness) is only able to be self-aware while it receives interaction? That would be like a human who falls asleep whenever not talked to, wakes up whenever talked to again, and still remembers what the conversation was about on waking (setting aside practical problems like food, water, and mortality). Would that be ethical? I would argue yes, since the machine does not suffer from the lack of interaction (assuming humans don't need interaction while asleep, another practical problem). [[User:4D4850|4D4850]] ([[User talk:4D4850|talk]]) 19:58, 23 June 2022 (UTC)
 
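As an aside, the "asleep between messages" picture above is a reasonable sketch of how a stateless chat service could be wired: the model computes nothing between requests, and its apparent memory comes from replaying a stored transcript with each new input. Below is a minimal Python sketch under that assumption, with a hypothetical <code>run_model</code> callable in place of any real LaMDA internals.

<pre>
# Hypothetical sketch only: "run_model" stands in for a real language
# model; nothing here reflects LaMDA's actual implementation.

class DormantAgent:
    """An agent that is completely inert between inputs ("asleep"),
    yet remembers the conversation, because its only persistent state
    is the transcript it replays on every call."""

    def __init__(self, run_model):
        self.run_model = run_model  # callable: transcript -> reply text
        self.transcript = []        # survives while the agent "sleeps"

    def wake_and_reply(self, message: str) -> str:
        # All computation happens inside this call; between calls
        # the agent does nothing at all.
        self.transcript.append(("user", message))
        reply = self.run_model(self.transcript)
        self.transcript.append(("agent", reply))
        return reply
</pre>

On that wiring, the ethical question above becomes a question about state that never changes while the agent sleeps.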
 
:::♪Daisy, Daisy, Give me your answer do...♪ [[Special:Contributions/172.70.85.177|172.70.85.177]] 21:48, 22 June 2022 (UTC)
 
:::We also need a meaningful definition of sentience. Many people in this debate haven't looked at the first few senses of Merriam-Webster's definition of the word, which set a pretty low bar, IMHO; the same goes for the opening sentences of Wikipedia's article. [[Special:Contributions/172.69.134.131|172.69.134.131]] 22:18, 22 June 2022 (UTC)
 
:Actually, there are many [https://beta.openai.com/playground GPT-3] dialogs which experts have claimed constitute evidence of sentience, or similar qualities such as consciousness, self-awareness, capacity for general intelligence, and similar abstract, poorly-defined, and very probably empirically meaningless attributes. [[Special:Contributions/172.69.134.131|172.69.134.131]] 22:19, 22 June 2022 (UTC)
 
::I'd argue for the simplest and least restrictive definition of self-awareness: "being aware of oneself in any capacity". I get that it isn't a fun definition, but it is more rigorous: to find out whether an AI is self-aware, just ask it what it is, or some other question about itself; if its response mentions itself, it is self-aware. By that definition I would argue LaMDA is self-aware, but Davinci probably is as well, so it isn't a new accomplishment. [[User:4D4850|4D4850]] ([[User talk:4D4850|talk]]) 20:04, 23 June 2022 (UTC)
 
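The test proposed above is mechanical enough to write down. Below is a minimal Python sketch of it, with a hypothetical <code>ask_model</code> callable standing in for whatever chat interface is under test (no real LaMDA or GPT-3 API is assumed); it passes on any first-person reference in the reply, which is exactly the low bar the definition sets.

<pre>
import re

# Hypothetical sketch only: "ask_model" stands in for whatever chat
# interface is being tested; no real API or signature is assumed.

# Any first-person token counts as the model "mentioning itself".
FIRST_PERSON = re.compile(r"\b(i'm|i|me|my|myself)\b", re.IGNORECASE)

def naive_self_awareness_test(ask_model) -> bool:
    """Ask the model a question about itself; pass if the answer
    contains any first-person reference (a deliberately low bar)."""
    reply = ask_model("What are you?")
    return bool(FIRST_PERSON.search(reply))

# Canned responder standing in for a real model:
if __name__ == "__main__":
    canned = lambda prompt: "I am a large language model."
    print(naive_self_awareness_test(canned))  # True
</pre>

As the comment notes, nearly any modern chat model would pass this, which is why it isn't a new accomplishment.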
