Talk:2429: Exposure Models

Revision as of 13:13, 7 April 2021 by Quillathe Siannodel (talk | contribs) (Does anyone know why this is incomplete?)

Is it worth making a note of the art error in the third panel, where the chair back has disappeared? 03:07, 25 February 2021 (UTC)

Someone did it. Fabian42 (talk) 09:06, 25 February 2021 (UTC)

I'm not ashamed to say that a good portion of the Bash and Google Sheets knowledge I have today comes from creating a Corona spreadsheet and its automatic filling script. Fabian42 (talk) 09:06, 25 February 2021 (UTC)
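For the curious, an "automatic filling script" for a case-tracking spreadsheet might look something like the minimal Bash sketch below. This is purely illustrative — the data layout, filenames, and field order are assumptions, not Fabian42's actual script; a real version would fetch live numbers with something like curl.

```shell
#!/usr/bin/env bash
# Minimal sketch: take one line of case data (simulated here instead of
# downloaded), split out the date and count, and append a dated row to a
# CSV that a Google Sheet could import.
set -euo pipefail

csv="corona.csv"

# Simulated "download" — a real script would query an API with curl.
data="2021-02-25;12345"

date=$(echo "$data" | cut -d';' -f1)
count=$(echo "$data" | cut -d';' -f2)

echo "$date,$count" >> "$csv"
cat "$csv"
```

Running it repeatedly (e.g. from cron) would grow the CSV by one row per day, which the spreadsheet can then chart.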

Explained the joke (I think?)

I wrote that the joke was that he was so obsessed with the charts it became a self-fulfilling prophecy. Please correct me if I'm wrong. Hiihaveanaccount (talk) 15:06, 25 February 2021 (UTC)

I think it hinges on the two possible meanings of his first sentence. One interpretation is that he's building the model, with the goal being that the model, once ready, will help him limit his risk. The other one would be that the making itself is what helps him limit his risk because it forces him to stay at home. In the second case, the quality of the eventual result doesn't matter that much and it's more about having something to do instead of getting bored while sitting at home. Bischoff (talk) 15:50, 25 February 2021 (UTC)

Strange that Randall is apparently debugging a manual model when machine learning models have passed the Turing test and GPT-Neo was recently open-sourced. 21:54, 25 February 2021 (UTC)

What's with the meta-model comment? I don't get it. 00:44, 26 February 2021 (UTC)

Well, it's just a guess, but using machine learning models to predict and design the behaviors of machine learning models would, in the extreme, make a hyperintelligent system, no? A big thing, as a software developer, is finding ways to get the computer to do for you what you would previously do yourself, which can mean getting more and more meta as a habit. Seems similar to the comic about the Tower of Babel, to me: touching on research towards hyperintelligence (and current events stemming from the use of machine learning) without saying too much outright. 00:57, 26 February 2021 (UTC)

Note that "alignment" makes it sound as if the AIs would end up being evil. They wouldn't be evil; they would just be fulfilling their purpose, ignoring anything that isn't in their program. So it's kinda dangerous if we don't train the machine to be careful and not kill someone just because we don't know how it could do it ... -- Hkmaly (talk) 02:31, 26 February 2021 (UTC)

Nah, it has more to do with how automatically pursuing goals can discover weird approaches that nobody expects. But I guess that's what you're saying. It's just hard to rigorously define "be careful". Somebody removed all the information about machine learning from the article. 14:17, 26 February 2021 (UTC)
It weren't me, but I can see why. There are no signs that any machine learning was employed. The text even stated "This might be the first time machine learning has been mentioned" (not sure that's right), but was itself the first obvious mention of machine learning. A model can just be a simulation (entirely configured by the human creator), and this seems far more likely here, given nothing to say otherwise. 19:55, 28 February 2021 (UTC)

Edit: Deleted comment. Sorry for the accidental spam. {)|(}Quill{)|(} 14:46, 25 March 2021 (UTC)