Talk:2429: Exposure Models

Revision as of 02:31, 26 February 2021 by Hkmaly (talk | contribs)

Is it worth making a note of the art error in the third panel, where the chair back has disappeared? 108.162.237.8 03:07, 25 February 2021 (UTC)

Someone did it. Fabian42 (talk) 09:06, 25 February 2021 (UTC)

I'm not ashamed to say that a good portion of the Bash and Google sheets knowledge I have today comes from creating a Corona spreadsheet and its automatic filling script: https://docs.google.com/spreadsheets/d/1uDTghO_ZYBs5nfs2kDc0Ms6e9bbx7clx_QgkWii7OMY and https://pastebin.com/uHzzMeac Fabian42 (talk) 09:06, 25 February 2021 (UTC)
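For readers curious what such an "automatic filling script" might involve: a minimal hedged sketch of the fetch-and-append pattern, not Fabian42's actual script (which is at the pastebin link above). The data source and CSV layout here are assumptions; the live fetch is stubbed with a placeholder value.

```shell
#!/bin/sh
# Hypothetical sketch of a daily Corona-tracker append script.
# In a real script, "cases" would come from a live fetch, e.g.:
#   cases=$(curl -s "$DATA_URL" | jq '.cases')
# DATA_URL is an assumption; here we use a placeholder value instead.
CSV=corona_log.csv
cases=12345

# Append one row per day: ISO date, case count.
printf '%s,%s\n' "$(date +%F)" "$cases" >> "$CSV"

# Show the row we just wrote.
tail -n 1 "$CSV"
```

A sheet like the one linked could then import the resulting CSV, or the script could push rows via the Google Sheets API instead of a local file.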

Explained the joke (I think?)

I wrote that the joke was that he was so obsessed with the charts it became a self-fulfilling prophecy. Please correct me if I'm wrong. Hiihaveanaccount (talk) 15:06, 25 February 2021 (UTC)

I think it hinges on the two possible meanings of his first sentence. One interpretation is that he's building the model, with the goal being that the model, once ready, will help him limit his risk. The other one would be that the making itself is what helps him limit his risk because it forces him to stay at home. In the second case, the quality of the eventual result doesn't matter that much and it's more about having something to do instead of getting bored while sitting at home. Bischoff (talk) 15:50, 25 February 2021 (UTC)

Strange that Randall is apparently debugging a manual model when machine learning models have passed the Turing test and GPT-Neo was recently open-sourced. 162.158.63.170 21:54, 25 February 2021 (UTC)

What's with the meta-model comment? I don't get it. 172.69.170.120 00:44, 26 February 2021 (UTC)

Well, it's just a guess, but using machine learning models to predict and design the behaviors of machine learning models would, in the extreme, make a hyperintelligent system, no? A big thing, as a software developer, is finding ways to get the computer to do for you what you would previously do yourself, which can mean getting more and more meta as a habit. Seems similar to the comic about the Tower of Babel, to me: touching on research towards hyperintelligence (and current events stemming from use of machine learning) without saying too much outright. 162.158.63.118 00:57, 26 February 2021 (UTC)


Note that "alignment" makes it sound as if the AIs would end up being evil. They wouldn't be evil; they would just be fulfilling their purpose, ignoring anything that isn't in their program. So it's kinda dangerous if we don't train the machine to be careful and not kill someone just because we didn't realize how it could do it ... -- Hkmaly (talk) 02:31, 26 February 2021 (UTC)