Talk:1838: Machine Learning

 
The comment that SVMs would be a better paradigm than neural networks is kind of wrong. Anyone who's worked with neural networks knows they're still essentially a linear algebra problem, just with nonlinear activation functions. Play around with TensorFlow (it's fun and educational!) and you'll find most of the linear algebra isn't abstracted away as it might be in Keras, scikit-learn or caret (R). That being said, interpretability is absolutely a problem with these complex models. This is partly because the world doesn't like conforming to the nice modernist notion of a sensible theory (i.e. one that can be reduced to a nice linear relationship), but even things like L1 regularisation often leave you wondering "but how does it all fit together?". On the other hand, while methods like SVMs still have a bit of machine learning magic in how the hyperplane ends up dividing the hyperspace (i.e. the values are derived empirically, not theoretically), the results are typically human-interpretable, for a given definition of interpretable. It's no y = wx + b, but it's definitely possible. The same goes for most methods short of very deep neural nets with millions of parameters. Most machine learning experts I've met have a pretty good idea what is going on in the simpler models, such as CARTs, SVMs, boosted models etc. The only reason neural nets are blackbox-y is that there's a huge amount going on inside them, and it's too much effort to do more than analyse outputs! [[Special:Contributions/172.68.141.142|172.68.141.142]] 22:43, 17 May 2017 (UTC)
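To make the "still essentially a linear algebra problem" point above concrete, here is a minimal sketch of a single dense layer as ReLU(Wx + b) in plain NumPy; the shapes, names and random values are purely illustrative and not taken from any particular framework:

<pre>
import numpy as np

# One dense layer is just a matrix multiply, a bias add, and an
# elementwise nonlinear activation function.
def dense_layer(x, W, b):
    return np.maximum(0.0, W @ x + b)   # ReLU(Wx + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # 4 input features
W = rng.normal(size=(3, 4))   # weights for 3 hidden units
b = np.zeros(3)               # biases

h = dense_layer(x, W, b)
print(h)   # drop the np.maximum and the layer is purely linear
</pre>

Stacking a few of these layers, each feeding the next, is all a basic feed-forward network is; frameworks like Keras mostly hide the bookkeeping rather than the linear algebra.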
 
  
:I remember from school that neural nets can get extremely hard to analyze even when they only contain five neurons. -- [[User:Hkmaly|Hkmaly]] ([[User talk:Hkmaly|talk]]) 03:20, 27 May 2017 (UTC)
 
  
 
Does anyone else think the topic may have been influenced by the article about machine learning that Google featured recently (May 17)? [https://www.google.com/intl/en/about/main/gender-equality-films/]
 
 
--[[Special:Contributions/162.158.79.35|162.158.79.35]] 12:17, 17 May 2017 (UTC)
 
:Google has been saying a lot about machine learning recently, particularly w.r.t. Android. [[Special:Contributions/141.101.107.30|141.101.107.30]] 04:43, 19 May 2017 (UTC)
 
 
  
 
Maybe one day bots will learn to create entire explanations for xkcd. [[Special:Contributions/141.101.99.179|141.101.99.179]] 12:38, 17 May 2017 (UTC)
 
 
:Good, then maybe we won't have over-thought explanations anymore.
 
 
:: "That was a joke, haha" [[User:Elektrizikekswerk|Elektrizikekswerk]] ([[User talk:Elektrizikekswerk|talk]]) 07:36, 18 May 2017 (UTC)
 
:: "That was a joke, haha" [[User:Elektrizikekswerk|Elektrizikekswerk]] ([[User talk:Elektrizikekswerk|talk]]) 07:36, 18 May 2017 (UTC)
:: I lovingly think of this site as "Over-Explain XKCD". [[Special:Contributions/172.68.54.112|172.68.54.112]] 17:44, 20 May 2017 (UTC)
 
  
 
The fuck is "Pinball"? [[Special:Contributions/162.158.122.66|162.158.122.66]] 03:59, 19 May 2017 (UTC)
 
: Agreed, expunged it. [[Special:Contributions/162.158.106.42|162.158.106.42]] 08:38, 24 May 2017 (UTC)
 
 
On the topic of 'Stirring', I'm not sure why it's being associated only with neural networks. It's a common thing in machine learning to randomize starting conditions to avoid local minima. This does exist in neural networks, as edge weights are typically initialized randomly, but it's also the first step in many other algorithms, such as k-means, where the initial centroid locations are randomized, or random forests, where each tree is grown from a random subset of the samples and features. [[Special:Contributions/173.245.50.186|173.245.50.186]] 13:18, 19 May 2017 (UTC) sbendl
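As a concrete illustration of the "randomize starting conditions and keep the best run" idea mentioned above, here is a small sketch using scikit-learn's KMeans; the toy dataset and parameter values are chosen purely for illustration:

<pre>
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data: three 2D blobs (purely illustrative).
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# init='random' draws random starting centroids; n_init=10 repeats the
# whole fit from ten different random starts ("stirring the pot") and
# keeps the run with the lowest inertia (within-cluster sum of squares),
# reducing the chance of settling in a poor local minimum.
km = KMeans(n_clusters=3, init='random', n_init=10, random_state=0)
km.fit(X)
print(km.inertia_)
print(km.cluster_centers_)
</pre>

Randomly initialised neural-network weights serve the same purpose: different starting points let the optimiser explore different local minima of the loss.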
 
 
====Fixing the explanation====
 
Right now, the explanation has two parts, one that is simply trying to explain it for the casual reader, and another that goes into the details of machine/deep learning, linear algebra, neural networks etc. (I almost forgot composting!) The way the two parts are jumbled together makes no sense. Perhaps having a simple initial explanation with subsections for more detailed explanation of individual topics relevant to the comic would fix the mess. [[User:Nialpxe|Nialpxe]] ([[User talk:Nialpxe|talk]]) 14:08, 19 May 2017 (UTC)
 
 
I only came here to get an explanation of "recurrent," and I can't find it.
 
:seconded --[[User:Misterstick|Misterstick]] ([[User talk:Misterstick|talk]]) 11:29, 5 April 2018 (UTC)
 
::I'm a machine learning geek and I'll try to add an explanation of "recurrent" in a few minutes. [[User:Singlelinelabyrinth|Singlelinelabyrinth]] ([[User talk:Singlelinelabyrinth|talk]]) 05:22, 27 July 2020 (UTC)
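Until that lands in the article, the short version is that "recurrent" means the network feeds its own hidden state back into itself at every time step, so its output depends on the whole sequence seen so far. A minimal sketch of a vanilla RNN cell in NumPy, with sizes and weights made up purely for illustration:

<pre>
import numpy as np

# Vanilla recurrent cell: new state = tanh(W_x * input + W_h * old state + b).
# The hidden-to-hidden weights W_h are what make it "recurrent".
def rnn_step(x_t, h_prev, W_x, W_h, b):
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(1)
W_x = rng.normal(size=(5, 3))   # input-to-hidden weights (3 inputs, 5 hidden units)
W_h = rng.normal(size=(5, 5))   # hidden-to-hidden ("recurrent") weights
b = np.zeros(5)

h = np.zeros(5)                          # initial hidden state
for x_t in rng.normal(size=(10, 3)):     # a sequence of 10 input vectors
    h = rnn_step(x_t, h, W_x, W_h, b)    # state carries over between steps
print(h)   # final state summarises the whole sequence
</pre>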
 
 
I like the use of "Cueball Prime" and "Cueball II" in the transcript. Can someone make that a part of the style guide?
 
 
====LLM-Era References (2023+)====
 
This comic was just reposted on the "Marcus On AI" Substack, forwarded through Dave Farber's "Interesting People" list, in reference to the LLM ChatGPT allegedly "going berserk".
 
https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
 
https://ip.topicbox.com/groups/ip/T775bf6cac82b32a1/chatgpt-has-gone-berserk
 
[[Special:Contributions/172.70.47.58|172.70.47.58]] 03:19, 21 February 2024 (UTC)
 
