1132: Frequentists vs. Bayesians

Title text: 'Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'

Explanation


This is a comic about probability theory.

During the night, a person cannot directly observe the sun to see whether it has exploded, but can tell indirectly by a variety of means, many of them simple and practical. For example, the person could telephone someone in a place where it is day, read posts on Twitter, or look at the moon (the moon does not make its own light, and appears bright only because of reflected sunlight). In the comic, a person relies on the fact that neutrinos can pass through the earth, so a neutrino detector would detect neutrinos from the sun at all times, day and night.

Although this is a theoretically possible method of detecting solar explosions, it is exceedingly impractical, especially using a detector of the size depicted in the comic. If current neutrino detection technology were scaled down to the size shown, and the sun did not become a supernova, the average rate of neutrino detection would be less than one per week, so waiting for dawn would be faster than waiting for a neutrino to be detected. If the sun did become a supernova, the entire earth, including the detector, would be destroyed, so the mere fact that the detector survives to give any response is sufficient to conclude, with 100% certainty, that the sun has not become a supernova.

In addition, the detector is stated to give false results ("lie") 1/36th of the time. Assuming the detector is otherwise reliable, when it reports a solar explosion there are two possibilities: either (a) the sun has exploded but not become a supernova, which is extremely unlikely, and the detector is telling the truth; or (b) the sun hasn't exploded and the detector is lying, which occurs 1/36th of the time.

The Frequentist considers what he knows about the detector. Since the detector rolls two standard dice and only lies if they both land on 6, there is only a 1/36 chance that the detector is lying. He references the concept of p<0.05, which is a scientific research standard where a result is presumed to provide strong evidence against a "null hypothesis" if there is less than a 5% chance that the result occurs given that the null hypothesis is true. (For instance, if you test a new medicine and find that it appears to help your test subjects, and you find that, statistically speaking, the chance that the test subjects improved from the placebo effect alone is less than 5%, you would consider this strong evidence that the medicine is really working.) He notes that the P-value in this case is less than 0.05, and thus the standard threshold has been met. Simply put, the Frequentist notes that it is unlikely for the detector to lie, and therefore the sun has probably exploded.
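The Frequentist's significance test can be written out in a few lines. This is an illustrative sketch, not anything shown in the comic itself:

```python
# Frequentist significance test from the comic (illustrative sketch).
# Null hypothesis: the sun has NOT exploded. Under the null, a YES
# answer means the detector lied, which requires rolling double sixes.

p_value = (1 / 6) * (1 / 6)  # probability of both dice landing on 6
alpha = 0.05                 # conventional significance threshold

print(f"p = {p_value:.4f}")  # 1/36, roughly 0.0278
reject_null = p_value < alpha
print("Reject null (conclude the sun exploded):", reject_null)
```

Since 1/36 is below 0.05, the naive test "rejects" the hypothesis that the sun is fine, which is exactly the comic's joke.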

The Bayesian takes a more comprehensive approach. Based on what he knows about the detector, it is unlikely that the detector is lying. But based on what he knows about the sun (and possibly the relative improbability that a solar explosion would be detected by the neutrino detector before the expected time of dawn, without the detector being destroyed by a supernova), it is extremely unlikely that the sun has suddenly exploded. (Modern astronomy tells us that the sun will retain its current condition for at least 5 billion years, aside from minor variations in its output.) The unlikeliness of the detector lying is greatly outweighed by the unlikeliness of the sun exploding. (In Bayesian reasoning, this background knowledge about the probability of the sun exploding is called a "prior".) He therefore concludes that the sun has not exploded and the detector is lying. (This line of reasoning is not made explicit in the comic, but it is typical of how a Bayesian would approach the situation.)
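The Bayesian reasoning above can be sketched with Bayes' theorem. The prior value below is an arbitrary illustrative assumption, chosen only to show how strongly it dominates:

```python
# Bayesian update for the comic's scenario (illustrative sketch).
# Prior probability that the sun exploded tonight: an arbitrary small
# number for illustration; any realistic prior is far smaller still.
prior = 1e-6

p_yes_given_nova = 35 / 36      # detector tells the truth
p_yes_given_no_nova = 1 / 36    # detector lies (double sixes)

# Law of total probability for P(YES), then Bayes' theorem.
p_yes = p_yes_given_nova * prior + p_yes_given_no_nova * (1 - prior)
posterior = p_yes_given_nova * prior / p_yes

print(f"P(nova | YES) = {posterior:.8f}")  # tiny: the prior dominates
```

Even though the detector is right 35 times out of 36, the posterior probability of a nova stays minuscule, so betting against the detector is the rational move.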

The Bayesian's line, "Bet you $50 it hasn't", could be taken as a simple expression of confidence, based on the reasoning above. It could also be taken to mean that the Bayesian has had a further thought: If the sun has exploded, civilization will quickly collapse and money will become worthless. Thus, even if he loses the bet, he really loses nothing at all. This again references the idea that Bayesians tend to consider things in context, whereas Frequentists have a narrow focus. (It's also a tongue-in-cheek reference to the absurdity of the premise.)

The title and the last two frames suggest that the "frequentist" interpretation of statistics is somehow wrong, which has prompted debate. Many believe that the Bayesian and frequentist interpretations of probability are not mutually exclusive and that neither is wrong. One argument holds that the Frequentist in the comic is actually misusing p-values in a way that violates standard frequentist practice: p-values are usually applied to numerical results known to follow a specific distribution, whereas here one is used to judge the significance of a single discrete event. Others note that it is the Bayesian's use of prior knowledge that enables him to reach his conclusion. For more views on this issue, see the discussion below.

The labels on the bottom two panels were applied as an afterthought, according to Munroe's post here; he states his intention was "to illustrate a case where naïve application of that significance test can give a result that's obviously nonsense."

.@JoeNBC: If you think it's a toss-up, let's bet. If Obama wins, you donate $1,000 to the American Red Cross. If Romney wins, I do. Deal?

Arguably, this is another comic about the accuracy of presidential election predictions that used Bayesian statistical models, such as Nate Silver's 538 and Professor Sam Wang's PEC. Thomas Bayes studied conditional probability, the likelihood that one event is true given information about some other related event. From Wikipedia: "Bayesian interpretation expresses how a subjective degree of belief should rationally change to account for evidence". The Bayesian's bet may refer to a well-publicized bet that Nate Silver tried to make with Joe Scarborough regarding the outcome of the election (see the tweet quoted above).

The title text refers to a classic series of logic puzzles known as Knights and Knaves, where there are two guards in front of two exit doors, one of which is real and the other leads to death. One guard is a liar and the other tells the truth. The visitor doesn't know which is which, and is allowed to ask one question to one guard. The solution is to ask either guard what the other one would say is the real exit, then choose the opposite. Two such guards were featured in the 1986 Jim Henson movie Labyrinth, which is referenced in the text.
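The guard puzzle's classic solution can be brute-forced over both possible worlds. A minimal sketch; the door names are made up for illustration:

```python
# Knights-and-Knaves guard puzzle: ask either guard which door the
# OTHER guard would call safe, then take the opposite door.
# Brute-force check over both possible worlds (illustrative sketch).

def other(door):
    return "right" if door == "left" else "left"

for safe in ("left", "right"):           # which door is actually safe
    for liar_is_asked in (True, False):  # whether we ask the liar
        truthful_answer = safe           # what a truth-teller would say
        lying_answer = other(safe)       # what a liar would say
        # Question: "which door would the OTHER guard say is safe?"
        if liar_is_asked:
            # The liar misreports the truth-teller's answer.
            reply = other(truthful_answer)
        else:
            # The truth-teller faithfully reports the liar's answer.
            reply = lying_answer
        # Either way the reply names the deadly door.
        assert other(reply) == safe

print("Taking the opposite of the reply always finds the safe door.")
```

In both worlds the composed answer passes through exactly one lie, so it always points at the deadly door, and choosing the opposite is guaranteed correct.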

Transcript

Did the sun just explode? (It's night, so we're not sure)
[Two statisticians stand alongside an adorable little computer, suspiciously similar to K-9, that speaks in Westminster typeface.]
Frequentist Statistician: This neutrino detector measures whether the sun has gone nova.
Bayesian Statistician: Then, it rolls two dice. If they both come up as six, it lies to us. Otherwise, it tells the truth.
Frequentist Statistician: Let's try. [to the detector] Detector! Has the sun gone nova?
Detector: [roll] YES.
Frequentist Statistician:
Frequentist Statistician: The probability of this result happening by chance is 1/36=0.027. Since p<0.05, I conclude that the sun has exploded.
Bayesian Statistician:
Bayesian Statistician: Bet you $50 it hasn't.


Something should be added about the prior probability of the sun going nova, as that is the primary substantive point. "The neutrino detector is evidence that the Sun has exploded. It's showing an observation which is 35 times more likely to appear if the Sun has exploded than if it hasn't (likelihood ratio of 35:1). The Bayesian just doesn't think that's strong enough evidence to overcome the prior odds, i.e., after multiplying the prior odds by 35 they still aren't very high." - http://lesswrong.com/r/discussion/lw/fe5/xkcd_frequentist_vs_bayesians/ 23:51, 9 November 2012 (UTC)
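The odds-form update described in this quote can be checked directly. The prior odds below are an arbitrary illustrative number, not a claim about the sun:

```python
# Odds-form Bayesian update (illustrative sketch).
# A YES answer is 35 times more likely if the sun exploded than if it
# didn't: P(YES|nova) = 35/36 versus P(YES|no nova) = 1/36.
likelihood_ratio = (35 / 36) / (1 / 36)   # = 35

prior_odds = 1e-6        # arbitrary illustrative prior odds of a nova
posterior_odds = prior_odds * likelihood_ratio

print(f"posterior odds = {posterior_odds:.2e}")  # still minuscule
```

Multiplying tiny prior odds by 35 still leaves tiny posterior odds, which is the quoted point: the evidence is real but nowhere near strong enough.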

Note: taking that bet would be a mistake. If the Bayesian is right, you're out $50. If he's wrong, everyone is about to die and you'll never get to spend the winnings. Of course, this meta-analysis is itself a type of Bayesian thinking, so Dunning-Kruger Effect would apply. - Frankie (talk) 13:50, 9 November 2012 (UTC)

You don't think you could spend fifty bucks in eight minutes? ;-) (PS: wikipedia is probably a better link than lmgtfy: Dunning-Kruger effect) -- IronyChef (talk) 15:35, 9 November 2012 (UTC)

Randall has referenced the Labyrinth guards before: xkcd 246:Labyrinth puzzle. Plus he has satirized p<0.05 in xkcd 882:Significant--Prooffreader (talk) 15:59, 9 November 2012 (UTC)

A bit of maths. Let event N be the sun going nova and event Y be the detector giving the answer "Yes". The detector has already given a positive answer so we want to compute P(N|Y). Applying the Bayes' theorem:

P(N|Y) = P(Y|N) * P(N) / P(Y)
P(Y|N) = 35/36 ≈ 1
P(N) ≈ 0
P(Y|N) * P(N) ≈ 0
P(Y) = P(Y|N)*P(N) + P(Y|-N)*P(-N)
P(Y|-N) = 1/36
P(-N) ≈ 1
P(Y) ≈ 0 + 1/36 = 1/36
P(N|Y) ≈ 0 / (1/36) = 0

Quite likely it's not entirely correct. Lmpk (talk) 16:22, 9 November 2012 (UTC)

Here's what I get for the application of Bayes' Theorem:

P(N|Y) = P(Y|N) * P(N) / P(Y)
= P(Y|N) * P(N) / [P(Y|N) * P(N) + P(Y|~N) * P(~N)]
= 35/36 * P(N) / [35/36 * P(N) + 1/36 * (1 - P(N))]
= 35 * P(N) / [35 * P(N) - P(N) + 1]
< 35 * P(N)
= 35 * (really small number)

So, if you believe it's extremely unlikely for the sun to go nova, then you should also believe it's unlikely a Yes answer is true.
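The bound derived above, P(N|Y) < 35·P(N), can be spot-checked numerically over a range of priors; a quick sketch:

```python
# Numerical spot-check of the bound P(N|Y) < 35 * P(N) (sketch).
# Uses the simplified posterior 35p / (34p + 1) from the derivation.
for p in (1e-9, 1e-6, 1e-3, 0.01):
    posterior = 35 * p / (34 * p + 1)
    assert posterior < 35 * p   # denominator 34p + 1 exceeds 1

print("Bound holds for all sampled priors.")
```

The bound is immediate from the algebra (the denominator 34p + 1 is always greater than 1 for p > 0), but the loop makes the conclusion concrete: a tiny prior forces a tiny posterior.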

I wouldn't say the comic is about election prediction models. It's about a long-standing dispute between two different schools of statisticians, a dispute that began before Nate Silver was born. It's possible that the recent media attention for Silver and his ilk inspired this subject, but it's the kind of geeky issue Randall would typically take on in other circumstances too. MGK (talk) 19:44, 9 November 2012 (UTC)

I agree - this is not directed at the US presidential election. I also want to add that Bayesian statistics assumes that parameters of distributions (e.g. the mean of a Gaussian) are also random variables. These random variables have prior distributions - in this case p(sun explodes). The Bayesian statistician in this comic has access to this prior distribution and so has other estimates for the error of the neutrino detector. Knowledge of the prior distribution is somewhat considered a "black art" by other statisticians.

My personal interpretation of the "bet you $50 it hasn't" reply: in the case of the sun going nova, no one would be alive to ask the neutrino detector, so the probability of the sun having gone nova, given that anyone is around to ask, is always 0. Paps

Yes, you would be able to ask. While neutrinos move at almost the speed of light, the plasma of the explosion is significantly slower, 10% of the speed of light at most. You would have more than an hour to ask. (Note that technically, the sun can't go nova, because a nova is a white dwarf with an external source of hydrogen. It can (and will), however, go supernova, which I assume is what Randall means.) -- Hkmaly (talk) 09:19, 12 November 2012 (UTC)
Our sun will not go supernova, as it has insufficient mass. It will slowly become hotter, rendering Earth uninhabitable in a few billion years. In about 5 billion years it will puff up into a red giant, swallowing the inner planets. After that, it will gradually blow off its lighter gasses, eventually leaving behind the core, a white dwarf. 01:58, 15 November 2012 (UTC)
Please don't edit others' comments on talk pages; it's considered quite rude. On a talk page, discourse is meant to be conducted by editors for the betterment of the article. For constructive discourse to occur, a person's words must be left intact. The act of censorship hurts the common goal of betterment. Per Wikipedia, the authoritative source on how a wiki works best: "you should not edit or delete the comments of other editors without their permission." lcarsos_a (talk) 17:38, 13 November 2012 (UTC) Note: much of this conversation has been removed at the request of the authors.

I think the explanation is wrong or otherwise lacking: the p-value is not the entire problem with the Frequentist's viewpoint (or alternatively, the problem with the p-value hasn't been explained). The Frequentist has looked strictly at a two-case scenario: either the machine rolls 6-6 and is lying, or it doesn't roll 6-6 and is telling the truth. Therefore, there is a 35/36 probability (97.22%) that the machine is telling the truth, and hence that the sun has exploded. The Bayesian factors in outside facts and information to improve the accuracy of the probability model. He says: either the machine rolled 6-6 (a 1/36 probability, or 2.77%) or the sun has exploded (an apparently far less likely scenario). Given the comparison, the Bayesian believes it is MORE probable that the machine rolled 6-6 than that the sun exploded. If the latter is a one-in-100-million chance (0.000001%), it is about 2,777,777 times more likely that the machine rolled 6-6 than that the sun exploded. To borrow a demonstration technique from the Monty Hall problem: if the machine told you a coin flip was heads, a 50% chance of occurring versus a 2.7% chance of the machine lying, the probabilities would clearly suggest that the machine was more likely to be telling the truth. Whereas if the machine said that 100 coin flips had all come up heads (a probability of about 7.89x10^-31), is it more likely that 100 coin flips all came up heads, or that the machine is lying? What about 1,000 coin flips? Or 1,000,000? I think the question is whether one can assign a probability to the sun exploding at all. Also, I think they could have avoided the whole thing by asking the machine a second time and seeing what it answered. TheHYPO (talk) 19:09, 12 November 2012 (UTC)
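TheHYPO's coin-flip analogy can be made concrete. A sketch following the comment's setup, with a helper function invented here for illustration:

```python
# Posterior probability that the machine told the truth, when it
# reports an event of prior probability p_event and lies 1/36 of the
# time (illustrative sketch of the coin-flip analogy).
def p_truthful(p_event):
    truth = (35 / 36) * p_event        # machine honest AND event happened
    lie = (1 / 36) * (1 - p_event)     # machine lying AND event didn't
    return truth / (truth + lie)

print(p_truthful(0.5))         # one coin flip: machine almost surely honest
print(p_truthful(0.5 ** 100))  # 100 heads in a row: machine almost surely lying
```

For a single flip the machine is believable (posterior 35/36), but for 100 straight heads the posterior collapses to essentially zero: the same 1/36 lie rate is interpreted completely differently depending on the prior.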

Another source of explanation: http://stats.stackexchange.com/questions/43339/whats-wrong-with-xkcds-frequentists-vs-bayesians-comic --JakubNarebski (talk) 20:12, 12 November 2012 (UTC)

The p-value really has nothing to do with it. If I think there is a 35/36 chance that the sun has exploded, then I should be willing to take any bet that the sun has exploded at better than 1:35 odds. For example, if someone bets me that the sun has exploded such that they will pay me $2 if it has and I will pay them $35 if it hasn't, then based on my belief that the sun has exploded with probability 35/36, my expected value for this bet is 2*35/36 - 35*1/36 = 35/36 dollars, and I will take this bet. Clearly I would also take a bet at 1:1 odds - my estimated expected value of the bet proposed in the comic would be 50*35/36 - 50*1/36 = $47 (approximately), and I would for sure take this bet. The Bayesian, on the other hand, has a much lower belief that the sun has exploded, because he takes into account the prior probability of the sun exploding, so he would take the other side of the bet. The difference is that the Bayesian uses prior probabilities in computing his belief in an event, whereas frequentists do not believe you can put prior probabilities on events in the real world. Also note that this comic has nothing to do with whether people would die if the sun went nova - the comic is titled "Frequentists vs Bayesians" and is about the difference between these two approaches. (talk) (please sign your comments with ~~~~)
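The expected-value arithmetic in this comment can be checked in a few lines; a sketch using the commenter's (deliberately mistaken) belief:

```python
# Expected value of the bets, from the mistaken frequentist belief
# that P(sun exploded) = 35/36 (illustrative sketch).
p_exploded = 35 / 36

# Bet 1: win $2 if the sun exploded, lose $35 if it didn't.
ev_bet1 = 2 * p_exploded - 35 * (1 - p_exploded)
print(f"EV of 2:35 bet = ${ev_bet1:.4f}")            # 35/36, about $0.97

# Bet 2 (the comic's): win $50 if exploded, lose $50 if not.
ev_bet2 = 50 * p_exploded - 50 * (1 - p_exploded)
print(f"EV of even-money $50 bet = ${ev_bet2:.2f}")  # 1700/36, about $47.22
```

Both expected values are positive under that belief, so anyone who genuinely held it would take either bet, which is what makes the Bayesian's offer a pointed challenge.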

The Labyrinth reference reminds me of an old Doctor Who episode (Pyramid of Mars), where the Doctor is also faced with a truthful and untruthful set of guards. Summarized here: http://tardis.wikia.com/wiki/Pyramids_of_Mars_(TV_story) Fermax (talk) 04:49, 14 November 2012 (UTC)

This is actually an example of the Base rate fallacy. -- 04:04, 19 November 2012 (UTC)

People have gone over this already, but just to be a bit more explicit: let NOVA be the event that there was a nova, and let YES be the event that the detector responds "Yes" to the question "Did the sun go nova?" What we want is P(NOVA|YES) = P(YES|NOVA)*P(NOVA)/P(YES). Suppose P(NOVA) = p is the prior probability of a nova. Then P(YES|NOVA) = 35/36 and P(YES) = p*35/36 + (1-p)*1/36 = (1+34p)/36. So P(NOVA|YES) = 35p/(1+34p). If p is small, then P(NOVA|YES) is also small. In particular, the Bayesian statistician wins his bet at 1:1 odds if p < 1/36, which is almost certainly the case. If the Bayesian statistician wants 95% confidence that he'll win his bet, then he needs p < 1/666. =P
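The two thresholds in this comment can be verified numerically; a short sketch of the posterior formula:

```python
# Posterior from the comment: P(NOVA|YES) = 35p / (1 + 34p).
def posterior(p):
    return 35 * p / (1 + 34 * p)

# At p = 1/36 the posterior is exactly 1/2: the break-even prior
# for a 1:1 bet that the sun has NOT gone nova.
print(posterior(1 / 36))    # 0.5

# At p = 1/666 the posterior is exactly 5%: 95% confidence of winning.
print(posterior(1 / 666))   # 0.05
```

Both thresholds fall out of the algebra: 35p/(1+34p) = 1/2 gives 36p = 1, and 35p/(1+34p) = 1/20 gives 700p = 1 + 34p, i.e. p = 1/666.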

It's cute to attempt to connect this to the U.S. presidential election, but it's far likelier that it's a reference to Enrico Fermi taking bets at the Trinity test site as to whether or not the first atomic bomb would cause a chain reaction that would ignite the entire atmosphere and destroy the planet. I'll bet you $50 it is. 21:29, 7 March 2013 (UTC)


I don't like the explanation at all. Some of the discussion posts give a good view of this. I'd like to share my thought about the last panel, though. The page reads as if the punch line is that you could not spend the money if the sun were going to explode; but that doesn't explain why the Bayesian proposes this bet rather than the Frequentist. I think there is a better explanation for this panel: there are several proofs that Bayesian probabilities result in "rational" behaviour; they state that if you act according to Bayes' rule, you cannot be cheated in betting. 17:11, 6 March 2014 (UTC)

The last panel may refer to Nate Silver's view, expressed in his book The Signal and the Noise, that if one believes one's prediction to be true, one should be confident enough to bet on it. --Troy0 (talk) 18:46, 6 July 2014 (UTC)