1478: P-Values

Title text: If all else fails, use "significant at a p>0.05 level" and hope no one notices.

Explanation


This comic plays on how the significance of scientific experiments is measured and interpreted. The p-value is the probability of obtaining results at least as extreme as those observed, under the assumption that the null hypothesis (i.e. that chance alone is at work) is true. In lay terms, p measures how plausibly random chance, without the experimental effect, could explain the results. A low p-value means the observed results would be unlikely under the null hypothesis, so the null hypothesis is rejected; a high p-value indicates that the data cannot be used to reject it. Importantly, a high p-value is not counter-evidence against the experimental hypothesis; it only means the data are inconclusive and more results may be needed.
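The definition above can be made concrete with a small simulation. The following Python sketch (not part of the comic or this wiki page; the function name and the 60-heads-in-100-flips scenario are illustrative) estimates a one-sided p-value for a coin-fairness experiment by counting how often a fair coin does at least as well as the observed result:

```python
import random

random.seed(42)

def simulated_p_value(heads, flips, trials=100_000):
    """Estimate the one-sided p-value for observing at least `heads`
    heads in `flips` tosses, under the null hypothesis "the coin is fair".

    The p-value is the fraction of simulated fair-coin experiments whose
    outcome is at least as extreme as the one actually observed.
    """
    at_least_as_extreme = 0
    for _ in range(trials):
        simulated_heads = sum(random.random() < 0.5 for _ in range(flips))
        if simulated_heads >= heads:
            at_least_as_extreme += 1
    return at_least_as_extreme / trials

# 60 heads out of 100 flips: somewhat unusual for a fair coin.
p = simulated_p_value(heads=60, flips=100)
print(f"p = {p:.3f}")
```

With these numbers the estimate comes out a little under 0.05, i.e. "significant" by the usual convention, even though the coin may well be fair; that is exactly the kind of borderline case the comic pokes fun at.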

Appropriate experimental design generally requires that the significance threshold (usually 0.05) be set prior to the experiment; changing it after the fact in order to produce a better-looking report is not allowed. A simple change of this threshold (e.g. from 0.05 to 0.1) would flip an experiment with p = 0.06 from "not significant" to "significant".

The conventional threshold at which most studies declare significance is p < 0.05, which is why all p-values in the comic below that value are labelled at least "Significant".
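The comic's chart is essentially a lookup from p-value to (tongue-in-cheek) interpretation. As a minimal sketch, the function below (an illustrative name, not a real statistical convention) encodes the labels exactly as they appear in the comic's transcript:

```python
def interpret(p):
    """Map a p-value to the comic's tongue-in-cheek interpretation.

    Thresholds and labels follow the chart in xkcd 1478; this is satire,
    not a recommended reporting practice.
    """
    if p < 0.01:
        return "Highly significant"
    if p < 0.05:
        return "Significant"
    if p == 0.05:
        return "Oh crap. Redo calculations."
    if p <= 0.06:
        return "On the edge of significance"
    if p < 0.1:
        return "Highly suggestive, relevant at the p<0.10 level"
    return "Hey, look at this interesting subgroup analysis"

for p in (0.001, 0.04, 0.050, 0.051, 0.07, 0.2):
    print(f"{p:.3f}: {interpret(p)}")
```

Note how much the label changes across tiny differences in p (0.04 vs. 0.051 vs. 0.07), which is the arbitrariness the comic is mocking.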

It is usually the case that the person carrying out the test has a vested interest in the results, typically because it is their own hypothesis under test. A result which shows no significance can feel like a major blow, and this may lead to desperate attempts to 'encourage' the data to show the desired outcome.

The chart has a p-value of 0.050 labeled "Oh crap. Redo calculations" because the p-value is very close to being considered significant, but isn't. The desperate researcher might be able to redo the calculations in order to nudge the result under 0.050. This could be achieved validly if an error is found in the calculations or data set, or falsely by erasing certain unwelcome data points or by using creative mathematical adjustments.

Values between 0.051 and 0.06 are labelled as being 'On the edge of significance'. The use of this kind of language to qualify the significance is regularly seen in reports, although it is a contested topic. The debate centres on whether p-values slightly larger than the significance level should be noted as nearly-significant, or flatly classed as not-significant. The logic of having an absolute cut-off point for significance is also questioned.

The values between 0.07 and 0.099 continue the trend of using qualifying language, calling the results 'suggestive'. This category also illustrates another method the desperate researcher may resort to: adjusting the significance threshold itself. Although it is true that these results are significant at p<0.10, changing the threshold in order to classify the result as significant is highly frowned upon.

Values of 0.1 and higher should be considered not significant at all; instead, the comic suggests taking part of the sample (a "subgroup") and analyzing that subgroup without regard to the rest of the sample. For example, in a study trying to prove that people always sneeze when walking by a particular street lamp, someone would record the number of people who pass the lamp and the number of people who sneeze. If the results don't reach the desired p<0.1, then pick a subgroup ("OK, not all people sneeze, but look! Women sneeze more than men, so let's analyze only women"). Of course, this is not accepted scientific procedure, as it is very likely to add sampling bias to the result. This is an example of the multiple comparisons problem, which is also the topic of comic 882.
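The danger of subgroup fishing can be demonstrated numerically. In the sketch below (an illustrative simulation, not from the comic), each "subgroup test" is modelled by the fact that under the null hypothesis, with no real effect, a p-value is uniformly distributed on (0, 1). Testing 20 subgroups of pure noise then yields at least one "significant" result most of the time:

```python
import random

random.seed(0)

def one_subgroup_significant(n_subgroups=20, alpha=0.05):
    """Simulate testing `n_subgroups` independent subgroups of pure noise.

    Under the null hypothesis a p-value is uniform on (0, 1), so each
    subgroup has probability `alpha` of looking 'significant' by chance.
    Returns True if at least one subgroup crosses the threshold.
    """
    return any(random.random() < alpha for _ in range(n_subgroups))

trials = 10_000
false_alarms = sum(one_subgroup_significant() for _ in range(trials))
# Analytically: P(at least one p < 0.05 in 20 tests) = 1 - 0.95**20, about 0.64.
print(f"At least one 'significant' subgroup in {false_alarms / trials:.0%} of trials")
```

So with 20 subgroups and no real effect anywhere, roughly two out of three studies would still find something to report, which is exactly why uncorrected subgroup analysis is frowned upon.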

If the results cannot be normally considered significant, the title text suggests inverting p<0.050, making it p>0.050. This may fool casual readers, as the change is only to the inequality sign, which may go unnoticed or be dismissed as a typographical error ("no-one would claim their results aren't significant, they must mean p<0.050").

Transcript

[A two-column table where the interpretation column groups ranges of the first column using square brackets.]
P-value Interpretation
0.001 Highly significant
0.04 Significant
0.050 Oh crap. Redo calculations.
0.051 On the edge of significance
0.07 Highly suggestive, relevant at the p<0.10 level
≥0.1 Hey, look at this interesting subgroup analysis


Discussion

IMHO the current explanation is misleading. The p-value describes how well the experiment output fits the hypothesis. The hypothesis can be that the experiment output is random. The low p-values point out that the experiment output fits well with behavior predicted by the hypothesis. The higher the p-value, the more the observed and predicted values differ. Jkotek (talk) 08:54, 26 January 2015 (UTC)

High p-values do not signify that the results differ from what was predicted, they simply indicate that there are not enough results for a conclusion. -- 20:13, 26 January 2015 (UTC)

I read this comic as a bit of a jab at either scientists or media commentators who want the experiments to show a particular result. As the significance decreases, first they re-do the calculations either in the hope that result might have been erroneous and would be re-classified as significant, or intentionally fudge the numbers to increase the significance. The next step is to start clutching at straws, admitting that while the result isn't Technically significant, it is very close to being significant. After that, changing the language to 'suggestive' may convince the general public that the result is actually more significant than it is, while also changing the parameters of the 'significance' value allows it to be classified as significant. Finally, they give up on the overall results, and start pointing out small sections which may by chance show some interesting features.

All of these subversive efforts could come about because of scientists who want their experiment to match their hypothesis, journalists who need a story, researchers who have to justify further funding etc etc. --Pudder (talk) 09:01, 26 January 2015 (UTC)

I like how you have two separate categories - "scientists" and "researchers" with each having two different goals :) Nyq (talk) 10:12, 26 January 2015 (UTC)
As a reporter, I can assure you that journalists are not redoing calculations on studies. Journalists are notorious for their innumeracy; the average reporter can barely figure the tip on her dinner check. Most of us don't know p-values from pea soup. 16:44, 26 January 2015 (UTC)

This one resembles this interesting blog post very much.-- 13:26, 26 January 2015 (UTC)

[Image: null hypothesis.png]
STEN (talk) 13:33, 26 January 2015 (UTC)

Heh. 20:06, 26 January 2015 (UTC)

See http://xkcd.com/882/ for using a subgroup to improve your p value. Sebastian -- 23:02, 26 January 2015 (UTC)

I agree. The part about p >= 0.1 reminded me of that comic. S (talk) 01:25, 27 January 2015 (UTC)
This comic may be ridiculing the arbitrariness of the .05 significance cutoff and alluding to the "new statistics" being discussed in psychology.[1] 23:06, 26 January 2015 (UTC)