1478: P-Values

Title text: If all else fails, use "significant at a p>0.05 level" and hope no one notices.

Explanation


This comic plays on how the significance of scientific experiments is measured and interpreted. The p-value is a statistical measure of how likely the observed results would be if only random chance were at work. In lay terms, p is the probability that chance alone could produce results at least as extreme as those observed, without the effect the experiment predicts. A low p-value suggests the results are inconsistent with the null hypothesis, whereas a high p-value indicates that the data cannot be used to support the hypothesis. High p-values do not signify counter-evidence; they only mean that more results are needed.
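As a toy illustration (not from the comic), a p-value for a simple coin-flip experiment can be computed exactly: assuming the coin is fair (the null hypothesis), how likely is a result at least as extreme as the one observed? The scenario and numbers below are made up for illustration.

```python
from math import comb

def binomial_p_value(heads, flips):
    # One-sided p-value under the null hypothesis "the coin is fair":
    # the probability of observing at least `heads` heads in `flips` tosses.
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# Observing 60 heads in 100 flips of a supposedly fair coin:
p = binomial_p_value(60, 100)
print(f"p = {p:.4f}")  # below 0.05, so "significant" by the usual convention
```

The same quantity is what off-the-shelf tests (e.g. a one-sided binomial test) report; the explicit sum just makes the definition visible.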

Appropriate experimental design generally requires that the significance threshold (usually 0.05) be set prior to the experiment; changing it after the fact to make the results look better is not allowed. A simple change of this threshold (e.g. from 0.05 to 0.1) would turn an experiment with p-value = 0.06 from "not quite significant" into "significant".

The highest p-value at which most studies typically draw significance is p<0.05, which is why all p-values in the comic below that number are marked as at least significant. 0.050 is labeled "Oh crap. Redo calculations," because the p-value is very close to being considered significant, but isn't. Redoing the calculations may produce a different answer, but there is no guarantee it will come out lower than 0.050. Values between 0.050 and 0.1 are treated as suggesting significance without actually establishing it, which may be used to justify additional trials.
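The comic's sliding scale of interpretations can be caricatured as a lookup function. This is a joke sketch of a simplified version of the comic's table, not real statistical practice; the cutoffs and labels follow the comic.

```python
def interpret(p):
    # Tongue-in-cheek interpretation scale, simplified from the comic's table.
    if p <= 0.001:
        return "Highly significant"
    if p < 0.05:
        return "Significant"
    if p == 0.05:
        return "Oh crap. Redo calculations."
    if p < 0.1:
        return "Highly suggestive, relevant at the p<0.10 level"
    return "Hey, look at this interesting subgroup analysis"

print(interpret(0.04))
print(interpret(0.06))
```

The joke, of course, is that honest practice fixes one threshold in advance rather than sliding down this ladder after the fact.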

Values higher than 0.1 should be considered not significant at all; instead, the comic suggests taking part of the sample (a "subgroup") and analyzing it without regard to the rest. For example, in a study trying to prove that people always sneeze when walking by a particular street lamp, someone would record the number of people who pass the lamp and the number who sneeze. If the results don't reach the desired p<0.1, a subgroup is picked instead (e.g. "OK, not all people sneeze, but look! Women sneeze more than men, so let's analyze only women"). Of course, this is not accepted scientific procedure, as it is very likely to introduce sampling bias into the result. This is an example of the multiple comparisons problem, which is also the topic of comic 882.
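The danger of hunting through subgroups can be quantified. If each of k independent tests on pure noise has false-positive rate alpha, the chance that at least one comes out "significant" purely by luck is 1 − (1 − alpha)^k, which grows quickly with k. The formula is standard; the subgroup scenario in the comment is made up.

```python
def family_wise_error(alpha, k):
    # Probability that at least one of k independent tests on pure noise
    # clears the significance threshold alpha: 1 - (1 - alpha)^k.
    return 1 - (1 - alpha) ** k

# Slice a null result into 20 subgroups (men vs. women, by hair color, ...):
print(f"{family_wise_error(0.05, 20):.3f}")  # roughly a 64% chance of at
                                             # least one spurious "finding"
```

This is why corrections such as Bonferroni divide the threshold by the number of comparisons when many subgroups are tested.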

If the results cannot normally be considered significant, the title text suggests inverting p<0.050, making it p>0.050. This may fool casual readers, since the change is only to the inequality sign, which may go unnoticed or be dismissed as a typographical error ("no one would claim their results aren't significant; they must mean p<0.050").

Transcript

[A two-column table where the Interpretation column groups ranges of the P-value column using square brackets.]
P-value Interpretation
0.001 Highly significant
0.01
0.02
0.03
0.04 Significant
0.049
0.050 Oh crap. Redo calculations.
0.051 On the edge of significance
0.06
0.07 Highly suggestive, relevant at the p<0.10 level
0.08
0.09
0.099
≥0.1 Hey, look at this interesting subgroup analysis

Discussion

IMHO the current explanation is misleading. The p-value describes how well the experiment output fits the hypothesis. The hypothesis can be that the experiment output is random. Low p-values point out that the experiment output fits well with the behavior predicted by the hypothesis. The higher the p-value, the more the observed and predicted values differ. Jkotek (talk) 08:54, 26 January 2015 (UTC)

High p-values do not signify that the results differ from what was predicted, they simply indicate that there are not enough results for a conclusion. --108.162.230.113 20:13, 26 January 2015 (UTC)

I read this comic as a bit of a jab at either scientists or media commentators who want the experiments to show a particular result. As the significance decreases, first they redo the calculations, either in the hope that the result might have been erroneous and would be re-classified as significant, or to intentionally fudge the numbers to increase the significance. The next step is to start clutching at straws, admitting that while the result isn't technically significant, it is very close to being significant. After that, changing the language to "suggestive" may convince the general public that the result is actually more significant than it is, while changing the parameters of the "significance" value allows it to be classified as significant. Finally, they give up on the overall results, and start pointing out small sections which may by chance show some interesting features.

All of these subversive efforts could come about because of scientists who want their experiment to match their hypothesis, journalists who need a story, researchers who have to justify further funding etc etc. --Pudder (talk) 09:01, 26 January 2015 (UTC)

I like how you have two separate categories - "scientists" and "researchers" with each having two different goals :) Nyq (talk) 10:12, 26 January 2015 (UTC)
As a reporter, I can assure you that journalists are not redoing calculations on studies. Journalists are notorious for their innumeracy; the average reporter can barely figure the tip on her dinner check. Most of us don't know p-values from pea soup. 108.162.216.78 16:44, 26 January 2015 (UTC)
The press has at various times been guilty of championing useless drugs AND 'debunking' useful ones, but it's more to do with how information is presented to them than any particular statistical failing on their part. They can look up papers the same as anyone, but without a very solid understanding of the specific area of science there's no real way that a layman can determine if an experiment is flawed or valid, or if results have been manipulated. Reporters (like anyone researching an area) at some point have to decide who to trust and who not to, and make up their own minds. It doesn't even matter if a reporter IS very scientifically literate, because the readers aren't and THEY have to take his word for it. Certainly reporters should be much more rigorous, but there's more going on than just 'reporters need to take a stats class'. Journals and academics make the exact same mistakes too: skipping to the conclusion, getting excited about breakthroughs that are too good to be true, and assuming that science and scientists are fundamentally trustworthy. And the answer isn't even that everyone involved should demand better proof, because that's exactly the problem we already have - what actually IS proof? Can you ever trust any research done by someone else? Can you even trust research that you were a part of? After all, any large sample group takes more than one person to implement and analyse, and your personal observations could easily not be representative of the whole. We love to talk about proof as being this beautifully objective thing, but in truth the only true proof comes after decades of work and studies across huge numbers of subjects, which naturally never happens if the first test comes back negative, because no one puts much effort into re-testing things that are 'false'. 01:29, 13 April 2015 (UTC)

This one resembles this interesting blog post very much.--141.101.96.222 13:26, 26 January 2015 (UTC)

STEN (talk) 13:33, 26 January 2015 (UTC)

Heh. 173.245.56.189 20:06, 26 January 2015 (UTC)

See http://xkcd.com/882/ for using a subgroup to improve your p value. Sebastian --108.162.231.68 23:02, 26 January 2015 (UTC)

I agree. The part about p >= 0.1 reminded me of that comic. S (talk) 01:25, 27 January 2015 (UTC)

This comic may be ridiculing the arbitrariness of the .05 significance cutoff and alluding to the "new statistics" being discussed in psychology.[1]
108.162.219.163 23:06, 26 January 2015 (UTC)

The "redo calculations" part could just mean "redo calculations with more significant figures" (i.e. to see whether this 0.050 value is actually 0.0498 or 0.0503). --141.101.104.52 13:36, 28 January 2015 (UTC)

Agreed. I first understood it as someone thinking that 0.05 is a "too round" value, and some calculations tend to raise suspicions when these values pop up. 188.114.99.189 21:28, 7 December 2015 (UTC)
TL;DR

As someone who understands p values, IMO this explanation is way too technical. I really think the intro paragraph should have a short, simplified version that doesn't require any specialized vocabulary words except "p-value" itself. Then talk about controls, null hypothesis, etc, in later paragraphs. - Frankie (talk) 16:52, 28 January 2015 (UTC)

That is nearly impossible. I'm using the American Statistical Association's definition that "Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value" until a better one comes. That said, the difficulty of explaining p-value is no excuse to use the wrong interpretation of "probability that observed result is due to chance".--Troy0 (talk) 06:27, 24 July 2016 (UTC)

There's an irony in the use of hair colour as a suspect subgroup analysis... hair colour can factor into studies. Ignoring the (probably wrong) common idea that redheads have a lower effectiveness rating for contraceptives, there do seem to be some suggestions that the recessive mutated gene does have implications beyond hair colour. Getting sunburn easily is one we all know, but how about painkiller and anaesthetic efficacy? For example: http://healthland.time.com/2010/12/10/why-surgeons-dread-red-heads/ --141.101.99.84 09:15, 18 June 2015 (UTC)