Talk:1478: P-Values

Please sign your posts with ~~~~

:I like how you have two separate categories - "scientists" and "researchers" with each having two different goals :) [[User:Nyq|Nyq]] ([[User talk:Nyq|talk]]) 10:12, 26 January 2015 (UTC)
 
: As a reporter, I can assure you that journalists are not redoing calculations on studies. Journalists are notorious for their innumeracy; the average reporter can barely figure the tip on her dinner check. Most of us don't know p-values from pea soup. [[Special:Contributions/108.162.216.78|108.162.216.78]] 16:44, 26 January 2015 (UTC)
 
:: The press has at various times been guilty of championing useless drugs AND 'debunking' useful ones, but it's more to do with how information is presented to them than any particular statistical failing on their part. They can look up papers the same as anyone, but without a very solid understanding of the specific area of science there's no real way that a layman can determine whether an experiment is flawed or valid, or whether results have been manipulated. Reporters (like anyone researching an area) at some point have to decide who to trust and who not to, and make up their own minds. It doesn't even matter if a reporter IS very scientifically literate, because the readers aren't and THEY have to take his word for it. Certainly reporters should be much more rigorous, but there's more going on than just 'reporters need to take a stats class'. Journals and academics make the exact same mistakes too: skipping to the conclusion, getting excited about breakthroughs that are too good to be true, and assuming that science and scientists are fundamentally trustworthy. And the answer isn't even that everyone involved should demand better proof, because that's exactly the problem we already have. What actually IS proof? Can you ever trust any research done by someone else? Can you even trust research that you were a part of? After all, any large sample group takes more than one person to implement and analyse, and your personal observations could easily not be representative of the whole. We love to talk about proof as a beautifully objective thing, but in truth the only true proof comes after decades of work and studies across huge numbers of subjects, which naturally never happens if the first test comes back negative, because no-one puts much effort into re-testing things that are 'false'. 01:29, 13 April 2015 (UTC)
 
This one resembles [https://mchankins.wordpress.com/2013/04/21/still-not-significant-2/ this interesting blog post] very much.--[[Special:Contributions/141.101.96.222|141.101.96.222]] 13:26, 26 January 2015 (UTC)
 
This comic may be ridiculing the arbitrariness of the .05 significance cutoff and alluding to the "new statistics" being discussed in psychology.[http://www.psychologicalscience.org/index.php/publications/observer/2014/march-14/theres-life-beyond-05.html]<br>[[Special:Contributions/108.162.219.163|108.162.219.163]] 23:06, 26 January 2015 (UTC)
 
The "redo calculations" part could just mean "redo calculations with more significant figures" (i.e. to see whether this 0.050 value is actually 0.0498 or 0.0503). --[[Special:Contributions/141.101.104.52|141.101.104.52]] 13:36, 28 January 2015 (UTC)
 
:Agreed. I first understood it as someone thinking that 0.05 is a "too round" value; calculations tend to raise suspicion when values that round pop up. [[Special:Contributions/188.114.99.189|188.114.99.189]] 21:28, 7 December 2015 (UTC)
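
To illustrate the "more significant figures" reading above, here is a minimal sketch (assuming Python with scipy; the z statistic of 1.96 is picked only because its two-sided p-value lands almost exactly on the conventional cutoff):

<pre>
# A p-value that displays as 0.050 can sit just under (or just over) the cutoff.
from scipy.stats import norm

z = 1.96
p = 2 * norm.sf(z)      # two-sided p-value for a z statistic
print(f"p = {p:.3f}")   # p = 0.050 -- looks exactly like the cutoff
print(f"p = {p:.6f}")   # p = 0.049996 -- more figures show it is just below 0.05
</pre>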
 
 
;TL;DR
 
As someone who understands p-values, IMO this explanation is ''way'' too technical. I really think the intro paragraph should have a short, simplified version that doesn't require any specialized vocabulary except "p-value" itself. Then talk about controls, the null hypothesis, etc., in later paragraphs. - [[User:Frankie|Frankie]] ([[User talk:Frankie|talk]]) 16:52, 28 January 2015 (UTC)
 
: That is [http://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/ nearly impossible]. I'm using the American Statistical Association's [http://amstat.tandfonline.com/doi/full/10.1080/00031305.2016.1154108#_i27 definition] that "Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value" until a better one comes along. That said, the difficulty of explaining p-values is no excuse to use the wrong interpretation of "probability that the observed result is due to chance".--[[User:Troy0|Troy0]] ([[User talk:Troy0|talk]]) 06:27, 24 July 2016 (UTC)
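
The "equal to or more extreme" part of that definition can at least be demonstrated by simulation. A sketch (Python; the fair-coin model and the 60-heads observation are made up purely for illustration): the p-value is the fraction of datasets generated under the specified model whose summary is at least as extreme as the observed one.

<pre>
# ASA wording as a toy simulation: the "specified statistical model" is a
# fair coin flipped 100 times; the "statistical summary" is the head count.
import random

random.seed(0)
n_flips, observed_heads, trials = 100, 60, 100_000

extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # "equal to or more extreme": at least as far from the expected 50 heads
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

print(f"simulated p ~ {extreme / trials:.3f}")  # near the exact value of ~0.057
</pre>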
 
 
----
 
 
There's an irony in the use of hair colour as a suspect subgroup analysis... hair colour can factor into studies. Ignoring the (probably wrong) common idea that redheads have a lower effectiveness rating for contraceptives, there do seem to be some suggestions that the recessive mutated gene has implications beyond hair colour. Getting sunburned easily is one we all know, but how about painkiller and anaesthetic efficacy? For example: http://healthland.time.com/2010/12/10/why-surgeons-dread-red-heads/ --[[Special:Contributions/141.101.99.84|141.101.99.84]] 09:15, 18 June 2015 (UTC)
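
On the broader point of why unplanned subgroup analyses are suspect: a quick sketch (Python with numpy/scipy; the twenty subgroups and sample sizes are made up) of how testing enough subgroups against pure noise hands out "significant" results by chance alone.

<pre>
# Test 20 made-up subgroups where NO real effect exists anywhere.
# With a 0.05 cutoff, about one subgroup is expected to clear it by luck.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subgroups, n_per_arm = 20, 30

hits = 0
for _ in range(n_subgroups):
    treatment = rng.normal(0.0, 1.0, n_per_arm)  # same distribution in both
    control = rng.normal(0.0, 1.0, n_per_arm)    # arms, i.e. pure noise
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        hits += 1

print(f"{hits} of {n_subgroups} subgroups 'significant' despite no real effect")
</pre>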
 
