2533: Slope Hypothesis Testing
Common statistical formulae assume the data points are statistically independent, that is, that the test score and volume measurement from one point don't reveal anything about those of the other points. By measuring each individual's scream multiple times, Cueball and Megan violate the independence assumption (a person's scream volume is unlikely to be independent from one scream to the next) and invalidate their significance calculation. This is an example of pseudoreplication. Furthermore, Megan and Cueball fail to obtain new test scores for each student, which would further limit their statistical options.
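A quick way to see the effect of pseudoreplication is to simulate it. The sketch below is a toy Python example, not the comic's actual data: the number of students, the score and volume distributions, and the noise levels are all made up for illustration. It regresses scream volume on test score when there is no real relationship, once treating every scream as an independent data point and once using one averaged measurement per student, and compares how often each analysis declares a "significant" slope.

<pre>
import numpy as np
from scipy import stats

# Toy simulation (illustrative numbers only): test score and scream volume
# are truly unrelated, but each student's scream is measured many times.
# Counting every scream as an independent point is pseudoreplication.

rng = np.random.default_rng(0)
n_students, n_screams, n_trials = 4, 20, 2000
false_pos_naive = false_pos_proper = 0

for _ in range(n_trials):
    scores = rng.normal(75, 10, n_students)   # one exam score per student
    volume = rng.normal(80, 5, n_students)    # each student's typical volume (dB)

    # Repeated screams: noisy re-measurements of the same underlying volumes.
    x = np.repeat(scores, n_screams)
    y = np.repeat(volume, n_screams) + rng.normal(0, 1, n_students * n_screams)

    naive = stats.linregress(x, y)            # pretends all 80 points are independent
    proper = stats.linregress(scores, y.reshape(n_students, n_screams).mean(axis=1))

    false_pos_naive += naive.pvalue < 0.05
    false_pos_proper += proper.pvalue < 0.05

print(f"false-positive rate, counting every scream: {false_pos_naive / n_trials:.2f}")
print(f"false-positive rate, one mean per student:  {false_pos_proper / n_trials:.2f}")
</pre>

Even though there is no true relationship, the pseudoreplicated analysis reports "significant" slopes far more often than the nominal 5%, while the per-student analysis stays close to it.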
Another strange aspect of their experiment is that the p-values obtained from a typical linear regression assume there is uncertainty in the y-values but that the x-values are fully known. Ironically, in this experiment they are reducing uncertainty in the x-values of their data while doing nothing to improve knowledge of the y-values.
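The sketch below, again a toy Python example with made-up numbers, illustrates that asymmetry: adding noise to the y-values leaves the estimated slope unbiased (the p-value machinery is built for exactly this case), while adding the same amount of noise to the x-values biases the slope toward zero, an errors-in-variables effect that the standard regression p-value does not account for.

<pre>
import numpy as np
from scipy import stats

# Illustrative sketch: ordinary least squares models noise in y only.
rng = np.random.default_rng(1)
n, true_slope = 200, 2.0
x_true = rng.normal(0, 1, n)
y_true = true_slope * x_true

noisy_y = stats.linregress(x_true, y_true + rng.normal(0, 1, n))
noisy_x = stats.linregress(x_true + rng.normal(0, 1, n), y_true)

print(f"slope with noise in y only: {noisy_y.slope:.2f}  (close to {true_slope})")
print(f"slope with noise in x only: {noisy_x.slope:.2f}  (attenuated toward 0)")
</pre>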
Moreover, even if the new data were statistically independent, this still appears to be a classic example of "p-hacking", where new data is added until a statistically significant p-value is obtained.
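This "keep collecting until it's significant" strategy, sometimes called optional stopping, can also be simulated. In the toy Python sketch below (illustrative batch sizes and sample counts only), x and y are completely unrelated, yet checking the p-value after every new batch of data and stopping as soon as it dips below 0.05 produces false positives far more often than the nominal 5% that a single pre-planned test would give.

<pre>
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, max_n, batch = 1000, 200, 10
false_positives = 0

for _ in range(n_trials):
    x = rng.normal(size=max_n)
    y = rng.normal(size=max_n)          # no true relationship
    for n in range(batch, max_n + 1, batch):
        if stats.linregress(x[:n], y[:n]).pvalue < 0.05:
            false_positives += 1        # declare "significance" and stop
            break

print(f"false-positive rate with optional stopping: {false_positives / n_trials:.2f}")
# A single test at a fixed sample size would come out close to 0.05.
</pre>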