Why a P-Value is Not Enough.
Solla F
Conceptualization
2018-01-01
Abstract
BACKGROUND: All doctors know that a P-value < 0.05 is "the Grail," but publications require further parameters [odds ratios, confidence intervals (CIs), etc.] to better analyze scientific data.

AIM: The aim of this study was to present P-values, CIs, and common effect sizes (Cohen's d, odds ratio, and various coefficients) in a simple way.

DESCRIPTION: The P-value is the probability, when the null hypothesis is true (eg, no difference or no association), of obtaining a result equal to or more extreme than the one actually observed. Simplistically, the P-value quantifies the probability that the result is due to chance. It does not measure how large the association or the difference is. The CI on a value describes the probability that the true value lies within a given range: a 95% CI means that the interval covers the true value in 95 of 100 performed studies. The test is significant if the CI does not include the null hypothesized difference or association (eg, 0 for a difference). Effect sizes are quantitative measures of the strength of a difference or an association. If the P-value is <0.05 but the effect size is very small, the test is statistically significant but probably not clinically significant.

CONCLUSIONS: Scientific publications require more parameters than a P-value. Statistical results should also include effect sizes and CIs to allow for a more complete, honest, and useful interpretation of scientific findings.
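As a minimal, hypothetical sketch of the P-value and CI described above (invented data; Python with NumPy and SciPy assumed available), a two-sample comparison might look like this:

import numpy as np
from scipy import stats

# Invented measurements for two groups, for illustration only.
rng = np.random.default_rng(42)
group_a = rng.normal(loc=50.0, scale=10.0, size=40)   # e.g. treatment group
group_b = rng.normal(loc=45.0, scale=10.0, size=40)   # e.g. control group

# P-value: probability, under the null hypothesis of no difference,
# of a result at least as extreme as the one observed (Welch's t-test).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# Approximate 95% CI for the difference in means (normal approximation).
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.2f}, P = {p_value:.4f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
# The test is significant at the 5% level if P < 0.05, ie, if the CI excludes 0.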
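The effect sizes named in the abstract (Cohen's d and the odds ratio) can be computed directly. The following sketch uses invented data and an invented 2x2 table, with the pooled-standard-deviation formula for Cohen's d:

import numpy as np
from scipy import stats

def cohens_d(a, b):
    # Standardized mean difference using the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
a = rng.normal(52.0, 10.0, 40)   # invented measurements, group A
b = rng.normal(47.0, 10.0, 40)   # invented measurements, group B
print(f"Cohen's d = {cohens_d(a, b):.2f}")

# Invented 2x2 table: rows = exposed / unexposed, columns = event / no event.
table = np.array([[30, 70],
                  [15, 85]])

# Odds ratio by the cross-product formula, plus Fisher's exact test P-value.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
or_scipy, p_fisher = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f} (Fisher exact P = {p_fisher:.4f})")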
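The final point of the DESCRIPTION, that a result can be statistically significant yet clinically negligible, can be illustrated with a simulated very large sample and a deliberately tiny true difference (all numbers here are assumptions for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000                       # very large, hypothetical sample per group
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.02, 1.0, n)      # true difference of 0.02 SD: negligible in practice

t_stat, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"P = {p:.2g}, Cohen's d = {d:.3f}")
# Typically P is far below 0.05 here, yet d is only about 0.02,
# well under the conventional "small effect" threshold of 0.2,
# so the result is unlikely to matter clinically.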