P-values – a chronic conundrum

Abstract

Background
In medical research and practice, the p-value is arguably the most frequently used statistic, yet it is widely misconstrued as the probability of a type I error, a misunderstanding with serious consequences. It can greatly affect reproducibility in research, treatment selection in medical practice, and model specification in empirical analyses. Using plain language and concrete examples, this paper aims to elucidate the p-value confusion from its root, to explicate the difference between significance testing and hypothesis testing, to illuminate the consequences of the confusion, and to present a viable alternative to the conventional p-value.

Main text
The confusion over p-values has plagued the research community and medical practitioners for decades. Efforts to clarify it have been largely futile, partly because intuitive yet mathematically rigorous educational materials are scarce, and partly because there has been no practical alternative to the p-value for guarding against randomness. The confusion is rooted in the misconception of significance testing and hypothesis testing. Most people, including many statisticians, are unaware that the p-value and significance testing developed by Fisher are incompatible with the hypothesis-testing paradigm created by Neyman and Pearson, and most otherwise excellent statistics textbooks cobble the two paradigms together without elucidating the subtle but fundamental differences between them. The p-value is a practical tool for gauging the "strength of evidence" against the null hypothesis: it tells investigators that, for example, a p-value of 0.001 is stronger evidence than 0.05. However, p-values produced in significance testing are not the probabilities of type I errors, as commonly misconceived. For a p-value of 0.05, the chance that a treatment does not work is not 5%; rather, it is at least 28.9%.
Conclusions
A long-overdue effort to understand p-values correctly is needed. In medical research and practice, however, merely banning significance testing and accepting uncertainty is not enough: researchers, clinicians, and patients alike need to know the probability that a treatment will or will not work. Calibrated p-values (the probability that a treatment does not work) should therefore be reported in research papers.
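The abstract's "at least 28.9%" figure is consistent with the well-known −e·p·ln(p) lower-bound calibration of p-values (a bound on the false-positive probability under equal prior odds, valid for p < 1/e). As a sketch of how such a calibrated value could be computed (the function name is illustrative, not from the paper):

```python
import math

def false_positive_lower_bound(p):
    """Lower bound on the probability that a 'significant' result is a
    false positive, via the -e*p*ln(p) calibration (valid for 0 < p < 1/e)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration is defined only for 0 < p < 1/e")
    # Bound on the Bayes factor in favor of the null hypothesis
    bayes_factor_bound = -math.e * p * math.log(p)
    # Convert the Bayes-factor bound to a posterior probability (equal prior odds)
    return 1 / (1 + 1 / bayes_factor_bound)

# A p-value of 0.05 calibrates to a false-positive probability of at least ~28.9%
print(round(false_positive_lower_bound(0.05), 3))  # → 0.289
```

For p = 0.05 this reproduces the abstract's claim: the chance the treatment does not work is at least 28.9%, not 5%.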

Data and Resources

This item has no data

Identity

Description: The Identity category includes attributes that support the identification of the resource.

PID: https://www.doi.org/10.6084/m9.figshare.c.5038193.v1
PID: https://www.doi.org/10.6084/m9.figshare.c.5038193
URL: http://dx.doi.org/10.6084/m9.figshare.c.5038193
URL: http://dx.doi.org/10.6084/m9.figshare.c.5038193.v1
Access Modality

Description: The Access Modality category includes attributes that report the modality of exploitation of the resource.

Access Right: not available
Attribution

Description: Authorships and contributors

Author: Gao, Jian (ORCID: 0000-0001-8101-740X)
Publishing

Description: Attributes about the publishing venue (e.g. journal) and deposit location (e.g. repository)

Collected From: Datacite
Hosted By: figshare
Publication Date: 2020-01-01
Publisher: figshare
Additional Info
Language: UNKNOWN
Resource Type: Collection
Keywords: FOS: Sociology; FOS: Biological sciences
system:type: other
Management Info
Source: https://science-innovation-policy.openaire.eu/search/other?orpId=dedup_wf_001::ff72426d6bf5ad82ca603c902c7ce76d
Author: jsonws_user
Last Updated: 20 December 2020, 01:26 (CET)
Created: 20 December 2020, 01:26 (CET)