TY - JOUR
T1 - Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015
AU - Camerer, Colin F.
AU - Dreber, Anna
AU - Holzmeister, Felix
AU - Ho, Teck-Hua
AU - Huber, Jürgen
AU - Johannesson, Magnus
AU - Kirchler, Michael
AU - Nave, Gideon
AU - Nosek, Brian A.
AU - Pfeiffer, Thomas
AU - Altmejd, Adam
AU - Buttrick, Nick
AU - Chan, Taizan
AU - Chen, Yiling
AU - Forsell, Eskil
AU - Gampa, Anup
AU - Heikensten, Emma
AU - Hummer, Lily
AU - Imai, Taisuke
AU - Isaksson, Siri
AU - Manfredi, Dylan
AU - Rose, Julia
AU - Wagenmakers, Eric-Jan
AU - Wu, Hang
N1 - Publisher Copyright:
© 2018, The Author(s).
PY - 2018/9/1
Y1 - 2018/9/1
AB - Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
UR - http://www.scopus.com/inward/record.url?scp=85052962253&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052962253&partnerID=8YFLogxK
U2 - 10.1038/s41562-018-0399-z
DO - 10.1038/s41562-018-0399-z
M3 - Article
C2 - 31346273
AN - SCOPUS:85052962253
SN - 2397-3374
VL - 2
SP - 637
EP - 644
JO - Nature Human Behaviour
JF - Nature Human Behaviour
IS - 9
ER -