TY - JOUR
T1 - Examining the replicability of online experiments selected by a decision market
AU - Holzmeister, Felix
AU - Johannesson, Magnus
AU - Camerer, Colin F.
AU - Chen, Yiling
AU - Ho, Teck-Hua
AU - Hoogeveen, Suzanne
AU - Huber, Juergen
AU - Imai, Noriko
AU - Imai, Taisuke
AU - Jin, Lawrence
AU - Kirchler, Michael
AU - Ly, Alexander
AU - Mandl, Benjamin
AU - Manfredi, Dylan
AU - Nave, Gideon
AU - Nosek, Brian A.
AU - Pfeiffer, Thomas
AU - Sarafoglou, Alexandra
AU - Schwaiger, Rene
AU - Wagenmakers, Eric-Jan
AU - Waldén, Viking
AU - Dreber, Anna
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024
Y1 - 2024
N2 - Here we test the feasibility of using decision markets to select studies for replication and provide evidence about the replicability of online experiments. Social scientists (n = 162) traded on the outcome of close replications of 41 systematically selected MTurk social science experiments published in PNAS 2015–2018, knowing that the 12 studies with the lowest and the 12 with the highest final market prices would be selected for replication, along with 2 randomly selected studies. The replication rate, based on the statistical significance indicator, was 83% for the top-12 and 33% for the bottom-12 group. Overall, 54% of the studies were successfully replicated, with replication effect size estimates averaging 45% of the original effect size estimates. The replication rate varied between 54% and 62% for alternative replication indicators. The observed replicability of MTurk experiments is comparable to that of previous systematic replication projects involving laboratory experiments.
UR - http://www.scopus.com/inward/record.url?scp=85209596963&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85209596963&partnerID=8YFLogxK
U2 - 10.1038/s41562-024-02062-9
DO - 10.1038/s41562-024-02062-9
M3 - Article
AN - SCOPUS:85209596963
SN - 2397-3374
JO - Nature Human Behaviour
JF - Nature Human Behaviour
ER -