Kavli Affiliate: Joshua Vogelstein
| Authors: Xinhui Li, Nathalia Bianchini Esper, Lei Ai, Steve Giavasis, Hecheng Jin, Eric Feczko, Ting Xu, Jon Clucas, Alexandre Franco, Anibal Solon Heinsfeld, Azeez Adebimpe, Joshua T. Vogelstein, Chao-Gan Yan, Oscar Esteban, Russell A. Poldrack, Cameron Craddock, Damien Fair, Theodore Satterthwaite, Gregory Kiar and Michael P. Milham
| Summary:
Abstract: When fields lack consensus standard methods and accessible ground truths, reproducibility can be more of an ideal than a reality. Such has been the case for functional neuroimaging, where there exists a sprawling space of tools and processing pipelines. We provide a critical evaluation of the impact of differences across five independently developed minimal preprocessing pipelines for functional MRI. We show that even when handling identical data, inter-pipeline agreement was only moderate, critically shedding light on a factor that limits cross-study reproducibility. We show that low inter-pipeline agreement mainly becomes appreciable when the reliability of the underlying data is high, which is increasingly the case as the field progresses. Crucially, we show that when inter-pipeline agreement is compromised, so too is the consistency of insights from brainwide association studies. We highlight the importance of comparing analytic configurations, as both widely discussed and commonly overlooked decisions can lead to marked variation.
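To make the notion of inter-pipeline agreement concrete, below is a minimal illustrative sketch, not the authors' actual analysis code: it treats each pipeline's output for a subject as a region-by-region functional connectivity matrix and quantifies agreement as the Pearson correlation between the two pipelines' edge vectors. The function names, matrix sizes, and toy data are all hypothetical.

```python
import numpy as np


def vectorize_connectome(conn: np.ndarray) -> np.ndarray:
    """Return the upper-triangular (off-diagonal) edges of a symmetric
    region-by-region connectivity matrix as a 1-D vector."""
    rows, cols = np.triu_indices(conn.shape[0], k=1)
    return conn[rows, cols]


def inter_pipeline_agreement(conn_a: np.ndarray, conn_b: np.ndarray) -> float:
    """Pearson correlation between the edge vectors of two connectomes
    derived from identical data by two different pipelines."""
    a = vectorize_connectome(conn_a)
    b = vectorize_connectome(conn_b)
    return float(np.corrcoef(a, b)[0, 1])


if __name__ == "__main__":
    # Toy data only: simulate two pipelines' connectomes for one subject
    # as a shared signal plus pipeline-specific perturbations.
    rng = np.random.default_rng(0)
    n_regions = 100
    base = rng.standard_normal((n_regions, n_regions))
    base = (base + base.T) / 2  # symmetrize, as connectomes are symmetric
    pipeline_a = base + 0.3 * rng.standard_normal(base.shape)
    pipeline_b = base + 0.3 * rng.standard_normal(base.shape)
    print(f"agreement: {inter_pipeline_agreement(pipeline_a, pipeline_b):.3f}")
```

Under this toy model, agreement rises as the shared signal dominates the pipeline-specific noise, which mirrors the abstract's observation that pipeline differences become most visible when the underlying data are highly reliable.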