Wednesday 10 April 2013

(Appropriately powered) replication's what you need

Thanks to Mark Stokes for the picture.
There has been some truly excellent coverage this morning of the very important paper published today by Kate Button, Marcus Munafò and colleagues in Nature Reviews Neuroscience, entitled “Power failure: why small sample size undermines the reliability of neuroscience”.

For example, Ed Yong has written a fantastic piece on the issues raised by the realisation that insufficient statistical power plagues much neuroscience research, and Christian Jarrett has an equally good article on the implications of these issues for the field.

As I commented in Ed Yong’s article, I think this is a landmark paper.  It's very good to see these issues receiving exposure in such a widely-read and highly-respected journal - I think it says a lot for the willingness of the neuroscience field to consider and hopefully tackle these problems, which are being identified in so many branches of science.
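If you haven't read the paper yet, the core of the argument boils down to a few lines of arithmetic: the lower the power of a field's studies, the less likely it is that any given "significant" finding reflects a true effect.  Here's a minimal sketch of that calculation in Python - my own illustration of the standard positive predictive value formula the paper builds on, with a made-up value for the pre-study odds, not code from the paper itself:

```python
def ppv(power, alpha=0.05, prior_odds=0.25):
    """Positive predictive value: the probability that a 'significant'
    finding reflects a true effect, given statistical power, the alpha
    level, and the pre-study odds R that a tested effect is real."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Assume 1 in 5 tested effects is real (pre-study odds R = 0.25)
# and the usual alpha = 0.05:
print(ppv(0.80))  # -> 0.8: a well-powered positive is usually real
print(ppv(0.20))  # -> 0.5: at 20% power, a positive is a coin flip
```

With four out of five positives real at 80% power but only half at 20% power (under these assumed odds), you can see why "power failure" isn't hyperbole.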

I really like the section of the paper focusing on the fact that power failure is a particular problem for replication attempts, a point I think not many people are conscious of.  You'll often see an experiment's sample size justified with an argument like "well, that number is what's used in previous studies".  Button et al. demonstrate that such a justification is unlikely to be sufficient: because significant results from small studies tend to overestimate the true effect size (the "winner's curse"), a replication powered on the published effect will typically be underpowered for the real one.  To be adequately powered, replications therefore need a larger sample size than the original study they're seeking to replicate, and there are very few replication studies in the literature that fulfil this criterion.  The quick calculation sketched below gives a feel for the numbers.
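To put rough numbers on that, here's a back-of-the-envelope sketch - again my own illustration, using the standard normal approximation for a two-sided, two-sample comparison, with effect sizes invented for the example:

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison at effect size d, via the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_power = norm.ppf(power)          # quantile giving the desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Say a small original study (n = 20 per group) reported d = 0.5.
# Even taking that published effect size at face value:
print(n_per_group(0.5))  # -> 63 per group, already far more than 20

# And if the winner's curse means the true effect is nearer d = 0.3,
# an adequately powered replication needs:
print(n_per_group(0.3))  # -> 175 per group
```

So matching the original study's n of 20 per group doesn't come close, even before you allow for the inflation of the published effect size.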

I feel deep down that greater emphasis on replication is the answer to a lot of the current issues facing the field, but the points about power raised by Button et al. are ones that researchers need to take account of when designing those replications.

The good thing is that I think the field is taking notice of papers such as this one, and is making progress towards developing more robust methodological principles.  Button et al.'s paper, like the Nature Neuroscience papers by Kriegeskorte et al. and Nieuwenhuis et al. and the recent moves by Psychological Science, Cortex, and other journals to promote more reliable methodology, is an excellent contribution to that progress.  I think it's a sign of a healthy, thriving scientific discipline that these methods developments are being published in such prominent flagship journals.  It gives me confidence about the future of the field.

Update 10/4/13, 3pm: I'm grateful to Paul Fletcher for highlighting that NeuroImage: Clinical has created a new section to help address concerns about the lack of replication in clinical neuroimaging.  I'm very happy to publicise any other similar moves to improve things.

References:
Button, K., Ioannidis, J., Mokrysz, C., Nosek, B., Flint, J., Robinson, E., & Munafò, M. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience. DOI: 10.1038/nrn3475
Chambers, C. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609-610. DOI: 10.1016/j.cortex.2012.12.016
Kriegeskorte, N., Simmons, W., Bellgowan, P., & Baker, C. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience, 12(5), 535-540. DOI: 10.1038/nn.2303
Nieuwenhuis, S., Forstmann, B., & Wagenmakers, E. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. DOI: 10.1038/nn.2886