Tuesday, 25 October 2016

"I remember it well"


Hermione Gingold and Maurice Chevalier sing "I Remember It Well" in Gigi (1958)
When Maurice Chevalier and Hermione Gingold reminisce about their younger days in the musical Gigi, it is clear that despite Chevalier's insistence that he remembers it well, their memories differ considerably in detail and precision.  Such variation in how precisely we can remember previous events is not only an affliction of older adults: we are all used to the vagaries of our memories, with some events remembered with crystal clarity and others recalled only indistinctly.

Studies of human memory often focus on how we remember some experiences but forget others, distinguishing only between ‘successful’ and ‘unsuccessful’ memory. However, as Chevalier and Gingold demonstrate, our memory for successfully remembered events can vary widely in quality, differing in the kinds of detail we can remember and how precise our memory for those details is. In addition to these ‘objective’ measures of how well we remember, our memory for an event can also subjectively feel more or less vivid to us based on our conscious experience of reliving the episode, regardless of how accurate our memory actually is. To date, there has been limited understanding of how such substantial differences in memory accuracy and experience occur.

Wednesday, 10 December 2014

Journalists' guide to fMRI papers

A printable version of the guide can be downloaded from [here].

This brief guide is written for journalists and others who may not consider themselves fMRI experts, but would appreciate some advice about how to read an fMRI paper with an appropriate degree of informed skepticism.  It is not meant to be prescriptive (or indeed exhaustive), but a few ideas of what to look for will hopefully benefit those wanting to report fMRI papers accurately in the media, as well as people who might simply wish to know how much they can reliably interpret from articles they read.

Like all science, fMRI is constantly evolving, so it is difficult to establish absolute standards that will be agreed by everybody.  However, here are some pointers about common problems to look out for:

Wednesday, 10 April 2013

(Appropriately powered) replication's what you need

Thanks to Mark Stokes for the picture
There has been some truly excellent coverage this morning of the very important paper published today by Kate Button, Marcus Munafò and colleagues in Nature Reviews Neuroscience, entitled “Power failure: why small sample size undermines the reliability of neuroscience”.

For example, Ed Yong has written a fantastic piece on the issues raised by the realisation that insufficient statistical power plagues much neuroscience research, and Christian Jarrett has an equally good article on the implications of these issues for the field.

As I commented in Ed Yong’s article, I think this is a landmark paper.  It's very good to see these issues receiving exposure in such a widely read and highly respected journal - I think it says a lot for the willingness of the neuroscience field to consider and hopefully tackle these problems, which are being identified in so many branches of science.

I really like the section of the paper focusing on the fact that the issues of power failure are a particular problem for replication attempts, which I think is a point not many people are aware of.  You'll often see an experiment's sample size justified on the basis of an argument like "well, that number is what's used in previous studies".  Button et al. demonstrate that such a justification is unlikely to be sufficient.  To be adequately powered, replications need a larger sample size than the original study they’re seeking to replicate.  There are very few replication studies in the literature that fulfil this criterion.
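To make that point concrete, here is a minimal sketch of the arithmetic (my own illustration, not an analysis taken from Button et al.'s paper), assuming a simple two-group comparison analysed with an independent-samples t-test and using Python's statsmodels power utilities; the effect sizes and sample sizes are entirely hypothetical.

# Rough power sketch for a two-sample t-test (illustrative numbers only)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# An original study with n = 20 per group has only ~34% power to detect
# a true medium-sized effect (Cohen's d = 0.5) at alpha = .05
print(analysis.power(effect_size=0.5, nobs1=20, alpha=0.05))

# Significant results from underpowered studies tend to overestimate the
# true effect, so a replication should be powered for a smaller effect.
# Sample size per group needed for 80% power:
print(analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05))  # ~64 per group
print(analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05))  # ~175 per group

In other words, even if the original effect size estimate were accurate, simply matching the original n = 20 per group would leave a replication badly underpowered; and because the true effect is probably smaller than the published estimate, the required sample is larger still.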

I feel deep down that greater emphasis on replication is the answer to a lot of the current issues facing the field, but the points raised by Button et al are key issues that researchers in the field need to take account of.

The good thing is that I think the field is taking notice of papers such as this one, and is making progress towards developing more robust methodological principles.  Button et al.'s paper, the recent Nature Neuroscience papers by Kriegeskorte et al. and Nieuwenhuis et al., and the recent moves by Psychological Science, Cortex, and other journals to promote more reliable methodology are all excellent contributions to that progress.  I think it's a sign of a healthy, thriving scientific discipline that these methods developments are being published in such prominent flagship journals.  It gives me confidence about the future of the field.

Update 10/4/13, 3pm: I'm grateful to Paul Fletcher for highlighting that NeuroImage: Clinical has created a new section to help address concerns about the lack of replication in clinical neuroimaging.  Very happy to publicise any other similar moves to improve things.

References:
Button, K., Ioannidis, J., Mokrysz, C., Nosek, B., Flint, J., Robinson, E., & Munafò, M. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience. DOI: 10.1038/nrn3475
Chambers, C. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609-610. DOI: 10.1016/j.cortex.2012.12.016
Kriegeskorte, N., Simmons, W., Bellgowan, P., & Baker, C. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience, 12(5), 535-540. DOI: 10.1038/nn.2303
Nieuwenhuis, S., Forstmann, B., & Wagenmakers, E. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. DOI: 10.1038/nn.2886

Wednesday, 30 May 2012

Forget the hype: how close are we to a 'forgetting pill'?

The neuralyzer from Men in Black
I've been a little disconcerted by the recent appearance in the popular science press of a number of articles seeming to claim that we're just around the corner from being able to erase painful or traumatic memories.  For example:



The articles are beautifully written, full of interesting and thought-provoking questions, and obviously the product of a great deal of work.  I think good science writing is really important and greatly value the work that writers like Jonah Lehrer and Jerry Adler do.  However, I can't understand how these very clever, usually marvellous writers make such a huge leap in this instance: from the (admittedly fascinating) findings in animal models to the putative selective erasure of the complex, multidimensional, highly interconnected ensemble of neural representations that constitutes a single human autobiographical memory.

This matters because many thousands of people suffer enormous anguish every day with the dreadful effects of post-traumatic stress or related conditions, and may have their hopes raised that a "forgetting pill" is just around the corner.  It seems to me that this hype isn't justified based on current knowledge, although as this isn’t my area of specialist expertise, maybe I’m missing something.  I had an interesting email conversation with Jonah Lehrer in which he was characteristically open to a number of my (hopefully constructive) criticisms.  However, to find out whether I might have misunderstood the science, I asked someone who is an expert in this area, Dr Amy Milton from the University of Cambridge, to set things straight.  Here’s her view:

Wednesday, 11 January 2012

Elements of episodic memory

Keen students of memory will recognise that the title of this post is an homage to the seminal book of the same name by the great memory researcher Endel Tulving.  To my mind, Tulving’s Elements is one of the finest books that have been written about memory, along with William James’s Principles of Psychology and Dan Schacter’s Searching for Memory.  (It’s quite possible that Charles Fernyhough’s forthcoming Pieces of Light may soon join that list.)

In the book, Tulving describes how episodic memories of experienced events are unlikely to be stored as fixed, separate, discrete “memory traces”, but rather as “bundles” of features.  It makes sense, given the enormous number of events we may have to remember over a lifetime, that our brains would have evolved a more efficient strategy than simply storing each event separately, as a bound trace comprising all its different components.  The redundancy would be huge.  Instead, it appears that we store single representations of features, distributed around the brain, which are then shared between different event memories via associative networks.  Tulving acknowledges that “we have no idea about the number and identity of features that the human mind or its memory system has at its disposal” (p. 161).  However, he speculates that “the features of the mind correspond to discriminable differences in our perceptual environment and to the categories and the concepts that the language we use imposes on the world” (ibid).
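As a toy illustration of that redundancy argument (my own sketch in Python, not a formalism from the book, with entirely made-up events and features), compare storing each event as its own complete trace with storing events as bundles of links into a shared pool of feature representations:

# Option 1: every event stored separately as a bound trace of all its features
separate_traces = {
    "seaside_holiday": ["sunshine", "ice cream", "beach", "grandmother"],
    "park_picnic": ["sunshine", "ice cream", "park", "grandmother"],
}

# Option 2: one shared pool of feature representations; each event memory is
# just a bundle of associations (indices) into that pool
feature_pool = ["sunshine", "ice cream", "beach", "grandmother", "park"]
event_bundles = {
    "seaside_holiday": {0, 1, 2, 3},
    "park_picnic": {0, 1, 4, 3},
}

# Shared features like "sunshine" are represented once and linked to many
# events, rather than duplicated inside every trace
print(sum(len(v) for v in separate_traces.values()))  # 8 feature copies stored
print(len(feature_pool))                              # 5 feature representations stored

Across a lifetime of overlapping events, the saving from sharing feature representations in this way becomes enormous, which is the intuition behind Tulving’s “bundles”.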

Thursday, 1 December 2011

Why Jon Driver was an inspiration to me

Jon Driver studied Experimental Psychology at Oxford before taking up a University Lectureship at Cambridge.  Within eight years of obtaining his DPhil he was a Professor at Birkbeck, and from 1998 a Professor at the UCL Institute of Cognitive Neuroscience (ICN), one of the world’s leading centres of research into the brain basis of cognition.  He was Director of the ICN from 2004 to 2009, before becoming one of a small handful of researchers from across the sciences to be selected for a prestigious Royal Society Anniversary Research Professorship in 2009.  He died this week, tragically young at the age of 49, leaving a young family.

I never worked with Jon directly, and wouldn’t say that I knew him particularly well.  More comprehensive and better informed assessments of his life and career will no doubt be found elsewhere.  However, the times I did spend with Jon were sufficient to leave a lasting impression on me, which is what I wanted to reflect on in these brief thoughts.

Thursday, 15 September 2011

The future of cognitive neuroscience

I have previously written about how I think that cognitive neuroscience as a scientific discipline has largely moved on (and I know that this is not a universally held view) from publishing studies demonstrating the neural correlates of “x”, where x might be behaviours as diverse as maternal love, urinating, or thinking about god.  There are still a few of these sorts of studies published each year, and because the public are, it seems, fascinated by stories about blobs on brains, the media portrayal of cognitive neuroscience tends to focus on such findings.

Some blobs on a brain
This is all very entertaining if you like your science presented to you in a breakfast TV sofa sort of way.  However, the downside is that people who are not regular readers of the fMRI research literature may come away thinking that the media portrayal of cognitive neuroscience is an accurate representation of the field.  In fact, I would argue, this is far from the case.  In my experience of working in cognitive neuroscience for more than a decade, most researchers I have encountered are not interested in so-called “blobology”.  Instead, they work very hard each day carefully designing theoretically motivated experiments that use cognitive neuroscience techniques to produce empirical data capable of differentiating between cognitive theories of how functions such as memory, language, vision, and attention might operate.