Monday 14 February 2011

Is there a cognitive neuroscience funding crisis?

When I started my lectureship (a position equivalent to assistant professor in the US system) way back in the good old days of 2007, one of the first things I had to think about was how to begin building a research group.  My research interests are in understanding human memory using cognitive neuroscience techniques such as neuropsychology (studying the way memory is disrupted following brain damage or dementia) and neuroimaging (studying the brain areas that are particularly active while remembering).  We are seeking a greater understanding of the way in which different memory processes are organised in the brain, as a means to determine how these processes might be preserved or impaired in neurological and psychiatric disorders.

Cognitive neuroscience often generates great excitement in the media and the public in general.  This is most obvious in the genuine fascination people have with seeing which parts of the brain “light up” during a particular kind of behaviour.  Perhaps somewhat less evident in the media, but still captivating to many who hear about it, are the many strange and wonderful examples of altered behaviour following brain injury or stroke.  Indeed, it was through hearing vivid descriptions of neuropsychological behaviours from an inspirational undergraduate lecturer that I became hooked on the area as a student.  Another reason perhaps for the great interest in cognitive neuroscience in this country is that the UK is very good at it.  Considering the disparities in funding and resources compared with the US, for example, the UK is right up there among the world leaders in the field no matter which measure you choose.  Just as one example, two of the top five (and three of the top ten) most highly cited scientists in the field work in the UK.

Cognitive neuroscience is still a relatively young field, but it has – it seems to me at least – largely now moved on from the days in which studies demonstrating “the neural correlates of x” would always generate great excitement.  Such straightforward studies can still be published, and can sometimes be interesting.  However, researchers are now often more interested in using cognitive neuroscience techniques to inform the development of cognitive theories and to better understand cognitive disorders.  Thus competence with the technically demanding methods of functional MRI, for example, needs to be coupled with the ability to design and implement cognitive paradigms that closely address the function of interest, allow theoretically motivated variables to be manipulated while others are controlled, and permit inferences that can be used to differentiate between competing cognitive hypotheses about how that function operates.

Building a group in which such multidisciplinary skills are represented is not straightforward, and gaining access to the methods (whether functional MRI scans of brain activity in healthy volunteers, or structural MRI scans of lesion locations and volumes in patients) does not come cheap.  Thus, in 2007, I was very aware that I needed to apply for research funding.  Back then, there were three main categories of funding body that I felt might be interested in funding cognitive neuroscience research:

  • Research Councils – the MRC (medical research), BBSRC (biotechnology and biological sciences research), and to a lesser extent, the ESRC (economic and social research)
  • Charities – primarily the Wellcome Trust, although also bodies such as the Alzheimer’s Research Trust
  • Industry – mainly pharmaceutical companies interested in funding cognitive neuroscience research that might advance drug development

As such, in addition to writing new lecture courses and trying to do some cheap experiments (often thanks to the help and generosity of colleagues and former advisors), I spent the first couple of years as a lecturer writing grants.  In submitting applications and seeking opportunities in each of the three funding categories above, I was helped a great deal by the advice and support of senior colleagues in my department and elsewhere.  In addition, many of the funding bodies interested in cognitive neuroscience had schemes particularly suited to early career researchers, such as small grant schemes and young investigator awards.  It never seemed easy, and I was prepared for the fact that a very small proportion of my applications might be successful, but I did at least feel that there were a number of places I could go to seek funding.

Now, however, the funding landscape for cognitive neuroscience research seems to have changed considerably.

In the last couple of years, a number of major pharmaceutical companies have closed their neuroscience research and development facilities.  In addition, perhaps anticipating a cut in the government funding of science research that never materialised (in no small part thanks to the “Science is Vital” campaign), many of the charities and research councils revamped their funding schemes.  These overhauls were announced as measures “to better reflect strategic priorities”, but the result seems to me to be a significant reduction in the funding opportunities available to early career investigators in cognitive neuroscience.

To give a few examples:

  • The ESRC recently announced the closure of its “small grants” scheme, which provided limited sums particularly suited to allowing early career researchers to develop paradigms and collect preliminary data that could be used to strengthen applications for larger grants in the future.

  • The Wellcome Trust has ended its project grant and programme grant schemes, the former of which provided the kind of support (one member of staff and research costs for three years) that was ideal as a first substantial grant for someone building their group.  Instead, the Trust has replaced these schemes with investigator awards, aimed at “exceptional individuals” who “have been lead investigator on at least one significant research grant from a major funding body”.

  • Finally, as seen in all the papers and discussed on BBC Radio 4’s Today programme in the last few days, the BBSRC announced that it was “reprioritising” its funding away from neuroscience.  This was reported as a complete axing of the council’s neuroscience budget, and possible closure of up to 30 research groups.  However, after admitting an error in one of its media briefings, the BBSRC clarified that the changes would “only” mean a reduction of perhaps 20% in the funding directed at neuroscience research. 

These developments mean that it is much more difficult to see how a new lecturer can build a cognitive neuroscience research group now.  Many of the schemes directly aimed at those early in their career have either been axed or shifted to support individuals who have already led a research grant.  But how are you supposed to develop the track record of having led a research grant if nobody will fund you before you have that track record?

Also, despite cognitive neuroscience being one of this country’s major science success stories in recent years, internationally competitive when compared against even the finest and best funded groups in the US and elsewhere, there is a concern that many of the UK funding bodies seem to be intent on moving away from funding cognitive neuroscience research.  The recent move by the BBSRC, coupled with a shift by the MRC over the last few years to prioritise translational neuroscience research that has direct and clear applications to patient care, means that it is not clear which of the research councils now sees basic cognitive neuroscience research as within its funding remit.

Is there a cognitive neuroscience funding crisis?  There is undoubtedly still a lot of money on offer: the MRC alone funds over £100 million of research in the general area of neuroscience.  However, the perception among cognitive neuroscientists is that a very difficult situation has recently become much harder (David Colquhoun has mentioned that only around 7% of neuroscience and mental health grant applications were funded by the MRC in the most recent round).  This is not helped when funding bodies announce changes, which may turn out to be relatively minor reprioritisations, in a way that leads to sensational media headlines about the “disastrous impact” of “draconian funding cuts”.

As a result, this is a worrying time to be a cognitive neuroscience researcher, and it is particularly concerning that the crucial first few rungs on the funding ladder for new researchers seem to be those most under threat.  It is obvious that new researchers are the most vulnerable and in need of support in developing their research careers.  If such individuals feel that the UK funding bodies are making it simply impossible for them to do that, they will either go abroad or leave science completely.  And if that happens, a cutting-edge field in which the UK has been one of the world leaders within only the last few years will face a future of rapid and inescapable decline.

30 comments:

  1. Charlie Wilson (@crewilson) 15 February 2011 at 08:59

    It would be a nice summary, if it wasn't depressing enough to negate the use of the word 'nice'. And it seems that even getting to the level you've already achieved is getting harder and harder.

    As a Brit who had to come abroad to continue my research, I often have cause to reflect on the scenario in your last paragraph. The question I ask myself is, if I can do the work elsewhere (and obviously it's not easy anywhere), and if I can contemplate never returning, does it really matter to me if UK neuroscience continues its pitch into decline?

  2. This is an excellent summary, covering much of what I've been discovering and mulling over recently, having become a Lecturer at UCL in July 2010. It would be interesting to know the overall cut in the number of awards made by the Wellcome Trust under their new scheme, if, as predicted, they do cut the number of awards. It seems harder to make joint applications, possibly diminishing collaborations in UK neuroscience.

  3. Mark Baxter (@markgbaxter) 15 February 2011 at 14:00

    I think there's always been a weak link in funding the transition between getting a PhD and becoming an independent investigator. In the UK there are essentially no grants available between PhD and career development awards (3-6 years postdoc experience) anyway, and the squeeze on funding is going to make establishing a new research program even more difficult. The situation everywhere is grim, but at least the NIH in the US has established specific programs to try to deal with this (like the K99/R00 "transition to independence" grants, and giving investigators breaks on the payline for getting their first R01 funded).

    It's not clear to me that the science base can be sustained by Brits making careers elsewhere and then "coming home" when they're senior enough to attract substantial grants. In certain research areas (like mine) it's essentially impossible to find personnel in the UK with appropriate training, and once a base of well-trained research staff is gone, it's gone forever.

  4. Charlie Wilson (@crewilson) 15 February 2011 at 16:14

    The French system is far from perfect, and certainly the same sorts of problems are coming this way, but at least there is an explicit recognition of the idea that it's good to get researchers into proper posts earlier in their careers if possible, and that core funding is of use once you're in post - it's possible, for example, to do one postdoc and then get such a post. Certainly there's not much work-based incentive for me to 'come home' at the moment.

    Plus none of Oxbridge or UCL etc are in wine regions...

  5. Thanks for these interesting comments. I agree with Mark that the PhD-to-PI transition has always been difficult. What worries me now is that rather than addressing this problem, the various funding scheme changes seem to make it harder. Maybe that is part of the plan. But I can't see why the funding agencies would "want" the best early-career scientists to go abroad to establish their funding track record, before coming home to take up e.g. a Wellcome investigator award. It will be interesting to see what happens in 3 years' time when, presumably, nobody will be eligible for the Wellcome scheme unless they're working overseas.

  6. Excellent post - very clear expression of the problem.

    Hard times all around, as far as I can see, and little prospect of change in the near future. I don't have any easy answers but I think perhaps universities need to be more proactive about ensuring that their new faculty get off to the best start. UK institutions tend to have less financial muscle in that regard compared to the US, unfortunately, so perhaps that's a pipe dream. But it may be necessary to ally new hires with more established faculty - who can provide mentorship and perhaps even strategic/collaborative alliances that will increase the chances of successful grant apps.

  7. Many thanks, Stephen. I agree, and was very lucky when I was starting out to have the kind of support from people in my department that you describe. I now try to do what I can to help newer faculty similarly.

    I guess the question is WHY would the funding agencies apparently unanimously decide that early-career researchers should be particularly the victims of the current times? Somebody, somewhere, surely must've thought "Hang on a minute, maybe this isn't such a good idea. Maybe we might need some of these people to still be in science in a few years..."

  8. I couldn't agree more about the very real danger posed to early-career and even mid-career scientists by the lack of smallish responsive mode grants.

    But being in a different area, I may see the problem slightly differently. At the risk of being lynched, I'll have to admit that I sometimes sigh when I see the next "new phrenology" study come out. Only too often the results are uninterpretable (though university PR departments love the fact that, however trivial, they make headlines). The equipment is enormously expensive and perhaps some of that money could be better spent (for example, on fundamental biophysics!)

  9. David, I know you're being slightly mischievous. However, I think your comment confuses the media fascination with "the neural correlates of x" with the reality of 99% of cognitive neuroscience research. As I describe in the 3rd paragraph above, cog neuro is an empirical science just like any other, one which seeks to make inferences from brain-based evidence that can be used to differentiate between competing hypotheses about the way in which cognitive functions operate. I know your description of the "new phrenology" is tongue-in-cheek, but when such a leading and influential advocate for science perpetuates that lazy stereotype, I think it needs to be challenged. You wouldn't dismiss astronomy as being all about showing the prettiest, most colourful star-scapes, and you wouldn't dismiss genetics as being all about creating cloned sheep. In the same way, I think it doesn't help science for cog neuro to be confused with its media representation. Instead, we should all be working harder to change that media representation.

  10. Perhaps unsurprisingly, I agree with Jon that people shouldn't confuse the stories in the media with the actual science underlying brain imaging. I don't sigh when I read strange coverage of the LHC, as I assume my fellow scientists do know what they are doing, even if they are in a different field of research from me.

    I'd also stress that within cognitive neuroscience, people go to tremendous lengths to avoid 'phrenology', and what you may mean by 'uninterpretable' simply reflects the complexity of this area of research. The brain isn't simple, and who would expect it to be?

  11. Uhuh, I knew I'd get lynched!
    What I had in mind was, for example, the recent Tories vs liberals anterior cingulate story, or even the taxi drivers' hippocampus (the one that got an IgNobel prize). I'm certainly not the only person to wonder what the hypothesis is, how appropriate the controls are, and, above all, whether adequate consideration is given to causality in studies like these. Neither was it I who coined the term "new phrenology" for studies like these. No doubt some work along these lines should be done. The question that needs to be asked is whether they are starving cheaper, more basic, studies of funds.

  12. Agree that there is an important problem of public perception/ understanding. Big issues for me are 1) complexities of the technology/methodology (statistical models in use, chain of inference in interpreting BOLD response) and 2) complexities around disentangling functional and neuroanatomical specificity/localisation (I wish more cog neuroscientists would read Luria on that). These are very difficult issues for the professionals and so doubly so for the non-specialist.

  13. David, I don't think people disagreeing with you constitutes a 'lynching'. Seriously.

    And to continue disagreeing with you, the hypothesis of the Maguire paper was that doing the Knowledge would have an effect on parts of the brain associated with spatial knowledge (inc. the hippocampus). This may be intrinsically amusing to the people who award the Ignobels, but it isn't bad science, de facto.

  14. Let me expand a bit on that. There seems to me to be a general problem of trying to run before one can walk. A better example might be "systems biology". Huge amounts of money have been spent (not least by the BBSRC) on doing computations with nothing like enough input data. It would be wonderful if we could compute the behaviour of a whole human, but my guess is that such abilities are very far into the future (if ever). Athel Cornish-Bowden recently wrote "Systems Biology. How far has it come?". It's a fascinating account from a distinguished biochemist. And worth it alone for the quotation from the legendary Sydney Brenner.

    Sydney Brenner describes systems biology as “low-input, high-throughput, no-output biology”.

    Up to now, that just about sums it up. It seems to me to be important to distinguish between what one would like to do, and what it is feasible to achieve. Natural impatience pushes us the opposite way, but the danger is that money is wasted. The 'decade of the brain' ended with none of its aims achieved, because the questions that were asked were just too difficult to be answered in that sort of time.

    This is not really surprising. Serious research in areas like this is very new. From a historical point of view it has been going on for a tiny length of time. It isn't surprising that the outcomes so far are pretty crude. It could take a few hundred years (wild guess) before we understand much about the relationship between brain and behaviour.

    It has become fashionable to denigrate 'reductionist' research, but the fact is that we might need another few hundred years of it to get the input data needed for realistic modelling of very complex systems. The problem is that politicians see things on a five year time scale. That lack of vision is sad, but it should not lead us into promising the impossible.

  15. David - I don't think any cognitive neuroscientist (in their right mind) submits a grant application with the stated aim of "computing the behaviour of a whole human".

    Rather, as in every other scientific field, researchers choose their specialist topic (in my case, one aspect of how we recollect the context in which we experienced previous events). They then attempt to design theoretically-motivated experiments that will produce data that can be used like any other dependent variable to help differentiate between competing hypotheses as to how a particular property of that cognitive function might operate.

    In that way, experiment by experiment, step by step, our understanding of individual cognitive functions gradually improves. That is surely no different to any scientific field, and to pretend that it is different, and that a full explanation of the brain-behaviour relationship is the only valid question, is unfortunate.

    Charles - I think your point about public understanding being held back by the complexity is important. It is surely true that every scientific field has its own complexities, but there does seem to be a prejudice against cog neuro that its complexities reflect some kind of black arts. Of course some scientists may use statistical or inferential short-cuts, but I don't believe the majority do. Perhaps we need to continue being better at explaining cog neuro, so that the media realise they can tell a story that appropriately reflects the complexity, and can still be of public interest.

  16. The comment about "computing the behaviour of a whole human" was made in the context of systems biology, not cognitive neuroscience. There is a huge European grant that is devoted to doing exactly that, preposterous though that may sound to you and me.

    Charles Fernyhough points out that the complexities are too great for the public. That is true of the details in many fields, but I think it is a mistake to underestimate the common sense of the public. One could almost hear the scepticism in John Humphrys' voice when talking about whether the thickness of the anterior cingulate could tell us anything useful about political views, or anything useful about the brain. There is a real danger to science if the public laugh at us when overblown claims are made.

    Neither is it sufficient to blame the media when that happens. As often as not, the media are merely copying what it says in the university's press release. The activities of PR people have a lot to answer for.

  17. @Sophie Scott
    The taxi driver story first came to my attention when it was discussed on the Today programme after the PNAS 2000 paper.
    http://www.pnas.org/content/97/8/4398.full.pdf+html?sid=bf75c123-6bde-4e2b-beaf-cf8431e440cf

    I'm aware that there has been work since 2000, and I haven't followed it in detail.

    My wife, whose expertise lies in French and music, asked "bigger than what?". That seemed a sensible question, and the answer appeared to be bigger than the 50 non-taxi drivers they pulled out of the drawer.

    Obviously it isn't possible to do a proper randomised study of a question like this, so it is really like observational epidemiology, with all the attendant problems of making causal inferences (and also the consequent invalidity of statistical tests of significance, all of which assume randomisation). The correlation with length of time as a taxi driver (Fig 3) struck me then as pretty dubious - my guess is that it depends entirely on the rightmost two points.

    I felt (and still do) that the discussion failed to consider properly the many confounders that bedevil the interpretation of this sort of observational epidemiology. At least epidemiologists discuss the causality problem seriously in their introductions, even if they brush it under the carpet by the time they get to making recommendations for action, but this paper didn't even pay much lip service to a very basic problem. It does worry me slightly that cognitive people did not seem to appreciate some of these problems.

  18. Hi Jon (and all),

    I couldn't help adding my two-pence worth. First off, I'd add to Jon's list of abolished funding schemes the British Academy's Research Development Award (up to £150k FEC), which does not seem to be running this year at least, according to the message on their website. This was a great scheme for early-career researchers.

    Second, cognitive neuroscience and phrenology. This will always be a criticism levelled at our field; but would critics extend the phrenology claim to any experiment that attempted to understand the brain in a regionally-specific manner? What about the pioneering single-cell recording work in the 60s/70s in the visual system, or more recently the discovery of grid cells in the hippocampal formation? Are they phrenology too? I too often get frustrated with intellectually lazy newspaper reports concerning "a brain region for conservatism", just as I get frustrated with "a gene for autism" or "eating xxx gives you cancer", but that doesn't mean that the fields of genetics or epidemiology should be dismissed out-of-hand.

    Finally, it's certainly not the case that all cognitive neuroscience involves brain imaging, or even that brain imaging is particularly expensive. As anyone who has put together a FEC grant application knows, the main expenses are staff and overheads. And presumably they apply to any field.

  19. Jon (Roiser) is right to expose the faulty logic of treating the presence of intellectually lazy newspaper reports as an indicator of poor-quality science.

    As another example, I just went to Google to search on 'nicotinic receptor'. This pulled up the headline 'Fancy a smoke? You have a faulty brain' (The Sun). Not very edifying, not true, and of course they don't provide a very good account of the Nature paper to which the article refers, where α5 nicotinic acetylcholine receptor subunit knockdown in the mouse habenula selectively affected nicotine-dependent reward signals. So because the press stories are sensational and rubbish, should we (a) dismiss David's area of research? (b) ponder endlessly over whether Christie Fowler (lead author) has properly considered whether mouse behavior properly extrapolates to complex human addictions? Personally I just find it another interesting clue to how the brain works.

    I don't think the kind of intellectual chauvinism that David espouses is very helpful, because it presupposes that one particular level of enquiry is right and that we can determine that in advance. This seems to me to be a very dangerous strategy when trying to understand a complex organ like the brain unless you are blessed with the power of God-like prescience.

  20. @Geraint Rees

    Oh dear. I do hope this doesn't get personal. I'd just point out that I've never worked on central nicotinic receptors, and only briefly on any sort of neuronal nicotinic receptors. I merely watch with interest to see if anyone manages to find out what they are there for.

    I'd like to point out two things.

    First, my criticism of the taxi driver paper was based on reading the paper, not on newspaper reports. Whether reports in the papers were accurate or not is irrelevant to my reservations. I'd be interested to hear a reaction to the particular points that I made, because they seem like real problems to me. If they are not, please tell me why.

    Second, in the course of writing for my blog, my starting point is often a newspaper report. But in almost every case I have found that the problems in the report can be traced to exaggerated claims in the press release issued by the university (and, presumably, approved by the authors). This, I suspect, is a sign of the intense competition between researchers for funds, and the intense competition between universities for status. Science has always been competitive and up to a point that is good. But once PR people get in on the act, and university apparatchiks start to impose publication targets, there is a real risk that competition reaches a point where it becomes counter-productive.

    Here are a few examples, well away from the cognitive area (and well away from my own area too).

    (1) Snoring “remedy” on radio http://www.dcscience.net/?p=96

    (2) Why honey isn’t a wonder cough cure: more academic spin http://www.dcscience.net/?p=209

    (3) Diet and health. What can you believe: or does bacon kill you? http://www.dcscience.net/?p=1435
    (This one is back in the news today).

  21. @stephenemoss

    The many problems you cite that face cognitive neuroscientists are more likely to be viewed as solutions by the current Government. There is a well-recognised antipathy to science and research embedded deep in Conservative political ideology – most recently manifested in the cuts to science funding (actually begun by Mandelson) that at the current inflation rate will amount to >20% in real terms by 2015. This will satisfy those on the right who seek a reduction in the amount of UK science funded from the public purse.

    What then to do with the large numbers of academics whose plight you describe as their funding dries up? The Tories have thought about this too. They have already announced the near abolition of the HEFCE grant that underpins the salaries of most UK academic staff. And now, despite having said that Universities can recover that lost income by charging tuition fees of up to £9000, they propose to hit Universities that actually impose those top-end fees with crippling fines.

    If Universities end up only being able to charge, say, £6000-£7000, they will have to make substantial savings, which is where those cohorts of grant-less academics enter the equation, as they conveniently present themselves for sacrifice. It is difficult to see current policy decisions playing out in any other way, save for one improbable scenario in which those few Universities that have the wherewithal to do so go private. I say improbable because even the Tories, with their obsessive love of all things private, might find that too much of a political hot potato.

  22. Thanks so much to everyone for the fascinating and insightful discussion. If I may, I'd like to expand the conversation from "my field is better than yours" and see if anyone also has views on one of the other issues I discussed.

    Namely, the impression that many of the current funding scheme changes seem to impact particularly on those most vulnerable and in need of support: new and early-career researchers. Do other people agree with this impression and, if so, why would the funding bodies be driving in this direction and what, if anything, can we do about it?

  23. Of course that's right Jon, and it reflects a drive to play it safe with shrinking resources. As Stephen Moss points out, even the frozen level of the MRC represents a considerable real terms cut over the next few years, and the other research councils did get hammered in the CSR once you consider inflation over the next few years.

    To us it looks like crazy short-termism; allowing resources to get stuck to local attractors like this can only end badly for UK science. A number of solutions have been suggested. Dorothy Bishop suggested on her blog that when rating a researcher's output, the amount of money spent should be factored in (http://deevybee.blogspot.com/2010/08/how-our-current-reward-structures-have.html). As a recently appointed unresourced lecturer I like this idea. A model which moves completely away from individual grant-holding could also be considered - award funding to institutes, say, and let them handle recruitment more locally.

  24. Charlie Wilson (@crewilson) 25 February 2011 at 19:01

    Well there are two interesting discussions here... I'm going to be greedy and comment on both.

    On Cog Neuro, Dick Passingham used to give a really interesting talk about whether fMRI is actually a new phrenology, and it was interesting to hear a fruitily honest view on the subject from someone who has been involved in the field from the start.

    Sadly I don't think he ever wrote it down, as I would love to link to it. His argument (and I defer to him) was that you see plenty of imaging papers, but quite a lot of them do little more than "neural correlates of X", and relatively few appear in the highest IF journals (notwithstanding present company...). His point was that this meant that cognitive neuroscientists could do better science with the tools they had, but that these tools nevertheless really do allow us to "do science" (as opposed to phrenology). The body of the talk was about how to properly design imaging studies that met his requirements. I'll see if I can find my notes.

    As for the media use of this work - I note that we get far fewer "scientists find gene for liking Marmite" studies now than a few years ago. I presume the Tories-Liberals ACC sort of story will go the same way...

    On the funding side, my sense is that funding early career scientists is deemed 'more risky' by funders, because the outcomes, which they measure in papers and impact factors etc, are less certain. Because the idea that one determines the future of organizations using only metrics is so in vogue now, the funders themselves are increasingly assessed on these outcome measures, especially the public bodies, and this pushes them to be more risk averse.

    What is the solution? I'd argue the sort of core funding that's disappearing right now. So what do we do about it? Ummm. Well I took the mature approach, threw my toys out of the pram and moved to France. They're going through the same process here, they're just 10 years behind the UK, which suits me for now.

  25. I'm certainly not in the business of comparing the worth of different fields. I'm interested in what goes on within single molecules and thanks to the invention of single channel recording it's possible to treat the problem of inference by the methods of physics. Difficult though the maths may seem, if you aren't into matrix Laplace transforms, it is undeniably a much simpler system (in fact that is why it can be treated so precisely). It tells you a bit about how proteins and synapses work, but nothing whatsoever about how the brain works.

    I do wish, though, that someone had addressed the specific comments that I made about the 2000 paper. That hasn't happened yet. It may well be that some of the problems were addressed later but when I was discussing it at the time that paper was all that there was.

    Taking the result at face value, I am still a bit puzzled about what it tells you to know that the hippocampus gets bigger. A lot of people learn large numbers of things (like the ichthyologist who was reprimanded for not recalling the name of a student and responded "every time I remember the name of a student, I forget the name of a fish"). It was hardly news that the hippocampus was involved in some way with memory, but the big problem, the physical basis of memory, is still not understood at all. It remains a total mystery (I prefer that way of putting it to the usual scientific euphemism "unclear"). Imaging doesn't seem to have contributed much to that basic problem.

    If I may get back to something of more immediate importance, I'm really glad that John King mentioned Dorothy Bishop's wonderful discussion of funding at http://deevybee.blogspot.com/2010/08/how-our-current-reward-structures-have.html
    It is quite the most sensible thing I've read for some time.

    I suppose it would be a bit mischievous to point out that on December 28th 2010, Dorothy Bishop tweeted

    "The New Phrenology. Yesterday #amygdala size determined social network;R4 Today: it's yr politics (unpublished study). Sigh. #neuroscience"

    Since she knows more about such things than me, this tweet encouraged me a bit to express my views, dangerous though that seems to be.

  26. Well, who could resist a nudge by the great DC? The two issues raised in the comments, neuroscience funding and neuroscience quality, come together in my thoughts on this.
    My research includes a bit of neuroscience, and it's one of the most wonderful and exciting fields to be in. But it does seem to have more than its fair share of dodgy science, and I’ve been brooding on the reasons for that.
    1. These days every psychologist feels a need to bolt on an imaging component to their research so that they can produce a picture of a brain with coloured blobs on it (the new phrenology), at the same time increasing the cost of the study at least 10-fold.
    2. Imaging studies generate far more data than anyone can get their head around. David Colquhoun’s own university, UCL, has produced some stellar neuroscientists who have developed analytic methods that are used all over the world. Nevertheless, many neuroscience studies deal with data complexity by a process of double-dipping, first scrutinising the data to decide where to focus an analysis, and then applying inferential statistics. This certainly happens in the area of ERP, where I do most of my work. There are numerous options of which portion of data to analyse, in terms of selecting electrodes, time windows and criteria for defining a response. There’s a beautiful paper that explains just why this is a bad idea: Kriegeskorte et al (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience, 12, 535-540.
    3. Nobody ever replicates a study. It’s interesting to compare neuroimaging with genetics. Genetics had exactly the same kind of problem - too much data and consequently too many nonreplicable results from data trawling. But these days the field guards against that by ensuring you can’t publish an association study unless it’s replicated in an independent sample.
    4. Reliability of measures is seldom considered. One would hope that in structural imaging studies, there would be good replicability of measurement - though my eyes were opened by a study contrasting findings from automated voxel based morphometry vs manual analysis: Eckert et al. (2005). Cortex, 41(3), 304-315. And functional imaging is a whole other business, where test-retest reliability may not be so great. Where a measure is only moderately reliable, you can still get meaningful results from the kind of study that compares people’s performance in different conditions, but I do worry about the kind of study that tries to correlate a brain measure with individual differences in some psychological trait, without giving any indication of the reliability of either measure.
    I’m less concerned than DC about the fact that some studies of cognitive function adopt a correlational rather than experimental approach. There are many aspects of human cognition that you can’t manipulate experimentally. Yes, direction of causality is often hard to establish (a fact typically missed in media reports), but such studies can still narrow down the options. I thought David Dobbs’ evaluation of the taxi driver studies was spot on.
    What’s this got to do with the funding crisis? Well, the situation in science is dire, but one thing the crisis may force us to do is to take data-sharing seriously. I think neuroscience should follow genetics in emphasising the importance of replication - and the best way to do that is to have datasets of imaging data publicly available. Of course, that wouldn’t be a complete solution, but it would certainly help. For instance, there are now numerous studies of structural brain imaging in autism. Yet I still see grant proposals coming through where the scientists are requesting thousands of pounds to do structural imaging of a new autism sample. The controversial ‘diagnosis from brain scans’ study that I highlighted a while back could be replicated (or not) from existing data.
    Well, I could go on, but that is really enough from me, except to say Many Congratulations to Jon and family re arrival of new sprog!

  27. David,

    People *were* aware at the time of the points you made regarding the taxi drivers paper; neuroscientists can read a graph too! However, in context, remember that little was known about neurogenesis and plasticity in adults, and the idea that brain regions might alter structurally in response to behaviour or cognition was exciting. The fact that the case could not be made to a high level of confidence was apparent from the data presented in the paper, as you yourself were able to see. I think it's fair to say cognitive neuroscience has not been a particularly cautious field in the past decade, but (a) we all know that and are a suspicious bunch and (b) perhaps that's one of the reasons the field has moved so fast.

    Dorothy:

    I think your point about data sharing is enlightened but it highlights how very badly we have lost our way with science funding. The ownership of datasets is part of the capital of a research unit. If your lab has spent its hard-won resources gathering structural brain images of, say, 200 people with autism, the incentive to open that data to other labs is massively outweighed by the competition from those very labs to get the next round of research funding. Quite possibly, a competing lab will produce high profile work that you were planning to do. Concern about a future lack of funding might encourage groups to see datasets as money in the bank for a rainy day.

    Hopefully financial restraint will encourage more cooperation with data, but I fear that unless the contingencies around funding are changed it will go the other way, into defensiveness and protectionism. It would need the funding agencies to *require* open data to change this.

  28. Charlie Wilson (@crewilson) 28 February 2011 at 17:01

    Congratulations to Jon & family.

    What Dorothy's points highlight for me is the fact that imaging data (and similar methods) are useless unless you do good, carefully controlled, piloted, and reliable psychology & behavioural measures at the same time. Sometimes I find the treatment of task & behaviour in imaging studies desultory. As the field matures, I have no doubt that it will become increasingly difficult to do studies without this.

    The point about data sharing is an excellent one, and quite right. It doesn't just apply to imaging - monkey electrophysiology has huge data-sets as well, for example, and there is no doubt a lot of repetition between them. Some thoughts:

    1. John's point about protectionism is valid. It has to be mandated by funding agencies, much in the same way that they are (in some cases) pushing standards of Open Access Publishing.

    2. Assessment of someone's scientific output (for jobs / funding) would need to value data collected as well as papers etc.

    3. There would have to be some agreed rules about using data other people collect, and the involvement of those who collected it, depending on the use to which it is put.

    4. A wider point - I think these huge data-sets mean we have to re-address the model of hypothesis-driven science that we use. If you stick to a strict idea of "have hypothesis, design study to test hypothesis, analyse data in order to test hypothesis only, start again", then the re-use of large and expensively acquired data-sets is invalid. I don't feel that neuroscience has really faced up to this, although papers like the Kriegeskorte one are an interesting start...

  29. As another early career researcher in Cog Neuro, I agree with Jon that the funding situation looks miserable. The way I see it, my field (social cog neuroscience) was lucky 10 years ago to fall at the intersection of 3 funding councils (ESRC, MRC and BBSRC), and I could potentially apply to all 3. But now that all three have less cash and are pulling back toward their core topics (whatever those may be), I find that my research is not attractive to any of them. So I'm writing as many applications as I can but still not getting anywhere.

    And as for cognitive neuroscience being expensive, I don't think it is any more expensive than other fields of neuroscience (e.g. behavioural - animals cost a lot), or other methods. I'd be quite happy with just a little grant with one post-doc and a bit of scanner time plus behavioural testing to keep a stream of data coming in. But the pressure these days seems to be to have massive labs and massive collaborations, which are not more efficient and are probably worse value for money.

    Data sharing is an interesting idea, and I'm involved in some initiatives. One thing that is needed for effective data sharing is a big increase in the statistical / computer programming abilities of the average cog neuro PhD student or post-doc. I don't know how we can solve that one (I already teach programming to Masters students).

  30. Thanks for your kind new baby wishes, and for continuing this fascinating discussion. I don't have much time at the moment for obvious reasons, but just wanted to flag up an excellent article by David Dobbs over at Wired Science summarising the arguments on this comment thread:

    http://bit.ly/fygpMY
