
How many scientists does it take to decide how to allocate research funds in the United Kingdom?

Answer: More than a thousand, and it takes over a year and costs more than £10 million (nearly US$15 million) to do it, according to a report in last week’s issue of the journal Science.

The Research Assessment Exercise (RAE) has been conducted since 1986 and is used by the UK government to allocate departmental funding for the following five years. The sixth report was released on December 18, 2008, sending administrators dashing to check their rankings, although they will have to wait until March 2009 to learn what those rankings mean in terms of funding. About £1.5 billion (US$2.2 billion) in government funding is distributed each year.

The 2008 RAE is the last assessment that will rely primarily on massive peer review. The government intends to move to a more quantitative approach, combining several metrics such as citations to published papers, competitive grants received, and Ph.D.s granted. While this should speed up the evaluation process (and make it cheaper to conduct), it may come with its own problems.
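To make the idea concrete, here is a minimal sketch (in Python) of what a composite, metrics-based score might look like. The metric names, reference scales, and weights below are entirely hypothetical illustrations of my own, not figures drawn from the RAE or its planned successor; the point is only to show how several indicators could be folded into a single number.

```python
# Purely illustrative: a hypothetical composite score combining several metrics.
# The reference scales and weights are assumptions for illustration only;
# they are not taken from the RAE or any proposed replacement.

def composite_score(citations, grant_income_gbp, phds_awarded,
                    weights=(0.5, 0.3, 0.2)):
    """Combine normalized metrics into a single hypothetical score."""
    normalized = (
        citations / 1000.0,              # citations to published papers
        grant_income_gbp / 5000000.0,    # competitive grant income (GBP)
        phds_awarded / 50.0,             # Ph.D.s granted
    )
    return sum(w * m for w, m in zip(weights, normalized))

# Example: a department with 800 citations, 2 million pounds in grants, and 30 Ph.D.s.
print(round(composite_score(800, 2000000, 30), 3))  # prints 0.64
```

Whatever the real formula turns out to be, the weights and normalizations will encode value judgments, and those judgments will shape behavior.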

While citations correlate strongly with subjective measures of quality, moving to a metrics-based system assumes that this relationship will persist when the method of evaluation changes. Researchers may change their behavior when their goal is to maximize their citation count rather than their prestige in the eyes of their peers.

Rest assured, I’m not implying that no one has already attempted to play the citation game. I recently reported on the case of a journal editor, M. S. El Naschie, whose practice of self-citation may have greatly inflated the impact factor of his journal. We also know of cases of editors contacting the authors of manuscripts under review and asking them to cite more of the journal’s content for the same reason.

Honorary authorship, where an individual (such as the head of a department) is listed on the byline of an article without having contributed significantly to the work, is common, especially in the biomedical sciences. When citations result in monetary reward, we can expect these practices to become even more commonplace. What we may not expect are more devious forms of citation inflation, such as the rise of "cartels" among journals, editors, or researchers whose goal is to mutually inflate citation counts — an informal "scratch my back and I’ll scratch yours" agreement.

Moving to a more quantitative form of evaluation may reduce time and expense, but only in the short term. In the long term, as certain metrics become corrupted and meaningless, the bean counters will need to return to the judgment of peers.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/
