"Mind The Gap" inscription on the Lo...
“Mind The Gap” inscription on the London Underground, Bank station. (Photo credit: Wikipedia)

With the speed of communication today, researchers, authors, and grant funders are impatient for an indicator of an article’s value. Waiting one to three years for publication and citations to accumulate seems interminable, and conflating an article’s impact with its journal’s impact creates uncertainty as well.

Altmetrics attempts to close that gap by providing more timely measures that are also more pertinent to researchers and their articles. Usage metrics from downloads and blog coverage, and attention metrics such as tweets and bookmarks, can provide immediate indicators of interest. Although metrics based on these activities are still at a developmental stage, there is growing investment across the broader landscape in producing more current measures that serve researchers, their communities, and funding agencies.

In January, the Chronicle of Higher Education highlighted the work of Jason Priem, a PhD candidate at the School of Information and Library Science at the University of North Carolina at Chapel Hill, who coined the term “altmetrics.” In his post, “Altmetrics: a Manifesto,” Jason noted the limitations and slowness of peer review and citations, and suggested that the speed with which altmetrics data become available could lead to real-time recommendation and collaborative filtering systems. Jason and Heather Piwowar, who works at the Dryad Digital Repository, created Total-impact as a prototype in their spare time last year. Two months ago they received a grant from the Sloan Foundation, and next month Heather will begin working full time to develop it further and provide context for the data set.

While it may be easy to dismiss the idea that social media metrics can be meaningful for scholars, PLoS has spent the last three years developing a suite of measures, referred to as article-level metrics (ALM), that provides a view of the performance and reach of an article. Their approach is to present totals across multiple data points (see the sketch after this list), including:

  1. Usage data (HTML views and PDF downloads)
  2. Citations (PubMed Central, Scopus, Crossref, Web of Science)
  3. Social networks (CiteULike, Connotea, Facebook, Mendeley)
  4. Blogs and media coverage (Nature, Research blogging, Trackbacks)
  5. Discussion activity on PLoS (readers’ comments, notes, and ratings)
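
For readers who want to pull these numbers themselves: as noted in the comments below, PLOS makes ALM data freely available via API and bulk download (where the source services permit). The snippet that follows is only a minimal sketch; the endpoint URL, query parameters, and response fields are assumptions made for illustration, so check the current PLOS ALM documentation before relying on them.

```python
# Minimal, hypothetical sketch of querying an ALM-style API for one article.
# The endpoint, parameters, and response shape below are assumptions for
# illustration; the PLOS ALM documentation is the authoritative reference.
import requests

ALM_ENDPOINT = "http://alm.plos.org/api/v3/articles"  # assumed URL
API_KEY = "YOUR_API_KEY"                              # assumed requirement

def fetch_article_metrics(doi):
    """Return a {source_name: total_events} dict for one article (assumed shape)."""
    resp = requests.get(
        ALM_ENDPOINT,
        params={"ids": doi, "api_key": API_KEY, "info": "summary"},
    )
    resp.raise_for_status()
    article = resp.json()[0]  # assumed: a list containing one article record
    return {
        source["name"]: source.get("metrics", {}).get("total", 0)
        for source in article.get("sources", [])
    }

if __name__ == "__main__":
    counts = fetch_article_metrics("10.1371/journal.pone.0000000")  # placeholder DOI
    for name, total in sorted(counts.items()):
        print(f"{name}: {total}")
```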

Martin Fenner, an MD and cancer researcher in Germany, is working full-time as technical lead on PLoS ALM as of this summer. He brings experience as the creator of ScienceCard (author-level metrics) and is involved with the Open Researcher and Contributor ID (ORCID). As Cameron Neylon said in his 2009 article with Shirley Wu in PLoS Biology, “The great thing about metrics . . . is that there are so many to choose from.”

So which measures matter? Earlier this year, Phil Davis questioned the assertion in Gunther Eysenbach’s article that tweets can predict citations. Mendeley data, however, appear more relevant: several research papers presented this year show a strong correlation with citation data. Patterns of use also indicate that some papers are widely shared but seldom cited, while others are frequently cited but appear to have limited readership. Looking ahead, William Gunn of Mendeley noted in a recent presentation that:

Useful as these new metrics are, they tell only part of the story. It’s a useful bit of info to know the volume of citations or bookmarks or tweets about a paper or research field, but the real value lies in knowing what meaning the author intended to express when he linked paper A to paper B. Through the layer of social metadata collected around research objects at Mendeley we can start to address this challenge and add some quality to the quantitative metrics currently available.

A growing community is forming around the topic, and the conversation in June at the Altmetrics12 workshop focused on exploring the use of emerging tools and sharing research findings. Held as part of the Association for Computing Machinery (ACM) Web Science Conference, this daylong workshop attracted 60 very active participants in this budding community. Keynotes by Johan Bollen (Indiana University) and Gregg Gordon (SSRN) were accompanied by discussions of research and demonstrations of 11 different tools.

One of those tools, Altmetric.com, created by Euan Adie, won Elsevier’s Apps for Science competition last year and is now part of the family of research tools at Digital Science, supported by Macmillan Publishers. The Altmetric Explorer tracks conversations about scientific articles in tweets, blog posts, and news coverage; this activity is analyzed and summarized as a score displayed in a “donut” whose colors reflect the mix of sources.
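
Altmetric also exposes a free public API keyed by DOI, which is one way to see the raw counts behind a donut. The sketch below is hedged: the URL pattern and field names (such as score and cited_by_tweeters_count) are assumptions based on the public endpoint and may not match the current documentation exactly.

```python
# Rough sketch of querying an Altmetric-style public API by DOI. The URL
# pattern and JSON field names are assumptions; consult the current Altmetric
# API documentation before depending on them.
import requests

def altmetric_summary(doi):
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:  # assumed behaviour: no attention recorded yet
        return None
    resp.raise_for_status()
    data = resp.json()
    return {
        "score": data.get("score"),                        # the "donut" score
        "tweets": data.get("cited_by_tweeters_count", 0),  # assumed field names
        "blogs": data.get("cited_by_feeds_count", 0),
        "news": data.get("cited_by_msm_count", 0),
    }

print(altmetric_summary("10.1371/journal.pone.0000000"))  # placeholder DOI
```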

An important component of the altmetrics community was represented by the founders of two leading academic social networks: Mendeley (which also competes with citation managers) and Academia.edu (whose competition includes ResearchGate). Since these tools enable researchers to collaborate by posting and sharing their work, the activity in these systems could offer fertile ground for data to support the growth of altmetrics.

The most recent entrant in this arena is Plum Analytics, founded by Andrea Michalek and Mike Buschman, who were team leaders in the successful development and launch of ProQuest’s Summon. Andrea is building a “researcher reputation graph” that mines the web, social networks, and university-hosted data to map relationships between researchers, their institutions, their work, and those who engage with it. An interview with Andrea on semanticweb.com described how she is dealing with the issues of identifying a single researcher and a single article:

 The Researcher Graph is seeded with departmental ontologies, document object IDs for published articles, ISBNs for books, and other information universities typically already hold. For the document ID, Plum is creating a set of aliases and rules to find the different URIs by which a work is referenced, since one paper can legitimately live at 50 different places around the web; when someone tweets a link to one of these resources, the system will know it is the same work that might go by a different alias at another publisher.
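
To make the alias problem concrete: the same article may be referenced by its DOI resolver link, a publisher landing page, and a repository copy, and all of those URIs need to collapse to a single node before any counting happens. The toy sketch below illustrates the general idea; the patterns, URLs, and DOI are invented for the example and are not Plum Analytics’ actual rules.

```python
# Toy illustration of alias resolution: collapse several URIs that refer to
# the same work onto one canonical identifier (here, the DOI). The rules,
# URLs, and DOI are invented and are not Plum Analytics' implementation.
import re

ALIAS_PATTERNS = [
    re.compile(r"doi\.org/(10\.[^\s?#]+)"),                        # DOI resolver links
    re.compile(r"example-publisher\.com/article/(10\.[^\s?#]+)"),  # hypothetical publisher URL
]

KNOWN_ALIASES = {  # aliases that carry no DOI in the URL itself
    "http://pmc.example.org/articles/PMC0000000/": "10.1371/journal.pone.0000000",
}

def canonical_id(uri):
    """Return the canonical DOI for a URI, or None if it cannot be resolved."""
    if uri in KNOWN_ALIASES:
        return KNOWN_ALIASES[uri]
    for pattern in ALIAS_PATTERNS:
        match = pattern.search(uri)
        if match:
            return match.group(1)  # the DOI embedded in the URL
    return None

# A tweet linking any of these aliases now counts toward the same work:
for link in [
    "https://doi.org/10.1371/journal.pone.0000000",
    "http://pmc.example.org/articles/PMC0000000/",
]:
    print(link, "->", canonical_id(link))
```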

The resulting data set could provide a current complement to the institutional citation analysis services offered by Thomson’s Research in View and Elsevier’s SciVal.

The dimensions of altmetrics extend well beyond an effort to capture social media activity and use it as an indicator of subsequent citation ranking. New services such as Mendeley, whose goal is ‘to manage your research’, and Academia.edu, whose goal is ‘to accelerate research’, are tools for researchers where usage data is a by-product that contributes to a scholar’s ‘footprint’. Altmetrics seeks to quantify the response to research and, ultimately, its influence across a global community.

Competition to secure grants and promotions is a familiar driver of the demand for metrics that represent value. While it is still early days for altmetrics, this nascent movement is gathering steam and will be the topic of many future conversations as we evaluate the qualitative aspects of a new set of metrics that will, like medicine, be complementary rather than alternative.

Judy Luther

Judy Luther is President of Informed Strategies, which provides market insights to organizations on innovative content and business models. A past president of SSP, she serves on the editorial boards of Against the Grain and The Charleston Advisor.

Discussion

40 Thoughts on "Altmetrics – Trying to Fill the Gap"

Thanks for an interesting article. This current growth in the diversity of metrics is to be welcomed. I think the concept of ‘altmetrics’ will be absorbed over time into mainstream bibliometrics, and that the utility of each marker will determine which are most commonly used and which drop out of use (maybe there should be a ‘metric metric’?). The more the merrier, and there is a place for social-media measurement, but I sense scholarly citation/download rates will again float to the top as being of greatest use.

Alternative metrics (like alt music) only remain “alternative” while they sit outside popular endorsement. Metrics become mainstream when they are valid, robust indicators of what they attempt to measure. That said, which of the altmetrics do you think will become mainstream, and why?

Re “which of the altmetrics do you think will become mainstream and why”: scholarly ones which have a level of resistance to gaming.

Certainly gaming is a topic of concern within the altmetric community. As soon as a metric is adopted there are efforts to game the system.

I think that some of the easily available data from popular sites (Facebook, Twitter) won’t prove as relevant as the data exhaust from systems that support the work of scholars. Data on activity at academic social media sites and those that support storing and using articles are more likely to provide a preview of the significance of a work.

Do you have a sense of how “open” each of these proposed metrics are going to be?

Is there any information on which of these efforts are for-profit and shareholder/investor owned versus not-for-profit?

Much of the criticism of Thomson Reuters and the Impact Factor revolves around ideas that the metric is not transparent and is tightly controlled by a commercial company that only makes the information available behind a paywall. Are these efforts more of the same? Are there efforts that can be owned and controlled by the research community itself? Are there metrics that are clear and obvious and can be calculated by anyone interested, without subscribing to a paid service?

Is this an example where it’s in the research community’s interests to drive progress by allowing commercial exploitation? Or will it be seen more as a case of private companies asking researchers to do additional unpaid work (social media commenting, blogging, tweeting and bookmarking, etc.) and then raking in profits, taking effort and money out of the research community and putting it into shareholder/investor pockets?

I know Jason Priem has spoken about his efforts being done as a community-supported not-for-profit foundation. How about the others?

Judy, thank you for writing this nice altmetrics summary.

David, collecting and analyzing all these metrics obviously costs money, in particular with almost real-time sources such as usage data or Twitter. PLOS makes all Article-Level Metrics freely available via API and bulk download (when permitted by the service providing the metrics). I think it is a very useful strategy for the publisher to pay for these metrics rather than providing them as a paid service.

Phil and Andrew, I agree that resistance to gaming is an important factor for a metric to become mainstream. Another one is coverage: more than 90% of PLOS papers are bookmarked in Mendeley, but less than 20% of papers have comments at the journal website.

I guess I just find it confusing that in an era of growing open access, it’s deemed not acceptable to pay for access to the articles themselves, but okay to pay for access to information about the articles. Or that it’s wrong for a for-profit corporation like Elsevier to take money out of the research world and put it in their pockets, but okay for any of the companies above to do the same. I think there’s great merit in offering rewards for work that offers progress and better technologies/understanding, I just find the seemingly arbitrary nature of where lines are drawn to be somewhat baffling. I’d like to see the research community control its own destiny and to be able to do what’s best for its own goals, rather than being a pawn in someone else’s business model, hence my questions above about who owns what.

If the goal here is to replace the Impact Factor, then we need metrics that are at least as meaningful, if not more so (discussed here http://scholarlykitchen.sspnet.org/2012/04/19/post-publication-peer-review-what-value-do-usage-based-metrics-offer/ ), and ideally metrics that are transparent and openly available for use and re-use. I have reservations about tying the research community down to another privately owned for-profit company that can shift the rules and change the nature of what it costs to access that data and how open it is for re-use as their business model evolves.

Your questions are good ones, but it’s unclear to me how the community comes together to support the development of an effort where the product concept is still evolving. We’re still at the early stage of gaining sufficient experience to produce a reliable metric and enough feedback to determine its relevance.

While many of those within the community advocating for altmetrics are committed to open data, as Martin points out, it costs money to develop the systems and the service. At the moment the most robust examples are supported as commercial ventures. Will the most useful data from systems designed to support scholarly sharing and workflow be freely available? Organizations able to supply data as a by-product of their service would need a rationale for supporting the ongoing costs. That’s not to say that some metrics couldn’t be free and others part of a customized package of paid services.

How does the community come together to support something like CrossRef, where the offerings continue to evolve? Why not create a not-for-profit that’s governed by the research community to answer those questions? Is it better to have the research community in control of their defining metrics rather than leave them to the whims of Wall Street?

Also, you and Martin have both pointed out that developing, maintaining, and delivering this data costs a lot of money. But that goes for the publishers as well. The sense I get is that these for-profit altmetrics companies do not expect to pay publishers for access to the raw materials that they’ll use to turn profits. Where is the motivation for publishers to invest in creating this data and making it available?

David, re-reading my comment I think I wasn’t clear. In my opinion it is very important that these new metrics are freely and openly available. One model that PLOS supports is to have the publisher pay as part of its service to authors and readers, as publishers already do for citations and usage data. Similar to citations (CrossRef) and usage (COUNTER), this requires an independent organization to define best practices, provide infrastructure, etc. Without that independent organization we will probably see the development of an altmetrics services industry with either a fragmented market or a dominant commercial player. One more reason to think about this now and not in two years.

(disclosure: I founded Altmetric)

Nice overview post!

And fair points David.

“I have reservations about tying the research community down to another privately owned for-profit company that can shift the rules and change the nature of what it costs to access that data and how open it is for re-use as their business model evolves.”

Yes, me too.

IMHO the alt-metrics ethos is all about moving away from single, all-encompassing metrics or sources of data and taking a more holistic view of impact. It’s encouraging people to *not* rely on a single number for everything.

That doesn’t mean bombarding people with hundreds of data-points and telling them to sort it out themselves though. There’s tremendous scope to look at what kind of data and analysis can help make decisions in different contexts, which is why the field is so exciting to work in.

The Altmetric score tries to do this in one specific context, which is assessing how much attention a scholarly article has gotten compared to others. People do find this compelling, for better or worse, or we wouldn’t offer it. It’s far from the be-all and end-all of what Altmetric does, though; it’s just a starting point, a way into the underlying data.

“in an era of growing open access, it’s deemed not acceptable to pay for access to the articles themselves, but okay to pay for access to information about the articles”

Without denying that this might seem unfair:

People don’t object to paying for things as a point of principle – they do object when publishers turn very large profits ultimately derived from public money in return for what they (for better or worse) see as little value.

“Do you have a sense of how “open” each of these proposed metrics are going to be?”

For the record Altmetric is for profit.

For profit and ‘open’ aren’t mutually exclusive though (see BioMed Central or, conversely, the ACS).

We don’t charge publishers for the raw data per se; we charge for the work we do aggregating it, cleaning it up, and helping to contextualize it. The value of this is clear enough to enough people for Altmetric to be commercially viable.

That means I can work on it full time, hire very smart people to work on it with me and make data that’s normally out of reach free to individuals through stuff like the altmetric.it bookmarklet.

Works for me. 😉

Excellent overview, Judy. Thank you.

Readers of this post may also be interested in an article recently posted to arXiv, “The weakening relationship between the Impact Factor and papers’ citations in the digital age” http://arxiv.org/abs/1205.4328. I don’t believe it’s just the time lag that drives attempts to find alternative indications of importance, now at the article level. Journals, as proxies of article status, made a great deal of sense for the cluster of technologies available for the first 350 years of scholarly communications. In a digital age, journal status would appear to be an evanescent indicator.

The activities of bibliometricians are doing untold harm to science. The ONLY way to judge the value of a paper is to read it. Even then, its importance may not be clear for another 20 years.

There is clearly too much money around if people are getting paid to devise one spurious method after another.

The essence of science is to admit openly that you don’t know. Bibliometricians obviously haven’t learned that lesson. They should get off the backs of real scientists.

Your diatribe ignores the fact that science is a highly competitive business. Every researcher or group is a small business. The folks who pick the winners want, and deserve, as much information as possible.

I have been doing science (experimental and theoretical) for 40 years. I don’t need any lectures on the competitiveness of science.
My point is that you have not demonstrated that any metric helps to predict winners. My guess is that often it will select second-rate spiv scientists.
I’d suggest that you keep quiet until you produce some evidence that your methods work.

These are not my methods. It is the way science works today. Your guess is groundless, and no one claims that metrics pick winners, but they are useful when people pick winners, which must be done.

For example, I think citations are a good rough measure of importance; they usually indicate that the paper has been read, by the way. The same goes for being accepted in leading journals and getting a lot of grants. These are standard metrics, to which I see no alternative. Do you have an alternative evaluation system for science to adopt?

I have been doing the science of science for 40 years. Your wild claim that metrics are doing “untold harm” has no empirical basis.

Getting accepted in leading journals is not a measure of much. Just look at the extremely skewed distribution of citations in such journals (e.g., http://www.dcscience.net/?p=4873 ). I’ve had several letters in Nature which, in retrospect, are really trivial (they just happened to hit the editor’s current buzzwords at the time). Some of the trivial ones have a deservedly low number of citations, but one has a very large number, utterly unrelated to the quality of the work.

You say “These are standard metrics, to which I see no alternative”. There is an alternative: engage your brain and read the paper. Of course that would make bibliometricians redundant. But until such time as they produce evidence that ANY metric has predictive value, that is where they should be.

It is irresponsible, and bad science, to foist on the community untested methods. Almost as bad as homeopathy. There is an unfortunate parallel.

I read your op-eds, and your skewed distribution argument misses the point. There is fierce competition to get into leading journals, so acceptance is correctly viewed as an accomplishment. “Reading the papers” is not an alternative to the present evaluation system; rather, it is glibly simplistic. Promotion committees are in no position to read all of a candidate’s papers, plus all the other papers in the field, to judge the candidate’s worth. The present system exists for very good reasons. Metrics allow the candidate to be judged based on the actions of those people who actually do read the papers, namely (1) peer reviewers and (2) those who cite the papers.

David, I understand your frustration, but your attack on those who do bibliometrics is unwarranted. Bibliometrics is a statistical tool used to understand the state of research using publication metrics. It is no different from the work economists do to understand the state of the economy. If by “real science” you mean those who generate new data points, you’ll need to rule out epidemiology and meta-analysis. I think your frustration is based on those who use bibliometrics for evaluation purposes; however, it is not the bibliometricians you should blame, but university and funding administrators who misuse them.

It’s a bit unfortunate to bring up economists as an example of successful science. And epidemiology is, of course, riddled with problems of causality (and consequent over-interpretation).

That’s beside the point, though. My definition of science in the present context would mean that hypotheses are subjected to empirical testing before they are used. I’m not aware of any evidence that any metric predicts the success of an individual scientist, though I see plenty of reason to think that they encourage cheating.

How about checking the metrics of a sample of scientists every year and follow the cohort for, say, 30 years, and see how many of them are generally accepted as being successful?

Still better, take two samples of young scientists and allocate them at random to (a) get on with the job or (b) be assessed by the cruel and silly methods used at, for example, Queen Mary, University of London (see http://www.dcscience.net/?p=5388 and http://www.dcscience.net/?p=5481 ).
That way you’d really be able to tell whether imposition of their methods had any beneficial effects.

I know these experiments would be difficult and take a long time. But until they are done, it is irresponsible to pretend that you know whether metrics are useful or not.

No doubt you’ll object that if you were to do the work properly, it would take too long and be bad for your own metrics. I’d regard that reaction as being a good example of the harm done by metrics to good science.

You are right when you say “I think your frustration is based on those who use bibliometrics for evaluation purposes; however, it is not the bibliometricians you should blame, but university and funding administrators who misuse them”. But if you don’t intend them to be used, it might help if you made that clear in every paper you write.

They are not proposed predictors of success, but rather measures of success to date. If they were predictors they would have a different, probably probabilistic, mathematical form, plus a definition of future success.

I can’t agree at all that they are anything like satisfactory “measures of success to date”. They are naive and misleading, even for that limited ambition. For example, some of my most important papers are quite mathematical, and such papers never get huge numbers of citations, despite the fact that they form the essential basis for the experimental work that followed. I’m not aware of any automated system that can measure the value of such work.

In any case, it’s naive to say that they are not proposed as predictors of success. The main interest for everyone outside the bibliometric bubble is to use them to select staff. If they can’t do that, then they are of very little interest. Of course they are loved by innumerate HR types because they appear to circumvent the need for thought or knowledge of the subject area.

I outlined on this thread how the ability of metrics to test the quality of scientists could be properly tested. As far as I know, nobody has tried to do so. It’s much quicker to count tweets, but that is irrelevant to the real world of science.

I’d say to you, much as I would say to a homeopath: show me the data that show the usefulness of your metrics (/pills) and I’ll believe you. But as far as I know, no such data exist.

I see nothing misleading about the standard metrics, and your example does not show that. If no one cited your math papers then they probably did not influence anyone directly. If they led to your work that people did cite, then the influence was indirect, which citations do not reflect. Actually, I have studied cases where citation does not reflect influence, but these merely make citation a rough measure, not a misleading one.

The metrics play the same role in evaluation that measurement plays everywhere. They may be rough, but there is nothing misleading about them. Promotion committees cannot realistically convene boards of expert visitors for every promotion decision. So unless you have an alternative way to do evaluation that is clearly better than the present system, you have no case. The combination of peer review for funding and publishing, and citation, are the best available measures of success.

This case reminds me of democracy, which no one likes but to which there is no better alternative.

I see that you have jumped straight from saying that metrics “are not proposed predictors of success” to advocating their use by promotion committees! Please make up your mind.

You say “They may be rough but there is nothing misleading about them.” That’s verging on being an oxymoron. More to the point, you haven’t been able to produce any convincing evidence at all that metrics are able to predict success. As I have pointed out, it wouldn’t be hard to test them properly, but proper testing of hypotheses is rare in education and sociology (hence my frequent use of the hashtag #edubollocks).

In the absence of hard evidence, we are both guessing. But to judge people by metrics could very easily do more harm than good. That’s why the lack of proper testing is such a disgrace. For example, judging in the way you advocate could favour the sort of spiv scientist who exploits grad students to produce a paper a week, each of which has about a week’s deep thought in it.

As I have pointed out already, I have been lucky enough to know three Nobel prizewinners well (Huxley, Katz & Sakmann). All three would have been fired by the metrics-based criteria being used in some universities. So would Peter Higgs.

Queen Mary University seems to be impressed by someone who produces 26 papers in four years. I produced 27 research papers between my first (in 1963) and the time I was elected to the Royal Society (1985). Universities love to boast about the number of FRSs they have, but some now adopt metrics-based evaluation which is likely to fire people before they get old enough to be elected.

The spiv scientist is already too prevalent. I suspect that your ideas are making the problem worse.

I have explained below why the metrics are observations, not predictions. I also address the Queen Mary case. As for the promotion committees, they probably already consider the metrics. I cannot imagine a committee ignoring publications, citations and grant income. They would have little basis for making a decision. This seems to be the point you are missing, that metrics use is already standard practice. That is why I keep asking you for an alternative decision model, one which does not consider these metrics.

The predictive rule being applied here is pretty universal, and it has nothing per se to do with the metrics. It is simply that if someone has done well until now they are likely to continue to do so, and if not, then not. Of course this rule misses some important turnaround cases, both positive and negative, but that is no reason not to use it. When applied to research, the question is then how to judge whether someone has done well, and this is where the metrics come in. The standard metrics — publication, citation and funding — are all based on peer judgement, so this is where the papers get read. Each has an obvious relation to the probable quality of the work, unless you reject peer judgement.

I will admit that the Queen Mary case is rather extreme, interestingly so in my view. They have taken what are usually relatively vague applications of the metrics and made them rigorous. I could argue with some of the thresholds, and that they have failed to include citations, but the concept is far from being insane, as you term it. The distinct advantage is that the rules are clear. In the more typical vague case one is often terminated on the unexplained grounds that one’s committee felt one’s performance was inadequate. That cannot happen with the Queen Mary rules, or not much.

On the other hand there seems to be no room for the more subtle politicking and special pleading that probably often occurs under the vaguer rules. I like clear rules but that is just me. But in any case metrics are not the issue, as I think they are pretty universally used, just in a vague way. The QM issue is just whether their use should be this brutally clear and simple.

Davids, thanks for your interesting, forcefully-expressed but respectful (on the interwebs! whoa!) discussion. As someone who’s both associated with the field of bibliometrics and frequently critical of its present state, I can identify with both your perspectives.

Just wanted to share two quick notes: one is that there has in fact been quite a lot of research supporting predictive validity for citation metrics, going back to Garfield’s landmark work (1970) showing he could (partially) predict Nobel laureates. Another is that there has been, in parallel to the bibliometric tradition for decades, a strong thread of criticism as well, much of it associated with the so-called “social constructivist” perspective (contrasted with the “Mertonian” or “normative” view).

MacRoberts and MacRoberts have been stalwart and effective critics of citation mining; Simkin and Roychowdhury’s work is another great example; in a very entertaining article, they show that only about 20% of citers actually read what they cite. My point is that these arguments are not new.

And bibliometricians are, to their credit, not uniformly unresponsive to these arguments. Blaise Cronin, for example, a very respected name in the field, has written a lot about the limits of citations and the importance of uncited influence. See Cronin (2005) for a very readable version of this; also see a very thorough analysis of research supporting both the normative and social constructivist positions from Bornmann and Daniel (2008).

These arguments are not going away; they parallel other debates about measurement and reductionism, like those over psychometrics (Gardner, Sternberg et al. vs. Spearman) and educational assessment (standardized-test supporters vs., well, most people who actually teach kids). We won’t fix that here. But I think we can at least, with altmetrics, take a more inclusive and diverse approach to measurement, to help minimize some of its problems.

Bornmann, L., & Daniel, H. D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45.
Cronin, B. (2005). A hundred million acts of whimsy. Current Science, 89(9), 1505–1509.
Garfield, E. (1970). Citation Indexing for Studying Science. Nature, 227, 669–671.
Simkin, M. V., & Roychowdhury, V. P. (2002). Read before you cite! cond-mat/0212043. Retrieved from http://arxiv.org/abs/cond-mat/0212043

I’m sorry, but it simply isn’t true that the use of metrics is universal.

After a pilot study (http://www.hefce.ac.uk/pubs/year/2011/201103/ ) the entire Research Excellence Framework (which attempts to assess the quality of research in every UK university) made the following statement.

“No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs”

It seems that the REF is paying attention to the science not to bibliometricians.

It has been the practice at UCL to ask people to nominate their best papers (2-4 papers, depending on age). We then read the papers and asked candidates hard questions about them (not least about the methods section). It’s a method that I learned a long time ago from a senior scientist at the Salk Institute.

It is not true that use of metrics is universal and thank heavens for that. There are alternatives and we use them.

Incidentally, the reason I have described the Queen Mary procedures as insane, brainless and dimwitted is that their aim of increasing their ratings is likely to be frustrated. No person in their right mind would want to work for a place that treats its employees like that, if they had any other option. And it is very odd that their attempt to improve their REF rating uses criteria that have been explicitly ruled out by the REF. You can’t get more brainless than that.

This discussion has been interesting to me, if only because it shows how little bibliometricians understand how to get good science.

At last we have your tenure and promotion decision model: 2 to 4 papers plus an interview, with no other information. This is just as extreme as the QM model, but the other way, with no consideration of external peer results except the few papers. The big downside I see is that, given that few, if any, of the committee members would be subject-matter experts, there is no way to judge the scientific significance of the work. Hence the focus on methods, no doubt. You will, however, get some strong methodologists this way.

It is easy to see why a lot of schools would want to go beyond this very limited model, as would a lot of candidates. Publications, citations and funding are significant considerations, not to be ignored. That is why the metrics are so widely used.

@Jason Priem
Yes, I am aware that not all bibliometricians are uncritical. After all, Eugene Garfield himself has said that impact factors are not a sensible way to rank individuals. Their use (insofar as they have any use) is to rank journals. That means that they are of interest only to journal editors. But that has not been apparent in some of the contributions here.

The enormously skewed distribution of citations in high-impact journals means that impact factors are calculated in a pretty silly way. It is statistically illiterate to use arithmetic means as indices of the central tendency of highly skewed distributions. There are solutions to this: removing the top and bottom 25% would be one way; at the least, the median should be used rather than the mean. Has anyone done that? My guess is that using the median would considerably reduce the difference between Nature, Science and the rest. The hegemony of half a dozen journals is one of the main obstacles to open access. They have done real harm (unintentionally, of course).
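
As a quick toy illustration of the mean-versus-median point (the citation counts below are invented, purely to show the effect of one outlier):

```python
# Invented citation counts for ten papers: one heavily cited outlier drags the
# mean far above what is typical, while the median stays close to it.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 3, 4, 6, 8, 200]
print("mean:  ", mean(citations))    # 22.5 -- dominated by the single outlier
print("median:", median(citations))  # 2.5  -- closer to the typical paper
```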

I notice that you have not actually responded to Jason’s points, nor to mine for that matter. You are merely repeating your oped points. None of this supports your initial wild claim that metrics are doing untold harm to science, or that they should not be used. I think we are done here.
