
One of the complaints I’ve had about the current stewardship of the scientific literature is that we seem to be cultivating a system that rewards publication to such an extent that there is no real filter any longer on the literature (unless owning a copy of Word is viewed as a legitimate signal of brilliant scientific insight).

This leads to a prevalence of dross that not only causes specific problems for specific people (patients, policymakers, engineers, and others) but also threatens to make the entire scientific publishing process a laughingstock in the public’s eye.

A recent editorial in the European Journal of Clinical Investigation covers this topic very well, asking:

Why would scientists publish junk? Apparently, the current system does not penalize its publication. Conversely, it rewards productivity. In 1986, Drummond Rennie noted that nothing can deter a paper from ending in print. Since then, more papers are published each year and more authors flock to the masthead of the average manuscript. Nowadays, some authors have been co-authoring more than 100 papers annually. Some of these researchers actually published only 3 or 4 papers per year until their mid-forties and fifties. Then suddenly, they developed this agonizing writing incontinence.

The authors of the editorial also cover the role of citations in providing a bulwark against trivial findings and speck-sized ideas, but find little to support the notion that citations are providing any clear indicator of quality:

Two decades ago, only 45% of published papers indexed in the Web of Science received at least one citation within 5 years. This pattern has now changed: 88% of medical papers published in 2002 were cited by 2007. Almost anything published this year will eventually be cited.

Self-citation is a major issue in the sciences, and again, there seems to be no check to balance it out. In fact, one of the 300 most-cited authors on ISI’s Highly Cited list generated more than 90% of his citations through self-citation.

As the editorialists describe the recipe for success in today’s publishing environment:

. . . co-author more papers (salami slicing, elimination of quality checks, undeserved co-authorship and acceptance of gifts from ghosts can all help); ignore unfavourable reviewer comments; keep submitting until you find a hole in the peer-review system; self-cite; and expect random citations.

The Female Science Professor has an interesting post that touches on the same topic, but in a slightly different manner. In it, she talks about the slow sabotage of the review process at one journal, and the alternatives she then faced. The comment thread is worth reading, especially for how aware many authors already are of the problems with the incentives in the system.

And while the editors of the European Journal of Clinical Investigation can patrol their own backyard, there is a systemic issue here, one that includes publishers, authors, academic committees, information companies, and many others.

A responsibility rests on the shoulders of leaders creating incentives in scientific publishing, academic advancement, funding bodies, and authorship: If you need to resubmit your paper five times to get it published while ignoring reviewer comments throughout; if you make publication a requirement for advancement or for receiving funds before knowing if the findings are worth publishing; if you create publishing vehicles that lower the bar and drive the race to the bottom; and if you exploit rather than rectify the situation — well, there’s still time to become a responsible member of the scientific community.

I would urge you to start now.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

25 Thoughts on "Academic Incentives Gone Awry: Why Junk Persists in the Scientific Literature"

In your proposed makeover of scientific publishing, what percentage of papers presently published would not be published? I personally favor more communication, not less.

Two statistical points are worth noting. First, given the trend toward co-authorship, it follows that each author will have more papers. Many of these co-authors play minor roles, including advisors and grad students, but it is still nice to communicate who they are, especially since mapping co-author networks is becoming an important science-of-science tool.

Second, self-citation is often important and proper. The standard paper format has four sections — here’s the problem, here’s what we did, here’s what we found, and here’s what it means. Naturally, most citations occur in the first section, explaining the problem, how it arose, and the work already done on it. If the author has been working in this area for some time, their prior publications may be very important. I frequently read the self-citations first to see who the author is.

Perhaps you should explain just what perceived problem you are trying to solve, such that we should change the system, as your article does not make this clear.

The problem is stated clearly, I hope — namely, that if anything can be published in the scientific literature, what’s to separate it from just a repository of academic musings? The public and society at large are supposed to benefit from science. If the information flow from science is undifferentiated junk, a level of trust erodes, and science risks becoming a laughingstock. We’re already losing credibility with cancer scares based on bad information, disease entities described once and amplified by mis-citation, environmental media frenzies fed by citation-wielding ideologues, and scandals around fake journals, fake articles accepted by legitimate journals, and so on.

If you ask Mr. and Mrs. Middle America how much faith they put in science these days, they’ll probably scoff. And I think part of the problem is that we have an undifferentiated junk pile of studies these days.

We have the kind of incentives in publishing that lead people to pay to be published, self-cite beyond the norm, and dole out authorship casually as barter. It’s all at such a prevalence and degree that it’s disturbing. Author lists are expanding, citation lists are bloated, and nearly every paper gets published.

The complaint from the editorial that struck a chord with me is that authors, editors, academics, and granting organizations should be more responsible. We have so many examples of industries with misaligned incentives and irresponsible participants (medicine incentivizes procedures, so doctors get paid when they run tests, prescribe drugs, or perform surgeries; airlines are rewarded for on-time departures, so passengers sit on the tarmac for hours waiting; the food industry is rewarded when it produces cheap food people can’t stop eating, so we have obesity and diabetes problems).

If getting cited in PubMed is no longer a badge of honor in scientific publishing — if anyone with a half-baked experiment or half-cocked argument can get cited, create citations, or brag about being published — then what exactly is that badge honoring? If publication in a journal is no longer a differentiator of quality or achievement, then how does that format hold any value for much longer? If authors can be rejected by a dozen journals without changing their paper or responding to peer-review comments, and still get published, what was the purpose of all that wasted, donated effort?

As usual you are working both ends of the street, with no facts in between. First you were talking about someone who had to submit to four journals, so I was thinking of a 3 strikes rule. The problem with that is that there is a well known hierarchy of journals, and the sensible thing to do is to start at the top and work one’s way down.

But perhaps you want every author to include a submission history, plus all reviewer comments from prior submissions, so the editor can see whether these are properly addressed. Would that work for you?

But now you are talking about someone who has been rejected by a dozen journals. Is this really a problem? Is science really being weighed down by articles that are published after 12 rejections? Do you have any data or are you just ranting?

So far as I can tell you have not identified a real problem, much less a solution. If anything, we need more journals, not fewer. On more than one occasion I have found that adverse reviewer comments merely reflected the fact that I was breaking new ground.

And you say I’m working both ends of the street? Man, listen to yourself. First, you say there’s a “well[-]known hierarchy of journals” and then say we need more journals so that things not accepted by those at the upper echelon (the “sensible” place to start) will ultimately be published. So, readers are going to read the top-tier journals, but just being published is going to . . . what? Make your persecuted paper suddenly glow with righteous beauty? Sorry, but in an ocean of published results, it will just drown. And you want to add water?

My solution is hopeless — ask people across the system of incentives to show a little more restraint and perspicuity, from authors to funders to publishers. But it’s clearly stated.

As I said at the beginning, I favor communication. But then, we publish every DOE contract report, on the principle that if the research was paid for, the results are worth publishing.

I am still trying to figure out what your solution would look like. How much of what is presently published do you consider to be junk? One tenth percent, one percent, ten, twenty, or fifty percent, or what? If we are going to solve the problem we first have to understand it.

We actually don’t have to quantify the problem to fix it. We just need to fix incentives so that publication isn’t so central to funding, academic advancement, and career prestige. Just by tweaking a few controls, we could make it so that the author of a paper that’s been rejected by 2-3 journals could stop submitting it without fear of repercussions. We incentivized our way into this mess, and we can change incentives to recalibrate the system.

Your principle that “if the research was paid for the results are worth publishing” may work in the DOE, but what about trends showing that commercial interests in medicine pay for more published open access research? What about fake journals? What about journals with 70% acceptance rates to drive bulk publishing revenues? A system that feasts on funding decisions and sheds post-hoc filtering is ripe for abuse. But author-pays journals emerged because of misaligned incentives. We need to realign incentives with the public interest, create clear tiers of journals/repositories/depositories, and remember that, as you imply, it’s the public we serve, not the increasingly cynical gamesmanship of publish-or-perish academia.

Kent, the genie is out of the bottle. There will be more scientific publishing (or posting), not less. This cannot be stopped. The game has moved to a different place further down in the value chain, where post-publishing commentary takes the place of prepublication peer review. What is needed is a system or multiple systems for enabling such commentary.

Joe Esposito

I agree, but it leads to a series of questions. Where do editors belong? Where are the filters? How do we make a system that isn’t a sham or a relic?

Addressing this proactively creates opportunities, but I don’t see us engineering a systemic solution anywhere. Instead, we’re just racing to the bottom.

I’m not sure it’s a zero-sum game. Isn’t a combination of pre-publication review and post-publication commentary preferable to just the latter? In an age of information overload, aren’t more filters welcome? Why can’t both peacefully coexist?

Also, I’d argue that incentivizing such commentary is a bigger problem than creating a system to enable it.

I agree with you Joseph – it is too late to try and reduce scientific publishing. There are of course some post-publication peer-review tools already developed and becoming increasingly established, such as Faculty of 1000 which uses a Faculty of over 5000 world-leading experts to highlight the best papers published across life sciences and medicine. As they are known experts (and named, i.e. not anonymous like much peer-review), such a process circumvents many of the problems raised in the original post of self-citation and ignoring referee comments, and focusses on the individual paper and the quality and value of the research it covers.

One of the outstanding issues, however, is still that there is not a strong enough incentive for reviewers (whether pre- or post-publication), as the time required to do a good review is not rewarded by funding bodies and others.

I can’t resist pointing out, XML evangelist that I am ;-), that one problem here is a semantic one. The term “author” is about as specific nowadays as the term “XML.” If we’re going to have something called we at least need some clearly defined roles: or might be helpful. . . .

Oops, my use of XML angle brackets mangled my comment. Here’s what I meant to say:

The term “author” is about as specific nowadays as the term “XML.” If we’re going to have something called “author” we at least need some clearly defined roles: role=”grad-student-in-the-same-lab” or role=”bigshot-whose-name-will-help-get-this-accepted-and-noticed” might be helpful. . . .
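To make the suggestion concrete, here is a minimal sketch, in Python, of how explicit contributor roles might be tagged and read back. The element names, attribute names, and role values are hypothetical illustrations only, not any journal’s actual DTD or markup scheme.

```python
# A minimal sketch (not a real DTD): tagging explicit contributor roles
# instead of an undifferentiated "author" list. Element and role names are invented.
import xml.etree.ElementTree as ET

contrib_group = ET.Element("contrib-group")
for name, role in [
    ("A. Graduate", "grad-student-in-the-same-lab"),
    ("B. Bigshot", "bigshot-whose-name-will-help-get-this-accepted-and-noticed"),
    ("C. Principal", "designed-study-and-wrote-manuscript"),
]:
    contrib = ET.SubElement(contrib_group, "contrib", {"contrib-type": "author", "role": role})
    ET.SubElement(contrib, "name").text = name

# Downstream tools (or readers) can then see who did what at a glance.
for contrib in contrib_group.findall("contrib"):
    print(f'{contrib.findtext("name")}: role="{contrib.get("role")}"')
```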

David Wojick makes an intriguing case, borne out in part by Trueswell’s Law. Some readers will remember this as the old “80-20 Rule.” Dr. Richard Trueswell’s research reported that roughly 20% of any academic library’s collection, a relatively small proportion, made up 80% of the library’s actual usage (a rough sketch of this pattern appears below). Dr. Trueswell developed his research during the print era, but no countervailing arguments have presented themselves, to the best of my knowledge, in the digital era. This said, many practical considerations as well as ethical ones would seem to prevent any Wojick project from coming about. Most academics are extremely familiar with the top-rated journals in their profession and know how to separate the wheat from the chaff. If they are not familiar, various journal ranking tools are easily available, with probably more on the horizon. More problematic is that authors of extremely innovative, ground-breaking research articles may opt to get published in lower-ranked journals to avoid the lag time at higher-ranked journals, or may disagree with peer reviewers from the higher-ranked journals.
Let us also not forget how many pioneers of science throughout history were rebuffed by the “peer reviewers” of their time.
In the face of these complex issues, the lesser evil, as well as the most savvy logic presented above by Joe Esposito, Kent Anderson, and David Crotty, may well be to “let a thousand flowers bloom.”
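As a rough illustration of the 80-20 pattern described in the comment above, here is a minimal sketch using invented, Zipf-like usage counts; the data and the cutoff are purely illustrative, not Trueswell’s actual figures.

```python
# Minimal sketch: what share of total usage does the most-used 20% of a collection carry?
# The usage counts below follow an invented Zipf-like distribution, for illustration only.
usage_counts = sorted((1000 // (rank + 1) for rank in range(500)), reverse=True)

top_fifth = usage_counts[: len(usage_counts) // 5]
share = sum(top_fifth) / sum(usage_counts)
print(f"The top 20% of items account for roughly {share:.0%} of total usage")
```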

I’m a little surprised by this honestly – I’ve never noticed this was a problem. Actually, I’m about as surprised reading this as I was when I learned that some journal editors return articles asking authors to include more citations to their journal before they can be published.

Having said that, I’m in a relatively small subfield of Psychology with only a few thousand people actively publishing. Do you think there are substantial differences on the basis of size and field maturity? Is it that some fields, especially in the hard sciences, are over-saturated with academic-types trying to publish?

In my area of Psychology, the field is pretty wide open, so there are many, many unanswered questions. If anything, I feel we need more PhDs conducting research to help answer some of them. But I suspect that if you’re an organic chemist, the best paper many can produce may be “how does protein x that no one cares about interact with protein y that no one cares about?” Do you think this is an accurate characterization?

Good point. I think the larger fields are struggling more with this problem. In fact, there appears to be a generally agreed upon prestige of disease in medicine that might drive research itself. If the filter isn’t immune to this, then this prestige perception becomes self-reinforcing. I think part of the benefit of a tighter filter is that there’s a barrier to overcome. Without it, stories can emerge because of intellectual trends, not intellectual findings.

Prestige is a major part of publishing. If it’s no longer a challenge or hurdle to publish, what does that mean?

Yes, this is an accurate characterization. As someone previously involved with psychology journal publishing, it was clear to me that there were many intriguing, if not urgent, “research fronts,” or newly evolving, cohesive bodies of research, not represented enough by the existing journals or not afforded quick outlets. Only the restrictions on academic library budgets seemed to slow down the growth of innovative, subscription-based psychology journals.

There is an opportunity here (aligned with Bill Kasdorf’s observation) to apply post-publication rigor which selects from among the 100% and repackages 20% according to explicit parameters. Not sure if this is a “best of” or “custom mix” and on what basis quality is measured (via redefined objective/subjective criteria).

What constitutes good science, observation, research, or writing? Popularity/citation, originality, influence/connection with persons or brands connoting authority? An algorithm that incorporates elements of all? Not all of this is currently captured in XML tagging.

In microcosm, one could seek to assemble rigorous social ranking data by experts in conjunction with other tagging specifics. It would be interesting to try to create a model for sorting the wheat from the chaff that can be data-driven.
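A minimal sketch of what such a data-driven sorting model might look like, assuming hypothetical inputs (citations per year, a named-expert rating, an originality tag) and arbitrary weights chosen purely for illustration:

```python
# Minimal sketch of a weighted "wheat vs. chaff" score combining several signals.
# Field names, weights, and normalization are hypothetical; a real model would need
# validated criteria and calibration against expert judgment.
from dataclasses import dataclass

@dataclass
class PaperSignals:
    title: str
    citations_per_year: float  # popularity/citation
    expert_rating: float       # social ranking by named experts, on a 0-10 scale
    novel: bool                # originality, as tagged by curators

WEIGHTS = {"citations": 0.4, "experts": 0.5, "novelty": 0.1}

def quality_score(p: PaperSignals) -> float:
    """Combine normalized signals into a single sortable score between 0 and 1."""
    citation_part = min(p.citations_per_year / 20.0, 1.0)  # cap to blunt runaway counts
    expert_part = p.expert_rating / 10.0
    novelty_part = 1.0 if p.novel else 0.0
    return (WEIGHTS["citations"] * citation_part
            + WEIGHTS["experts"] * expert_part
            + WEIGHTS["novelty"] * novelty_part)

papers = [
    PaperSignals("Quiet but original finding", citations_per_year=2.0, expert_rating=8.5, novel=True),
    PaperSignals("Heavily self-cited rehash", citations_per_year=15.0, expert_rating=3.0, novel=False),
]
for paper in sorted(papers, key=quality_score, reverse=True):
    print(f"{paper.title}: {quality_score(paper):.2f}")
```

Even a toy score like this makes the trade-offs explicit: how much weight goes to raw popularity versus named expert judgment is itself an editorial decision.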

Re Kent’s comment

I do not believe there is a “mess” in scholarly publishing. That is why I asked you to estimate whether the proportion of what you call junk was a tenth of a percent or 50% or something in between. I have a concept of acceptable efficiency in human systems, which implies that every small problem does not need a big fix, and indeed probably cannot be fixed.

Your car is maybe 33% efficient, so how does that compare to journal publishing? You seem to be back to a 3 strikes rule. I would oppose such a rule.

You keep wanting to quantify the filter. The point of this post was to consider how incentives affect outcomes. When incentives (sales, regulations) in the auto industry favored car size over safety, efficiency, and fuel economy, we had a very different car culture. Now, we have cars that are safer, more efficient, and much better with fuel economy. The incentives had to change to make this happen.

But I’ll play your game — I’d estimate that 35-50% of papers could vanish tomorrow and science wouldn’t be harmed in the slightest.

I disagree, but not with the estimate. Even if 35-50% of findings are ultimately uninteresting or unimportant moving forward, they do often provide guideposts for what research questions are interesting for others to pursue in the future.

Once I know that Technique X has already been investigated by a couple of researchers, and they didn’t find anything particularly compelling, I know not to waste my time on questions in that domain.

Well then, for clarity, it would be nice to create a bin of “been there, done that, didn’t work” research. But the pressures of publish-or-perish too often force people to put a positive spin on uninteresting or unimportant findings, making that unlikely.

Landers is totally on point. Negative results are valuable. More information please.

Dealing with it will be the challenge of tomorrow. Open data will help, not hinder.

I wish to cite the editorial in the European J. Clin. Investigation to do with “why should scientists publish junk?”

I can’t find the information on the journal site.

Can you help me please?
