Earlier this month, Wellcome Trust announced a new partnership with F1000 to create a journal-esque entity geared for post-publication peer review.

The Wellcome Trust journal announcement sowed confusion and consternation from the start. Was it a journal, a preprint experiment, some new and strange hybrid? Why would Wellcome, which currently funds eLife and recently committed to an extended term of support, also fund a branded F1000 journal? If F1000 Research already exists, why create another title that seems to do the same things and take the same approach, but within the strictures of Wellcome-funded research? Why work with a commercial publisher?


Discussions on Twitter and elsewhere highlighted the announcement as strange and unwieldy. Wellcome and F1000 both seemed reluctant to call what they’re doing a “journal,” using the more cryptic term “publishing initiative.” F1000 Research has exhibited this reluctance to be specific about what they’re doing for years. They seem to want all the trappings and benefits of being a journal without the responsibilities.

Yet, if it walks like a duck, quacks like a duck, and looks like a duck, it’s probably a duck. And Wellcome Open Research certainly resembles the duck known as F1000 Research — with publication prior to peer review, rejection of blinded peer review (authors invite post-publication peer reviewers), and no requirements for article format (you can publish a picture, for example). There are no new ideas here, and F1000 Research has not really worked all that well, either as an outlet for strong research or as a model inspiring emulation.

On Twitter, Robert Kiley, Wellcome’s head of Digital Services, was soon pinned into admitting that Wellcome Open Research is a journal, and one that will guard its papers in the usual manner.

Wellcome seems to want to blaze a trail to a funder-sponsored publication platform, with Kiley quoted as saying:

The expectation is that this, and other similar funder platforms that are expected to emerge, will ultimately combine into one central platform that ensures that assessment can only be done at the article level.

Peer reviewers would be invited by the authors. This is an abdication of objective peer review practices and responsible editorial approaches. But for F1000 Research, “objective peer review” means not disinterested peer review, but rules-based peer review, which in practice means F1000 Research only accepts papers from people affiliated with research institutions or holding MD or PhD degrees. For instance, F1000 Research refused to peer review a paper because the author didn’t have these academic credentials — her two Master’s degrees and extensive experience apparently weren’t sufficient to even get her paper reviewed.

On top of this, the quality of post-publication peer review at F1000 has largely been found wanting. And not just once.

Recently, the ability of F1000 Research’s peer review process to identify and deal with plagiarism, and respond to plagiarism charges in the proper manner, has also been criticized. (Additional reading here and here.)

The weakness of post-publication peer review is brought up again in coverage of the new publication, with Kiley providing no convincing reassurance that they have any new ideas or solutions:

It doesn’t take very many words to explain that something is either seriously problematic or largely fine. Furthermore, the fact that the referees’ name and referee report are public means the referees are more careful and conscientious to back up their comments because they know that they will be publicly judged.

My eyes widened at the idea of a paper being “largely fine.” Is this a new standard for scientific publishing, one in which a general approach of cursory review is acceptable? I’ve been a peer reviewer for years, and if you don’t chew on nearly every sentence, you’re not doing your job.

Maybe he means to say that it doesn’t take many words to write an anodyne review for a friend.

Post-publication peer review as a substitute for pre-publication peer review is really quite a game — publish first, thereby putting reviewers in the awkward spot of having to decide whether to undo a publication decision made without peer review. This creates a much higher barrier for rejection. And it’s still not clear that indexing services know how to deal with papers and materials unpublished in the F1000 Research model.

Wellcome Open Research brings back to the fore the inherent conflict of interest (COI) with funder-sponsored journals, which I wrote about in 2012. This is not a minor point, and deserves some reiteration.

A good place to start is the International Committee of Medical Journal Editors (ICMJE) definition of COI:

Conflict of interest exists when an author (or the author’s institution), reviewer, or editor has financial or personal relationships that inappropriately influence (bias) his or her actions. . . . Financial relationships (such as employment, consultancies, stock ownership, honoraria, and paid expert testimony) are the most easily identifiable conflicts of interest and the most likely to undermine the credibility of the journal, the authors, and of science itself.

Wellcome is funding the research and the publishing outlet, creating a clear conflict that compromises its ability, and that of its designees, to credibly make unbiased publication decisions and to evaluate works to a consistent standard. Wellcome has avoided a perceived conflict becoming an actual conflict with eLife so far because eLife is open to submissions from anywhere, and because eLife has a robust and independent peer review system. Having authors invite their friends to give a paper a quick once-over and thumbs-up isn’t the same thing.

So the conflicts are more than just perceptual with Wellcome Open Research. F1000 only gets paid when Wellcome authors publish with them — not when they submit, not when they are rejected, but when they publish, adding to the straight line of conflicts. To put a finer point on this, imagine if this were Pfizer Open Research teaming up with another commercial publisher. Would you believe that Pfizer Open Research — dedicated to Pfizer researchers — and the commercial publisher were making publication decisions in the same manner as a third-party journal run by an independent company?

The motivations for Wellcome — to demonstrate value for funding, to have research outputs, and to show research throughput — may not be entirely commercial, but they are prone to the same conflicts of interest.

This inherent COI from a funder also puts authors in a tough spot:

  1. I received funding from Wellcome.
  2. I did a study, and it turned out well.
  3. On the one hand, it makes a lot of sense to get the paper published in a very high-impact and prominent journal in the field I work in. On the other hand, Wellcome has established a journal of its own which is a mixed bag and has no direct relevance to me or my colleagues. Should I submit it to their journal? Will they punish me if I don’t? Or should I submit it to the top journal? Which path is better for my career?

This is not an easy dilemma for a researcher or research team to solve. I’d argue it’s not a position they should be in.

There is also the weak peer review associated with author-invited reviewers, which can only be exacerbated by the post-publication status of the works contemplated here.

Hitching its wagon to F1000 is also a puzzling choice for Wellcome. Wellcome has promised to pay all the publication fees F1000 will charge, an exclusive arrangement. F1000 is a commercial organization, and its commercial prospects are clearly bolstered by this announcement. Vitek Tracz, the owner of F1000, is a serial entrepreneur, and F1000 is reaching a stage at which he must be looking for the exit (rumors have rumbled for a year or more that this is the case). What happens to Wellcome Open Research when (not if) this happens? What if it goes to one of the major commercial publishers, as other properties have in the past? Is this deal the last piece of the exit strategy Tracz was executing, putting a major named journal into the F1000 portfolio before he shops the company?

Fortunately, all these machinations to start a “publishing venture” and foist an overly-friendly peer review process on it seem to have been met by healthy skepticism. Many of the points above were articulated in a flurry of exchanges on social media and email.

Others noted that Wellcome continues to divert research funding to support redundant and unnecessary publication efforts, which is clearly the case given the fact that a commercial entity is benefiting directly from what appears to be an exclusive arrangement. This is a troubling use of philanthropic dollars — bolstering the fortunes of a for-profit company.

While there was the predictable cheerleading of this announcement by the usual suspects, it’s reassuring that the serious side of our industry seems to have generally caught on to these games of quasi-journals, unreliable peer review, wasteful spending, and the obvious COI problems with funder-backed journals. I’m hoping that authors also can sniff out the problems and limitations, and that serious efforts to enhance the intellectual rigor of peer review and the quality of published research gain more support in the future from funders who really want to see scientific and scholarly information improve at a more fundamental level.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

23 Thoughts on "Wellcome Money — Involvement with F1000 Opens Door on Sketchy Peer Review, COIs, and Spending Decisions"

My immediate reaction to this was that it represents a marriage of convenience between a publishing model that has not proven itself to be financially viable and a funder that is trying to keep a lid on spiralling APC outgoings brought upon itself by its own mandates.

I like the Pfizer analogy. The missing piece of the puzzle is to what extent, if any, Wellcome will attempt to incentivize or even coerce participation amongst its grant recipients.

The rise of altmetrics makes this effort of interest, as more researchers are seeking alternatives to the current “peer review” and journal impact factors as the principal measures of the merit of an individual’s oeuvre. One must remember that new ideas do not emerge fully mature, as a goddess out of the head of Zeus. Likewise, as the existence of the journal submission site “Scholars” makes very clear, not all journals and “review” processes fit well within a standard format. Additionally, it has been shown that traditional peer review often fails and that double-blind review is not as “blind” as the term implies. Additionally, as those in this Kitchen know well, the current system is well understood, and there are “masters,” as in Las Vegas, who have learned well how to play the game using the extant “rules.”

It seems that, as with altmetrics, scholars are saying that there need to be alternatives to the current pub/perish journal industry. As with emergent ideas, like chickens breaking out of the shell, one is not sure what will be hatched. The rise of intelligent systems that can probably, in the near term, do an equal or better job than humans at policing scholarly communications, pre- and post-publication, begs for the entire system to be examined, even to the point of continual refining and parsing into new “journals”.

This piece seems to be a view through the eyes of the publishing industry which has a vested interest in maintaining a system which is increasingly seeing pressures to transform as so clearly discussed in the writings of Clayton Christensen.

The data actually show high levels of satisfaction with current peer review practices. If there were so much agitation for change, and if someone actually had created something better, you’d expect wide adoption. The insinuation that there is some sort of “mafia” running scholarly publishing is absurd and paranoid. Alt-metrics have been widely adopted and with little fuss, but also with little benefit other than some brightness on our pages. Your complaint about people knowing how to work the current “rules” and then advocacy of an algorithm-based alternative doesn’t make sense. And we keep talking about “disruption” as if it’s relevant to us. It might not be: https://scholarlykitchen.sspnet.org/2013/08/27/stick-to-your-ribs-why-hasnt-scientific-publishing-been-disrupted-already/. In fact, Christensen’s premise may have very little general relevance: https://scholarlykitchen.sspnet.org/2014/06/18/well-that-about-wraps-it-up-for-clayton-christensen/

I am in basic agreement with your critique until one gets into the trenches as authors and/or editors:

a) the publishing of an article is in many ways a rite of passage as well as a way to communicate “findings,” however defined. As Kahneman notes in his writings, he “needs” to publish in part because of institutional pressure, a pressure felt by those seeking promotion and tenure and/or funding. Much of this has turned into a form of counting coup. In other words, it’s the system, and “satisfaction,” as you note, would, I believe, if probed, lean towards the path of least resistance when no ready alternative exists.

b) There are indications of edge thinking as different approaches arise, fail, are adopted in part, etc. Open Access and altmetrics are but two. Alternative venues for scholars who may have different ideas do appear. Admittedly most of these are outside the STM area. And it may be here that the differences arise, when one takes the spectrum of scholarly publications beyond the focus of most here in the STM/STEM community.

And here is the “rub”. Peer review, originally, in the sciences, was designed to validate ideas and practice. In today’s world, the pressures and capabilities to do so have become problematic when exercised in more than the ritual or formulaic demands of the publishing community, including the mechanics of publishing as well as the internal conflicts of reviewers balancing their own need to publish and more.

It is particularly problematic when, as with Impact Factors and other numeric validations, the evaluation of worth has been defaulted to a reduction to numbers, trusting that the reduction can be accepted as reflective of the oeuvre, particularly as, given the number of journals and articles today, worth probably can’t be fully assessed until the “wine has matured” and the work gains posthumous recognition.

d) Today, the Internet allows prepublications, early cites, and a variety of ways findings are released. The “journal” has become the “record,” more an official validation than a release of results. To obsess about post-publication review today seems more reactionary than identification of a questionable practice. Overt sponsorship of such publications, when truly in the open, seems much less problematic than full disclosure of the sponsored research by any source, including governmental agencies. Publishers are also not immune, as we have seen from several public retractions.

Post-publication peer review is fine as a supplement to solid pre-publication peer review. In fact, it’s normal.

Maybe this would be clearer if we didn’t call this “post-publication peer review” and instead called it “pre-peer-review publication.” That’s what it really is. What we have here is pre-peer-review publication, with the further conflict of then having authors invite their friends to do the review after the non-peer-reviewed publication has already occurred and in a venue paid for by the same entity that paid for the research.

And here is the “rub”. Peer review, originally, in the sciences, was designed to validate ideas and practice. In today’s world, the pressures and capabilities to do so have become problematic when exercised in more than the ritual or formulaic demands of the publishing community, including the mechanics of publishing as well as the internal conflicts of reviewers balancing their own need to publish and more.

What are you talking about?

Ah, alt-metrics. I can just picture some asst prof going in front of the tenure committee and stating his case for tenure based on a series of post publication reviews and data showing that his papers have been seen (not necessarily read) by 1,000 people!

If you really want to play the system then one should adopt your suggestions because alt-metrics are open to all sorts of fraud!

Lastly, I am not too sure you know what the conventions of a scientific paper are. As for other means of presenting science, well, that really is not journal publishing, is it!

The optics of this arrangement are so bad that it throws into question the rationality of the management of Wellcome. The kind of “peer review” system this will encourage looks like it will be akin to what we see as “reviews” on Amazon.

Nice article, with much to ponder and agree with from my pov. But I’m troubled by this:

“This is a troubling use of philanthropic dollars — bolstering the fortunes of a for-profit company.” Should philanthropic organizations only benefit other philanthropic organizations? Are you suggesting Wellcome etc should bar their fundees from ever publishing with a for-profit organization?

I don’t think you mean this. But I don’t get the point of it.

The point to me is that Wellcome has established a journal (eLife) within their philanthropic remit, and that there are plenty of non-profit publishers who could work on a project like this with Wellcome — so why choose F1000? It’s a puzzling choice on many levels, and the fact that it’s a commercial, for-profit publisher is just one.

For-profit and non-profit companies do business together all the time, and that’s normal and fine. However, an exclusive arrangement with a for-profit company is an odd choice.

This sounds like a good example of the OA confusion I wrote about several years ago here: https://scholarlykitchen.sspnet.org/2013/11/11/open-access-on-the-sea-of-confusion/.

However one cannot simply dismiss funder sponsored journals on the grounds of COI. Back when the US Government created the National Science Foundation, it was a deliberate policy choice to have the research results published in journals, rather than by the Federal Government. Some folks disagreed with that decision and it could be reversed now in the name of OA. As some OA advocates have pointed out, the cost of publication is a small fraction of the cost of the research, so the Government could easily take over the publication game. We already have a Government-wide US Public Access Program. It is green OA for now, but that is a policy choice that can be changed.

Mind you I am not suggesting that this is likely, merely that it is possible, especially given the politically attractive concept of Open Access. The Government already has a major Open Science initiative. See https://www.whitehouse.gov/administration/eop/ostp/initiatives#Openness.

In the policy world OA is a bit of a loose cannon at this point.

At the risk of being labelled a cheerleader, it’s worth noting that’s a very biased and unnecessarily alarmist perspective.

Because on the flip side, having all those conflicts of interest so explicitly declared in the journal’s own aims and scope enables readers to easily account for said COI when reviewing the literature. It’s those naughty undeclared COI that usually cause issues. Furthermore, having review reports on hand enables essentially anyone to keep tabs on sloppy reviews and other forms of foul play in the review process. Transgressions from the norm are plain to see and easily picked up on, with the funder-publisher itself actually being the most incentivized to track review activities and keep their grantees in check.

Ultimately, Wellcome Open Research is hardly about Wellcome being after “the trappings and benefits of being a journal” (on that note, funny how appropriately that quote illustrates why there’s such a fuss about Wellcome starting their own journal, huh?). No, it’s likely about Wellcome wanting to have deeper and more meaningful insight into results and outcomes of the research they fund with their philanthropic dollars.

Considering the percentage of (supposedly) negative or not-interesting-enough research results that don’t make it to publication, it’s not surprising Wellcome can’t afford to depend solely on third-party journals any more for quality assurance purposes and analytics. It’s not like Wellcome and their grantees are oblivious to the fact that a thorough review adds value to the research, while a half-hearted one may detract from it. What they achieve with Wellcome Open Research is that both they, and the public, get clear insight into the reviews, instead of having to depend on third-party journals that promise quality review, while generally trying what they can to keep the reports — the very evidence of the promised quality review — strictly confidential.

With that in mind, it’s only fair to at least give Wellcome a shot at it before gunning them down for having the nerve to think that they could possibly run a journal as good as them real good double-blind certified publishers do.

Wellcome isn’t doing anything novel here, except making a Wellcome-branded version of F1000 Research.

The COI issue isn’t an article-by-article issue, but a broad positioning issue. Disclosure and assurances don’t erase the COI issues — both with Wellcome and with the peer review approach. In fact, I don’t think they’ve actually disclosed the main COIs I’m pointing to. As I wrote in an earlier exploration of this issue, that might read like this:

Wellcome Trust funded this research, and also provides funding for the editors and editorial staff of Wellcome Open Research. The work was published before peer review, and the peer reviewers were invited by the authors.

You’ve misattributed the quote about wanting the trappings of a journal without the responsibility. That was pointing to F1000 Research. Now, through transference, you’re correct it also now points to Wellcome Open Research.

I doubt this post will dissuade Wellcome from their venture. They will get their shot, but it will be much like F1000 Research, which has not inspired imitation or been a success in most terms we’d normally apply. Wellcome seems to be doing a replication trial for a weak and possibly failed experiment. That’s not exactly a great use of scarce resources. Maybe they could fund some more research for the glut of scientists we have in the world instead.

With that in mind, it’s only fair to at least give Wellcome a shot at it before gunning them down for having the nerve to think that they could possibly run a journal as good as them real good double-blind certified publishers do.

This comment is written as if the new Wellcome journal were something novel. It is not. We already know what F1000 Research is, and can already gain a sense of how it performs. It is also worth noting that Wellcome won’t be “running” this journal at all, as it appears everything is outsourced to F1000 Research. All that Wellcome is adding is their name, branding and money.

If open peer review were the key benefit here, why not instead stipulate that Wellcome-funded researchers must publish in one of the many (hundreds?) journals that offer open peer review? It’s interesting that eLife, Wellcome’s first journal, does not practice open peer review. They allow authors to decide if the decision letter and responses should be published along with the paper. It is optional, and the entire set of reviews is never revealed, nor are the names of the reviewers unless they also consent to be publicly named. If this is so important, why not implement it in your own journal and make it a requirement?

A few things that come to mind:

First, if I were one of the other journals that work on the same basis as F1000 Research (Science Open, RIO, etc.), I would be extremely angry with Wellcome for choosing only one such journal (and a for-profit commercial version) as the one to be anointed with their blessing.

Why not just encourage researchers to publish in their choice of journal that works in this manner? What is the benefit of segregating off Wellcome funded research from the research of the rest of the community? The only reason I can think for this would be a cynical attempt to game the Impact Factor, assuming that Wellcome research is going to be better cited than the rest of F1000 Research, and so by separating it, the journal will look more “prestigious” and attractive to authors.

If I’m a peer reviewer, and I do reviews as a way to give back to the community (as many state that they do), what am I to make of a journal that is not allowed to publish my research (assuming I’m not Wellcome funded)? Would I be willing to do peer reviews for such a journal, or would I rather spend my time doing reviews for a journal where I am “allowed” to publish?

Your question about reviewers is an important one. I suppose we can assume that it’s all Wellcome-funded authors reviewed by all Wellcome-funded reviewers? I am also curious about what happens to papers that don’t get two “thumbs up” reviews and are therefore not indexed (which means they’re not in the journal). Can authors then submit the paper to another journal? Is publication of the work a consideration for future grants?

Once the paper posts on F1000 Research it is considered “published” and cannot be submitted to another journal. There are some articles in the journal that have been sitting unreviewed for more than a year, and it appears the authors are stuck in limbo. If the article is rejected, it sits forever marked as such, a public showing of the authors’ failure. Authors can revise their articles but there is no guarantee of any further review to change its status.

Perhaps you are giving reviewers too much credit, something post-publication review can address. I have a rejected F1000Research article that I do not in the slightest consider a failure. In fact, I explain in my response what is wrong with the reviews, mostly that they fail to address the substance of the paper. It is a proposed taxonomy of funding-induced biases, but they do not even mention that. Then too, the first and most negative reviewers are journalists, not scientists, with apparently no grasp of scientometrics, which is what the paper is about.

I like the fact that this is all made public and the paper is getting good altmetrics. See http://f1000research.com/articles/4-886/v1. Public review makes bad reviews public.

Let’s say you are a graduate student or a postdoc about to hit the job market. Which is better for you: an article publicly marked as “rejected,” for which you’ll have to explain to each job search committee why it’s actually a good article and why the reviewers are wrong (and that they’ll need to read both the article and the reviews before they can make any sort of judgment), or the same, equally valid article that was privately rejected from journal 1 and then submitted and accepted with no changes to journal 2? All the search committee sees is an accepted article.

I would suspect that most people on the academic job market (or tenure market or funding market) would prefer the latter, to not have their paper branded publicly as a failure, and to not have to do all that extra explaining.

Two comments and an observation.
1. Peer review has been in place forever, yet claims made in the best science journals appear to fail to replicate over 50% of the time.
2. Retractions are as likely to happen in 1st tier journals as 2nd.

The fact that F1000 requires data to be public seems at least a useful direction. My experience in asking for data sets is that most often the data will not be provided.

Two responses:

1. There are serious concerns about the quality of the claims stating low replication rates. In addition, there is some good evidence that problems with replication are not a failure of peer review, but a failure of laboratory controls, consistent naming and manufacturing of reagents, and the effects of variables nobody thinks might matter (relative humidity, which shelf of the fridge the samples are stored on, handedness of the lab worker).

2. How do you define “1st” and “2nd” tiers? And, what is your point?
