Convex lens, taken by fir0002 (Photo credit: Wikipedia)

Recent developments around open access (OA) publishing — the RCUK mandates, the OSTP memorandum — have had an interesting theme to them, which is revealed again in an article by Richard Van Noorden published in Nature News. This theme might be described as a sobering dose of reality.

Van Noorden’s long article does an admirable job of breaking down the costs of publishing a scientific or scholarly article, but even the trenchant Van Noorden finds himself thwarted by complex and sedimentary accounting practices, in which newer costs like IT and corporate-wide costs like legal, HR, and PPE are not placed against specific product lines. This means the profits quoted are more akin to a compromise between gross and net profits — they are not the final net profits fully loaded with expenses. In addition, separating journals budgets from books budgets and database budgets, separating subscriptions from advertising and licensing, and separating scientific from scholarly from professional — all are fraught and difficult because every business has evolved significantly. Where do all the new platform and IT costs go? What about the staff who work some on e-books, some on XML, some on UI, and some as DBAs? How are these allocated? Are they even allocated? Or do you just put them in a corporate function and manage it that way?

But the major thread of “this is getting real” is clear in Van Noorden’s article — no longer can OA rally the revolutionaries with visions of milk and honey and hope for the best. Now, its advocates need to start addressing hard questions based on what their approach might portend for scientific publishing. Major questions are becoming clearer to the general population:

  • Will this actually save money?
  • Could this actually be more expensive?
  • Will authors become the drivers of businesses instead of readers?
  • Will I still be able to find quickly and easily good material that interests me?
  • Can I still trust what I read?

There is a wrinkle Van Noorden acknowledges but which remains stubbornly difficult to address — namely, the complexity of the labels we throw around. Is calling the Journal of Biological Chemistry and PLoS ONE both “journals” that enforce “peer review” and publish “finished articles” to a well-defined “audience” really fair to either entity? These are different animals, each with their own purported strengths and potential weaknesses. There is a spectrum at play for each quote-marked term above. Why can’t we be more specific about where on these spectra each lies?

Lacking such a device, Van Noorden’s article breaks down costs into three bins — print and online, online-only subscription, and online-only OA. This is a container-based filter — PLoS, for example, has online-only OA journals with very different editorial structures and quality thresholds. We continue to have a fascination with the containers, and this can create a false sense of equivalence, especially around editorial processes and outputs.

The configuration of editorial offices and editorial duties is a potentially major difference between approaches to publishing articles, and one we have actually learned to disrespect to some degree — anecdotes of failures are used to undermine processes that work well the vast majority of the time, for instance. It’s the same tough sell preventative medicine has — one failure, and the peace and quiet of effective prevention can’t drown out the howls emanating from that single problem.

One significant editorial difference is whether the publication has an editor-in-chief or not. New publishing initiatives seem to handle this issue blithely. In emails between PeerJ and the National Library of Medicine (NLM) retrieved via my ongoing Freedom of Information Act (FOIA) request, PeerJ confidently notes its similarity to other journals that have no editorial leader at their helms — PLoS ONE, Scientific Reports, and F1000 Research, just to name a few.

On January 17, 2013, Peter Binfield emailed Chris Kelly (with a cc: to David Lipman) about their editorial process:

Actually, our model doesn’t use an Editor in Chief. In a similar way to journals like PLoS ONE, F1000 Research, Scientific Reports etc there is no EiC and instead individual papers are handled directly by individual Academic Editors.

There are substantial questions about how these practices work out in the real world. Mega-journals can publish many articles, but journals need to do more than that — they need to cultivate an audience that goes beyond a line of authors rotating through turnstiles. A distinctive voice and audience niche can help this happen. An editor-in-chief can make this happen. Perhaps a key distinction is actually being outlined in Binfield’s email — whether a journal has an editor-in-chief is an important defining characteristic.

Having participated in a few transitions of editors-in-chief in my day, I can tell you that a different person at the helm can make a world of difference in the performance, attitude, and emphasis of associated editors and reviewers. A forceful editor can put the snap back into the shorts of a lackadaisical editorial team. The lack of an editor-in-chief strikes me as a difference we should not easily accept as equivalent. Who is responsible for the editorial content? Individual “academic editors”? The authors? The publisher? Who stands behind the brand? The “process”?

To me, this is a very important omission from the quality equation, and betrays a belief that all peer-review does is validate something about an article — call it “scientific soundness” or “methodological soundness” or what you will. We forget that peer-review is also about filtering by importance and relevance. These newer journals may be cheaper, but those savings might be achieved by leaving out a key editorial member — namely, the editor-in-chief.

Van Noorden’s article is useful and interesting, and continues the growing movement for accountability and transparency from OA publishers. However, we need to become more serious about distinguishing the various approaches to scientific publishing, and editorial goals and processes are perhaps more important than publishing modalities.

The containers have been fascinating. But what really matters in the long run is what we put into them and that the right people use their contents.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


Discussion

23 Thoughts on "The Lens We Look Through — Are We All About Containers or What Goes Into Them?"

You bring up a good point in viewing journals as containers with various characteristics. Obviously, to some extent you get what you pay for. Using highly qualified professional editors to manage peer review, low acceptance rates, and high-quality professional copy editing and typesetting all add substantially to the costs.

The questions are: is what you pay worth what you get, and what is necessary? What can we afford? Is it a good deal?

There was a jaw-dropping statement in Richard’s excellent article.

“Philip Campbell, editor-in-chief of Nature, estimates his journal’s internal costs at £20,000–30,000 ($30,000–40,000) per paper.”

Wow!!! $30,000–$40,000 per paper! Obviously Nature is a superb journal, but $40,000??

According to Pete Binfield, who was quoted in the same article, PeerJ’s costs are in the low hundreds of dollars per article. True, Nature puts a lot more effort into publishing an article than PeerJ, and the “fit and finish” is certainly better, but they both do quality professional publishing.

I am one of the 800 or so academic editors at PeerJ, and having gone through the process of managing the review of one of the published articles, I feel very comfortable in saying PeerJ does do high-quality peer review.

First, we are all highly qualified to be academic editors, check the web site.

https://peerj.com/academic-boards/editors/

The review I managed was real peer review, with a couple of very qualified outside reviewers. It was a good research paper to start with, and it went through a major set of revisions based on the feedback received. It came out a significantly better paper. Pete, like any good managing editor, got on my case a bit when the review dragged out because I had the usual trouble rounding up reviewers over the Christmas holidays. The PeerJ review standards are narrower, not lower, than those of the typical high-quality scholarly journals I have reviewed for and published in.

I think a fair question is: what are you getting from Nature for $30,000 to $40,000, and can we afford publishing costs of this magnitude?

John Willinsky, in his excellent book The Access Principle, makes a rough but reasonable estimate of the average cost of producing an article funded by an NIH grant by simply dividing the total NIH budget by the number of articles published that year. It came out to about $60,000. Should it cost more than half as much to publish a research article as it costs to do the research? What is the value? What do we get beyond what PeerJ can provide for a few hundred dollars?

Your response shows how the definition of “peer review” gets elided so easily. The cost of outside peer-review is low, and managing it adds a few hundred dollars. But this is only part of peer-review (the “validation” part), and only part of the “validation” step at that. After all, part of “validation” is “validation for whom?” If you don’t know the audience, you might accept an author’s perspective on what their study said, yet perhaps they’ve picked a perspective that fits an outcome. “Methodologically sound” is weak tea for validation. Is it good enough for high-level scientists in the area to care about? That’s a stricter validation standard, and often an EIC sets the tone for that.

But even if we accept that validation is the less expensive part of peer-review, then we need to look at perhaps the more expensive part of peer-review, which is editorial review for “relevance” and “ranking.” Is it important enough? Is it right for this audience? Those dimensions of peer-review have been shunted from some new journal approaches, but the organizational memory, organizational awareness, field awareness, field memory, and so forth are really important. I’ve seen papers get to near the end of the line in peer-review only to have a senior editor say, “Don’t we already know this?” and then quote the studies that have already shown what the authors purport as novel and interesting. These people (and the supporting infrastructure for them) don’t come cheap. Is it important? Top-tier journals with large audiences succeed, and use editors like these. From what I can tell, PLoS ONE articles get a few hundred reads on average — not a large audience, and who knows if it’s an audience that is just Google exhaust or composed of really qualified practitioners in the field?

Then we get into what it takes to publish to an audience, and that includes fulfillment systems, lists, access controls, print runs, marketing, data management, etc. You can kvetch all you like about subscription journals, but they do a much better job of identifying an audience and delivering information to a relevant audience. Some selective OA journals are doing well at this, too. But those are either more expensive, losing money, or both. It is expensive to be selective, and expensive to attract and keep a relevant audience.

Publishing isn’t complete without reaching the relevant reader. Neither is, I assert, peer review. Just saying something is valid isn’t enough. Getting it ranked and making it relevant is demanding and expensive work, and involves review, publication, marketing, and editors.

I thought the very high-end journals like Nature and Science have such a high per-article cost because their rejection rate is so high: they’re incurring the costs of reviewing 99 papers for every one they accept. The typesetting/hosting/marketing costs are probably only a small fraction of the $30,000, so the cost per submission is probably under $500, which brings it back close to PeerJ’s number.

I don’t have the time or the inclination to argue with you over this, but where is the evidence supporting what you assert? PLoS ONE does manage to have a very respectable IF; not to be catty, but I believe it is actually higher than your journal’s. The IF is not the end-all and be-all, and with the variance in citation rates it may not be a fair comparison, but it does show researchers are finding, reading, and citing articles in PLoS ONE. It seems like there’s an audience.

Again, I ask the question, is the added value Nature provides really worth 30K?

Impact factor is a lagging indicator. Given PLoS ONE’s publication volume and citation patterns, I project their IF will drop to 2.x in the next couple of years, and others are more pessimistic, guessing the additional articles are less citable, and have stated they believe it will be 1.x before too long. But it takes years for these things to sort out. There is “an audience” for PLoS ONE, but which audience is it? Their own executives have said that they do a poor job of getting their articles into the right hands.

As noted in other comments, traditional journals spend a lot of money on rejecting articles, contextualizing articles, polishing articles, improving articles, and publishing articles to the right audiences. It’s not as easy or cheap as you think. What OA will never allow us to see easily is whether their audience is actually cohesive and meaningful, or transient and unqualified. That would be a very interesting question.

What always mystifies me about these conversations is why the new guys get a pass, but the proven players are put on the defensive. As this post notes, I think the new players are being put under the microscope more and more, and they aren’t looking too great. Objections to the CC-BY license, the impingements on academic freedom and research budgets, and so forth — all of these are raising big questions that a low price-point doesn’t satisfactorily answer.

Time will tell on PLoS ONE’s impact factor; it was still rising from 2010 to 2011.

Who’s giving anyone a pass? I simply made two points.

First, PeerJ does do real peer review and the external peer review is managed by academic editors who are experts in our fields.

Secondly, is the $20–30K USD that it apparently costs to publish an article in Nature worth it? Neither you nor anybody else seems ready to touch that question.

The question you’re asking isn’t really being asked well. Is $20–30K to publish in Nature worth it? Currently, it costs nothing to publish in Nature. Their business model is reader-pays, which means people who want the material they’re publishing are paying to receive it, or having someone pay on their behalf. They aren’t asking the market whether it’s worth $20–30K to publish with them. They’re asking readers and their proxies if it’s worth a few hundred or a few thousand dollars for access to their content. Apparently, the answer is often, “Yes.” Their estimate is what it would cost to perform the same work in an author-pays model, with the same rejection rate and the contextualized coverage readers expect from Nature. But right now, it costs authors nothing, from what I can tell, to publish in Nature.

PeerJ does real peer review to a point — some validation, but even they admit it’s not full validation, because they don’t have an audience in mind and so can’t judge novelty or interest level. It’s just what they feel is OK to publish. That’s very different from full, audience-directed, rigorous, editor-in-chief-approved peer-reviewed publication.

The question here is: what do a high rejection rate, a tight filter on relevance, and a high-ranking publication cost to create? The OA examples without EICs can’t answer those questions, because their rejection rates are relatively low, their filters on relevance are uneven, and their rankings aren’t sorted yet.

I think it’s an interesting question though, one I posed a few weeks ago in a comment thread. Assuming a world where Nature goes fully OA and offers an APC of $30K, would it be worth $30K to a researcher to publish an article in Nature?

For most, the answer is a resounding YES. Will a Nature paper result in, collectively for all the authors of that paper, at least $30K in additional funding or income due to career advancement? Given how strongly our system for allocating funds and jobs favors publishing in this manner, I think that would be seen as a wise investment by many, if not most, researchers.

The problem, of course, is that most researchers don’t have a spare $30K sitting around to invest in this manner. That might make for an interesting debate, though: for an RCUK-funded author in a department with a block grant for APCs, could you get your department to spend a large percentage of the year’s budget on one high-profile paper?

Yes, exactly the point: a resounding yes when spending someone else’s money. While I am a huge fan of open access, I am not a huge fan of the APC model for funding it, especially without some cost controls and standards for the quality of publishing funders are willing to pay for. One good thing APCs do is clarify exactly what is being spent. Focusing on cost from the reader perspective, as suggested by Kent Anderson, would be fine if it were a completely elastic model, but it is not. Subscriptions for key journals are a monopoly, particularly when they are part of accreditation requirements. That is why I feel it is useful to look at it from the perspective of costs per article.

“As noted in other comments, traditional journals spend a lot of money on rejecting articles, contextualizing articles, polishing articles, improving articles, and publishing articles to the right audiences. It’s not as easy or cheap as you think.”

I have always published in traditional journals so far (mostly ACS journals), and I have never had any experience of ACS “contextualizing” any of my articles, “polishing” them (apart from some minor typographical corrections), or “improving” them (apart from the peer reviews, which were most often shorter than the ones I can see on the PeerJ site, and sometimes wrong: I have had several reviewers who did not understand a trivial point of my paper because they did not know something as basic as the Eyring equation). “Publishing articles to the right audiences” is quite subjective: how can a single editor (no matter how eminent) ascertain the “importance” of the hundreds of disparate papers he or she gets every year, unless he or she is omniscient about the tastes and interests of the bulk of the journal’s readers? How is the audience of the European Journal of Organic Chemistry different from that of the Journal of Organic Chemistry, or Tetrahedron, or Organic & Biomolecular Chemistry? Their audiences are the same, and the decision (by more “selective” journals) to reject a paper without sending it to review is inherently much more prone to bias regarding what the “hot topics” are, or whether the report has been written by a well-regarded PI or department.

How much does it cost the author(s) to publish an article in a PeerJ journal?

Can that charge cover the costs of publishing the article?

How much money did the company get in seed money, and if it is being successful, why did it have to raise an additional $950K?

What do they publish? According to a recent article in Nature

Despite the low publication cost, PeerJ’s founders promise that, as with PLoS ONE, articles will be peer reviewed for scientific validity — but not for importance or impact. Other open-access journals have also adopted this policy, including Nature Publishing Group’s Scientific Reports. It marks a distinction from selective open-access journals such as the forthcoming eLife, which plans to publish only high-impact work. To avoid running out of peer reviewers, every PeerJ member is required each year to review at least one paper or participate in post-publication peer review.
http://www.nature.com/news/journal-offers-flat-fee-for-all-you-can-publish-1.10811

When I worked at ACS, it was proudly stated that their journals only accept the top 10% of articles. That is a mark of quality. Does PeerJ have the same bona fides as, say, JACS?

PeerJ does not have an EIC.

What is the editorial vision of PeerJ? Is it just to publish everything that comes along that has scientific validity? That is like the old saying: throw it against the wall and see what sticks. Or is the goal simply to make money off of scientists who need to be published, regardless of the import of their work, so long as there is a trace of scientific validity, even though what they have done is neither important nor displays any signs of having scientific impact? Is PeerJ the fallback position for someone going for tenure, promotion, or a grant?

We wonder and sit in dismay over the lack of an appreciation for science. With an editorial vision like PeerJ why do we wonder?

Wow, that would really suck if I had to read a paper and think about it in order to evaluate its worth, rather than just the journal title and impact factor! Totally worth $40K per paper.

You really think you can do that with 26,000 papers published in one year alone in PLoS ONE? Good luck getting that or anything else done.

That is a straw man: no one reads all papers published by Nature, either. Everyone should be able to read the papers directly relevant to their research and to have an informed opinion about them. Accepting a paper related to your research “on faith” is not the mark of a critical scientist.

You’re right, those search engine things are still pretty primitive; they hardly ever give you what you are looking for! Someday they will perfect those so I can use some words to define what I am looking for, and I’ll be able to see just the articles I’m interested in. That would be cool…

And tell me you won’t use brands (or that Google or Bing don’t use brands) to sort those results.

I don’t control what the search engine does. What I do is use Google Scholar, then scan the title for whether it is really relevant. If it is, I check and see if I have access. If I do, I click through and read the abstract, and then either read the paper or not. The “brand” is of minimal importance to me. But then again, I know what I am doing.

I see. You might be surprised at how important brand is to citations, and how important citations are to Google Scholar, ipso facto how important brand is to Google Scholar. We’ve recently published a post by Phil Davis about research covering this topic:

http://scholarlykitchen.sspnet.org/2013/03/25/can-f1000-recommendations-predict-future-citations/

Apparently, the F1000 editors know what they’re doing, too, but citations still beat them. And citations are very brand-dependent.

There would be a lot to discuss, but I especially want to stress a point about your questions
“Will I still be able to find quickly and easily good material that interests me?
Can I still trust what I read?”

By asking them in the context of a move toward open access, you are implying (and stating pretty clearly at other times) that subscription journals actually do a good job at this. They do not, at least not more than an open access journal can do.

Let us start with the first one: finding quickly and easily the material that interests me comes in two parts.

First, one needs to identify this material, and this mostly relies on branding (as you stress), databases, and social networks (with the weight of these factors depending strongly on the field; for example, astronomy papers are mostly published first on arXiv, then in two big, relatively equivalent journals without much trouble, so branding is not very strong there). None of these factors is specific to subscription journals; that some OA journals have specific filtering practices does not make it a specificity of OA, but only of those journals.

Then one needs to access this material. There, subscription journals clearly do quite a poor job, as in too many cases one does not have the right to read the papers one needs. I know no library that can afford all relevant subscriptions, and the big deals often make them unsubscribe from important journals.

Let us turn to the second point: Can I still trust what I read?

Here again, it is very bold to imply that the subscription model sets journals at a higher level of confidence than OA does. Each journal, through its editorial policy and process, and very importantly through its transparency, all of which are independent of whether it is OA or not, sets a level of confidence. In fact, there is strong evidence that filtering by expected importance (which you praise so highly) is in competition with filtering by scientific soundness. Nature searches for sexy papers, but does not do a very good job of ensuring that the papers it publishes are sound. Moreover, there are a lot of non-selective subscription journals and subscription journals with very poor review processes, and almost any research paper can find a subscription journal to be published in. I know several subscription journals that do not have editors-in-chief; most of them have managing editors, but in many of them each academic editor makes the decisions about the papers he processes alone. All in all, I do not see any evidence or argument that OA should have a negative influence on any of this. It could have a positive influence if raw data were also made available, so that one could much more easily check the published work.

No, what I’m saying is that journals with editors-in-chief add a layer of peer-review that’s important to the relevance and ranking filtration journals have traditionally contributed to the scientific community. These types of review are undervalued in many OA programs (PeerJ, PLoS ONE, F1000 Research), and those journals have a hard time getting their papers to the right audience, because they don’t have a sharp prow to their ships. They’re more like rafts in the waves, buffeted about by what is submitted, but directionless. This makes it difficult to form an audience of any particular demographic or domain specificity. This hurts editing (who are you editing for?), peer review (what standard are you reviewing to?), and discoverability (can I trust this journal that’s not routinely part of my community?).

OA journals with editors-in-chief seem to be more expensive to run (PLoS Medicine, PLoS Biology, both of which are subsidized by PLoS ONE’s bulk publishing model). So business model really has nothing to do with it. You could conceivably have a subscription journal with no editor-in-chief. Maybe they are out there. I’ve personally never seen it. I think this might be because to attract subscriptions, you need to have a defined market/audience and a respected or at least accountable authority at the helm.

If you respect editorial processes, you need to be willing to be skeptical of those processes that drop potentially important elements. Who sets the standard for peer-reviewers if there is no EIC? Who defines the level and expertise of the putative audience if there is no EIC? Who signals to the community who the journal is intended to serve? Who has the authority to improve editorial processes? If nobody has these and other related roles, why should I be all that interested in the editorial output? I have many other things to do. It’s a competitive information space.

It seems to me that the journals are doing a rather good job when it comes to publishing reliable content, and that includes Nature. The question seems to be: is the content of importance? Again, the PeerJ role and scope statement seems to put quality at the bottom of the list and quantity at the top. See: Science publishing: The trouble with retractions http://www.nature.com/news/2011/111005/full/478026a.html

Yes, and look at where the retractions are coming from: for example, 21 peer-reviewed articles over 13 years from Scott Reuben, based on completely phony data.

http://www.anesthesia-analgesia.org/content/112/3/512

I am not saying Anesthesia & Analgesia and the other journals that published Reuben’s studies based on faked data don’t do a good job of peer review; they certainly do. Just don’t go blaming the rise in retractions on OA journals like PeerJ, which also does good peer review focused on methodology. It’s poor methodology, and in this case outright fraud, that causes retractions, not whether or not an article is, in someone’s opinion, interesting or important. Reuben’s research was certainly considered interesting and important. It just happened to be fake.
