In this episode, Peter Binfield, the publisher of the innovative open-access journal PeerJ, talks with host Stewart Wills about progress at PeerJ in the seven months since the journal’s launch, its unique business model, the key role of cost control in making PeerJ sustainable, and his perspective on this latest venture in the context of his 20-year career in scholarly publishing.

Listen:

Download MP3 of this episode

Subscribe:

Scholarly Kitchen podcast on iTunes

Scholarly Kitchen podcast RSS feed

Discussion

24 Thoughts on "Scholarly Kitchen Podcast: Peter Binfield on PeerJ"

Very disappointing. You didn’t press on a single key issue, ask for a single specific number, or question anything he said. That was one long advert for PeerJ. A wasted opportunity.

Can’t agree with Thomas re. “That was one long advert for PeerJ”.

Whilst there was a fair bit of discussion about PeerJ, there was an equally fair amount of discussion about publishing generally. Good job, Stewart.

Discussion implies a back-and-forth. There really was none.

Pete was allowed to trumpet his company and often bend the truth to push his agenda.

One clear example: Pete declares they don’t have any offices; that’s how they save money. Yet their website says they have offices in San Francisco and London. Now, these offices could just be addresses for mail, or even people’s homes. The key point is this: when Pete wants to talk about slashing costs, “he doesn’t have an office”; when he wants to look like a legitimate business, he has two.

The podcast was full of examples of that type, which were just allowed to pass unquestioned.

There’s a really interesting contradiction in what Peter says in this interview. He attributes much of the planned success of PeerJ to cutting costs, and talks at great length about all the things a publisher normally provides that they are choosing to do without. Yet at the conclusion of the interview, he talks about how publishers are moving more and more toward being a service industry, a midwife charged with presenting and distributing the information from researchers in the optimal way.

So are these two notions in conflict? If we cut costs to the bare bones, are we providing optimal service? To use one example Peter brought up, if we do away with our legal department, we no longer provide any support for authors in protecting their rights and reputation.

I guess you could do both (cut costs while also focusing on author services) as long as the features and services being cut are those that were traditionally reader-facing rather than author-facing. It seems to me that publishing has always been a service industry inasmuch as it deals with authors: authors come to the publisher’s table with a valuable commodity in a raw form, they hand it over, and publishers provide authors the services of filtering, review, editing, and dissemination, all of which are valuable. Then readers come to the publisher’s table with cash, they hand it over, and they get access to the product that the author created and the publisher enhanced. So publishing has been a service industry in its orientation to authors, and a commodity industry in its orientation to readers. In an environment where readers are no longer paying — either because they aren’t interested or because the content is being provided for free under an OA regime — the only business left for the publisher is the service it provides to authors. But that’s not a new service, it’s just a new concentration of focus.

Filtering is not a service to authors. When I submit my work to a journal, I want it to be published, not discarded as “insufficiently novel” or similar. That’s one reason why, as an author, PLOS ONE and PeerJ are much better for me than Nature and PLOS Biology.

And sure enough, filtering is one of the “services” that OA publishers, who are paid by authors rather than readers, are discarding.

(For what it’s worth, I consider filtering a disservice to readers, too. I’d much rather all competent papers were published, and I could do my own filtering based on what I’m interested in. But I realise that others’ mileage varies on this.)

Filtering is in some ways a service to authors, if their papers pass muster and are accorded a high slot in the hierarchy. Getting a paper in Science or Nature can provide a major boost in terms of career advancement or funding.

And we essentially live in an era where everything does get published (I think this is an important role that PLOS ONE plays, getting a lot of material that might not be made public out there on the odd chance that it may prove useful). I take the position that the more filters available, the better.

So, publish everything reasonable and find the right level of sorting for your needs. If you just want an unfiltered flow then you probably do a Google search on a topic and dig deep to find everything coming out. If you want a certain level of filtering and work in biomed, you use PubMed which requires some level of quality for inclusion. And if you’re like most researchers, you note where articles are published and this helps you prioritize your reading list. But each step for the reader is entirely voluntary.

That all makes sense. I’ve sometimes analogised peer review to a hazing ritual. I guess that goes double at the glam mags: 90% of the purpose of their review processes is so that the people who make it through to the other end have something to crow about. (We know that the filtering done by those journals doesn’t by any means pick out the best science.)

BTW, as a point of interest, when I was at the Scholarly Publishing: Evolution Or Revolution debate in Oxford a few months ago, several publishers mentioned their acceptance rates. At PLOS ONE it was about 60%, and very similar figures were given for Nature’s Scientific Reports and one other journal (whose name, irritatingly, I can’t remember) which also practices “accept if it’s sound” peer review. I was surprised how low those figures are, suggesting as they do that 40% of all submitted articles are essentially unpublishable.

I’d be interested to hear from editors of mid-to-low-ranked conventional journals, but my guess would be that 60% is pretty much the standard acceptance rate. If I’m right, then it seems the PLOS ONE model of peer review was not a novelty introduced by that journal; they were merely the first to come out and say it.

Peer review can only be analogized to a hazing ritual if it’s done badly. If it’s done right, then I would argue (with David) that its filtering effect does, in fact, benefit the authors who go through it. It acts as a facet of certification, a service that is of critical importance to scholarly authors, especially those on the tenure track.

Sure. The hazing analogy is a rather snarky one, which by no means applied uniformly. I have had peer-reviews that have improved my papers. But, honestly, they’re not in the majority — or at least, for most reviews I’ve had, the incremental improvement in the resulting paper was not a good use of the significant time involved.

Filtering is a service for both authors and readers.

In a recent survey I performed, the number one issue that came back from scientists was not a lack of access to research, or publishing fees, or the review process. It was, by an astonishing margin, having too much to read, not knowing what was important, and not having enough time to read it. They wanted an efficient way of filtering content, be it through selective journals, recommendations from colleagues, or clever search algorithms.

As a result, scientists still look to long-running, subject-specific journals with good reputations as a way of helping them save time. For the author this means an increased chance of being read by the right audience, which is a service. That’s going to stay the case for the foreseeable future, and widespread, low-barrier OA is only going to increase this problem as scientists have more to read and less time to read it.

We filter every day and we trust other companies to do it. I use Google to filter my search results, BBC to filter my news, Pandora filters my music, Netflix filters my videos. Yet when it comes to science, we’re throwing the baby out with the bathwater.

I don’t want to make it sound like I disagree with OA as a principle; I don’t, and I have worked on an OA journal. However, we position OA and OA journals in a shining light. We say they are gold, or green. They are open! But we should also acknowledge the accompanying issues; otherwise we’re doing a disservice to our authors and readers.

Of course filtering of some kind is a service to readers. But filtering performed by a journal excluding articles for being “insufficiently novel” is not. If someone wants to publish an extremely nice article on serial variation in the parapophyseal laminae of the dorsal vertebrae of diplodocine diplodocids, then I absolutely and emphatically want that article to be published, and I will find it. A journal that rejects it on the basis of aiding readers is not helping at all.

Any scientist who thinks they can stay on top of their field by reading, or even skimming, everything that appears in their field’s journals is living in the past. It’s been a while since that’s been possible, at least without investing hours a day. We need (and have) completely different, and much more personalised, filtering mechanisms. Your examples of Google, Pandora and Netflix are all of this kind. All three “publish” orders of magnitude more than you could ever scan by eye; you use post-hoc filtering to pick the subset that matters to you.

While finding a very niche article on something you’re extremely interested in is one route to finding research, it is not the only way and you’re applying a one-size-fits-all approach to science publishing.

My analogy would be that you’re confusing the web with websites: saying that all web pages are of equal value, no matter the domain or the author, as long as you can find them. That’s just not true. The BBC offers far more value to me on a daily basis than fixmyiosdevice.com does. Yes, when I’ve broken my iPhone I’ll search for the latter, but it’s not part of my routine.

The same goes for good journals. I’ll read Current Biology for off-the-beaten-path articles, Genome Research for top articles in my field, and Nature for mainstream high-impact science. Do I miss items? Yes. Do I skip articles because they’re entirely irrelevant? Yes. But I also discover a lot that search results would never have returned. That’s a service to the reader, and a service to those being read.

With the approach you’re suggesting, it makes no sense for you to continually promote one publisher or journal over another. As long as it’s OA and appears in a search engine it shouldn’t matter if it’s PeerJ, Plos, or Elsevier. Because “you will find it”. In fact, we wouldn’t even need a publisher, just a repository with a good feedback system.

When the reality is, important research should find you.

Does PeerJ have a press dept., or has that been culled for not being a service to authors too? What about the service to authors of getting the results of research in front of people who aren’t even looking for it!

“With the approach you’re suggesting, it makes no sense for you to continually promote one publisher or journal over another. As long as it’s OA and appears in a search engine it shouldn’t matter if it’s PeerJ, Plos, or Elsevier. Because “you will find it”. In fact, we wouldn’t even need a publisher, just a repository with a good feedback system.”

Yes, that’s it exactly!

“Does PeerJ have a press dept., or has that been culled for not being a service to authors too?”

As far as I know there is no press department. Pete seems to do a decent job of keeping a certain amount of media attention (this podcast, for example!) in among all his other duties.

“What about the service to authors of getting the results of research in front of people who aren’t even looking for it!”

We call that Spam.

Pete keeps the attention on him and his journal. He didn’t once mention a single piece of science.

Getting science in front of non-scientists isn’t spam. It’s very important for improving public opinion when it comes to budgets.

It’s important though, to acknowledge that it’s not some faceless algorithm called “journal” that’s making the judgment call and filtering the paper (thumbs up or thumbs down as far as level of interest). It’s a carefully selected panel of experts. We hear so many complaints about relying on poor metrics or shortcuts for evaluating the work of a researcher. The conclusion of those complaints is nearly always that to really understand the value of the work, one has to carefully read the paper in depth. Isn’t that exactly what the peer review process (for most journals, assuming the editor is doing their job properly) provides? What possible better filter mechanism can one ask for? This paper has been read by experts in the field and ranked at the level of this journal.

It’s not a disservice to readers to reject articles under those circumstances. The paper, no matter how obscure, will get published as long as it is the least bit competent. Rejection does not reduce the overall quantity of articles, it just moves them to a different journal (though to be fair, it does delay access).

You may not value the judgment of your peers and choose instead to use different filtering mechanisms. But for many, this is a valuable way of prioritizing one’s reading stack: I know that several of my colleagues who were carefully selected for their expertise have read this and judged it to be of X quality. Thus I can slot it into my stack here.

That’s not the only filtering mechanism people are going to use (“how directly relevant is this to my work” is probably more important), but it is valued. More filters available = better, and each reader gets to decide how best to use those filters.

David, I think that example cuts both ways. Let’s be clear: the legal department works for the publisher and is focused on protecting the publisher’s interest, not necessarily the author’s. There are times when the publisher’s interest may coincide with the author’s, but others when it might not. For example, if there is a disagreement between the author and the publisher over what is allowed in the author agreement, that same legal department could be suing the author.

It’s a matter of providing the services that authors want (and are willing to pay for) rather than the ones that the publisher has traditionally provided and keeps providing out of habit.

Here’s a slightly more cynical example. Some publishers perform a plagiarism check on submitted articles. That is a service to readers, but not to authors. As an author who doesn’t plagiarise, I don’t want to pay you to waste your time doing this; and of course if I were an author who DID plagiarise then I would certainly not want you to check!

So this is an argument against Gold OA; in this particular area, readers will be less well-served when the author is the customer.

(For avoidance of doubt, I remain very much pro-Gold OA. But that doesn’t mean I’m blind to its drawbacks.)

Mike, as an author certainly you want to be protected against those who might plagiarize your work?

That is a good point. Thomas Hilock’s is even better. Yes, taking a longer view, there’s a case that plagiarism detection is in my interest as an author.

“Some publishers perform a plagiarism check on submitted articles. That is a service to readers, but not to authors.”

That’s a rather short-sighted view. Would you want to publish in, or to have published in, a journal whose reputation is tarnished by widespread plagiarism?
