Better Business Bureau logo (photo credit: Wikipedia).

In a recent opinion piece in the Chronicle of Higher Education, Robert Deaner, associate professor of psychology at Grand Valley State University, a liberal arts college in Michigan, proposed a Web site that would aggregate author opinions of journal review processes and times to publication, so that future authors could avoid journals with negative reviews or histories of long publication delays.

I suggest the development of a crowdsourced, “author reviewed” journal-evaluation Web site. The idea is that authors from various disciplines would share their experiences with particular journals, both negative and positive. There would be quantitative information such as time until receiving notice of being reviewed, time until receiving first review, total time from initial submission until final publication, and, of course, acceptance or rejection. And there would also be opportunities for rating or commenting on key issues, like the fairness and constructiveness of editors and reviewers and the efficiency of the journal’s production staff.

The Burnable Books blog suggests this might be a “Consumer Reports for journals.”

Framing the idea within the Consumer Reports concept suggests that authors are a customer group, capable of shopping between service offerings based on equivalencies. Which seven-passenger journal with ABS and hands-free phone integration do I want? Clearly, that's not how journals work, as Rick Anderson recently pointed out, but there is something to the idea. To get at it, we need a better model: a monitor on the trust economy.

In the journals economy, the shopping list for authors shortens almost immediately. Most authors have 2-5 journals on their list when they’re ready to submit, and these journals are well-known to them. Authors, or people immediately available to them, know these journals’ reputations for speed, editorial process, and selectivity. Authors often know the journals so well that they can even suggest reviewers.

Generally, authors are satisfied. They encounter submission systems they’ve seen before, and receive good treatment. Most journals survey authors to gauge their satisfaction with the review and publication process. Of the few dozen of these I’ve seen, there is a very low ratio of dissatisfaction to satisfaction. If this ratio is generally low, a site like the one proposed will attract only squeaky wheels, and quickly lose any initial credibility it might possess.

Social media in academia runs into the problem of incentives right away, forcing the question of why someone would come to a site like this to publicly criticize a journal they may want to publish in later. Reviewing takes time, and there's no clear upside. The incentives just don't line up.

Authors are only part of the equation for journals. Deaner seems to suggest that one dimension of dissatisfaction an author might express on his hypothetical site would be a low acceptance rate. Journal publishers want to satisfy authors to a certain extent, but selective journals gain their reputation largely by rejecting a high proportion of submissions. They are not citation mills, set up for authors to use to generate citable objects in the literature.

In a blink, authors turn into readers, and as readers they want other dimensions of quality; a high acceptance rate is not likely to be one of them. Aligning the interests of researchers, who are both authors and readers, requires balance. Focusing on only one role imposes a hard limitation.

Deaner ends with a question: "Why doesn't this exist already?" In a way, it does, but at the community level for established journals. The more pertinent question we've been wrestling with recently is what to do with all the upstart journals the Internet has fostered. This week, many posts and comments have touched on this issue, from Jeffrey Beall's list of so-called "predatory" publishers to new OA publishers exploiting CC-BY licenses to validate themselves. In one particularly productive exchange, the notion of a Better Business Bureau for new journals was batted around. This takes the more positive approach of credentialing new journals, rather than putting them in a negative category they have to work their way out of. If it were made to work, an uncredentialed journal would make authors wary.

So, I don't think we need a Consumer Reports for journals, but we may need a Better Business Bureau for small and new journals, to separate the legitimate from the fly-by-night. The Internet has unleashed many things, but we don't need to just stand by helplessly. Credentialing the journals that are able to get their houses in order seems to be a job somebody needs to take on soon. MEDLINE does it to some extent, but that's a slow process, and confusion with PubMed has diluted its power.

There seems to be a need. Is someone going to fill it?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

21 Thoughts on "Do We Need a Consumer Reports of Journals, Written by the Authors?"

Interesting post, Kent. In my talks with researchers I've also found a growing sentiment that a tool like this is overdue. So, to answer your last question: Rubriq is going to fill this need by implementing a credible and transparent journal search/compare/rate system along the lines Dr. Deaner suggests, as a free tool for researchers.

Publishers do provide a service to authors (regardless of business model), and there are several dimensions authors should know about before submitting their papers, especially since they can submit to only one journal at a time and there are real costs, for everyone, associated with making poor journal choices.

I've been asking researchers about this very need, and below is a quick list of data elements they feel would be valuable in an "apples-to-apples" view of journals. I'm sure there are many more we can add over time, but the notion of more transparency should be embraced in any service industry.

Time – average times from submission to first decision, average review cycle times, and average time from acceptance to publication, etc.
Copyright / Intellectual Property Policy
Journal Visibility and Distribution – Indexes, Circulation, etc.
Reputation – Impact Factor, Acceptance Rate, Journal Rank, Strength of Editorial Team, Age of Journal, etc.
Ease of Submission Process / Author Guidelines
Peer Review Process Description / Acceptance Criteria
Quality of Peer Review Comments
Costs – APCs for OA journals, page charges, etc.

Some of these will depend on authors submitting information on their own experience, but much of this information can and should be provided by the journals. Will there be some squeaky wheels? Sure. But if similar systems can be used for searching, comparing, and rating elementary and secondary schools, universities, and health-care providers, then why not scholarly journals?
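To make the comparison concrete, here is a minimal sketch, in Python, of what one record in such an "apples-to-apples" system might look like. The field names simply mirror the data elements listed above; the class and the ranking helper are illustrative assumptions on my part, not a description of Rubriq's actual design.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical journal-comparison record. Field names mirror the data
# elements listed above; this is a sketch, not any real service's schema.
@dataclass
class JournalProfile:
    name: str
    # Time (averages, in days; journal-reported or author-reported)
    days_submission_to_first_decision: Optional[float] = None
    days_per_review_cycle: Optional[float] = None
    days_acceptance_to_publication: Optional[float] = None
    # Copyright / intellectual property policy
    copyright_policy: str = ""
    # Visibility and distribution
    indexes: List[str] = field(default_factory=list)
    circulation: Optional[int] = None
    # Reputation
    impact_factor: Optional[float] = None
    acceptance_rate: Optional[float] = None   # e.g., 0.12 for 12%
    journal_age_years: Optional[int] = None
    # Process and costs
    peer_review_description: str = ""
    apc_usd: Optional[float] = None           # article processing charge
    page_charges_usd: Optional[float] = None
    # Author-supplied ratings (1-5), averaged across submissions
    review_comment_quality: Optional[float] = None
    submission_ease: Optional[float] = None

def rank_journals(journals: List[JournalProfile],
                  key: Callable[[JournalProfile], Optional[float]]
                  ) -> List[JournalProfile]:
    """Rank journals by a chosen metric, skipping missing values."""
    known = [j for j in journals if key(j) is not None]
    return sorted(known, key=key)
```

An author could then sort candidate journals by, say, speed to first decision with `rank_journals(candidates, key=lambda j: j.days_submission_to_first_decision)`; the point of the sketch is that once the fields are standardized, any of the dimensions above becomes directly comparable across journals.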

This seems a better way to distinguish the reputable journals from those that may not be, and it would provide authors with the information they need to make educated decisions.

All of the quantitative or objective information you list is already available on the websites of most reputable journals. All it takes is a little exploration. What might be helpful is a template page, so prospective authors could find this information more quickly.

This is a wonderful idea, worthy of support and enthusiasm.

One suggestion is that such a "BBB for Journals" should make sure it has some method to prevent less ethical publishers from attempting to thwart the system from the get-go.

We see what happened on Amazon, with so many rave reviews actually prepared by the publishers or their PR firms, or even the authors and their friends.

Is this technically possible?

“Consumer Reports” seems the wrong metaphor here. That magazine is set up as a neutral third party that does its own (supposedly) unbiased testing of products. For a parallel here, you’d need them to submit papers to journals and report back on performance.

Instead, this is more of an Amazon style review system, which, as has been discussed ad infinitum here and elsewhere, is something useful, but also something flawed in its own way.

As for the BBB approach, doesn’t OASPA do something like this for OA journals?

It would be nice if OASPA did perform a "ratings/feedback/report" function, but does it? It's a respected membership organization, but it doesn't seem to rate or report on individual journals, other than to imply respectability at the journal level by dint of the publisher's membership in the organization. If this is a positive start, might it be helpful if OASPA provided on its website a constantly updated union list of its members' journals, for both academics and tenure-and-promotion committees?

While most journals do a pretty good job of reviewing, and most editors are sensitive to the needs of authors seeking promotion and tenure, almost everyone I know in academia has a personal story like that of Robert Deaner's friend Jana. Just look at some of the comments on the article in the Chronicle of Higher Ed.

It's not that you get rejected; that is expected fairly often. It's when, as happened to Jana, editors sit on the manuscript for 8 months and then say, well, it doesn't really fit with what we publish, and we didn't send it out for review. Or, worse yet, sit on it for 6 months, ask for a bunch of changes, sit on the revised version for another 6 months, and then say, well gee, the manuscript doesn't really fit with our scope. Well, crap: any decent editor, or should I say any decent human, would have told you that in a couple of weeks.

What gets me is that editors consider it a crime somewhere between armed robbery and murder for authors to simultaneously submit a manuscript to multiple journals, yet this type of behavior by editors is shrugged off as no big deal. If there is an expectation that authors refrain from simultaneous submissions (which of course makes sense), it seems editors have an obligation to process manuscripts as quickly as possible. I've been an editor and understand how it is possible to get swamped, and for manuscripts to fall through the cracks and inadvertently get sidetracked for one reason or another. There is no excuse, however, for the arrogant disregard some editors have for authors and for the effort that goes into a manuscript and the research behind it. I don't know if Robert Deaner's suggestion for such a site would work, but I certainly understand and sympathize with his motivation.

If 5% of your authors have a bad experience, and you’re a major journal, that can amount to enough anecdotes to be observable. But the silent 95% are 19x more important. So, yeah, everyone has a story in this town, but mostly it goes pretty well. The question is whether there’s enough to make a social media project work around it, and at that ratio, I think it’s unlikely. And there are other disincentives along with a general lack of positive incentives. The comments on CHE are inevitable, but they don’t answer the question.

Even rejected authors are often satisfied with the process at most journals.

The prohibition on double submissions comes from lessons learned. As you note, there are varying degrees of latency between journals. Say you get accepted at one journal while your duplicate submission at Journal 2 is out for review. What do you do? Make the editors and reviewers at Journal 2 angry by confessing this? What if Journal 2 also accepts it a week later, and was the journal you really wanted to publish in, but you've already agreed to publish in Journal 1? It's not arrogance, but an attempt to keep the process from becoming completely FUBAR.

As for the perception of "arrogant disregard," sometimes that's just objective third-party behavior. At some point, to a surgeon or physician, you have to be just a piece of meat or a set of vitals and findings. Sure, it's nice if they have a good bedside manner, but ultimately, if they aren't able to step back and treat the condition well, all the warm and fuzzies in the world won't matter as much as competence and quality.

Read a little closer: it's not about being rejected, and I didn't say simultaneously submitting manuscripts should be allowed. What I said was that if you are going to require that authors tie up half a year's work on a study while you decide whether or not to accept their manuscript, you should have the common decency to make the process as fast and efficient as possible. Most editors do, but a few don't.

As for arrogance, other than your comments above, I was talking about the editor who screwed over Robert Deaner's friend for no other reason than to be mean, and because they could.

The behavior of editors is idiosyncratic. Having had experience hiring and replacing hundreds of journal editors over 30 years in the scholarly commercial sector, I can report the full gamut: the best were fastidious, fully professional, fast, fair, and scrupulously careful; the worst were disdainful, even arrogant. Some candidates sought or accepted a journal editorship when they should not have. Some had unrealistic expectations. Others rose to the occasion. Was it possible to tell in advance who would or would not be a great journal editor? Most of the time it was, but not all of the time. Editor selection and replacement was a skill set acquired over time.

I strongly agree on the need. The arrogance level can be high even at second- or third-level journals, let alone the top journals. Eight months is much too long for any journal.

"Or, worse yet, sit on it for 6 months, ask for a bunch of changes, sit on the revised version for another 6 months, and then say, well gee, the manuscript doesn't really fit with our scope."

Or, even worse, someone moves the goal posts during revision of a paper (see http://journalofpathology.wordpress.com/2013/05/15/moving-the-goal-posts/)... but things are sometimes complicated!

We have all had bad experiences as authors, but I do think that most editors and editorial teams try (as one might say) to "play with a straight bat"!

Double submission by authors just complicates everything and wastes scarce resources!

With my tongue placed firmly in my cheek...

If authors have a “Consumer Reports” of journals and editors, can editors have something similar about authors?

😉

Kent, in citing my “signal distortion” post, you’re actually representing my argument as the opposite of what it is. In my view, authors are in fact a customer group, shopping between journals that offer fundamentally equivalent services: each offers some combination of editing, distribution, and prestige. So an author choosing between Biology Journal A and Biology Journal B is choosing between equivalencies the same way a car buyer does when he chooses between a Ford and a Chevy (both of which do pretty much the same thing, though one may do it better than the other), whereas a library or reader who has to choose between those journals is more like a homeowner choosing between the gas bill and the power bill (because the two journals offer completely different content, albeit within the same discipline). This isn’t to say that readers aren’t customers as well — just that authors and readers are both customers of two different services provided by publishers.

Interesting! You're absolutely correct (quoting from your post: "Journals do compete for content suppliers (authors) because every journal provides authors a functionally similar service."). So I apologize for a mischaracterizing citation. However, it's an interesting point to debate. One study, which I think Phil analyzed, found that authors shoot high, then, if rejected, quickly settle on what was probably the most likely outlet anyhow, and doing so garners them more citations. Because authors want readers, my internalized understanding is that their behavior is not about "services" but about "relevance" and "readership." In this case, authors are in much the same boat as purchasers: even if 1,000 different journals use EZManuscript3.2, authors aren't attracted by that. They are attracted to the same inexplicable outlets libraries have to buy, and in much the same way, I believe.

So, I apologize for citation misdirection. That was not cricket. However, I think there’s more here than just competing for authors with functionally similar services.

No worries (I knew it wasn’t intentional), and I agree that it’s a complex and interesting question — maybe worth a whole post of its own to tease out the various issues and ambiguities.

As I noted in my comment on the CHE piece, the Times Higher Education Supplement performed a valuable service by regularly reviewing journals. But those reviews focused not on the metrics discussed here but rather the intellectual content and substance of the journal as a contribution to the field. That kind of review is perhaps even more valuable for scholars (and P&T committees) than the kind discussed here. It certainly could help with the problem of “predatory” journals (though there are so many such journals these days that the task of reviewing them all in this manner would be daunting indeed).

Many years ago I published a book such as you describe for (if memory serves) microbiology. It did very poorly.

Understanding what really motivates the contribution to and the use of this BBB would be interesting.

The tech to build it is here, but I wonder if the motivations that could, or could not, drive its use are well understood. What fundamental behaviors could motivate publishers to systematically provide data, and/or authors to regularly score journals? Who are they creating this for, what do they want in return, and what could drive mass utilization?

For some Friday fun: should a BBB on journals (publishers and editors) be complemented by a BBB on authors? Maybe that would be equally useful for the advancement of research…
