In a recent opinion piece in the Chronicle of Higher Education, Robert Deaner, an associate professor of psychology at Grand Valley State University, a liberal arts college in Michigan, proposed a Web site that would aggregate authors’ opinions of journal review processes and times to publication, so that future authors could avoid journals with negative reviews or a history of long delays before publication.
I suggest the development of a crowdsourced, “author reviewed” journal-evaluation Web site. The idea is that authors from various disciplines would share their experiences with particular journals, both negative and positive. There would be quantitative information such as time until receiving notice of being reviewed, time until receiving first review, total time from initial submission until final publication, and, of course, acceptance or rejection. And there would also be opportunities for rating or commenting on key issues, like the fairness and constructiveness of editors and reviewers and the efficiency of the journal’s production staff.
The Burnable Books blog suggests this might be a “Consumer Reports for journals.”
Framing the idea within the Consumer Reports concept suggests that authors are a customer group, capable of shopping among service offerings based on equivalencies. Which seven-passenger journal with ABS and hands-free phone integration do I want? Clearly, that’s not how journals work, as Rick Anderson recently pointed out, but there is something to the idea. To get at it, we need a better model: a monitor on the trust economy.
In the journals economy, the author’s shopping list shortens almost immediately. Most authors have two to five journals in mind by the time they’re ready to submit, and these journals are well known to them. Authors, or people immediately available to them, know the journals’ reputations for speed, editorial process, and selectivity. Authors often know the journals so well that they can even suggest reviewers.
Generally, authors are satisfied. They encounter submission systems they’ve seen before and receive good treatment. Most journals survey authors to gauge their satisfaction with the review and publication process. In the few dozen of these surveys I’ve seen, the ratio of dissatisfaction to satisfaction is very low. If that ratio is generally low, a site like the one proposed would attract only squeaky wheels, and would quickly lose any initial credibility it might possess.
Social media in academia runs into the problem of incentives right away, raising the question of why anyone would come to a site like this to publicly criticize a journal in which they may later want to publish. Reviewing takes time, and there’s no clear upside. The incentives simply don’t line up.
Authors are only part of the equation for journals. Deaner seems to suggest that one dimension of dissatisfaction an author might express on his hypothetical site would be a low acceptance rate. Journal publishers want to satisfy authors up to a point, but selective journals earn their reputations largely by rejecting a high proportion of submissions. They are not citation mills set up so that authors can generate citable objects in the literature.
In a blink, authors turn into readers, and as readers they want other dimensions of quality; a high acceptance rate is not likely to be one of them. Aligning the interests of researchers, who are both authors and readers, requires balance. Optimizing for one role alone imposes a hard limitation.
Deaner ends with a question: “Why doesn’t this exist already?” In a way, it does, but at the community level and for established journals. The more pertinent question we’ve been wrestling with recently is what to do with all the upstart journals the Internet has fostered. This week, many posts and comments have touched on this issue, from Jeffrey Beall’s list of so-called “predatory” publishers to new OA publishers exploiting CC-BY licenses to validate themselves. In one particularly productive exchange, the notion of a Better Business Bureau for new journals was batted around. This takes the more positive approach of credentialing new journals, rather than consigning them to a negative category they must work their way out of. If it worked, an uncredentialed journal would make authors wary.
So, I don’t think we need a Consumer Reports for journals, but we may need a Better Business Bureau for small and new journals, to separate the legitimate from the fly-by-night. The Internet has unleashed many things, but we don’t need to stand by helplessly. Credentialing the journals that can get their houses in order seems to be a job somebody needs to take on. MEDLINE does this to some extent, but it is a slow process, and confusion with PubMed has diluted its power.
There seems to be a need. Is someone going to fill it?