The vast majority of academic authors are supportive of peer review, believe that the process improves their manuscripts and, when author identities are properly blinded, helps to minimize discrimination, a recent study reports.

The white paper, Peer review in 2015: A global view, was released Friday by Taylor & Francis. In it, readers will find a summary of an online survey and six focus groups. The white paper is accompanied by a supplement that reports individual survey questions and their results.

[Image: Toolbox, via Per Erik Strandberg.]

As in other peer review surveys (see Sense About Science, 2009, and, more recently, NPG’s Author Insights, 2015), authors were largely supportive of the process and conservative in their views. There wasn’t much evidence that the “system is broken,” contrary to the opinions expressed by frustrated academics and tech companies promising to fix the system.

The online survey, sent to Taylor & Francis authors who published in 2013, asked a lengthy set of questions that attempted to distinguish between respondents’ experiences and their expectations. Responses were divided by broad discipline (STM versus HSS) and by role (author, reviewer, and editor).

The response rate for these surveys was low: 5.5% for STM and 11% for HSS. From the world map on page 4, it is also clear that T&F had a very difficult time getting responses anywhere except North America and Europe. As a result, we should not consider the results to be representative of the STM and HSS populations. (Note: The authors of the report confuse statistical precision with sampling representation.)
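To make that distinction concrete, here is a minimal simulation sketch in Python. The numbers are invented for illustration (only the population size is borrowed from the report): when respondents self-select, a large sample produces a narrow confidence interval around a biased estimate, so the estimate is precise without being representative.

```python
import math
import random

random.seed(1)

# Hypothetical population of 43,296 authors (the size is borrowed from
# the report's STM figure; the opinions and response rates are invented).
N = 43296
population = [1] * int(N * 0.60) + [0] * (N - int(N * 0.60))  # 60% approve

# Suppose dissatisfied authors are more likely to return the survey.
def responds(opinion):
    response_rate = 0.03 if opinion == 1 else 0.10
    return random.random() < response_rate

sample = [opinion for opinion in population if responds(opinion)]
n = len(sample)
p_hat = sum(sample) / n

# Precision: with a couple thousand respondents the margin of error is tiny...
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# ...but the interval sits nowhere near the true 60% approval,
# because the sample is precise without being representative.
print(f"n = {n}, estimate = {p_hat:.1%} +/- {moe:.1%}, truth = 60.0%")
```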

Respondent fatigue is also apparent as the researchers documented a “sizable, and sudden, drop-off in the response rate” (p.12) midway through the survey. This is not entirely surprising, given the survey length and the cognitive work asked of respondents.

While the questions posed in the survey were clear, I found many of them leading. For example, Q8 asks:

In your opinion, how capable are each of the following types of peer review of preventing discrimination based on aspects of the author’s identity (such as gender, nationality or seniority)?

This question, posed hypothetically, leads respondents to select Double Blind Review, the only form of peer review in the answer list that attempts to conceal author identity. Similarly, the wording of Q9 (delaying publication of a competitor’s work) and Q10 (encouraging favorable reviews) also leads respondents to rate Double Blind Review above all other types of review.

Surveys are extremely difficult to construct and analyze, and I don’t wish my remarks to be construed as entirely dismissive of this report. In spite of its methodological and analytical weaknesses, there is a lot that can be learned from this survey. But just like mom and apple pie, it doesn’t tell us anything that we don’t already know: most people like their mothers, or at least the idea of motherhood.

The fundamental weakness of the T&F study, and others like it, is that it treats peer review as a concept rather than as a collection of diverse tools used in a varied set of situations for specific purposes. No one would seriously disagree that we need a toolbox, but it’s the tools we should be evaluating.

Nothing exemplifies this confusion between peer review as a concept and peer review as a toolbox more than survey question 1, which asks authors about the “purpose of peer review” and provides them with a list of objectives, from improving the quality of a paper to detecting plagiarism and fraud. This is like asking a carpenter what the purpose of a toolbox should be by providing him with a list of jobs ranging from hanging a picture on the wall to determining whether the house was wired to meet code.

I was contacted by Taylor & Francis to review this survey for The Scholarly Kitchen, so in a sense, I am participating in post-publication review. I do feel that this report would have benefited greatly had it gone through rounds of pre-publication review — especially by those trained in social science research and statistical analysis — before being disseminated.

If publishers are sincere in their intention to learn about how to improve peer review, the first step is to stop thinking about peer review as a concept and start thinking of it as a toolbox.

While this change in thinking may seem like a simple rhetorical flip, it fundamentally changes the way we pose our investigation, from a marketing approach (“What do authors expect from peer review?”) to a scientific approach (“Can we identify specific problems and develop tools to solve them?”). Surveys like T&F’s Peer Review in 2015 look at peer review from a marketing perspective. Regrettably, an approach that allows respondents to rate and rank tools based on their overall popularity is not going to improve our understanding of which tools work best in any particular situation.

Should it matter if screwdrivers get the highest user response, followed by hammers and saws, if your job just requires some spackle, tape, and a putty knife?

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

8 Thoughts on "Survey: What Do Authors Expect From Peer Review?"

A good analogy, Phil. Thanks. Surveys can also be seen as sort of like fishing off a trawler versus fishing with a handline: different purposes, different skills, and different tool sets. I suspect peer review and its products are more like fishing with a handline. T&F is asking questions similar to many surveys I’ve seen recently. They read like attacks on some concept the authors want to “get,” i.e., discredit. But the best analogy came from a former T&F employee. Journal start-ups, he argued, were like fishing with a handline: hoping something will “catch” out there and you will make more money than your most optimistic projections. Or it will go the other way, and you will have an overpriced, underwhelming catch if you come up empty. No amount of surveying will change that, I suspect, without a change in the expectations of the writers of the questions. Not every questionnaire is worth the trouble. Chuck

Hi Phil. One of my professors when I was in grad school said it very well. If you conduct a survey without piloting it, your survey becomes the pilot.

We appreciate that peer review is complex, and acknowledge that this research could not possibly attempt to cover all the nuances involved. We aimed to address what the practical experience is like now for those involved in it, and believe we did establish areas where respondents were clearly expressing a desire for process improvement on issues like timeliness, politeness, and communication. This is something we must continue to improve to ensure we serve researchers better.

You mention low response rates and respondent fatigue, with the drop-off in the ‘ethics in peer review’ survey section highlighted as evidence of this. We can’t be sure whether it was the complexity of the questions, fatigue, or whether respondents felt uncomfortable addressing issues such as institutional, regional, seniority, and gender bias. Avoiding this issue might have increased our number of respondents, but we believe it’s a crucial area to address.

Whilst the survey responses aren’t in the tens of thousands, the views of some 7,400 researchers cannot be dismissed, and a 9% response rate is commensurate with other surveys of similar scope and focus (the excellent Sense About Science survey you mention received a 10% response rate, for instance). On the point you raise about leading questions, this was certainly never the intention. The aim was to make every question clear and understandable, encouraging more non-native English speakers to complete the survey.

Your detailed feedback has given us much food for thought though, and we’ll be taking on board all your comments for the future. Thanks for taking the time to review.

Elaine, Thanks for your response.
I’m still not sure your analyst understands the difference between representation and precision. From page 4:

Survey response confidence intervals
Responses from researchers in science, technology and medicine (STM): 2,398 over a population of 43,296, meaning we can be 95% sure that any survey statistic from the STM respondents lies within 1.95% of the true value for the STM population.

The calculation provided above assumes that the researchers were able to get a truly random sample from the target population, which, based on the response rate, geographical bias, and drop-off rate, does not appear to be a valid assumption.
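For the record, the report’s 1.95% figure is reproducible as a textbook margin-of-error calculation with a finite population correction; a quick sketch, assuming the conventional worst case p = 0.5:

```python
import math

# Figures quoted on page 4 of the report.
n = 2398      # STM respondents
N = 43296     # STM author population surveyed

z = 1.96      # 95% confidence
p = 0.5       # conventional worst-case proportion

# Margin of error for a simple random sample, with the
# finite population correction applied.
fpc = math.sqrt((N - n) / (N - 1))
moe = z * math.sqrt(p * (1 - p) / n) * fpc

print(f"margin of error = {moe:.2%}")  # -> 1.95%, matching the report
```

The arithmetic checks out, but it quantifies random sampling error only; it says nothing about the nonresponse and coverage biases described above.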

It is not impossible to get a much higher response rate from scholarly authors. For example, the last two surveys I worked on received a 44% and 46% response rate, respectively.

However, any response rate less than 100% is prone to nonresponse bias, and it’s up to the researcher to explore this source of bias. With regard to this particular survey on peer review, it is likely that people with poor recent experiences, those with strong opinions, and those with a lot of time on their hands to take extensive online surveys dominate the response set.

This does not mean that their responses are invalid, but the researchers need to be careful about generalizing their results to the entire STM and HSS author population.

The report provides a detailed overview of the perceptions, expectations, and realities of the peer review system. However, it would be interesting to understand the views of authors, editors, and reviewers on certain problems that surround the peer review system: How can peer review rigging be avoided? Should peer reviewers be considered experts? How effective is peer review in highlighting specific problems with a manuscript before it reaches publication?
