It’s well known that publishers have done an inadequate job conveying the importance and complexity of the work they and their staffs, vendors, experts, and specialists do, day in and day out, to produce thousands of high-level journals, many of them with daily or weekly publication schedules. But it hurts even more when it seems we’ve succumbed to our own inadequate marketing, and can’t muster the effort to reveal the nesting dolls of value inside even one of the activities.

This sad reality appears any number of times in meetings, conferences, and blog posts, in which labeled activities (marketing, editing, copy editing, formatting) are spoken of in passing, as if their value is well-understood and the work involved well-articulated. I was recently in a meeting with a number of publishers when one noted that we couldn’t forget to mention the “value-add publishers bring through peer review.” And we moved on, again without pausing to consider just how many layers of activity that includes.

Image: matryoshka nesting dolls, via Sergiev Posad Museum of Toys, Russia.

Not only are we bad at conveying much about what we do, but we haven’t even internalized it all yet.

This post is about unpacking the “value-add publishers bring through peer review,” with the explicit note that this is but one of the 80+ things that publishers do. Once you see what we call “peer review” unpacked, I hope you’ll agree that it is, in itself, a non-trivial effort, investment, and process when done with serious intent.

Where do we begin? We begin with the existential.

Journals don’t occur in nature. They have to be brought into being by people who invest time and money into their creation. Journals have to exist before they can attract authors, editors, and reviewers, and before staff can put these participants into a process of one sort or another with the implicit or explicit goal of selecting quality materials and delivering them to an audience. The mere existence of a journal creates a priority system of some type, and this priority system drives certain positive behaviors among scientists, as noted by economist Paula Stephan:

Knowledge has properties of what economists call a public good:  once made public, people can’t be excluded from its use and it is not used up in the act of use (non-rivalrous).  Economists have gone to considerable effort to show that the market does a poor job providing items with such characteristics. That’s where priority comes in: the only way that a scientist can establish priority of discovery is to make his (her) findings public. Or, stated differently, the only way to make it yours is to give it away. Priority “solves” the public good problem, providing a strong incentive for scientists to share their discoveries. The upside is that priority encourages the production and sharing of research. There are other positives — one relates to the fact that it is virtually impossible to reward people in science for effort since it’s virtually impossible to monitor scientists. The priority system solves this, rewarding people for achievement rather than effort.  Priority also discourages shirking — knowing that multiple discoveries of the same finding are somewhat commonplace leads scientists to exert effort.

Without journals, we would have to reinvent a major priority system. (Of course, any priority system drives some negative behaviors, as well, and journals are no exception.) Without peer review, we wouldn’t have valid journals. Without editors, we wouldn’t have peer review. And the list goes on, with the essential message being that peer review is not “value add” but is key to the priority system that drives science and the sharing of scientific information. Those who manage it are working essential processes, not marginal activities.

Bringing something into existence means courting risk, and publishers court risk for the sake of facilitating communication. As I wrote in 2012, publishing is not “a button,” but a complex risk activity. Starting a journal requires financial stakes and has no guarantees. Recall that when PLOS started, it needed a multi-million-dollar grant to fund its startup years. PeerJ required venture funding from O’Reilly. Many journals have in their histories tales of brushes with failure or strange paths to modern success. Starting a journal is a non-trivial matter, especially if the goals of the journal are audacious and expansive. (And while there are edge cases of journals starting for far less, these are not the norm, and don’t seem valid for much extrapolation.)

So, the first major piece of value to unpack from the peer review function is the mere existence of a journal and a publisher, both of which require capital, specialist knowledge, and commitment to a complex business in order to exist. This allows an independent peer review system to exist, and is the largest of the nesting dolls.

The next layer comes in the recruitment, management, and evaluation of peer reviewers. This work involves staff, full-time editors, and volunteer editors. Done well, it requires creating and managing an evaluation process, monitoring adherence, and periodically reviewing the results. The hours involved are significant.

Disclosures are another layer of managing the peer review process. In many cases they are not straightforward, and they require judgment and interaction with authors and editors.

Artwork has to be prepared for peer review. If interactive graphics, videos, or audio files are part of an article, ways to include these in the peer review process and gather feedback must be created and managed.

Plagiarism is still a problem, and the peer review process seeks to eliminate as much of it as possible. Often, anti-plagiarism tools are deployed selectively, as using them incurs expense. The results can be marginal or mixed. All of this — selecting which papers to check, evaluating the results of any analysis — requires more time, interactions, and judgment.

Record-keeping is another major part of providing peer review services. Records have to be kept straight during the entire process, which often lasts weeks and can last months. Because most journals are part of stable communities, authors return, whether previously accepted or previously rejected. If corrections or retractions emerge, these records can be important, providing editors with a way to sort out the extent of a problem.

Record-keeping is also important because part of managing the peer review process can involve dealing with legal claims against authors, editors, or journals. Subpoenas can appear. Files must be produced, legal advice sought.

There are other dolls nested inside — publisher insurance, hiring and training of staff, dealing with vendors, creating expedited peer review approaches — which only underscore the complexity of what is inside the label “peer review.”

Peer review is more complicated than it may appear, and it requires many interdependent functions, including building a brand that not only attracts papers but also carries forward the work of establishing a relatively clear and well-understood priority system for scientists and science.

So, the next time you consider summarizing a function involved in publishing with a label and then moving on, take some time to see what’s inside. It may surprise you.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

6 Thoughts on “Buried in the Matryoshka — Unpacking the ‘Value Add’ of Peer Review”

Kent: An excellent summary of what we do.

I think we face a group who, upon seeing someone do something casually, says, “I can do that!” and then walks away. I have found that no amount of talking or writing or any other means of communication can overcome ingrained ignorance.

Indeed, there is a group of people who like to degrade what one does. I don’t know why, but they are there; they do not stop to think, nor break a task down to see the components of accomplishing something.

As my jump master used to say: You ain’t a paratrooper until you jump out of the plane.

Joe Esposito eloquently explained this in his now classic post on “Governance and the Not For Profit Publisher”:
http://scholarlykitchen.sspnet.org/2011/10/24/governance-and-the-not-for-profit-publisher/

The big mistake for most NFPs is what I call the “editorial fallacy”, the view that a publishing operation is entirely editorial in nature and that selecting the finest content will naturally lead to success. The second big mistake is the idea that superior intelligence can solve any problem. Or maybe that’s the biggest mistake. Thus, the distinguished life scientist pronounces on how publishing operations should be run without reflecting that perhaps there is more to the game than being smart. A geneticist is not likely to lecture sociologists on their methodology, nor is a professor of comparative literature likely to offer a critique to the chemistry department, but publishing is one of those fields where everyone is an expert. There are other fields like this as well — politics, certainly: everyone thinks he or she could do a better job than the bozos in office; or the management of a sports team, a task that may very well be as complex as working in some academic disciplines. But publishing is different in that a professor of cognitive science would most likely stop short if asked to manage the New York Yankees, but as for publishing — well, what’s so hard about that?

Kent, great post and great points. There are probably a thousand other factors that impact peer review, not the least of which is that there is not just one “correct” approach. We’ve currently got preprint servers, traditional pre-publication review, post-publication review, open review, and portable peer review.

There are even subsets within these approaches: pre-publication review can be single-, double-, or triple-blind, or open. Open review can mean a variety of things as well. So we’ve got several types of Matryoshkas, each with their own layers.

Each style of peer review comes with pros and cons and it seems that there are those who are so set on proving that “their” version of peer review is the “best” that too much time is spent on tearing down opposing views rather than working to improve on the positive to the benefit of all. I personally feel that a combination of the various approaches is the most valuable.

At its heart there’s a human element to peer review that requires that participants exhibit honesty and integrity, personal and professional responsibility, good behavior, and trust. We’d like to think this is simple, but the reality is that it’s not always the case. Perhaps this is the tiny doll at the center of the Matryoshka.

The layers you describe speak to the complexities that anyone who has ever worked hard at peer review, whatever the approach, has to deal with. I’m not THAT old, but I am old enough to remember the days when authors had to snail-mail multiple copies of their manuscripts to editorial offices, and then those had to find their way to reviewers and back again (there may be a Tolkien joke in there). Plagiarism-checking software is fairly new, but something that’s expected now. Tools to check for image manipulation are emerging. New layers.

I guess my point is that things are a lot better than they used to be, and they continue to get better. But as you point out, peer review and other aspects of academic publishing don’t happen by magic. It takes much experience, work, thoughtfulness, cooperation, trial and error, and resources (including money) to add new layers to the dolls.

Thanks Kent. Editorial offices are spending enormous amounts of resources on “peer review.” We have a person who spends at least half of her full-time job dealing with ethics issues. Another full-time job goes to being the system administrator for our “off-the-shelf” peer review and submission system. Five whole people help authors, remind reviewers and editors, follow up with all parties, review submissions for QC, review final accepts for QC, answer tons of questions, train new editors and associate editors, and generally keep everything moving without missing a detail, field, or step. These are all cogs in the process, and when something does not go well, it has the potential to kick off a spectacular disaster.

I am leaving a ton of stuff out, such as monitoring the health and well-being of the journal and running tons of reports related to this. Starting a new journal is 10 times more work.
