Recently I submitted an article for publication in a peer-reviewed professional journal. The process was a nightmare. In order to finalize my submission, I had to

  • negotiate a misleading and counterintuitive third-party platform
  • read and try to absorb several pages of arcane (and sometimes self-contradictory) format guidelines
  • categorize my article according to a rubric that did not make sense
  • follow an uploading process that left me, at several points, unsure of whether I would have the opportunity to include essential figures

Never has the word “submission” seemed so bitterly apt as it did during this process.

Even worse, I strongly suspect that I screwed up at several points in the process, and that while reviewing my submission, the editors probably rolled their eyes and said something to the effect of “Why won’t these stupid authors read the directions?”

There’s a simple answer to that question, of course: your directions suck, your submission process is ridiculously and needlessly complex, and authors have better things to do than learn and adapt to your workflows.

Based on this and similar personal experiences, and on what I’m hearing from fellow authors (and editors), it seems clear to me that we have a major problem in the world of scholarly publishing: too many publishers are making life easier for their editorial staff by making life harder for their authors. The fact that a journal will typically have many, many more authors than editorial staff suggests that this approach is exactly backwards. What I preach to my fellow librarians would seem to apply here as well: we need to focus on bringing complexity indoors—imposing it on ourselves behind the scenes rather than imposing it on those who are trying to use our services.

In libraries, the negative response I get to that assertion is often based on the fact that our patrons are mostly students who are here to get an education. Why should we try to make life easier for them? College isn’t supposed to be easy (so the argument goes); it’s supposed to be difficult.

I actually agree with that statement. But my rejoinder is always the same: yes, college should be difficult. It should be a challenging, stretching, exhausting intellectual experience. But it seems to me that the challenge and the stretching should come from engagement with intellectual content—not from the process of trying to get access to the intellectual content.

Of course, it’s also true that libraries are in a very difficult competitive situation when it comes to patrons: an awful lot of what the library offered in the past at a high cost (in time and energy) is now on offer at a very low cost from competitors, and this has implications for how cavalierly we can treat our patrons’ time and patience. Publishers are competing for authors as well, and it’s only to be expected that those publishers who have the least difficulty attracting authors should also be the ones that worry the least about making it easy for authors to submit content.

But I don’t think that makes it excusable.

So, enough with the complaining. What can we do to improve the manuscript-submission process? I have a few ideas, and I welcome more and better ones from commenters:

  • Instead of making every author master the arcana of manuscript preparation for your particular journal, the publisher should have a few people master it internally so that authors can submit their manuscripts according to an absolutely minimal number of standardized rules. This might be expensive—but consider the expense imposed on the system as a whole by the current arrangement.
  • What if we had an industry standard for the manuscript submission process? The standard process would make life easier for authors, and adherence to the standard would both create an advantage for journals in their competition for top authors and also save every journal or publishing platform from having to reinvent the wheel of article submission itself.
  • Given that a standard that works for social science papers written in MS Word may not work for mathematics papers written in LaTeX, what if standards were established by discipline rather than industry-wide?
  • Perhaps the first round of editorial and peer review should be format-agnostic. Deal with pure content first, and accept or reject the paper on that basis—if accepted, then and only then will the author have to deal with format specifications. Make the initial submission process a simple matter of sending in one’s paper as an email attachment in one of several acceptable document formats.

Now, I realize that there are probably publishers out there already doing some (or maybe even all) of these things. If so, I apologize for my ignorance and would be interested in hearing about them from commenters—please do let us know which publishers are doing this particularly well. I’m also probably missing other and better ideas that are obvious to others; by all means, let’s hear them.

But one way or another, if we could get this problem resolved by the next time I have to submit an article, I would really appreciate it.

Rick Anderson

Rick Anderson is Associate Dean for Collections and Scholarly Communication in the J. Willard Marriott Library at the University of Utah. He speaks and writes regularly on issues related to libraries, scholarly communication, and higher education, and has served as president of NASIG and of the Society for Scholarly Publishing.

59 Thoughts on "The Manuscript Submission Mess: Brief Notes from a Grumpy Author"

I would apply the same comments above to the peer reviewing process. As a volunteer who doesn’t get all that much benefit from donating my valuable time to the scholarly communication process, I shouldn’t have to jump through similar hoops to help the publisher. I was very close last night to giving up and telling the publisher to go find someone else to review the manuscript. At least the author gets a possible publication and usually some reward from his/her institution.

Early in my post-secondary life I realized the same and decided, for better or worse, that I would only publish with those who made it easy for me to submit – 70 peer-reviewed publications later I am happy I chose this route. I am now a full prof in my 30th year of teaching.

Thomas G. Ryan
Nipissing University

Rick, I share your pain! Having just submitted several mss. reports this spring on platforms that made me create a password, no, use a password I’d last used for this one journal last year (“forgot your password?” Well, yes), and then answer a series of questions rather than submit the detailed report I’d written in response to the piece, I started a list of the journals I won’t review for again. And this is what submitting letters of recommendation for graduate school, for graduate students applying for fellowships and for jobs has become like, too. For one of my students on the job market last year I must have submitted 20 different iterations of a “recommendation letter.”

You say that the motivation here is “making life easier for their editorial staff.” In my experience for smaller journals university presses are pushing these platforms without a clear sense of how much more cumbersome these systems are than a pretty straightforward “email us your essay in x format.” Perhaps for _Nature_ volume control is the key issue, but I don’t see editorial staffs benefitting from these complexities any more than authors.

Good point, Karin — but it leaves me wondering who exactly does benefit from these terrible systems. Clearly someone does (or expects to), or else the systems wouldn’t be put in place, sometimes at considerable expense to the publisher.

M.S. submission, peer review, recommendation letters, and – my most recent nightmare – job applications. Every system its own island with its own rules, the same information re-entered on multiple sites, and passwords, passwords, passwords.

Is this a generalized phenomenon of the digital archipelago?

Each organization reaps the cost savings and convenience of online systems by having the user shoulder the work of fielding, standardizing, and entering the data. Internally, they get simplified operations, better workflows, cleaner data, and metrics.

Elsevier now have “Your paper, your way” on many journals, which allows the initial submission to be just a pdf file formatted the way you want, though they strangely say that you should put figures and tables at the right points in the paper rather than at the end. It is certainly useful for publishers to have author names and details and the title and abstract etc. entered into the system and they still do require all that. But some journals, even Elsevier ones, are extremely pedantic about getting the exact right format at the first submission, which makes no sense.

Why “strangely say you should put figures at the right point”? It’s nearly impossible to carefully review a long paper that has figures at the end. Even worse, some journals even have figure captions in a different place from the figures (and text, of course). I have begun to refuse to review for journals that don’t provide me with a readable manuscript with in-line figures.

Well, because it is supposedly my preferred way of formatting papers. I do prefer to have the figures and tables at the end for reviewing and I find it a hassle to insert them in the text with MS Word.

Nice rant, Rick, but I may have a topper. I recently tried to post an article on ArXiv, the preprint server. After registering to become a “member” I was informed by the computer that I had to be “endorsed” by an existing member in my “field.” My paper is titled “A Taxonomy to support the statistical analysis of funding-induced biases in science” so I had picked “statistics” as my field, from the available options, which did not include logic, my actual field. I have no idea how to find such a person. And this is just a preprint.

Browse the authors already on the arXiv in statistics, pick some likely looking candidates, and email them. Would you have statisticians vet each other if one happened to write a logic paper?

The temporary hassle you have of finding an endorser is offset by the collective benefit to the world of having access to your paper. I hope you haven’t formatted it in submission/draft style, since that is going to lessen the resulting benefit 🙂

The benefit of access would still be there if the hassle of endorsement were eliminated, so that benefit does not justify the cost. I assume that endorsement is required simply to keep out the cranks, so the question is whether that benefit is worth the hassle?

Research, then picking out and bothering people I do not know, is in fact what ArXiv recommends. But it gets even more complex, because not all authors are qualified to be endorsers. ArXiv says: “Endorsers must have authored a certain number of papers within the endorsement domain of an archive or subject class. The number of papers depends on the particular subject area, but has been set so that any active scientist who’s been working in her field for a few years should be able to endorse IF her work has been submitted to arXiv and IF she is registered as an author of her papers.”

As hassle factors go, this is a pretty high level. Like many applied researchers, I am not in a publish-or-perish environment. One wonders how much important research is never seen because the submission hurdles are so high.

The endorsement system exists because the arXiv was set up by physicists, and there are absolutely insane numbers of cranks who have (for example) explained all of particle physics with a simple geometrical model, or maybe even worse, have built a perpetual motion machine in their basement. At least in the past when the endorsement system was first set up, the benefit was very clearly worth the hassle.

I can sympathise with this justification, Christopher. I once made a collection of blogs claiming to refute either the special theory of relativity, or the general theory, or both. It was a large collection. That the Large Hadron Collider would create a black hole which swallowed the earth, or the galaxy, or the universe, was a more recent fantasy. But perhaps ArXiv should consider having an editor as a gatekeeper for cranks, rather than a computer.

It’s important to remember that when it comes to submissions systems, and the companies that build them, publishers are the customer, not authors, so systems have been tailored for the needs of the publisher. I think publishers have started to realize that these systems are clunky for authors, and are pushing for improvements, but it is probably not their top priority. Should it be? Maybe – authors are the publisher’s customers, so their needs should be considered. But there’s more to it than just the author experience.

Not to be dismissive, but how often do you, as an author, have to interact with these systems? A few times a year? Maybe a dozen? Publishing staff interact with these systems for 8-10 hours a day, every day. Optimizing the process for authors is great, and it should be done, but if it’s done at the expense of the journal staff, operations will grind to a halt. So yes, maybe submission will be easier, but it will take longer to publish the manuscript, and no one benefits from that. I compare this to when I have to call my cable company when my service goes out, or there’s an error on my bill. Yes, it’s a headache, and I wish there was a better process to resolve my problem, but it happens once or twice a year, and honestly, I don’t lose sleep about the process the other 363 days, so maybe it’s not such a big deal.

Also, a thought on “sending in one’s paper as an email attachment.” Sure, that sounds great and easy, but what happens when the email gets lost in transit (it happens), the author puts a typo in the address, or the publisher simply mishandles the email? It becomes a shouting match, and there’s no record of when anything actually happened, or what the proper submission date should be. And then there’s the circumstance of an author emailing two different people at the publisher with the same manuscript, and the back-and-forth that must ensue to ensure that efforts are not duplicated, or the author who sends “updated” manuscripts by email 10 minutes after the original is received. Which file is correct? Time to contact the author and find out. For better or worse, a tracking system alleviates some of these problems. Does it introduce others? Sure, but let’s fix those instead of throwing the system away and relying on something as flaky as email to ensure the successful operations of a journal.

To be clear, I am totally in support of making the submission process easier for authors, and I encourage all publishers to push for this. And as a publisher, I’m grateful when authors share their ideas about how to improve the process. I’m just trying to add a little perspective here.

“Not to be dismissive, but how often do you, as an author, have to interact with these systems? A few times a year? Maybe a dozen? Publishing staff interact with these systems for 8-10 hours a day, every day.”

True, but interacting with those systems is the job for which they’re being paid. I often tell my staff and colleagues that, when we set up systems for library patrons to use, we need to bear in mind who is being paid and who is paying. Too often our systems are set up to make life easier for the ones who are being paid, at the expense of the ones doing the paying. Why? Because we, the ones being paid, have power over the system. We have to constantly remind ourselves that we are here for them, not vice versa. (I realize the librarian-patron relationship isn’t exactly like the publisher-author relationship, but in this particular context I think it’s analogous enough.)

“Optimizing the process for authors is great, and it should be done, but if it’s done at the expense of the journal staff, operations will grind to a halt.”

Surely we have choices other than the current system (which is maddening for authors) and one that causes the system to grind to a halt for publishers. What I’d like to see is a reasonable compromise: one that minimizes the complexity of submission itself, while allowing a reasonable amount of standardization and workflow efficiency after acceptance. What seems particularly crazy to me is the requirement that authors jump through a huge number of hoops designed to optimize format before the paper’s quality and relevance have even been considered.

As someone commenting here from the publisher’s side, I’d have to agree with a lot of your suggestions, at least up to a point. The second and fourth suggestions are the most important in my view, because so much time is wasted up-front on submissions that are rejected shortly after editorial or peer review. In some journals, staggering amounts of editorial staff time and effort go into submissions that an Editor may reject almost instantly upon reading. Having such strict rules at such an early point serves no one, and at the very least those rules should be saved for a revision request.
A basic industry standard would also be extremely beneficial here, because you’re correct: in my experience within quite a few major publishers, many of those author guidelines either have not been touched in years, or are seen only by the lower-level staff who create increasingly complex processes, often based on a single submission (i.e., one paper in 50,000 was formatted with XYZ, so now all incoming papers will be checked for XYZ), rather than getting, say, a brief yearly review by a wider department team to ensure everything is still essential to the early submission process.

Excellent rant! I have mentally declared 2015 as the year of the author. Here are my issues: Submissions have increased by 60% in the last 5 years. Editorial staff has decreased by 2 in the last 5 years. Added to that, the staff job has gotten a lot more complicated. We now have to evaluate CrossCheck reports, duplicate submission scores, handle ethics issues like never before, do everything faster, manage about 700 editors, AND provide excellent customer service to all involved.

My team counts on standardization across 35 journals in order to not entirely mess up the system. There are things that I know other publishers do for their editors that we cannot.

When I say it’s the year of the author, I mean that we are seriously evaluating our submission process to make it easier for authors. We already accept new submissions in any format (the Elsevier/Burger King way). There is no need to upload figures separately with a first submission. This doesn’t mean that we have thrown out the author instructions; in fact, those who follow the instructions will find the revision process easier.

Other steps we are taking are the elimination of the cover letter and of as many forms as possible. This will require a few additional submission questions and click-through agreements, but it should be easier than asking authors to write a letter, or to download a form, convert it to PDF, and upload it. That is a waste of time. We are also reviewing the main reasons why papers are returned to authors prior to review, and looking at how we can make that easier.

The part about having staff on hand to handle manuscript formatting is a non-starter. Instead, production staff need to evaluate whether this can be done post-acceptance. There is a risk, of course, that errors will be introduced and authors may not do a careful review of page proofs.

I agree that publishers should be looking for ways to make things easier for authors and reviewers; but, you will be hard pressed to find a society publisher that will “staff” per your suggestion.

For what it’s worth, I don’t think authors particularly mind having to click a lot — what is troublesome is having continually to stop and ask yourself “Now what do I do?,” and then never being sure whether or not you did it right. (Again, there’s a strong analogy here with the library-patron relationship. Patrons don’t mind clicking a long string of links to get from, for example, an alphabetical list of ejournals to the particular article they need — as long as the route they follow is clear and logical. What leads them to abandon the process is having to stop and re-strategize at each level.)

Thank you for stating so boldly what anyone who has ever submitted an article has thought. My co-author and I are struggling with a system right now, trying to figure out exactly how our potential publisher wants things like bar charts, because the language for “figures” seems to apply exclusively to photographs. OA and commercial journals are equally bad. These systems are designed to maximize return for the publisher; the frustration and confusion of the authors aren’t considered. And why should they be? Authors aren’t the customer; we’re the product.

For first submission, our journals only require authors to upload a single document (Word or PDF) that contains the text, figures, and tables. We make no requests for how this is formatted. Authors are only asked for all the source files when submitting a revised version.
The comments of Bob and Karin about unnecessary hoops for reviewers to jump through are noted, however. As publishers we do need to make this process as easy as possible.

HA! I was just ranting to Chef Alice about this yesterday! Nothing is more tedious than the — literally — HOURS spent ensuring that references are done in the journal’s preferred style. My field at least uses APA, but even within APA there are subtle variations. Do you put a period after the author’s first initial? A semi-colon or a period after the journal name? A space after the colon separating the issue and the page numbers? And what about the journals that use numbering yet insist on having the numbers entered manually? (Milbank.) And then there was the journal that insisted on using the author’s first name. Which was easy enough to get for well-known people, but what about Chen, L — doubtless a grad student who skedaddled back to China after getting his/her degree? Oh, and the conventions for citing online material. “Accessed at” vs “Retrieved from”. vs nothing and a date. Madness. And don’t get me started on “percentage” vs % or other such peculiarities. And yes, I have grad students who should, in theory, be able to perform this grunt work, but sadly never seem to do it with the appropriate eye for detail. In any case, it’s hours of pointless effort. Please, please, agree on a standard! PLEASE!

What a timely discussion that was, Pam! I completely agree that what we really need to reduce the admin burden on researchers are industry standards, not publication, publisher, or platform standards. ORCID is one of several organizations working on the research infrastructure ‘plumbing’ and, while we’ve made a lot of progress, there’s so much more we will all be able to do to make life easier for researchers with increased take-up and support from researchers, publishers, and vendors. Hmmmm – I feel a post coming on!


My sympathies.

Joe Esposito covered the feelings about peer review systems (meaning the computer systems) in a post late last year:

“What do almost all scientists agree about? The manuscript management systems employed by most publishers. Scientists detest these systems. They find them to be cumbersome and they express real frustration and sometimes outright anger over them. I will forebear naming names here, but some of the most prominent publishers and vendors of workflow management systems come under fire.”

Joe’s piece later goes on to say

“I was intrigued by a group of scientists who praised the submission system at one particular journal. So I investigated. That particular journal, beloved by all its authors, accepts submissions as email attachments. The managing editor calls authors to answer questions. It does not use workflow management software.”

I think somewhere in the last 30 years we “industrialized” scholarly publishing, perhaps STM in particular, and that led to a focus on efficiency. It might be interesting — now that we have all of these online — to study the sheer length of “Instructions to Authors” from 1960-2015 to see if these have gotten longer. Some of the length undoubtedly is to deal with added industry standards; some of it is to deal with things that are much better to get right the first time, rather than attempt to correct later (the list of authors; the correct names of the authors), some to select reviewers, and some things to aid in understanding the manuscript by the reviewers. They all do add up!

Perhaps this is why there is new interest in “cascading” and “forwarding” of manuscripts: so that submitters do not need to repeat a daunting experience.

I have not spotted “ease of submission” high on the list of decisive factors for selecting which journals to submit to. Is that because the submitter is not (in the case of a lab with funded research) the same person as the decider? But in your case they are the same person!


“Ease of submission” is unlikely to be a deciding factor in an author’s decision about where to submit a research article, at least in the biomedical arena. From my own experience (and those of many of my colleagues) as an author of many journal articles, I can safely say that the overriding consideration in the decision about where to submit is the stature of the journal in my field of research. In the majority of cases, this (unfortunately) means the journal’s impact factor. We authors are a masochistic lot. We’ll jump through the publisher’s arcane hoops and navigate the cryptic obstacle courses of manuscript submission systems in the hopes of getting our paper accepted. Journals and publishers have us at their mercy! You are correct: the submission tasks usually are delegated to a student or postdoc coauthor.

Interesting notes. When an author has a manuscript rejected by one journal, he or she usually has to rearrange it before submitting it to another journal in order to adapt it to the new guidelines, and this is a messy and time-consuming task.

I know these systems can be incredibly frustrating for authors and reviewers alike. But in my experience (as a publisher) they’ve been put in place to help editors in the most part, for whom the endless record-keeping of submissions, revisions, reviewers etc was an administrative nightmare. These systems save a huge amount of time (automating reminders, promoting reviewers when someone else declines etc), and as such editorial teams can handle significantly increased numbers of submissions whilst continuing to reduce time to publication. They also bring much needed transparency for the authors – you can log in anytime and see where your paper is in the process. I agree they’re not perfect, but they’re getting better all the time and Publishers are absolutely pushing for improvements. There’s limited choice in the market though and these systems are pricey – we wouldn’t implement them unless they brought substantial returns. Nonetheless we want to hear where we can improve – without authors we wouldn’t exist so we want to keep you happy. So please do tell your Publishers your frustrations (just make sure you’re specific – “I really struggled with the system” doesn’t tell me what could be improved).

I’m also speaking from the publisher’s side, and your comments echo my thoughts. There are substantial in-house benefits to some submission systems that would not be clear to an (aggravated) author.
I also see that standardization (i.e., a single system) is a terrific…concept. I am in wholehearted agreement with that concept. It doesn’t reflect a reality I can imagine anytime soon, as “Publishing” is not a coordinated, monolithic entity. The frustration driving the original rant is important to listen to, though—we should always be striving to update and to improve authors’ article submission experiences, starting from whatever system we choose.

I have been doing this since 1970 and your complaint has been going on since at least then!

Since the introduction of computers to handle MSS, complexity has increased. Why? Well, no one really knows, but what I experienced is that someone said, “Wouldn’t it be nice to know X, or to have Y filed in this manner, because that is how I do it!” Voilà! X and Y are inserted into the form, because it really is rather easy to do. One of the most complex systems in the world is the US military, at least until it goes to war; then the adage KISS is applied and enforced, and all that complexity goes out the window for the combat troops.

Perhaps one solution for any MS submission system is a mandatory review every three to five years, to make sure it is doing what it is supposed to do. Additionally, there should be a complaint file, which should be examined along with the review.

As for a standard submission system: well, forgetaboutit. Why? Because I like my eggs over easy!

Like many others, I’m adding my “yes, this!” For what it is worth, I’ve found similar frustrations in both the roles of reviewer and journal editor. Being clear about where one is in the process, the task being undertaken at the moment, and the work remaining in the process shouldn’t mess up anything on the staff side of the systems. And, in fact, I would imagine that the best thing for the staff is for authors, reviewers, and editors to make fewer mistakes in the process that require manual intervention/correction.

No one has mentioned submission of book manuscripts. Do publishers use systems like these for submission of books as well? We did not at the press I directed for 20 years, and no such system is used at the two academic presses for which I am working part-time now, one commercial and one university press. As acquiring editor, I simply ask authors to submit their MSS in PDF or Word as email attachments.

We are a small nonprofit with a very simple, un-automated submission process. The actual editorial review part of the process is handled by e-mail. Imagine my horror when my husband had a paper published by a small journal handled by a major publisher, and I discovered what is going on out there! Navigating the system took nearly as long as writing the paper! The worst part of it was that there was absolutely no way to get in touch with a live person to get his questions answered. When he submitted corrections to the article (mostly for errors the copy editor had introduced), many of them were not made, but he had no way to tell anyone. His only recourse was to get in touch with the academic editor in chief of the journal, whose e-mail he had. So, “I really struggled with the system” and there was no contact information for anyone to help me!

Now, imagine yourself in this situation but you are blind or otherwise unable to visually access the system. These systems are virtually completely inaccessible to a screen reader. Accessibility is obviously not a high priority.

By the way, as an editor myself, I hate to denigrate a copy editor as I did in the first paragraph, but with so many journals gobbled up by large commercial publishers, the copy editors have no familiarity with the subject matter and are just so many cogs in this automated machine. What a sad degeneration of our profession!

Thanks for these comments, Ellen.

By the way, I had no intention of mentioning this, but since there have now been multiple comments about how this problem seems to be arising (at least in part) from the increasing consolidation of the market and the gobbling-up of small independent publishers by large commercial ones, I should probably point out that the experience I described at the beginning of my post was not with a large commercial journal publisher, but a relatively small and new OA publisher whose professed first goal is to “serve researchers.”

Last year, I transferred a paper from Nature Geoscience to Scientific Reports on October 24. Check out the “received date” on the paper:

24 days between the true submission date and the claimed date! For an article that had already been reviewed by a journal in the same family! Scientific Reports checks that formatting is perfect before acknowledging “receipt”, presumably so that time from submission to publication looks great.

As an extreme counterexample, check this out:

19 days from submission to publication! I have no idea where they found two reviewers who were willing to provide substantive comments in less than a week. At least 5 of the days between submission and acceptance were spent on our writing the revision, so in reality the journal did its entire job in just two weeks. MDPI has an astonishing workflow; I enjoy publishing there when I can.

On a different note, who else is crushed that ORCIDs appear to be completely useless? I first assumed that the entire point of ORCID was to make manuscript submission easier. It should work like looking up a DOI: Manuscript Central queries ORCID and then just fills all the information in automatically. But no.
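For what it's worth, the auto-fill being wished for here is technically simple. Here is a minimal Python sketch of what a submission system could do after fetching an author's public ORCID record. The JSON below only mimics the general shape of an ORCID public-API person record, and the field names and helper function are illustrative assumptions, not any vendor's actual implementation:

```python
import json

# Illustrative sample: roughly the shape of an ORCID public API
# person record (e.g. GET https://pub.orcid.org/v3.0/{id}/person).
# Treat the exact field names here as assumptions, not a spec.
SAMPLE_ORCID_RESPONSE = json.dumps({
    "name": {
        "given-names": {"value": "Jane"},
        "family-name": {"value": "Researcher"},
    },
    "emails": {"email": [{"email": "jane@example.edu", "primary": True}]},
})

def autofill_author_fields(raw_response: str) -> dict:
    """Map an ORCID-style person record onto typical submission-form fields."""
    person = json.loads(raw_response)
    name = person.get("name", {})
    emails = person.get("emails", {}).get("email", [])
    # Prefer the address the researcher has marked as primary, if any.
    primary = next((e["email"] for e in emails if e.get("primary")), None)
    return {
        "first_name": name.get("given-names", {}).get("value", ""),
        "last_name": name.get("family-name", {}).get("value", ""),
        "email": primary or "",
    }

fields = autofill_author_fields(SAMPLE_ORCID_RESPONSE)
print(fields)
```

The point of the sketch is simply that the mapping from an ORCID record to a submission form is a few lines of glue code; the barrier is integration priorities, not technology.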

19 days from submission to publication! I have no idea where [MDPI] found two reviewers who were willing to provide substantive comment in less than a week.

One possibility we should acknowledge is that MDPI did not, in fact, find two reviewers who were willing to provide substantive comment in less than a week — unless you have proof positive that this happened, you should consider the possibility that the peer review was either perfunctory or nonexistent. MDPI is a publisher with a controversial history.

So I looked back at the reviews, because I was curious. The first reviewer’s comments weren’t particularly long. I suspect it was someone from my field, but I couldn’t rule out that it was the journal editor. Anyway, they commented that “the paper is well organized and the findings are straightforward, so I have no critical comments”, which was the reason for a relatively short review (it was, honestly, a pretty simple technical paper). Of course, I can’t rule out that our paper stinks and the editor just wanted it to go through 🙂

The second reviewer very clearly had used the instrument, because he or she made a number of points that only someone who had actually used it would know to make.

I sometimes get the feeling that MDPI (at least sometimes) asks a lot of people to review a paper, in order to speed up the process of finding reviewers. That’s the only way that I can explain how I had 6 reviewers for this paper:

Responding to six reviews was a real pain.

To me, “the paper is well organized and the findings are straightforward, so I have no critical comments,” smells strongly of a non-review — it sounds, frankly, like the kind of automated spam comment that I regularly have to delete from my music blog.

But it’s good to hear that you’ve found good evidence of at least one serious review taking place, and on such a short timeline.

On a different note, who else is crushed that ORCIDs appear to be completely useless? I first assumed that the entire point of ORCID was to make manuscript submission easier. It should work like looking up a DOI: Manuscript Central queries ORCID and then just fills all the information in automatically. But no.

To be fair, it’s still early days for ORCID. Most researchers I speak with haven’t heard of it, and most submission systems (for papers and grants) are still in the process of functionally integrating it. These things take time.

That said, “the whole point” of ORCID is to disambiguate researcher names and provide a permanent identifier attached to research outputs. Improving the submission process would be a positive result, but it is not the sole point of the effort.

Thanks David.

Sorry, that was just a one-sentence clip from the (admittedly quite short) review, not the entire review. It continued with comments indicating that the person had clearly read much of the paper:

“This paper analyzes the temperature behavior of Sky Quality Meter that is a device dedicated to measure the night sky radiance. Nowadays SQMs are routinely used and thus such a study is legitimate and important. The paper is well organized and the findings are straightforward, so I have no critical comments. Even if the radiance fluctuations are below 10%, they seem to be well correlated with the temperature profile. This is a good reason (and challenge) for drafting an empirical formula R=R(T), where T is the temperature while R is the radiance. Such a formula could be a valuable improvement in data processing.”

So it’s not boilerplate. But as I said, it’s general enough that you can imagine that it was written by the editor and not by a reviewer – you could make a very similar request for many instruments. The second review was much more substantive, and the writer had clearly worked with the instrument we were studying.

Regarding ORCIDs: I am so far the only Christopher Kyba who publishes (although there is also a Christopher Chyba), so I don’t have the uniqueness problem. I appreciate that it’s a huge problem for the John Smiths and Li Chens of this world. To be clearer: I had hoped that ORCID would simplify the submission process for me. Instead of helping, it has become just another field I have to look up each time and enter, slowing the submission down further.

I think it will at some point provide the benefits you’re seeking; it’s just not there yet.

I want to echo the many, many things that manuscript submission systems do right — to me it still seems like a miracle. These systems were invented to mitigate the pain points that existed at the time; they solved problems that are barely remembered now but were absolutely serious. One big benefit: many more submissions can be handled with fewer staff — and we all want more science to be published, yes?

What is going on, I truly think, is in part that we have all forgotten the pain of the old days. Do you remember using a typewriter? And the ribbon would break, and there’d be ink all over your hands? Do you remember needing three copies of your manuscript and mailing them to the editorial office, including the camera-ready artwork? And that you only knew it had arrived because you had included a stamped postcard or SASE? Do you remember, not so long ago, putting your artwork on those big square disks, because files over a certain size could not be emailed and FTP transfers weren’t an option for most of us? Some corresponding authors back in the day had to coordinate with their artist, their typist, and the mailing department. Back then the corresponding author still had to submit a cover letter with his or her address and contact data — and hopefully the data of all the other authors as well. Back then the editorial office had to re-enter that data into the editorial system, whatever that was (index cards, Excel, a chalkboard?).

Now the corresponding author just keys it in directly. Now, in a couple of hours or even much less, your paper is submitted. The corresponding author is in full control. The time and money saved are enormous — and there must be fewer mistakes now; it only stands to reason.

Things keep improving so much; it is important to recognize that. Do you remember, if you worked in an editorial office, having an associate editor go missing for weeks, even months, while a paper sat on his or her desk gathering dust? Do you remember folks using the same reviewers over and over because those were the people they knew? Now that missing paper can be tracked, and even taken from an editor who has a sudden emergency and given to another. Now reviewers can be found with a quick expertise search. Now, when the publisher wants data, you click a few buttons and produce some statistics. There is, without any doubt, more that can be done to help authors, especially once the concrete, specific pain points are identified. But “have the editorial office do it” is not a likely solution, at least not in the situations I am familiar with: very small offices that are already working flat out — and besides, I should be working, not typing all this up! And small as we are, we are right here to help.

Last year I read “Uncommon Service” by Frances Frei and Anne Morriss and highly recommend it. It was very insightful about working together so that everyone, from one’s customers to one’s staff, is happy.

All good points, but would it completely vitiate the efficiencies to provide contact information for a live person who could help with the process?

By the way, the same type of frustrating automation has been put in place in the copyright and permissions arena. When applying for permission to reprint or adapt material from an article or book, if your request doesn’t fit into the form’s fields, you’re just out of luck.

Great post, Rick. It does all raise the question of how we managed to get into this mess, doesn’t it?

I suspect that the original motivation for publishers may have been the idea that because they will need all of the information, and the manuscript in the right format, when it goes to production, they may as well ask up front rather than having to ask twice. That makes sense, of course, until you consider the number of articles that get rejected and have to be reformatted and sent to another journal with a similarly burdensome submission process.

I think that there are some technology vendors who have been so far removed from researchers that they have, with the best of intentions, designed products that don’t serve academics well at all — and then sold them very effectively to publishers (and sometimes librarians) based on the needs that they imagine academics have. In the past, I think it has been hard for many in the industry to get really good feedback from their end users. This is something we’re increasingly addressing now, and the reason we’re having these types of discussions.

The silver lining here is the number of new start-ups and products that are emerging to address problems like this. In the last ‘Ask the Chefs’, which was about what we learned at the SSP conference, Michael Clarke predicted that this year would be the year of the manuscript authoring system. Michael listed five good examples — Authorea, Overleaf, Dartmouth Journal Services, Aries, and River Valley Technologies — and I know they’re all thinking about how to rework some of our less efficient workflows, including the submission process, extending all the way through editorial and into production and final publication. I think we’re going to see efficiencies in several areas over the next few years.

More standardization of citation formats and abbreviations would also make life much easier for authors. How many citation styles do we need? Would not one do for each major disciplinary group (arts and humanities, social sciences, and sciences)? Chicago’s Cite Right (basically an executive summary of different styles) covers 10, and that is little more than a starter set.

This post obviously struck a chord: it had 32 comments when I started reading it and 40 by the time I chimed in. For a forum of and for scholarly publishers, it’s not surprising the comments reflect the publishers’ viewpoints more than those of authors and reviewers. Rachel Russell’s reminder of the “good old days” of article submission reminded me of another past hassle — the >$50 in express mail fees just to submit that cumbersome package of multiple hard copies, camera-ready oversize figures, and disks, back and forth from Idaho to Dordrecht or wherever. While I assume the editorial staff who handled my first manuscript waaay back in 1999 were just as industrious as those at present, that one took 22 months from submission to publication.
Still, as one of the army of associate editors who also writes and reviews, I am in the manuscript management systems several times a month, and I invariably find myself muttering short words (usually four letters in length) that, despite common usage, are not found in most spell checkers or crossword puzzles, nor accepted in Scrabble. These mutterings are often echoed from across the kitchen table by my “health care provider” wife, who is shackled to an electronic medical record system that shares some similar (if much more complicated) “features” with the scholarly manuscript workflow — unintuitive interfaces, multiple screens, and endless hunting and clicking.
The remark that “ease of submission” is seldom regarded as an important factor in author surveys is interesting, because I recall seeing positive interactions with editors listed as an influential factor in a recent survey. Might that be because personal interactions with editors and staff are just that — personal — yet all the major workflow systems (Editorial Manager, ScholarOne, EES) are similarly painful to navigate? Or because the “decider” doesn’t bear the pain? More than once I’ve interacted with a PhD student through the rounds of submission, revisions, queries, advising, and editing (yes, we are sometimes more than gatekeepers), only to see their name replaced as “corresponding author” on the final version by their esteemed professor. However, most first authors do it all in my field and journals (environmental science), and I wonder if an easier submission process would encourage author and reviewer loyalty. There is much room for improvement. I’ll follow with a list.

Yes, I had not remembered the express mail charges — even regular postage was not trivial. I do want to add that our system started simple and has grown more complex in order to serve the scientists. In 2001, when our eJP-based system launched, we didn’t ask about funders, open access options, color options, conflicts of interest, special collection categories, and much more. These questions and options are all there to serve the scientist, yes? There’s a feeling that everything should be simple. And a deep feeling of being overwhelmed. But in fact, submitting a paper has become more complex because the needs of the community are more varied and complex — the software should not be blamed for that, don’t you think? I wonder too if all this frustration is confusing submitting a paper with submitting a grant. If you don’t answer a question on our system upfront, we’ll just ask you later. If we quality check your submission and run across a problem, we’ll let you know and help you fix it. It is teamwork, not a test.

There’s a feeling that everything should be simple.

I’m not sure that’s true. All of us recognize, I think, that sometimes systems and processes are complex rather than simple because they need to be. But I do think there’s a feeling that things that are unnecessarily confusing, misleading, time-consuming, redundant, and inaccurate should be less so.

I wonder too if all this frustration is confusing submitting a paper with submitting a grant.

It’s not.

If you don’t answer a question on our system upfront, we’ll just ask you later. If we quality check your submission and run across a problem, we’ll let you know and help you fix it. It is teamwork, not a test.

This may be the case for you guys in particular, but in many cases the submission process is, in fact, very much like a test — one in which, if you don’t get the answer right, you will not be able to proceed to the next step, and in some cases the only possible answers provided are all wrong. In other cases, it may well be true that if we fail to answer a question (or answer it wrong) someone from the publisher will contact us later to clarify — but we often have no way of knowing, during the submission process itself, that this is the case. And if we don’t know that it’s the case, then the functional effect during the submission process is that it’s not, in fact, the case.

I’m feeling that small society-based journals should be getting some love, then, for our personal touch! At least mine. Now, I should note that my journal is in Earth science, not medicine, so perhaps in a medical journal there would be lives at stake. And of course all user experiences can be improved, always. I absolutely agree with you about that.

Not to harp on this point, but the experience I described in my posting was with a small society-based journal. I’m not sure the problems we’re complaining about are particularly endemic to either the big guys or the small guys. Whoever is handling the submission process well should definitely get some love — no matter what kind of organization it is.

It’s unfair that after putting in the effort to do the research, authors have to undergo the further hassle of decoding the myriad journals’ author guidelines. Most authors face the dilemma of whether to focus on manuscript formatting or on scientific inquiry, a dilemma covered in this post. Publishers such as Wiley and Elsevier have launched the Your Paper Your Way initiative, but I agree that “an industry standard for the manuscript submission process” would make submission smoother for both authors and journals. Streamlining all the citation and formatting styles would be a mammoth task, but it could be done at the level of subject areas.

Some ideas on improving manuscript submission process:

Part 1: things the manuscript workflow software companies (i.e., Editorial Manager, ScholarOne, EES, and their kin) could do. Part 2 will follow with things the publishers could improve.

1. Go find out how well your product really works with the intended users, in each role. Stand in the shoes of your users:
Author role: Have engineers get a dummy paper and go through the steps to actually submit it at various publishers. Then have someone not so computer-savvy do it — say, a VP or CEO — and have the engineer watch where they stumble. If those who design these systems had to eat their own dog food, they would find ways to streamline them.
Editor role: Working with a publisher (supervised by a managing editor), have the software engineers handle a dozen or so real articles for a real journal. See how easy (or clunky) it is to go through the submitted documents, inspect all the attachments, find reviewers of unambiguous identity who have recently published related work, invite them, follow up, get blown off, find more, repeat the cycle… See just how well those search tools for locating reviewers actually work. See if you can find, remember, and recognize all the commands. It shouldn’t take long before the engineers are muttering the same four-letter words the rest of us do.
Editorial office role: I actually think this is an arena where the software may work OK, perhaps because the editorial assistants and managing editors quickly become power users. Still, the maneuvering between screens and menus, clicks and refreshes, could likely be more efficient. (Rick Anderson mentioned that people don’t mind lots of clicks if they make sense. Maybe. As one who has suffered carpal tunnel in my mouse wrist, I’m not so sure.)

(Incidentally, when I am in the reviewer role, I find all these products work pretty efficiently. The journals’ review procedures, with their long lists of inane questions, are another matter.)

2. At least standardize within your companies to allow authors, reviewers, and editors to work across client journals. Using ScholarOne and Wiley as an example: as an editor, I can see that Excellent Reviewer hasn’t reviewed for Environmental Toxicology & Chemistry in two years, but I can’t see that they got hit up last month by Freshwater Biology, Limnology and Oceanography, J. American Water Resources Association, EcoHydrology… Standardizing across the industry is one thing, but how about just within a company? Likewise, why must I maintain separate author profiles in Editorial Manager for PLoS One, Hydrobiologia, and so on? ORCID seems an obvious common denominator, but so far it’s yet another listing in my user/password vault.

I hesitate to get involved with this discussion because every journal is a bit different and I spend no time submitting to them. As a result, my sample is one: the journal I manage (in the field of economics). We use an online manuscript submission site that requires an ID and a password. Once you get past those, however, you find fairly simple requirements for a first submission — double-spaced, 12-point Times Roman, etc. No particular software is required, because our online submission system converts everything to PDFs. A tiny percentage of authors find the online submission process arduous; we do our best to help them through it. Why these particular requirements? Some of our dozen editors still print out the papers and refuse to read anything single-spaced or in smaller type. The simple requirements thus serve our editors, who are senior in the field. As a non-scholar, I could be wrong, but I assume that authors want these editors’ input, whether the paper is rejected or accepted, and perhaps should be willing to accommodate their eccentricities.

Our acceptance rate is between five and ten percent. Once a paper is accepted, I’m a stickler about formatting. These requirements make it possible to get papers to print in a timely fashion. I send a detailed list of formatting instructions. I’d say about one author in five bothers to follow the instructions the first time around. Why can’t I drop what I’m doing and fix their papers for them? First, the author is the expert on his or her paper and is best suited to make sure all the information is correct (if I had a nickel for every botched reference entry . . .). Second, at any given time I’m juggling three or four issues’ worth of accepted papers and hundreds of papers in other stages of consideration (involving editors and reviewers with complicated time requirements of their own). Mind you, I’m fortunate to have a staff (many journals do not), but we simply do not have time to sort out final versions in any random software and any random formatting.

I have worked at this journal since we used typewriters, so I’ve lived through several systems of submitting papers. I’m quite grateful for the online submission process. We used to rely on mail and telephone; communications with authors were slow and sometimes murky. I remember having to stay quite late at work to talk with an author in Europe. And so on. Despite the occasional clumsiness of the online system, it’s my favorite so far. Like all labor-saving devices, it creates more work than it saves, but the work is more accurate and more accessible to our staff and editors. And despite the built-in inconvenience, I’d argue that online systems serve authors, too. At one time we considered it fast to respond to submissions within six months; an author might not know whether his or her paper had arrived, never mind gone to an editor or reviewer. Now everything is online and available in days.

I keep deleting snarky comments about the process because I’d like to be taken seriously and I know that not having a PhD is a deterrent to your ability to hear me. But remember that you are paid approximately twice whatever the person is paid who is handling your paper. You have a number of other work-related privileges they don’t have. And you probably need the publication or you wouldn’t have submitted. Don’t piss them off. Just a piece of friendly advice.

Thanks for these thoughtful comments, Jan. A couple of quick responses to your closing salvo:

I keep deleting snarky comments about the process because I’d like to be taken seriously and I know that not having a PhD is a deterrent to your ability to hear me.

I don’t have a Ph.D., nor (I believe) do most of the Chefs. We hear you just fine. 🙂

But remember that you are paid approximately twice whatever the person is paid who is handling your paper. You have a number of other work-related privileges they don’t have. And you probably need the publication or you wouldn’t have submitted. Don’t piss them off.

I don’t think anyone here is complaining about the work done by publishers’ employees — all of us, I think, recognize and appreciate their dedication and competence, and I suspect we all agree that they aren’t paid enough. The complaints being aired here are primarily about systems (and perhaps the organizations that run them), not about individuals, and certainly not about front-line employees.

That said, you might want to be careful about assuming that the authors you serve are dramatically better-paid and more privileged than the folks processing their submissions. Among the authors submitting papers, only the luckiest few are tenured faculty. Others are junior faculty working extremely hard and (often) for very little money; some are adjuncts and postdocs working for subsistence wages and writing articles in their nonexistent spare time in the faint hope of a future faculty appointment; some are graduate students. All of them are giving the journal (at no charge) content which the journal will then sell in the marketplace. I don’t think there’s anything fundamentally wrong with that arrangement — but it does seem clear to me, in this context, who the customer is: it’s the author, who is “buying” the services of review, certification, and dissemination from the publisher, using the coin of freely-provided content. And I think that reality does, in fact, have implications for how authors should be treated.

I’ll throw in my 2 cents as someone in the medical publications world who has done many submissions on behalf of authors. The formatting process is not only tedious (and, as mentioned above, articles have to be completely reformatted for resubmission to another journal), but the formatting guidelines are rarely updated. I often download a copy of an article from the most recent issue to determine what the correct formatting actually is.

There are also no standards when it comes to reference style (per FJenkins1’s comment above). Yes, many journals just go with AMA style, but others have their own particular quirks (listing all authors, not listing article titles, putting refs in alpha order rather than order of first mention, etc.). I’d love to see all medical journals conform to AMA style.

As well, author guidelines don’t always say how the article is eventually to be submitted. If you require separate files for the abstract, body manuscript, tables, AND figures, say so up front! We often don’t discover that until we actually start the submission process, and then it takes extra time to get it done when we’re in a huge rush.

Genetics is now format-neutral for initial submissions:
“GENETICS accepts manuscripts in any format, including figures, tables, and references (e.g. it may be formatted according to another journal’s guidelines or a format of your preference). For the review process, upload one PDF of the manuscript. If your manuscript is accepted for publication, we’ll ask you then to format the files for publication according to the guidelines below.”
You might want to contact the editor Mark Johnston or the executive editor Tracey DePellegrin for their experience so far.
