Editor’s Note: Today’s post is by Jerry A. Jacobs. Jerry is a Professor of Sociology at the University of Pennsylvania, where he has taught since earning his Ph.D. in sociology at Harvard in 1983. He has served as the Editor of the American Sociological Review, President of the Eastern Sociological Society, Founding President of the Work and Family Researchers Network, and was a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford in 2018-2019.

How long can I expect the review process to take? What is the journal’s acceptance rate? If the paper receives a “revise and resubmit” decision, what are my chances of having the revised version accepted? Prospective authors in my field of sociology — and I suspect in many other fields as well — often ask these questions.

I would like to suggest that journal webpages routinely provide answers to these and related questions in real time. The relevant information is readily available in manuscript management systems commonly used by journal publishers. Indeed, a few outlets – noted below – already provide some of this information on their websites. Reports required by scholarly societies provide pertinent data for selected publications; unfortunately, in addition to being dated, these documents are not widely known and not easily accessed.


Data on decision times and acceptance rates would greatly aid authors in their choice of journals. Moreover, it would help to level the playing field between “outsiders” and those who are “in the know” or who have access to those who are knowledgeable about the review process for particular publications. Finally, making this information routinely available might have the salutary effect of improving review times and standardizing other practices that vary significantly across journals and between successive editors of the same publication.

Selecting Journals in Sociology

As a former journal editor, I am often asked by students and colleagues for advice on which outlet would be most appropriate for their manuscripts. Sociologists often have a number of options to consider. Clarivate’s InCites Journal Citation Reports (JCR) currently lists 148 sociology journals. This list surely understates the set of options sociologists consider for their scholarship because many sociology publications are in specialty areas that spill over the boundaries of JCR categories. The JCR includes separate lists for criminology, demography, educational policy, ethnic (and racial) studies, management (and organizations), public administration, public health, and women’s (and gender) studies. These fields can be viewed as specializations within sociology as well as separate areas of inquiry. Further, the JCR does not include all social-science publications (for example, the Population Studies Center at the University of Michigan provides a useful list of 386 journals in sociology, demography, and public health, along with selected economics entries).

Given the specialized domains of many journals, the choice set facing any author for a specific submission is much narrower. In addition to whether there is a good “fit” between the paper and the journal, there are several relevant considerations. One dimension is whether a journal is “generalist” or “specialist” in its orientation. Another is its general level of prestige. In sociology, there are three or four high-status generalist publications, followed by a second tier. The choice between a top-tier generalist journal and a top-tier specialist option can be a difficult one; and, if a paper is not accepted on the first attempt, journal choices continue to be complex.

Another set of issues pertains to the likelihood of acceptance. The top journals in sociology publish only a small fraction of submitted manuscripts. The American Sociological Review, for example, accepts less than 10 percent of submissions. As submission rates have grown, some top outlets now routinely reject a considerable fraction of papers without peer review. The decision to try for a top journal thus means that an author is likely to need multiple attempts before a paper finds a home. One study estimated that economics papers require 3-6 submissions before acceptance, though I am not aware of any similar estimates in the field of sociology. Moreover, sociology articles are rarely accepted “as is.” A positive first review typically leads to a “revise and resubmit” (R&R) decision; in other words, the paper is not accepted, but the author is invited to submit a revised version of the manuscript. Consequently, even in the best of circumstances, most papers will undergo at least two rounds of reviews.

In this context, the time between submission and decision is an important consideration. This review time varies considerably across journals. Compared to most of its peers, the American Sociological Review is prompt and efficient: favorable (R&R) decisions delivered in 2018 took about two months (8.7 weeks) on average. Authors who submitted their papers to Sociological Theory, in contrast, had to wait almost twice as long (17.1 weeks). (These statistics were reported by the American Sociological Association in 2019.)

While authors are always happy to see their work published expeditiously, there are career stages when a prompt acceptance decision is particularly valuable. Graduate students who seek a faculty position are especially keen to land a timely acceptance letter. A similar focus on the publication clock holds for post-doctoral fellows and those who are being reviewed for promotion or tenure. Timely decisions in these cases can make or break careers.

Data on decision times and acceptance rates can be obtained for nine journals published by the American Sociological Association, but this information is not featured on these journals’ websites. The report that is eventually included in the Association’s newsletter (Footnotes) can be out of date. Information that is several years old may no longer be relevant because new editors are typically selected every few years. While journal rankings tend to stay more or less stable over time, new editors may revise aspects of the decision process, such as how to handle R&R decisions, in substantial ways. New editors may also differ in their efficiency and timeliness.

Many other publication outlets do not provide similar information on acceptance rates and review times. This scarcity of official information leads authors to seek guidance from their personal contacts, much as Mark Granovetter’s research on the role of social networks would predict. It also leads authors to seek information from the rumor mill.

One sociological blog site illustrates the risks posed by the dearth of official information. The blog does present some accurate information (for example, a number of entries complain that review times at the American Journal of Sociology can be quite long), but taken as a whole, the information about journals on the sociology “rumor mill” is sadly deficient, and it is often quite dated.

More importantly, the comments posted to this blog generally do not provide enough information for a prospective author to judge whether the work being discussed is similar enough in content, style, and quality to serve as a useful guide. While a guest’s rating of an Airbnb is likely to be quite useful to other prospective guests, the heterogeneity of manuscripts makes journal commentary less valuable. In my view, the complaints posted about the review process on the sociology rumor mill provide more in the way of commiseration than of useful guidance to prospective authors.

Journal Websites

In recent years, journal websites in sociology have become quite standardized. In addition to the journal’s impact factor and author submission guidelines, it is common to have tabs that list some of the latest articles. Data on the most read and the most cited papers are also posted.

This standardization is not surprising in light of the consolidation of the academic publishing market. A small number of organizations (Sage, Wiley, Taylor & Francis, Springer Nature, Oxford University Press, and Elsevier) publish the lion’s share of sociology journals. Together, these organizations publish 129 of the 148 journals listed in the JCR sociology category (87 percent). Three firms — Sage (53), Taylor & Francis (25) and Wiley (22) — together publish 100 of the 148 indexed sociology journal titles, roughly two-thirds. A decision by any of these firms to revamp their websites would quickly lead to a new standard for the field.

One firm – Springer Nature – now includes review times on its web pages for selected journals. Authors who are considering publishing in Human Ecology, Human Studies, Qualitative Sociology, Race and Social Problems, Review of Religious Research, and Social Indicators Research are treated to current information on review times and time to publication. Springer Nature has not — at least not yet — provided this information for other noteworthy journals, such as Demography, Theory & Society, and The American Sociologist.

In the case of Qualitative Sociology, for example, the good news is that it takes an average of less than two months (53 days) before a first decision. The bad news is that it takes more than a year and a half (522 days) between initial submission and publication for those manuscripts that successfully wind their way through the review process. Unfortunately, authors are not provided with information about the acceptance rate, the proportion of submissions that receive R&R decisions, the proportion that receive a second R&R decision, the number of papers (if any) that do not receive a complete refereed review process, and so on.

The data needed to provide all of these statistics are embedded in the electronic manuscript submission systems – such as ScholarOne and Editorial Manager – that have become ubiquitous. In other words, making journal websites more author-friendly by reporting these data in an easily accessible and transparent way would not require any additional time or effort on the part of editors or their staffs. Publishers that seek to provide a full range of services to authors could take a significant step forward by making information on acceptance rates and decision times available to authors.
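To make this concrete, here is a minimal sketch, in Python, of how such statistics might be computed from an exported submission log. The file name, column names, and decision labels are my own assumptions for the sake of illustration; they are not the actual export format of ScholarOne or Editorial Manager.

```python
# A minimal sketch: compute decision times and an acceptance rate from an
# exported submission log. The file name, column names, and decision labels
# are hypothetical, not the real schema of any manuscript system.
import csv
from datetime import date
from statistics import median

def parse(value):
    """Parse an ISO date string such as '2019-03-27'; return None if empty."""
    return date.fromisoformat(value) if value else None

with open("submissions.csv", newline="") as f:
    records = list(csv.DictReader(f))

days_to_first_decision = []
accepted = decided = 0

for r in records:
    submitted = parse(r["submitted_date"])
    first_decision = parse(r["first_decision_date"])
    if submitted and first_decision:
        days_to_first_decision.append((first_decision - submitted).days)
    if r["final_decision"]:          # assume an empty field means still pending
        decided += 1
        if r["final_decision"] == "accept":
            accepted += 1

print(f"Median days to first decision: {median(days_to_first_decision)}")
print(f"Acceptance rate: {accepted / decided:.1%} of {decided} decided submissions")
```

A script along these lines could be rerun automatically whenever the journal’s webpage is refreshed, which is the sense in which routine reporting would cost editors and their staffs little beyond the initial setup.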

Standardization

Any effort to display decision times and acceptance rates should adopt a common set of definitions. Acceptance rates can be defined in a variety of ways, depending on whether “desk rejects” and “R&R” decisions are included. The same is true for decision time. Some journals define it as the time to the initial decision either to desk reject a paper or to send it out for review. Others use the time to the first accept/revise/reject decision, a figure that quick desk rejections can skew downward even when the peer review process itself is very long. It would also be desirable for journals to be consistent with regard to the time frame used to calculate these statistics. My personal preference would be current data with a six-month or one-year window, depending on the volume of submissions. Since JCR statistics are updated once a year, I suppose it would be acceptable for decision times and acceptance rates to be updated in concert with the latest Impact Factors.
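To illustrate how much these definitional choices matter, here is a short sketch that computes a median time to first decision from a few invented records, once including desk rejections and once excluding them. The records, field names, and decision labels are made up for the example.

```python
# A sketch of how definitional choices change the reported numbers. The
# records below are invented; the field names are assumptions, not a schema.
from datetime import date
from statistics import median

records = [
    {"submitted": date(2020, 1, 6),  "decided": date(2020, 1, 9),
     "decision": "desk_reject"},
    {"submitted": date(2020, 1, 13), "decided": date(2020, 4, 20),
     "decision": "revise_and_resubmit"},
    {"submitted": date(2020, 2, 3),  "decided": date(2020, 5, 25),
     "decision": "reject"},
]

def median_days_to_first_decision(records, include_desk_rejects=True):
    """Median days from submission to first decision."""
    days = [(r["decided"] - r["submitted"]).days
            for r in records
            if include_desk_rejects or r["decision"] != "desk_reject"]
    return median(days)

# Counting desk rejections makes the journal look faster than the wait
# experienced by papers that actually went out for external review.
print(median_days_to_first_decision(records, include_desk_rejects=True))   # 98
print(median_days_to_first_decision(records, include_desk_rejects=False))  # 105.0
```

A journal that reports either number without saying which definition it used leaves authors guessing, which is precisely what a common standard would avoid.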

Impact on the Editorial Process

Organizational metrics affect behavior, and there is every reason to expect that the routine publication of data on review times and acceptance rates would alter editors’ behavior. There has been speculation about editors’ efforts to “game the system” by enhancing their journals’ Impact Factor. My own research suggests that, however widespread such efforts might be, they have not succeeded in significantly altering journal rankings in sociology, which are quite stable over time.

One can hope for the best, namely that journals with particularly lengthy review processes would respond by taking steps to reduce review times. It is possible that editors could overshoot the mark by reducing the number of reviews required, thus risking editorial quality. In my view, reporting data on review times and acceptance rates on journal websites stands to substantially benefit authors and is likely to enhance the review process. Any unintended consequences for editors’ behavior are likely to be marginal, unsustainable, and manageable side effects.

 


Discussion

10 Thoughts on "Guest Post — Author-Friendly Journal Websites"

On 1st June we (a group of early career researchers) are launching conpher.com – we have collated over a million article acceptance and publication times from NIH PubMed for almost 4000 journals. conpher is a platform created by researchers for researchers to submit their personal journal publishing experiences, including acceptance and publication times. Our study so far has found that, of the 1000 journals in our sample that advertise an acceptance or publication time, four out of five are slower than advertised, by a margin approaching three months. conpher wants to work with publishing academics and publishers alike to help improve the data available and, at the same time, improve journal publishing experiences for all.

As noted in the post above, I think one of the big problems is that there are no standards for reporting this information. Does “time to first decision” include desk rejections (which often take only a few hours or days) or are they excluded and only manuscripts sent out for peer review count toward the number? Do the numbers include the time that a manuscript sits with an author for revision (which can take months) or do they just measure the time when the article is in the journal’s hands? Without a standard set of measurements and rules, it’s hard to say much.

Just for readers’ information, while many journal homepages do not include acceptance rates or times to review/publication, this information is collated and made available on the Cabells Journal Whitelist (https://www2.cabells.com/about-verified) where publishers make the information available to Cabells. The Whitelist currently has just over 11,000 journals indexed across most disciplines in science and social science, with the data on acceptance and review times updated annually.

Even terminology isn’t standardized. At the first society where I managed peer review, “Revise and Resubmit” was definitely not a positive decision; it was essentially a rejection and was considered a “final disposition” by the peer review tracking software and the statistics. The society had five tiers of decisions (from best to worst): Accept, Minor Revision, Major Revision, Revise and Resubmit, Reject.

Minor Revisions and Major Revisions were both considered a continuation of the original submission under the same manuscript record in the peer review system. But a Revise and Resubmit would have to be submitted from scratch under a new manuscript record. These were usually given out when further experimentation was required, probably motivated partially to avoid long revision times counting for the time-to-final-decision stats and partially by the reality that many authors would probably rather try for a lower-tier journal than do extensive additional experimentation. (At the time, the peer review management system didn’t have a way to dispose of abandoned major revisions w/o messing up the stats, so the graveyard of pending but abandoned major revisions was big enough without tons of revise and resubmits!)

Further to Simon’s comment, do be aware that acceptance rates and times to review/publication are not available for every journal listed in Cabells. If anyone knows the percentage of journals that have this information I’d love to know, please post it here.

Asking journal editors or publishers to provide information on their journal’s performance is asking bakers to rate their own bread. They will come up with good-looking averages that are difficult to check and forget to mention the ‘exceptional’ cases of authors who, after waiting ten months, got a two-sentence rejection message with a few lazy review reports.

A good picture of a journal’s performance, as experienced by us as authors, can only be obtained if we share our experiences with each other. At SciRev.org this is possible. If you go to SciRev.org/reviews you can read over 8000 experiences of researchers with the review process, including many in the field of sociology.

Besides information on the duration of the different phases of the review process, SciRev also offers more qualitative information, including an overall rating of the experience. Apart from the length of the process, authors’ other major frustrations are non-responsive editors (who do not react to information requests) and contradictory review reports, with no guidance from the editor on how to handle them. These are important aspects that cannot be expected to be made available on journals’ websites.

I think the idea here is to set up standards and have reporting automated, rather than letting editors or publishers report on whatever they feel like reporting on.

I never quite know how much to trust review sites, where someone has to have a reason to make an effort to go to the site and post a public review of a product or service. I’ve only posted one or two in my life, when I was either so furious about a bad experience I wanted to warn others, or so happy with a good experience I wanted to pay it back. That leaves an awful lot of empty ground in the middle for unreported experiences that are just okay. I’m also reminded of this old post about ranking systems:
https://scholarlykitchen.sspnet.org/2009/11/16/how-meaningful-are-user-ratings-this-article-4-5-stars/

I agree that review sites have a tendency towards polarization. Review experiences posted on SciRev.org are often very positive or negative. But this is exactly what we want them to be. For authors it is more important to know the extremes than the center. We want to know the journals that have organized their editorial processes better than average and those that perform worse than average.

As Jerry says, a speedy review process is extremely important for early career researchers. Being able to distinguish between journals with better or worse organized editorial processes is therefore very valuable for them. Of the review experiences reported at SciRev, about 10 percent mention a first review time of more than six months. For the social sciences, humanities, and mathematics the share is substantially higher (see https://scirev.org/statistics/first-round/). To prevent bad experiences, it is important to know where these long durations are concentrated.


As an editorial assistant for two academic journals, I always acknowledge receipt of the manuscript, and give the approximate timeline of when the peer review will be returned – usually 8-12 weeks (sometimes longer, especially now due to the pandemic). I’m always happy to answer any questions the authors have.

We do not use manuscript submission software. We looked into it, but found it was far too complicated for our needs.

Just a note to say, though, that not all journals are alike. We’ve been publishing ours for 40 years and have a great reputation. We value our authors and are cognizant of their time and effort.
