Starting a new journal is fraught with risk, expense, and hard work. The American Society of Civil Engineers (ASCE) launched two new journals this past year, and the hardest part, without a doubt, is soliciting content. I warn editors and committees wanting to start new journals that it is a rough slog that demands constant attention to solicitation for years.

At a recent inaugural editorial board meeting for one of the journals, the editor sent a survey to all of the board members. One question asked them to identify the greatest challenge for the journal. Every single person said that the lack of an Impact Factor and/or the lack of indexing in Web of Science (WOS) is a huge hurdle. The board discussed how they needed to focus solicitation efforts on more senior faculty, as they are less likely to care whether a journal has an Impact Factor.

[Image: an unfinished bridge. Caption: “Close enough.”]

These conversations happen all the time and are then followed by questions to the publication staff about the process for “getting” an Impact Factor. We explain that this is a three-year lagging indicator and that an evaluation process at Thomson Reuters has to happen first. The bottom line: we are looking at three years minimum, and considerably longer for a niche journal. Given citation patterns in civil engineering, it is more likely to be 5-7 years. The editorial board is deflated. You can tell that they are trying to figure out how many tenured faculty members they know in order to convince them to submit papers to this unknown publication.

On the heels of that meeting just two weeks ago, I was surprised to see that eLife was granted an Impact Factor this year. eLife only started publishing issues in late 2012; how could it have a 2013 Impact Factor? Helpful folks on Twitter pointed me to other journals that had announced “partial” Impact Factors. Cell Reports also received an Impact Factor with only one year of publication under its belt.

By way of reminder, the 2013 Impact Factor is calculated as follows: the number of citations in 2013 to papers published in 2011 and 2012, divided by the total number of papers published in 2011 and 2012. As noted above, none of the journals mentioned had publications in 2011.
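Written out (my rendering of the standard two-year calculation, not official Thomson Reuters notation), that is:

```latex
\mathrm{IF}_{2013} \;=\; \frac{C_{2013}(2011) + C_{2013}(2012)}{N_{2011} + N_{2012}}
```

where \(C_{2013}(y)\) is the number of citations made in 2013 to items published in year \(y\), and \(N_y\) is the number of items published in year \(y\).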

I reached out to Thomson Reuters to ask about this and it turns out that this is not all that uncommon. Patricia Brennan, vice president of Product and Market Strategy at Thomson Reuters, explained the process for inclusion in WOS, the starting point for Impact Factor evaluation.

“Thomson Reuters editors thoroughly evaluate each journal with a firm and rigorous set of criteria that looks at a journal’s timeliness, compliance with international editorial conventions, language (bibliographic information, at a minimum must be published in English), author and editorial board, international diversity, citation analysis and editorial content,” Brennan wrote via email correspondence. She further explained that the journal must publish either 15 papers over 9 months or 20 papers a year.

eLife launched with its first issue in October of 2012. But in those last three months of 2012, it published 46 papers, automatically qualifying it for inclusion in WOS. Cell Reports began publishing in January of 2012 and published well over 20 papers that year. Both titles were highly promoted and anticipated. It makes sense that the editors at Thomson Reuters would want to ensure that the journals were included in WOS as soon as possible.

Brennan explained that citation patterns for new journals are certainly different from those of established journals. When Thomson Reuters sees citations in the first year of a journal, that shows promise, and the journal is deemed suitable for an Impact Factor.

Remember, though, that the Impact Factor for 2013 includes citations to papers published in 2011, and the number of papers published in 2011 is in the denominator.

“A journal receives its first impact factor when Thomson Reuters has three complete and known years of source item data,” Brennan wrote. So again, how does a journal with no publications in 2011 get an Impact Factor in 2013, with only one year of publications?

“In instances of new titles—whether the result of a 2012 title change or because a journal was added to coverage with 2012 volume 1, issue – it will be listed with a Journal Impact Factor in its second year, as the known count of scholarly items is zero for 2011. In this type of case, the Journal Impact Factor calculation comprises citations to just one year of scholarly items. The known zero value is displayed in the Journal Impact Factor calculation,” wrote Brennan. I had to read this a few times. The known value for 2011 is zero, and therefore the journal qualifies for an Impact Factor.
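If I am reading that correctly, for a journal first indexed with volume 1 in 2012, the 2011 count in the denominator is a known zero, and the calculation collapses to a single year (again my notation, not Thomson Reuters’):

```latex
\mathrm{IF}_{2013} \;=\; \frac{C_{2013}(2011) + C_{2013}(2012)}{0 + N_{2012}}
```

In principle \(C_{2013}(2011)\) should also be zero, since there were no 2011 papers to cite, though as discussed below, miscited years do creep into the numerator.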

If a journal is not indexed in the first year but rather the second year (say, volume 2 indexed in 2012), Thomson Reuters does not know how many papers were published in 2011 because they were not indexing the journal in 2011. This produces a null value, and that journal does not qualify for an Impact Factor, Brennan explained.
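Here is a minimal sketch of that distinction as I understand it (hypothetical function and data, not actual Thomson Reuters code): a known zero still permits the calculation, while an unknown, null year blocks it.

```python
def qualifies_for_impact_factor(items_by_year, if_year=2013):
    """Return True if an Impact Factor can be computed for if_year.

    Hypothetical sketch of the eligibility rule described by Brennan.
    items_by_year maps a publication year to a count of scholarly items:
    0 means "indexed, known to have published nothing," while None (or a
    missing year) means "not indexed that year, count unknown" (null).
    """
    window = (if_year - 2, if_year - 1)  # e.g., 2011 and 2012 for a 2013 IF
    counts = [items_by_year.get(year) for year in window]
    # An unknown (null) year blocks the calculation entirely.
    if any(count is None for count in counts):
        return False
    # Known zeros are fine, as long as something was published to cite.
    return sum(counts) > 0

# First indexed with volume 1 in 2012: 2011 is a known zero.
print(qualifies_for_impact_factor({2011: 0, 2012: 46}))  # True
# Indexed starting with volume 2 in 2012: the 2011 count is unknown.
print(qualifies_for_impact_factor({2012: 30}))           # False
```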

“Basically it boils down to when we began covering a journal, and the difference between a known zero value vs. an unknown null value,” Brennan wrote. Oddly, both eLife and Cell Reports are credited with citations to papers published from 2009 to 2011. I asked Brennan about this, and she said that they are dependent on the metadata provided with the citation. If an author puts the wrong year in the reference, the journal gets credit for that citation. In my past conversations with Thomson Reuters regarding citations, I was under the impression that there was validation of the citations, but these erroneous citations show the value of taking the time to dig through the citation data used for your journal and making sure it is accurate.

A journal does not need to request an Impact Factor in order to get an Impact Factor. In fact, eLife has declared over and over again that it does not care about Impact Factors and that it will not promote an Impact Factor. They may have requested to be included in WOS. This is clearly not a case of eLife or Cell Reports trying to get the upper hand. In fact, Brennan makes it clear that this is all perfectly normal.

The problem is that certain new journals clearly have an advantage over others. eLife had a lot of start-up money for promotion, and a very influential editor and board. There are, as yet, no fees for publishing in the journal. This particular title would have a relatively easy time of ensuring that the first year of publication, October through December of 2012, included influential and highly citable authors in a very broad yet elite journal.

But the second year of publication for a new journal can look very different from the first: maybe not for eLife or Cell Reports, but for less sensational though still high-end journals. The first year may be chock full of papers written by the friends of editorial board members. This is how journals start! You have to solicit the heck out of your contacts, and as discussed at my recent editorial board meeting, you need to go after more influential authors, tenured authors, who aren't fazed by the lack of an Impact Factor. By the second year, the board may rest on its laurels a bit, or may have run out of friends to solicit. Year two is when you hope to see spontaneous submissions start rolling in. Year two for an OA journal may also be the year that APCs are consistently charged, following a first year of gratis publication.

Why does year two matter? Because the Impact Factor is supposed to be based on two years of publication and citation data, and year two may actually drag down the average citations per paper achieved by the big names solicited for year one.
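To put invented numbers on that: suppose 40 solicited year-one papers draw 200 citations, while 60 year-two papers draw only 90.

```latex
\underbrace{\frac{200}{40}}_{\text{year one alone}} = 5.0
\qquad\text{vs.}\qquad
\underbrace{\frac{200 + 90}{40 + 60}}_{\text{both years}} = 2.9
```

The same journal looks markedly worse once year two enters the denominator.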

If you look at the Journal Citation Reports listings for eLife and Cell Reports, you will see that there are zero publications in 2011, but that is the only indication that the Impact Factor is actually based on only one year. The term “partial” does not seem to be an official Thomson Reuters designation, but perhaps there should be some qualifier for Impact Factors based on only one year’s worth of data.

As for new journals hitting the market, my advice would be to collect at least 20 papers for publication in the first issue or two and promote them a lot to encourage quick citation. Get papers published ahead of print as soon as possible. Citations to those early versions online are counted and held until the paper is assigned to an issue. You can start collecting citations well before the first issues are posted.

While your efforts may be focused on solicitations at this point, equal effort should probably go toward increasing citations in that first year of publication. Subscription journals may want to consider giving away access to the content in order to increase visibility. Coach the authors of these first papers to use social media and to send email alerts to colleagues.

Perhaps with enough effort and solid content, a new journal could cut a little time off the long wait for an Impact Factor, though it’s still unclear exactly how one makes the cut. Thomson Reuters claims to understand that different fields have different needs and citation patterns. While our niche civil engineering journals do not compete with eLife or Cell Reports, it is a bit frustrating to see elite journals benefit from a system that does not equally benefit the average journal. Given Thomson Reuters’ new commitment to transparency, some clear guidance on how to jump the line would likely be greatly appreciated.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

50 Thoughts on "The Mystery of a “Partial” Impact Factor"

Or, you know, we could all stop obsessing about an opaque, irreproducible magic number assigned without consultation or justification by a self-appointed for-profit corporation.

(I know this is not a novel observation, but it is evidently one that needs to be made repeatedly.)

I would encourage you all in the academic community to keep talking about the misuse of this one metric. A more balanced approach from a wide variety of sources should be part of a journal’s “profile.” But for now, our authors care about Impact Factor and as such, we need to pay attention to it. For what it’s worth, ASCE doesn’t promote Impact Factors either.

I didn’t know that about ASCE — good for them!

Maybe someone should maintain a list of publishers that don’t publicise their impact factors. At the moment I’m only aware of PLOS, eLIFE and ASCE, but hopefully there are more!

It’s a tough call to make: authors and readers want this information. If you go to Google and type in the name of pretty much any journal, one of the first auto-suggestions is always the name of that journal followed by “Impact Factor.” We get a big chunk of traffic every day at The Scholarly Kitchen from people searching for “PLOS ONE Impact Factor”.

So clearly there’s great demand for the information. Deliberately taking a stand and not offering up that information does make a clear statement, but it also opens up the bigger question of whether journals should be serving the needs of researchers or dictating to researchers what those needs should be. It’s a bit of a slippery slope.

Well, people want a lot of things that aren’t good for them. If I let them, my children would live entirely on chocolate. At the risk of seeming paternal, it seems that pretty much everyone agrees IFs are way overused and abused(*), and everyone seems to be just waiting for everyone else to stop it. In that situation, you need first movers (and second, and third …). I like to give appropriate credit to publishers who are prepared to lead on this, even when it may be to their own short-term detriment.

(*) At least, I’ve never heard anyone saying “No, actually the impact factors of the journals he’s published in are a good way to measure the quality of someone’s work”.

But do you really want Elsevier, for example, taking a parental role over your career and telling you what you are and are not supposed to do?

It’s hard for a journal to say to an author, that thing that your thesis committee, tenure committee, hiring committee and your funder says you must do in order to advance your career, we think you shouldn’t do that. So please put your career on hold for our principles.

Where it seems more appropriate at least is in some of the cases you talked about above–when the journal is owned by and run by the community itself, such as the ASCE journals. This could be part of a community-driven attempt to change their own working conditions. Or for eLife, you’re talking about funding agencies making a clear statement that they don’t consider the IF important in their decision making process.

And for what it’s worth, at every single editorial board meeting I’ve ever attended, we spend 10 minutes bitching about the IF, and how terribly it’s misused. Then we spend 20 minutes on strategy to improve our journal’s IF because it is what the system dictates that we must do.

“But do you really want Elsevier, for example, taking a parental role over your career and telling you what you are and are not supposed to do?”

No 🙂

“It’s hard for a journal to say to an author, that thing that your thesis committee, tenure committee, hiring committee and your funder says you must do in order to advance your career, we think you shouldn’t do that. So please put your career on hold for our principles.”

That’s not quite what I’m advocating. Just as there is plenty of blame to go around for the present situation, so there is plenty of opportunity to go round for fixing it. So rather than having researchers standing off and blaming publishers, while publishers blame researchers, administrators, funders, etc., what I want to see (and, thankfully, what we are increasingly seeing) is some members of each group working to break the deadly embrace. That means that every publisher, every researcher, every funder and every administrator who is still in thrall to the Impact Factor has trailblazers within their own field to look to; one example of each being eLIFE, Mike Eisen, Wellcome and the REF.

Who says IF is overused and abused? The users? Or, those denied inclusion? Or, just contrarians?

If the IF is used, it is because people find it useful!

Your statements are specious at best, and unfounded without data to support them. In short, yours is but one man’s opinion. I, before retirement, found them most useful.

I say it’s overused and abused. It’s a pretty good metric for doing what it was designed to do: compare one journal in a field to another. It’s a terrible metric for trying to determine the value of one researcher’s work or the value of one particular paper.

It has its uses but it has come to be used for things for which it was not intended, and it does not perform well in those areas.

“Who says IF is overused and abused? The users? Or, those denied inclusion? Or, just contrarians?”

The Wellcome Trust, the UK’s Research Excellence Framework, eLIFE, PLOS, David Crotty, the AAAS, the European Mathematical Society, the Howard Hughes Medical Institute, PNAS … I could go on.

I am amazed that you are sceptical about this. It’s not really controversial.

Mike, the problem is not the IF, it is how people use it! The IF is simply a number, a measurement. To complain about it is to complain that 2+2=4! I have not seen David Crotty complain about the IF; he has said that many misuse it, but that is different from what you are implying.

No, actually the impact factors of the journals he’s published in are a good way to measure the quality of someone’s work. They are not the only way, but they are valuable. This is why they are so widely used. Contrary to popular opinion, people are not stupid.

I tend to think of the journal’s brand reputation as being more valuable than its straight up Impact Factor. An average neuroscience journal will often have a much higher IF than a top psychology journal due to citation culture differences in each field. Does that make the researcher who publishes average work in the neuroscience journal a better researcher than the psychologist doing top level work?

No, it just makes it easier for him to get tenure. As I have said before, the problems with IF are well known, but they do not make using the IF as part of decision making a mistake.

How is it not a mistake to create an easier path to tenure for a mediocre researcher over an excellent one?

Actually I was being facetious. But if the tenure committee does not know about the disciplinary difference you describe then that is the mistake, not using the IF.

Not as I understand the concept of abuse. Measuring two things the same way in ignorance of their important differences is not an abuse of the measuring device. It is just a mistake. A cup of sifted flour and a cup of unsifted flour do not contain the same amount of flour. It is not an abuse of the cup not to know this, even if it ruins the cake.

In the case of the IF the real question is whether important (and correctable) mistakes are in fact being made. It is far from clear that they are. Wishing everyone was smarter is not generally a solution to any problem.

The empirical question is whether this easier tenure path actually occurs, and if so how often? If it seldom occurs then it is not a problem. The claim seems to be that tenure and promotion committees do not know what they are doing where IF is concerned. That is a very strong claim which I see little evidence for. After all, these are highly intelligent people making very serious decisions and they know a lot about the IF because they publish.

I think knowledge of the IF varies greatly. I have to explain the IF at nearly every single editorial board meeting I attend, what it is, how it is determined, what years it covers, all the basics. Some already seem to know this well, to others it is a revelation. And this is from people who are actively involved with a journal.

Yes, but we are talking specifically about tenure and promotion committees, which it is claimed are making great use of IF numbers, the use of which is further claimed to be abusive precisely because they do not understand what they are doing. The scientific questions are how much this great use actually exists and, if it does, how little the players know about what they are doing, such that it is abusive. (Editorial boards are not using IFs to make life-changing judgments on individuals.)

These are very strong claims, which I have not seen supported by appropriately strong evidence. This whole IF hate thing has the surreal aspect of advocacy to it. In fact it seems to go hand in hand with hating publishers. Maybe it is just that no one likes being judged, but being judged is the essence of both publishing and promotion. In any case banning the use of information is not the solution.

What “everyone agrees on” is that Academia has misused IF and created this problem. Continuing to blame publishers is inaccurate and misplaced. Efforts such as DORA have the right idea IMO.

Adam, I do not think everyone agrees on anything subjective. Many, possibly.

Indeed! The enemy is us (academics): Genetics. 2013 Aug;194(4):791-2. doi: 10.1534/genetics.113.153486; PMID: 23723423

“Continuing to blame publishers is inaccurate and misplaced.”

As I find myself having to say a lot recently, there is plenty of blame to go around. The idea that academics’ own culpability should give publishers a free pass for theirs (or vice versa) is quite insupportable. Funders, administrators and others are also not blameless.

Journals who do promote their IF do so because they think it’s something their community is interested in and/or because it might benefit the journal in some way. Once the academic community decides something, in this case IF, no longer has value it will disappear or at the very least not matter. Is this “insupportable?”

Adam, I doubt that the people who use IF agree that they are misusing it. But perhaps your quote marks mean that you were not claiming that everyone agrees. (Scare quotes change meanings.) DORA is part of a social movement against a common practice. Much follows from this. It is seldom, if ever, the case that everyone knows that common practice is wrong.

Thanks, Angela. I thought I knew a lot about IF, etc., but much of this is new info for me. I’d like to hear whether other less “prestigious” or well-funded journals followed the same path but were denied entry. Seems to me that if TR has a clear policy on this, it should cover all journals, not just a select few.

The process for selecting journals to include in Web of Science is entirely subjective. Whether that is fair could be debated, but in reality, TR sells this product for a lot of money, and curation is sort of the point. I should mention that it is not at all easy to get indexed in Scopus or any of the sub-databases. I can’t even get Elsevier to tell me which of our journals are included in their Engineering Village database. I can also say that getting answers from TR can be excruciating when you aren’t going through their PR person. :)

The subjective nature of what is in and what is out of a proprietary database should not be that big of a deal. The fact that entry into that database is tied to an Impact Factor, coupled with the fact that the Impact Factor has been given mystical powers, is where the unease comes from.

The authority of the Impact Factor is based upon accurate reporting. For eLife, their Impact Factor calculation included a citation to a 2011 article (eLife started publication in 2012). While Thomson Reuters cannot prevent authors from making citation errors, they could include simple, automated error checking when calculating Impact Factors to prevent erroneous citations from creeping into their reports. While one bad citation will have little effect on eLife’s Impact Factor, it sends a poor message to those who view this source as trustworthy and authoritative.
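Such a check could be quite simple; a minimal sketch (hypothetical function and data, not anything Thomson Reuters has described) would just drop any citation whose cited year predates the journal’s first issue:

```python
def plausible_citations(citations, first_pub_year):
    """Drop citations whose cited year predates the journal's first issue.

    Hypothetical sanity check, not TR's actual process. citations is a
    list of (journal, cited_year) pairs parsed from reference lists;
    first_pub_year is the year the cited journal began publishing.
    """
    return [(journal, year) for journal, year in citations
            if year >= first_pub_year]

# eLife began publishing in 2012, so a reference to a "2011" eLife
# article is necessarily a metadata error and should not be counted.
refs = [("eLife", 2011), ("eLife", 2012), ("eLife", 2013)]
print(plausible_citations(refs, first_pub_year=2012))
# [('eLife', 2012), ('eLife', 2013)]
```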

That’s true. TR wouldn’t discuss individual journals with me. I did point out the errors in eLife and Cell Reports (the latter credited with a citation to a 2009 paper despite its first issue appearing in 2012). I don’t know what the correction process is. The errors stand out in the case of brand new journals; they would not be easy to detect for existing journals. It will be interesting to see if a correction is made.

Angela, it sounds like your two new journals (congratulations) have a shot at a 2015 IF, or am I reading this wrong?

Not likely. One journal is just starting to publish content late in 2014. It seems unlikely at this point that we will publish 20 papers in 2014. So, if TR decided to index the journal in 2015, they would say that the first papers published in 2014 are “unknown” and therefore would not count. For this new title of ours, the first chance at an IF would be 2017.

So if you hold off first publishing until early 2015 then you might get an IF in 2016, a year earlier?

That appears to be the case, but it would be difficult for a niche journal to achieve, regardless. The editors at TR would need to deem the journal worthy of inclusion in its first year. In my experience, it takes YEARS for a journal to be evaluated and included. The evaluation periods are cyclical. So we get information such as “the journal is being evaluated and may be considered for 2014.” Then 2014 comes and it’s not included. We then need to wait another 2-3 years (I can’t remember which) to be considered again. We have a journal that began publishing in 2001 and has just gotten its first Impact Factor with the 2013 release. TR told me in the interview for this article that the number of citations is not a factor, but most of our journals that do not have an IF are denied one due to the low number of citations.

I do wonder if there’s some subject-specific activity going on here. So much of engineering research seems to me more problem-solving than hypothesis-driven inquiry. As a result, if you’ve solved the problem, that doesn’t open up new doors for the next round of research that would cite your paper; the problem is solved. So my understanding is that, because of this, engineering tends to see low citation levels and low Impact Factors, no matter the quality of the research. It’s a big reason why so many biomedical engineers publish their papers in general biology journals rather than in engineering journals.

That is true. Also the average age of a citation is 5 years for Civil Engineering titles. The five-year impact factors are much higher for us than the regular IF.

I hardly think of ASCE journals as niche. ASCE’s name should certify them as valid.

But I guess the point is that you have uncovered a really bad situation. It calls out for reform. I think we see some of the same gyrations at PubMed Central. Who is out, who is in, and when? No doubt we will see these issues with the rest of the Federal agency public access programs as well. When arbitrary decisions like this have deep financial implications, there should be legal recourse to challenge goofy practices.

Thanks. Some of ASCE’s journals are quite broad. You could not call the Journal of Structural Engineering a niche journal. But others are very specific. Most of the new journals proposed and started are very specific as the broad categories are well covered.

I am rather confused. If an association, society, or commercial publisher does not mention the IF, then what difference does it make if it has one or not?

If one thinks the IF is a bogus measurement, why not say: “We think the IF is a bogus form of measurement, one misused by the scientific/scholarly community, and therefore we do not seek an IF; if one is given, we will request that the journal be removed from the IF listings.”

One could further state that instead of the IF we propose the following…..

In this manner, you can slay the IF and present a new form of measurement.

Part of the problem with this is that having an Impact Factor comes part and parcel with being indexed by Web of Science, which provides incredibly valuable tools for your own use as well as for the use of the research community. As far as I know, you can’t have one without the other.

David: What you say is so. I guess my complaint with those who rail against the IF is that they have no response to it. The detractors are rather like Don Quixote. If one has a fight to pick with TR, the IF, and the Web of Science, then present a viable, acceptable alternative; don’t present what one perceives as a problem and then say, well, someone should do something about this!

As a publisher, I always looked at the IF as a measure of the acceptability of what I was publishing by those who read the journal. My question was: how can I attract papers that others find useful? My discussions with editors were from that perspective, and not from the perspective of “how can your colleagues be so stupid as to accept the impact factor as something important!”

Once again a Kitchen post on a specific IF problem has drifted into a general attack on the IF. This is characteristic of social movements. Who wants to merely fix a problem when you think you can change the world, as unrealistic as that thought may be.

Why is it unrealistic to think you can change the world?

Every time the world has changed, it’s because someone changed it. Lucky they were unrealistic enough to think they could.

George Bernard Shaw said it best: “The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

It is not always unrealistic, nor did I claim that, just sometimes. I think this is one of those times. The fact that we cannot discuss a specific problem with the IF, because of the inevitable anti-IF uproar, is itself a problem in my view. If you want to trade maxims I offer “the perfect is the enemy of the good.” (Author unknown.)

I’m gonna go with Groucho Marx: “Time flies like an arrow. Fruit flies like a banana.”
