Thomson Reuters launched a new platform called InCites last week. The platform combines Journal Citation Reports and Journal Impact Factor information with the Essential Science Indicators, which tracks trends and highly cited authors. The platform allows for some interesting analyses, including collaborations between individuals and institutions. The dashboard allows for a personalized experience, with saved reports and “tiles” containing data points and visualizations of interest. Thomson Reuters has also included a full listing of the “citable” documents for each journal. The denominator of the famous Impact Factor equation has always been a bit of a mystery to users, and the full list adds much-needed transparency.


I interviewed Patricia Brennan, vice president of Product and Market Strategy at Thomson Reuters, about the new platform and changes made in response to user feedback and the San Francisco Declaration on Research Assessment (DORA).

For those who have not seen it, what is InCites and what does it offer that the Journal Citation Reports site does not?

InCites is a platform for research analytics. At its core, it provides an array of article-level citation indicators based on the Web of Science, allowing a user to view performance at an organizational, regional, individual, or journal level. It supports the filtering of data with options such as time, organization type, collaborations, and document type (review article or book). Users can create their own reports, tiles, and dashboards, which can then be saved or shared with others. As an analytics platform, InCites contains focused content sets, including the Journal Citation Reports (JCR) and Essential Science Indicators (ESI), which covers the most highly cited literature by region, journal, and organization.

JCR remains focused on journals, specifically the aggregation of journal information at a specific point in time and according to specific parameters. In terms of what InCites provides beyond the JCR: as a research analytics platform it offers a comprehensive view of research performance by assessing multiple factors, while the JCR—just one of the modules within InCites—only looks at the journal level. InCites users can now go beyond the global influence of a specific journal with new visualizations and analysis of Journal Impact Factors that directly link to article-level data for open, transparent analysis. Users can also view data on top research and researchers in ESI. It also enables one to explore more entities, including proceedings, books, people, and organizations. Its metrics are more dynamic, as they are first calculated at the article level and refreshed with much greater frequency than the annual JCR compilation.

How would someone get access to the InCites platform?

One can get a web-based product subscription from Thomson Reuters, or, in some cases, organizations and partners have access to an API that provides specific feeds of Web of Science articles.

The press release that accompanied the launch of InCites mentions “benchmarking articles, journals, institutions, and people.” This seems quite a departure from the traditional JCR, which focuses on journal-level metrics. Are you introducing new metrics with this platform?

This is not really a departure, though we are pleased to be able to provide this breadth and depth of analysis in one integrated environment. We are widely known for the JCR, but we’ve actually provided normalized article-level datasets for over 30 years. This integration allows us to unify this data at one single access point. We are introducing some newer metrics around international and industry collaborations, percentage of documents in the top 1 and 10 percent by citation, average percentile, and normalized citation. Some of these metrics were part of the first-generation InCites, but this will be the first time they are connected to our other research analytics solutions on one platform.
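As a rough illustration of what a normalized citation indicator involves (comparing a paper's citations to a baseline for papers of the same field and year), here is a minimal sketch. The paper records, field names, and baselines below are invented, and this is not Thomson Reuters' actual implementation.

```python
# Minimal sketch of an article-level normalized citation indicator.
# All data below is hypothetical; a real system would use curated
# field baselines drawn from the underlying citation index.
from collections import defaultdict

papers = [
    # (paper_id, field, publication_year, citation_count)
    ("p1", "Oncology", 2012, 40),
    ("p2", "Oncology", 2012, 4),
    ("p3", "Materials Science", 2013, 12),
]

# Expected (mean) citations for papers of the same field and year.
baselines = defaultdict(lambda: 1.0, {
    ("Oncology", 2012): 20.0,
    ("Materials Science", 2013): 6.0,
})

for pid, field, year, cites in papers:
    # A value above 1.0 means the paper is cited more than the average
    # paper in its field and year; below 1.0 means less.
    normalized = cites / baselines[(field, year)]
    print(f"{pid}: normalized citation impact = {normalized:.2f}")
```

Percentile-based indicators, such as the share of documents in the top 1 or 10 percent, are typically derived in a similar spirit, by ranking each paper's citation count against other papers from the same field and year.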

Your release mentions that a third party is supplying media monitoring and integration with ranking providers. Can you be more specific about your partners, what they are supplying, and how you are using that data?

There are really two different types of partnerships, though both are related to a fundamental principle of our InCites platform: maintaining collaboration with leading organizations within the scholarly community. We are currently partnering with a number of data providers. Now, with the unified platform, we are beginning to support our partners by distributing their content to our customers directly through InCites.

By integrating data from ranking providers we are able to extend existing partnerships. For example, Thomson Reuters’ data is used by a number of partners who create rankings of universities. We now include a filter so that one can look at institutional data based on how institutions appear in a ranking. We have also included a filter based on DOAJ open access status.

For media monitoring, we are tracking activity and information outside of the scholarly published literature. These feeds look at what is trending in social media, traditional media, and the open web, providing a view into technology hotspot topics and activity.

Mid to late June is the typical release time for Journal Impact Factors. This year, the information was delayed by a month. What was the reason for the delay?

This past year has been one of significant change for Thomson Reuters. We have implemented new systems and new platforms across all our products, from content management and metrics production to our product interface and user experience. In some cases we have kept classic and next-generation products as we transition functionality to the new environments. Many of our customers have workflows built around our existing products, so we will be providing both versions of the JCR on InCites and the model through Web of Science for a while. Because of these infrastructure changes and the move to publishing in new environments, we took a little longer this year to compile and review data.

Some publishers are reporting problems with the information in InCites—missing article counts and other anomalies. It appears that there may be some missing information in the legacy JCR platform as well. I’ve seen a lot of corrected Impact Factors posted by TR on Twitter this week. Are there widespread issues we need to be aware of? Should we expect more corrections?

The Twitter posts you are referring to are directly from our Notices file within the JCR. We post JCR additions and adjustments on a weekly basis through September. However, there are no widespread issues. We strive for accuracy and do extensive data review and validation in our data processing.

I don’t need to tell you that Journal Impact Factors are quite controversial. Some argue about the misuse of the Journal Impact Factor in assessing the quality of individual papers and researchers. TR has always provided disclaimers regarding how the JIF should be used. But other concerns have been raised about the fairness of the metric. The San Francisco Declaration on Research Assessment, or DORA, laid out several points on which they take issue with the metric. They recently posted a letter sent to TR in April of 2013, which they claim went unanswered. TR has promised some transparency regarding the Impact Factor calculations. Can you respond to the three main points from the DORA letter here?

That’s correct, and in addition to the document you reference above, we also have guidelines. Nonetheless, we know that this education is ongoing and is something we take seriously. With regard to transparency, this year we introduced a link in the next-generation InCites that directly connects the JCR to the citable item count from Web of Science. Our goal with this step is to reduce the mystery about the path from the source content to the calculated metric.

We did receive a letter from DORA in 2013 and have posted our response. We have updated it based on recent inquiries and changes. I hope that we can have a follow-up discussion with the DORA coalition soon to get feedback on the changes we have made and are planning as we evolve InCites and the JCR.

Some view Impact Factors as an old-fashioned metric, and there is great interest in metrics that track social media and blog activity as well as page ranks and mass media attention. Are there plans to move in new directions that might be more inclusive in assessing impact?

Impact Factor is two things: simple and specific. The simplicity is in the calculation, and the specificity is in the parameters of the JCR data. The basis of the calculation is the number of citations in the current JCR year to items published in the previous two years, divided by the total number of scholarly citable items published in those same two years. InCites contains many more indicators that are fine-tuned and normalized, but the consistency of the Impact Factor provides a great longitudinal dataset that researchers and others in this field find invaluable. In terms of interest in metrics, there are opportunities beyond the realm of scholarly citation through experimentation and creativity. We are looking to eventually go beyond citations with the introduction of the Recorded Future tiles in InCites.
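Expressed as a formula, using a hypothetical 2014 JCR year (the numbers in the worked example are invented purely for illustration):

\[
\text{JIF}_{2014} = \frac{\text{citations in 2014 to items published in 2012 and 2013}}{\text{citable items published in 2012 and 2013}}
\]

So a journal that received 2,000 such citations to 400 citable items published in 2012 and 2013 would have a 2014 Impact Factor of 2,000 / 400 = 5.0.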

The Recorded Future tiles do show promise in that they rank “hot topics” and then create visualizations of mass and social media mentions. I see some Twitter and blog mentions as well as EurekAlert! press releases. This is a neat visualization tool, but will this remain an independent feature of InCites, or will these citations to scholarly work be included in future metrics?

We currently monitor 500,000 web sources for content. These range from mainstream media, blogs, niche publications, and journal abstracts to university press sites, which generate the bulk of the data in the tiles. We also perform social media harvesting of Twitter and Facebook, but, for a variety of reasons, that content is unlikely to appear in the tiles.

At this point we will not mingle scholarly citations with open web citations, simply because it is a case of apples and oranges. They are different venues, and we would be measuring different attributes. Like others within the scholarly community, we are looking at the best ways to measure and report these activities in the context of the core citation indicators.

There are a lot of knock-offs of the Journal Impact Factor. There is now a cottage industry of predatory publishers and services waiting to take advantage of APC-paying authors. Predatory journals are using fake metrics or just fabricating Impact Factors to encourage submissions. Given that the JCR database is only available by subscription, how would someone be able to verify whether Impact Factor information is accurate and real?

The Journal Citation Reports is available by subscription, but our journal coverage listing is not. In addition to our recent upgrades, we now feed JCR and Impact Factor data directly into the Web of Science.

I have been watching publishers and editors tweet about their Impact Factors, and some are quite interesting. Some use half their allotted characters to proclaim great victory at their new numbers and the rest of the tweet to criticize the metric. What do you make of that? Are publishers and editors trying to hedge their bets?

Many years ago, Dr. Eugene Garfield wrote a piece entitled “The Agony and the Ecstasy: The History and Meaning of the Journal Impact Factor” for a print publication, Current Contents.

The venue may have shifted to Twitter, but the sentiment is the same. I can’t speculate about hedging, but I see it more as a recognition of the Impact Factor’s place. The community is more aware today that it is a journal metric for journal evaluation or comparison in the context of journal categories, and the commentary I’ve seen has been more about application and appropriate use.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

14 Thoughts on "Interview with Thomson Reuters: InCites Platform Offers New Analytics and Transparency"

This is not a new concept; there are already various altmetrics available which analyze and consolidate research output, and these have reached well beyond the IF, which is usually released 1–2 years after the publication year, by which time the topics are outdated. These altmetrics are setting trends for new-era expectations by tracking every resource.

To clarify, what InCites provides cannot be described as altmetrics. In fact, they make it clear in the interview above that they are not assigning any metric based on the “news feed” feature. The Recorded Future module of the platform is a whole separate “interesting to see” feature, but it really adds nothing to the current metrics.

The analytics on collaboration are impressive mostly due to the fact that TR seems to have spent a lot of time disambiguating the data. This makes perfect sense for them given that they have another arm of the company that sells lists of authors for marketing purposes. That said, the data pool is limited to the content that is indexed in Web of Science, which is incomplete.

This is a great interview with a lot of helpful information. I have a question about the “journal coverage listing” that Ms. Brennan mentions. My understanding and experience is that this list is not updated throughout the year as Thomson Reuters drops coverage of a journal.

I am aware of the Thomson Reuters Master List but have found this database to be unreliable and difficult to search.

Scholars need an easy way to know when a journal has lost its impact factor. I guess one can search the journal coverage listing to see if a journal is there and then check the master list to see if it is NOT there, and this may indicate whether a journal has been dropped from coverage since the last time the annual journal coverage listing was released.

I think it would be better if TR made available an updated list, throughout the year, of journals that had been dropped from coverage, and therefore from the impact factor. This would help researchers avoid submitting papers to journals that they believe have impact factors but really do not.

One example is Marsland Press’ Life Sciences Journal, which I blogged about recently. It lost its impact factor sometime during the past year, but there was no public notification of this. The coverage list continued to indicate an impact factor. Hundreds of scholars continued to submit papers to the journal thinking it still had an impact factor, when in fact it didn’t.

I also question the data quality in the TR Master list. If you search for the serial (actually monographic series) called CAHIERS DES SCIENCES NATURELLES, ISSN 1420-4223 in the TR Master List, the entry gives a link to a hijacked version of the journal. I reported this over a month ago.

The new products look interesting, but TR needs to improve how it supplies basic information to researchers.

Jeffrey
Thank you for the feedback and points on the journal list. We will take this into consideration and would like to speak in more detail about your suggestion.

I’ll echo Jeffrey’s comment: I’d love to know more from TR about where the line is drawn as far as journals getting put in “time out” for gaming the Impact Factor. For example, from this year’s list, it looks like the lowest self-citation rate for a journal suppressed for too much self-citation was 58%. Is there a specific limit for this type of behavior? I know of at least one journal where 47% of its IF citations are self-cites, which seems to be okay under the rules.

Beall,
I support your request for more transparency from TR. It is very much required for any influential list. I have some suggestions:

1. It would be better if TR started giving a (maybe one-line) reason for every ‘dropped’ and every ‘newly added’ journal, with a time stamp, at this link: http://ip-science.thomsonreuters.com/cgi-bin/jrnlst/jlcovchanges.cgi?PC=MASTER.

It would be a useful guideline for all publishers who want to comply with TR guidelines.

2. It seems TR has an established guideline for journal evaluation, but it appears to be just for public viewing; what happens after a journal applies is a mystery. It is a complete black box. For me, a journal evaluation is like peer review of a manuscript against established editorial guidelines. Authors get to know the review comments (whether the manuscript is accepted, rejected, requires revision, etc.). I think much more transparency is required in these cases. Is it possible for TR to start giving ‘review comments’ on journal evaluations to make the process transparent?

Interesting and good news to hear that TR is providing more transparency on the calculation of IFs, especially the denominator of that metric. Finally we get to see the list of citable documents. I hope that is also retrospective.
However, the drawback is that, as I understand from this interview, you will only get this information if you subscribe to the full InCites suite. Thus TR is profiting from its own failure to provide transparency. It would be only reasonable to provide this information to any institution subscribing to JCR and WoS.

Jeroen, the list of cited items goes back to 2004. You are correct that the database is only available via a subscription to InCites or Web of Science. This information is not available with a subscription to just the Journal Citation Reports.

There are many more problems with TR than meets the eye, all related to the serious lack of transparency. I believe that these new metrics are a sly way for TR to dodge the critics and evolve a new system that continues the “game”. TR knows very well that the IF is gamed around the world, and that scientists from dozens of countries are financially rewarded for their IF scores, in some cases directly, such as in Iran and China. These scientists, in turn, continue to support IF-based journals, and the cycle of somewhat “academic dishonesty” continues. These new metrics are additional noise factors that do nothing for science; they simply add noise. It would be nice if TR would spend some time answering grassroots critics like myself, who have been waiting for 19 months now for a formal response to queries sent formally to TR and then published* to evidence the hypocrisy of this corporation. Critics like me are treated as nothing, but with big movements like DORA, they feel the pinch and then make a big PR show to buffer the negative effects. Unfortunately, I believe that scientists, who join in the game, are the other half of the problem. The IF and all other associated TR-related metrics should be banned from science. While “interesting”, they are nothing more to science than an app is to a smart phone. You may read my critiques here:
* The Thomson Reuters Impact Factor: Critical Questions that Scientists Should be Asking
http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)81-83o.pdf

I checked the full listing of ‘Suppressed Titles’ for 2013 and did not find any that are on Beall’s List. The 39 journals on the 2013 Title Suppression list were published by:

Universities – 7
Springer – 6
Taylor & Francis – 5
Elsevier – 5
Societies – 4
Wiley – 3
BMC – 1
Hindawi – 1
Inderscience – 1
World Scientific – 1
Thieme – 1
Emerald – 1
Sage – 1
Verlag Hans Huber – 1
Vandenhoeck & Ruprecht – 1

Journals are suppressed either for excessive ‘self-citation’ (>59%) or for ‘citation stacking’ (recipient and donor journal pairs, with excessive citing).

How many of the journals on Beall’s list are indexed by WoS and have an Impact Factor? You can’t suppress something you don’t index.

Given the exacting requirements for inclusion in WoS, I would suspect virtually none. I just checked the ‘A’ publishers on Beall’s List and found none listed in the JCR.
