Thomson Reuters launched a new platform called InCites last week. The platform combines Journal Citation Reports and Journal Impact Factor information with the Essential Science Indicators, which track trends and highly cited authors. The platform allows for some interesting analysis, including looking at collaborations between individuals and institutions. The dashboard allows for a personalized experience with saved reports and “tiles” containing data points and visualizations of interest. Thomson Reuters has also included a full listing of the “citable” documents for each journal. The denominator of the famous Impact Factor equation has always been something of a mystery to users, and the full list adds much-needed transparency.
I interviewed Patricia Brennan, vice president of Product and Market Strategy at Thomson Reuters, about the new platform and changes made in response to user feedback and the San Francisco Declaration on Research Assessment (DORA).
For those who have not seen it, what is InCites and what does it offer that the Journal Citation Reports site does not?
InCites is a platform for research analytics. At its core, it provides an array of article-level citation indicators based on the Web of Science, allowing a user to view performance at an organizational, regional, individual or journal level. It supports the filtering of data with options such as time, organization type, collaborations and document type (review article or book). Users can create their own reports, tiles, and dashboards, which can then be saved or shared with others. As an analytics platform InCites contains focused content sets, including the Journal Citation Reports (JCR) and Essential Science Indicators (ESI), which covers the top and highly cited literature by region, journal, and organization.
JCR remains focused on journals and specifically the aggregation of journal information at a specific point in time and to specific parameters. In terms of what InCites provides beyond the JCR, as a research analytics platform it offers a comprehensive view of research performance by assessing multiple factors, while the JCR—just one of the modules within InCites—only looks at the journal level. InCites users can now go beyond the global influence of a specific journal with new visualizations and analysis of Journal Impact Factors that directly link to article level data for open, transparent analysis. Users can also view data on top research/researchers in ESI. It also enables one to explore more entities including proceedings, books, people, and organizations. Its metrics are more dynamic as they are first calculated at the article-level and refreshed with much greater frequency than the annual JCR compilation.
How would someone get access to the InCites platform?
One can get a web-based product subscription from Thomson Reuters, or in some cases organizations and partners have access to an API that provides specific feeds of Web of Science articles.
The press release that accompanied the launch of InCites mentions “benchmarking articles, journals, institutions, and people.” This seems quite a departure for the traditional JCR metrics that focus on journal level metrics. Are you introducing new metrics with this platform?
This is not really a departure, though we are pleased to be able to provide this breadth and depth of analysis in one integrated environment. We are widely known for the JCR, but we’ve actually provided normalized article-level datasets for over 30 years. This integration allows us to unify this data at a single access point. We are introducing some newer metrics around international and industry collaborations, percentage of documents in the top 1 percent and top 10 percent by citation, average percentile, and normalized citation. Some of these metrics were a part of first generation InCites, but this will be the first time they will be connected to our other research analytics solutions on one platform.
Your release mentions that a third-party is supplying media monitoring and integration with ranking providers. Can you be more specific about your partners, what they are supplying, and how you are using that data?
There are really two different types of partnerships, though they are related in that a fundamental principle of our InCites platform is maintaining collaboration with leading organizations within the scholarly community. We are currently partnering with a number of data providers. Now with the unified platform, we are beginning to support our partners by distributing their content to our customers directly through InCites.
By integrating data from ranking providers we are able to extend existing partnerships. For example, Thomson Reuters’ data is used by a number of partners who create rankings of universities. We now include a filter so that one can look at institutional data based on how they appear in a ranking. We have also included a filter based on the DOAJ open access status.
For media monitoring we are tracking activity and information outside of scholarly published literature. These feeds look at what is trending in social media, traditional media, and the open web, providing a view into technology hotspot topics and activity.
Mid to late June is the typical release time for Journal Impact Factors. This year, the information was delayed by a month. What was the reason for the delay?
This past year has been one of significant change for Thomson Reuters. We have implemented new systems and new platforms across all our products, from content management and metrics production to our product interface and user experience. In some cases we have kept classic and next gen products as we transition functionality to the new environments. Many of our customers have workflows built around our existing products, so we will be providing both versions of the JCR on InCites and the model through Web of Science for a while. Because of these infrastructure changes and publishing to new environments, we took a little longer this year to compile and review data.
Some publishers are reporting problems with the information in InCites—missing articles counts and other anomalies. It appears that there may be some missing information in the legacy JCR platform as well. I’ve seen a lot of corrected Impact Factors posted by TR on Twitter this week. Are there widespread issues we need to be aware of? Should we expect more corrections?
The Twitter posts you are referring to are directly from our Notices file within the JCR. We post JCR additions and adjustments on a weekly basis through September. However, there are no widespread issues. We strive for accuracy and do extensive data review and validation in our data processing.
I don’t need to tell you that Journal Impact Factors are quite controversial. Some argue about the misuse of the Journal Impact Factor in assessing the quality of individual papers and researchers. TR has always provided disclaimers regarding how the JIF should be used. But other concerns have been raised about the fairness of the metric. The San Francisco Declaration on Research Assessment, or DORA, laid out several points on which they take issue with the metric. They recently posted a letter sent to TR in April of 2013, which they claim went unanswered. TR has promised some transparency regarding the Impact Factor calculations. Can you respond to the three main points from the DORA letter here?
That’s correct, and in addition to the document you reference above we also have guidelines. Nonetheless, we know that this education is an ongoing effort and one we take seriously. With regard to transparency, this year we introduced a link in next generation InCites that directly connects the JCR to the citable item count from Web of Science. Our goal with this step is to reduce the mystery about the path from the source content to the calculated metric.
We did receive a letter from DORA in 2013 and have posted our response. We have updated it based on recent inquiries and changes. I hope that we can have a follow-up discussion with the DORA coalition soon to get feedback on the changes we have made and are planning as we evolve InCites and JCR.
Some view Impact Factors as an old fashioned metric and there is great interest in metrics that track social media and blog activity as well as page ranks and mass media attention. Are there plans to move in new directions that might be more inclusive in assessing impact?
Impact Factor is two things: simple and specific. The simplicity is in the calculation and the specificity is in the parameters of the JCR data. The base of calculation is the number of citations in the current JCR year to items published in the previous two years, divided by the total number of scholarly citable items published in those same two years. InCites contains many more indicators that are fine-tuned and normalized, but the consistency of the Impact Factor provides a great longitudinal dataset that researchers and others in this field find invaluable. In terms of interest in metrics, there are opportunities beyond the realm of scholarly citation through experimentation and creativity. We are looking to eventually go beyond the citations with the introduction of the Recorded Future tiles in InCites.
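The calculation described above can be sketched in a few lines of code. This is a minimal illustration of the stated two-year formula only; the journal counts used here are hypothetical, not data from JCR.

```python
# Minimal sketch of the two-year Journal Impact Factor calculation
# described above: citations in the current JCR year to items published
# in the previous two years, divided by the number of citable items
# published in those same two years. All numbers below are hypothetical.

def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """Return the two-year Journal Impact Factor."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations received in the current JCR year
# to items published in the previous two years, and 400 citable items
# published across those two years.
jif = journal_impact_factor(1200, 400)
print(jif)  # 3.0
```

The simplicity Brennan describes is visible here: the metric is a single ratio, and any change to the denominator (the citable item count now exposed in InCites) changes the result directly.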
The Recorded Future tiles do show promise in that they rank “hot topics” and then create visualizations of mass and social media mentions. I see some Twitter and blog mentions as well as EurekAlert! press releases. This is a neat visualization tool, but will this remain an independent feature of InCites, or will these citations to scholarly work be included in future metrics?
We currently monitor 500,000 web sources for content. These range from mainstream media, blogs, niche publications, journal abstracts to university press sites, which generate the bulk of data in the tiles. We also perform social media harvesting of Twitter and Facebook, but for a variety of reasons, that content is unlikely to appear in the tiles.
At this point we will not mingle scholarly citations with the open web citations simply because it is a case of apples and oranges. They are different venues and we would be measuring different attributes. Like others within the scholarly community we are looking at the best ways to measure and report these activities in the context of the core citation indicators.
There are a lot of knock-offs of the Journal Impact Factor. There is now a cottage industry of predatory publishers and services waiting to take advantage of APC-paying authors. Predatory journals are using fake metrics or just fabricating Impact Factors to encourage submissions. Given that the JCR database is only available by subscription, how would someone be able to verify whether Impact Factor information is accurate and real?
The Journal Citation Report is subscription access, but our journal coverage listing is not. In addition to our recent upgrades, we now feed JCR and Impact Factor data directly into the Web of Science.
I have been watching publishers and editors tweet about their Impact Factors and some are quite interesting. Some are using half their allotted characters to proclaim great victory at their new numbers and use the rest of the tweet to criticize the metric. What do you make of that? Are publishers and editors trying to hedge their bets?
Many years ago, Dr. Eugene Garfield wrote a piece entitled “The Agony and the Ecstasy: The History and Meaning of the Journal Impact Factor” for a print publication, Current Contents.
The venue may have shifted to Twitter but the sentiment is the same. I can’t speculate about hedging, but I see it more as a recognition of the Impact Factor’s place. The community is more aware today that it’s a journal metric for journal evaluation or comparison in the context of journal categories, and the commentary I’ve seen has been more about application and appropriate use.