Editor’s Note: This post is by Anne Stone, Senior Manager, TBI Communications. Anne joined TBI in 2016 as a marketing consultant serving the scholarly and academic ecosystem. She has worked in marketing and publishing for over 20 years at organizations including Wiley, Blackwell, Pearson, and Constant Contact.

As research output continues to expand, publishers are increasingly challenged to attract the attention of individual readers. At the same time, researchers are under pressure to keep up with the latest developments in their field and in adjacent fields. In a recent AAP/PSP webinar, “The Changing Discovery Landscape,” Professor David McCandlish noted, “Not only am I trying to keep up with developments in my specialty, but I’m also trying to find cool ideas from other areas where I’m not an expert.” As a quantitative evolutionary biologist at the Simons Center for Quantitative Biology at Cold Spring Harbor Laboratory, he monitors the literature in his own “neighborhood” of evolutionary biology and genetics, but also roams more widely across other fields, such as math, physics, and engineering.

Both publishers and libraries are looking for new, effective, and efficient ways to connect researchers to the most relevant content and meet these user demands. How can discoverability be optimized when a researcher hasn’t even formed a thought about the next ‘cool idea’ to look out for? What’s the best way to help our readers parachute into the research landscape to inspire and advance ideas?


The Scholarly Kitchen Chefs recently offered an anecdotal flavor of the many approaches one can use to stay informed about scholarly communications. “The face-to-face conversations in an environment of trust, you just can’t beat that,” offered David Smith as he lamented changes to Twitter. While conversations will always be highly valued, they are not always possible. Researchers are looking for solutions that scale across the vast, online research landscape.

Tools and platforms that increase the visibility of research offer new solutions, enabling peer-to-peer, human-generated recommendations as well as ‘automagically’ generated links to relevant research, with a bit of serendipity built in. But we are only beginning to build an understanding of the impact of recommendation engines.
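To make those ‘automagic’ links concrete: one common baseline is plain content similarity between articles, computed from titles and abstracts. The sketch below is illustrative only – not any particular platform’s implementation – and uses made-up DOIs and abstract text; it ranks related articles by TF-IDF cosine similarity.

```python
# Minimal sketch of content-based "related articles" via TF-IDF cosine
# similarity. Hypothetical DOIs and abstracts; real recommenders blend
# many more signals (citations, usage, metadata).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "doi:10.0000/evo-fitness": "fitness landscapes in molecular evolution and genetics",
    "doi:10.0000/mutation-scan": "deep mutational scanning reveals protein fitness effects",
    "doi:10.0000/graph-methods": "spectral graph methods for engineering network analysis",
}

ids = list(abstracts)
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
similarity = cosine_similarity(vectors)  # dense pairwise similarity matrix

def related_articles(article_id, k=2):
    """Return the k articles most similar to article_id (excluding itself)."""
    i = ids.index(article_id)
    ranked = sorted(range(len(ids)), key=lambda j: similarity[i, j], reverse=True)
    return [ids[j] for j in ranked if j != i][:k]

print(related_articles("doi:10.0000/evo-fitness"))
```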

Unlike recommendations on Facebook or Twitter, article recommendations on publishers’ primary platforms are under the publishers’ direct control. For example, each article page on Elsevier’s ScienceDirect platform includes three related articles, so hundreds of recommended articles surface across the platform. In these cases, article recommendations are contextualized within the activity the user is actively engaged in – reading an article, not socializing – which fits more naturally into their immediate research workflow.

This preference for in-context recommendations is reflected in the most recent data from Gardner and Inger, who have been tracking research discovery trends since 2005. Their latest report, published in 2016, provides evidence that publishers have made strides in improving the visibility and perceived value of discovery features on their own platforms.

Across all disciplines, Gardner and Inger found that publishers’ websites had become a more important starting point for research than in prior years. Researchers were also asked to rate the usefulness of platform features, and the analysis highlighted that, “Related-articles is now the most popular feature of a publisher’s website.” Features such as reference linking, cited-by and forward citation linking, and saved searches were also highly rated for usefulness – reinforcing that readers seek efficiency in finding relevant articles.

These findings from Gardner and Inger mirror trends in the consumer sector, where users have come to expect both peer and auto-generated recommendations. As an example of the influence of recommendations on behavior, an e-commerce study published in the Journal of Retailing found that subjects who consulted recommendations selected the suggested product twice as often as subjects who did not. The latest Gardner and Inger survey is currently underway, and their 2018 report will show whether the demand for related-article linking continues to grow. Researchers are invited to participate through April 14, 2018.

In 2016, SAGE Publishing shared the outcomes of a multi-pronged market research program, undertaken to support the development of platform content features, in a white paper, “Expecting the Unexpected: Serendipity, Discovery, and the Scholarly Research Process”. The goal was to inform the design of the “SAGE Recommends” feature, intended to address researchers’ “vague, fuzzy, or even unspoken information needs when users don’t quite know what they are looking for.” Maloney and Conrad found that the majority of undergraduates (78%) and faculty members (91%) are inclined to click on links to recommended or related content during their online research, illustrating that recommendations are a useful alternative to search and citation-based discovery.

The authors also reflect on the recursive effect of citation-based discovery: if a paper is discovered, and then cited again because it has already been cited, its popularity and success become self-perpetuating. Another weakness of search and citation-based discovery is that both require some a priori knowledge of relevant keywords, journals, and trusted sources of information. For researchers in emerging or multi-disciplinary fields, and for early-stage researchers, this approach may not be efficient.

What motivates a reader to click on a recommendation? Users trust their own counsel most. The article title and the user’s assessment of its relevance to their work or field of study are the most important factors, according to the SAGE research.

Serendipity enters into the decision to click with the next most popular response: “the title looks interesting/compelling.” Maloney and Conrad offer a ‘brief history’ of serendipity with an exploration of analogous content platforms, including Spotify and Netflix, and their approaches to increasing content consumption. They consider the risks of over-personalization, also known as “the filter bubble,” which can be antithetical to serendipitous discovery: “Serendipitous discovery should be of particular interest to information providers precisely because there is so little precedent; there is still tremendous scope for individual organizations to bring their own priorities and values to bear on how they recommend or otherwise help researchers discover their content.” The SAGE report encourages publishers to use their Web analytics and user behavior data to uncover the number of searches performed before the information need is satisfied, and perhaps how users refine their searches as their inquiry progresses.
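One simple guard against the filter bubble – offered here purely as an illustration, not as SAGE’s actual method – is to reserve a recommendation slot or two for items outside the user’s usual profile, trading a little relevance for a chance at serendipity:

```python
import random

def recommend(profile_ranked, out_of_profile, k=3, serendipity_slots=1):
    """Fill most slots from the relevance-ranked list, but reserve a few
    for randomly chosen items outside the user's usual neighborhood."""
    picks = profile_ranked[: k - serendipity_slots]
    picks += random.sample(out_of_profile, serendipity_slots)
    return picks

familiar = ["evo-bio paper A", "genetics paper B", "evo-bio paper C"]
wildcards = ["physics paper X", "math paper Y", "engineering paper Z"]
print(recommend(familiar, wildcards))  # two familiar items plus one wildcard
```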


A recently published controlled study, conducted by Kudlow, Cockerill, Toccalino, et al., explores the value of recommendations across a wider, multi-publisher network based on user behavior. It is the first randomized controlled trial to offer evidence for the impact of article recommendations delivered through an online, cross-publisher distribution channel, TrendMD. Founded in 2014, TrendMD offers scholarly publishers a recommendations widget that leverages both article relatedness and behavioral data, using “collaborative filtering,” a technique similar to the one behind Amazon’s product recommendations. Recommended articles in the TrendMD widget include titles from a publisher’s own content as well as from a network of 4,000 participating publisher sites that together generate 100 million visits per month. A compelling benefit of this model is that a single article may be delivered to a wider audience, helping to attract new readers to the publisher’s platform.
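The study does not disclose TrendMD’s algorithm, but the general idea of collaborative filtering is easy to sketch: articles read in the same sessions are treated as related, so reader behavior, not just article content, drives the suggestions. A toy version with hypothetical article IDs:

```python
# Toy item-to-item collaborative filtering: score article pairs by how
# often anonymous readers viewed them in the same session. Hypothetical
# data; production systems add weighting, decay, and content signals.
from collections import defaultdict
from itertools import combinations

sessions = [
    ["art_A", "art_B", "art_C"],
    ["art_A", "art_B"],
    ["art_B", "art_D"],
]

co_reads = defaultdict(int)
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_reads[(a, b)] += 1

def recommend(article, k=2):
    """Rank other articles by how often they co-occur with `article`."""
    scores = defaultdict(int)
    for (a, b), n in co_reads.items():
        if a == article:
            scores[b] += n
        elif b == article:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("art_A"))  # -> ['art_B', 'art_C']
```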

The study, published in Scientometrics late last year, evaluated the difference in behavior among users arriving at an article from a TrendMD recommendation compared to organic traffic. The goal was to better understand whether recommendations, and the additional article traffic, generate higher user engagement and whether recommended articles were ultimately more likely to be cited by researchers. The study offers evidence for the effectiveness of the recommended article distribution tactic.

During the four-week study, total pageviews increased by 95% when readers arrived at an article from a recommendation in the TrendMD widget (197 randomly selected articles in the Journal of Medical Internet Research) compared to the control group (198 randomly selected articles in the same journal receiving only organic traffic). The result was statistically significant, with a mean difference of 17.5 pageviews, and the effect on total pageviews was moderate-to-large. Pages per session also showed a significant increase – a mean of 4.82 for TrendMD visitors vs. 2.35 for organic visitors to article pages in the control group. In an earlier controlled study, published in Learned Publishing, visitors from the TrendMD arm were also found to have a lower bounce rate, the percentage of visitors who navigate away from the site after viewing only one page. This user behavior data, especially the increase in pages per session, demonstrates that users arriving from recommendations are finding content they value.
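For readers less familiar with the analytics vocabulary: pages per session and bounce rate fall straight out of raw session logs. A minimal illustration with invented data:

```python
# Invented session logs: each entry lists the pages viewed in one visit.
trendmd_sessions = [["p1", "p2", "p3", "p4", "p5"], ["p1", "p2", "p3", "p4"]]
organic_sessions = [["p1", "p2"], ["p1"], ["p1", "p2", "p3"]]

def pages_per_session(sessions):
    return sum(len(s) for s in sessions) / len(sessions)

def bounce_rate(sessions):
    """Share of visits that viewed exactly one page before leaving."""
    return sum(1 for s in sessions if len(s) == 1) / len(sessions)

for label, s in [("TrendMD", trendmd_sessions), ("organic", organic_sessions)]:
    print(f"{label}: {pages_per_session(s):.2f} pages/session, "
          f"{bounce_rate(s):.0%} bounce rate")
```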

To explore the influence on citations, the research built on previous studies that have shown that saving articles to Mendeley, a reference manager and academic social network, correlates with future citations (Ebrahimy, et al., 2016; Thelwall and Wilson, 2016). Using the Mendeley API, the study evaluated the influence of recommendations on article saves to Mendeley and found a positive correlation between pageviews driven by TrendMD and saves on Mendeley. The number of Mendeley saves over the four weeks was 5 for the TrendMD arm vs. 2 for organic traffic, a 77% increase. The authors conclude that while replication and further study are needed, these data suggest that cross-publisher article recommendations may enhance citations.
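As a rough sketch of how such readership data can be gathered: Mendeley exposes a catalog API that reports per-article reader counts. The snippet below assumes a valid OAuth access token, and the endpoint and response fields as publicly documented; treat it as illustrative rather than a tested client:

```python
import requests

ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"  # placeholder; obtain via Mendeley OAuth

def mendeley_reader_count(doi):
    """Look up a DOI in the Mendeley catalog and return its reader count.
    Endpoint and fields assumed from Mendeley's public API documentation."""
    resp = requests.get(
        "https://api.mendeley.com/catalog",
        params={"doi": doi, "view": "stats"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    docs = resp.json()  # list of matching catalog records
    return docs[0].get("reader_count", 0) if docs else 0

# Comparing mean reader counts between trial arms would reproduce the
# study's saves comparison (hypothetical DOI):
# print(mendeley_reader_count("10.2196/jmir.0000"))
```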

The rapid growth of publisher recommendation tools, and of participation in services like the TrendMD network, demonstrates that publishers are actively exploring recommendations as a growth opportunity for discovery. Kudlow, Cockerill, Toccalino et al. show that investment in recommendations on publishers’ platforms has a measurable impact on discoverability and user engagement. Gardner and Inger’s 2018 report will soon tell us how researchers today rate the usefulness of publisher platforms and their recommended-article features.

Whatever the market tells us, publishers recognize that their challenge is to innovate faster as diverse repositories of research expand. In 2024, should Gardner and Inger continue their longitudinal study, respondents will include the post-Millennial generation. Their expectations for contextual, relevant recommendations, and their research practices generally, will undoubtedly evolve, especially with the rise of video search and voice-command technologies, demanding new strategies. How will these approaches find their way into researcher workflows? Advancements in text and data mining and in metadata may help publishers meet the next generation’s expectations faster as they continue to invest in platform development for efficient and effective discovery and dissemination tools.

Acknowledgements: Thank you to Lettie Conrad, Scholarly Kitchen Chef, and Paul Kudlow and Bert Carelli of TrendMD for their input on this post.

Anne Stone

Anne Stone is owner of Stone Strategic Marketing Services, serving associations, publishers, and organizations in the research ecosystem. She has worked in marketing and publishing for over 20 years at organizations including TBI Communications, Wiley, Blackwell, Pearson, and Constant Contact.

Discussion

16 Thoughts on "Guest Post: Do Journal Article Recommendation Features Change Reader Behavior?"

Very insightful article. Did anyone else notice that there is no mention of Google Scholar despite the fact that most of the traffic to all major publishers’ content comes from there?

Thank you, Desieditor. Yes, every publisher will look to their own analytics to understand sources of organic traffic. This article focused on user behavior and attitudes around the recommendation user experience, as distinct from a “search” user experience. I also excluded research on user behavior related to recommendations via word of mouth (social media). I hope to address this in a separate post in the near future.

I don’t know, but the scientists I knew and published and worked with did research in very narrow areas and seemed to know who was doing what and what had been published in their area of expertise and research. I don’t know of any who just googled a word or topic and then attempted to parse articles of interest. Many I knew used critical review articles to help separate the wheat from the chaff. Also, in the reviewing process, both informal and formal, if an article was missed it was usually pointed out.
I guess my question is: is all this effort on behalf of publishers to try to direct a researcher to a journal or article rather futile? My experience reminds me of a comment by I don’t remember who, who said: an STM publisher is like the Pecos River – s/he is a mile wide and an inch deep!

Harvey –
“the scientists I knew and published and worked with did research on very narrow areas and seemed to know who was doing what and what had been published in their area of expertise and research”

My question for you – do you think this behaviour is helpful or harmful?

Often the biggest breakthroughs come from interdisciplinary research. Example – Barry Marshall’s Nobel Prize-winning research on H. pylori and ulcers spanned many disciplines: infectious disease, molecular biology, psychiatry, GI, etc. Academic disciplines are social constructs; knowledge has no boundaries. Why should we limit ourselves to man-made, arbitrary constructs/disciplines?

Article recommendations help to drive serendipity and interdisciplinary awareness; these factors are crucial to moving humanity’s knowledge forward in a positive and impactful direction.

Here are some articles you may find of interest:
https://onlinelibrary.wiley.com/doi/full/10.1002/asi.23273
https://www.tandfonline.com/doi/abs/10.1080/10400419.2014.873653

I always wonder about any study that looks at internet behavior by researchers who study internet behavior (the TrendMD study was based on the Journal of Medical Internet Research). Do people who spend their lives looking at online stuff act differently than a researcher who spends their time in the field doing research? Who knows, but it always seems odd to me to pick a study subject that may very well skew the results.

What can get lost in these discussions is that so many of these publications are *society* publications. That is, they are embedded in a social context, which includes colleagues located near and far. Digital discovery tools (which can be immensely useful) can serve to dissolve the society bonds.

Another dimension of the social context Joe mentions may be generational; there is at least anecdotal evidence to suggest that a younger generation of researchers is more comfortable using discovery tools like website recommendations. In the AAP/PSP webinar mentioned in the article, David McCandlish said he clicks on those article recommendations all the time, and he expects this is true of many of his peers. In his words, “Researchers love to click!”

David –
In the published study, JMIR received 1,054 clicks from recommendations seen on journals like The BMJ, Pediatrics, PNAS, etc. – see Table 4 in the Scientometrics study: https://cl.ly/162Z1f0l2X2Q

Bottom line, this list of referring journals suggests that these findings are generalizable to readers who consume a broad array of research content. There is no evidence to suggest that the readers referred by TrendMD article recommendations ‘[spend] their lives looking at online stuff.’ Unless, of course, you believe that readers of journals like The BMJ, Pediatrics, PNAS, or the other journals that TrendMD referred readers from don’t spend their time in the field doing research…

Thanks Paul. I guess my question is whether people use the internet differently to look at information about the internet as opposed to other areas of research. If you’re interested enough to read an article about internet research, does this mean that you’re someone who is well-engaged with how the internet is used, and does that create a sample bias as compared with physicists who want to read articles about physics, for example? Are you more likely to click on a recommendation link if you’re someone really interested in how the internet gets used (even if you are a reader of BMJ or PNAS)? Would the results be different for non-internet-related research recommended in those same journals?

Thanks David. According to the scope notes online about the journal – https://www.jmir.org/about/editorialPolicies#focusAndScope – ‘The “Journal of Medical Internet Research” is a leading health informatics and health services/health policy journal (ranked first by impact factor in these disciplines) focussing on emerging technologies in health, medicine, and biomedical research.’ Of the ~200 articles included in the intervention arm of the study, many actually had very little to do with internet research per se. So despite the name, the data suggest that J Med Internet Res publishes a fairly broad array of content.

Additionally, TrendMD has published data from different fields that suggest the results are similar. Here is an example from BMJ Open showing a comparable increase in traffic – https://www.trendmd.com/blog/trendmd-drives-28-increase-in-weekly-pageviews-for-bmj-open-articles/

Lastly, use of the TrendMD article recommendations widget spans academic disciplines – https://www.trendmd.com/customers – and the results are the same across the board: researchers engage with article recommendations.

Thanks Paul, good to know. I always raise a red flag when I see people assuming that those who research the internet interact with the internet in the same manner as all other types of researchers. Good to know there’s data here from different fields that is more representative.

It is my hope that with the broad scope of journals on any single platform, and the presence of many differently-developed recommendation engines, publishers might endeavor to reproduce this study, using another single title or across many. It would be most valuable if the findings were then published, not kept between office walls as proprietary findings.

This is all very interesting. But of those page views, how many led to downloads?

And perhaps of more interest, how many of those downloaded papers were actually read? Do we have a handle, as an industry, on how good a proxy for real use the download actually is? When talking to some researchers recently I heard 50% of downloaded papers were actually read (and I don’t know if this means in depth or just skimmed).

The citation is one example of real use, of course. But reading, learning, and applying seem to be the missing link between downloading and citing.

Does anyone have any data on this?

Hi Martin –
Unfortunately, we didn’t collect PDF article download metrics in the study; these data were not collected by the journal included in the study. Mendeley saves, however, are a type of article download (articles are being saved for future reading in Mendeley libraries). Typically the Mendeley save rate is lower than the PDF article download rate.

If we’re looking at the relationship to citations, Mendeley saves have a stronger predictive value for citations than PDF article downloads. Here are two papers: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3698439/
https://link.springer.com/article/10.1007/s11192-018-2715-9

You make a great point regarding readership. You’ll note in Table 5 of our study – https://cl.ly/2b0c0X380b0t – that we collected session duration and bounce rate metrics. Though certainly not perfect, metrics like bounce rate and session duration give some insight into whether people referred to the articles may have read them. As the old saying goes, ‘you can lead a horse to water.’

Lastly, I don’t believe there are data that show whether downloaded PDFs are actually read, so if you care mostly about readership, PDF article downloads are an unproven proxy.
