Prologue: A very brief history of media effects

Even before the development of the Internet and social media tools, the association between media promotion and article performance was well documented.1, 2, 3, 4 What was not fully understood, however, was the underlying cause of this association. Editors and journalists tend to promote what they view as the most important and novel papers. As a result, it is difficult to disentangle selection effects from dissemination and amplification effects, especially in uncontrolled observational studies. Likely, multiple effects operate in concert. If we want to isolate these effects, we need to rely on a more rigorous methodology: the randomized controlled trial.


Summarizing some key studies:

Attempting to isolate media effects from everything else, a randomized controlled trial of social media exposure involving Facebook, blog, and Twitter promotion in Circulation reported no difference in median 30-day page views.5 Responding to criticism of their experimental design, the authors reran the study with a larger population and a stronger social media intervention,6 again revealing no effect. A similar randomized trial of papers published in the International Journal of Public Health revealed no difference in either downloads or future citations between the social media intervention and control groups.7 A Twitter intervention promoting older papers published in Academic Medicine revealed a small increase in full-text HTML views (about 5 additional views) within 30 days, but no difference in PDF downloads.8 Last, an uncontrolled social media campaign in pediatric medicine reported an increase in Twitter followers and website views but no increase in Cochrane systematic review downloads.9

Adding more context:

Not surprisingly, it matters who does the tweeting. A randomized controlled trial of papers published in the Journal of the American College of Radiology using two intervention arms (promotion from the journal’s Twitter account versus the editors’ accounts) reported a significant increase in journal web page visits for the editorial arm, but not the journal arm, in the first 7 days of promotion.10 For articles-in-press, a pre-post intervention study reported that Twitter was responsible for drawing readers to new papers.11 Yet overall monthly page views were no different between groups, suggesting that Twitter was simply drawing potential readers who would have been reached through other promotional campaigns.

What about social media effects on citations?

Studies measuring tweet frequency and future citation performance have reported positive associations in different biomedical fields.12, 13, 14 Because these studies were observational in nature and lacked appropriate controls and exclusion criteria, it is difficult to conclude that the primary cause of citation was Twitter promotion rather than editorial selection or other confounding effects. One of these studies drew intense criticism over its methods and its editorial and commercial conflicts of interest.15

Tweeting caution:

Twitter accounts are not all alike. Some accounts are set to automatically tweet new papers or retweet other accounts. Some tweets can be modified or deleted. Some researchers16 question whether tweet counts can serve as valid and reliable performance indicators, given the pervasiveness of “bots” and other algorithmically generated accounts on Twitter. In other words, tweets may be measuring the efficacy of an article’s marketing and promotion, not the reception and interest of readers.

While adoption rates of Twitter are increasing among scientists, uptake is still low.17 The most current survey of researchers, conducted by the Nature Publishing Group, reported 26% of respondents used the service for professional purposes (most researchers use Twitter for personal purposes).18 For papers published between 2010 and 2012 and indexed in PubMed, just 10% of articles were mentioned on Twitter.19

Overstated effects, but likely not harmful:

While there are many studies exploring the relationships among indicators, most are methodologically weak and may suffer from confounded causes and effects. The more rigorous trials, summarized above, report little, if any, effect of social media interventions on readership. Nevertheless, while social media campaigns may have limited effect within the research community, they may provide other ancillary benefits to a journal, such as outreach to healthcare professionals, direct communication with the general public, and increased brand recognition.20

Should we expect anything more?

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


18 Thoughts on "Can Twitter, Facebook, and Other Social Media Drive Downloads, Citations?"

Really interesting post (and so many linked studies to read), thank you!

I think one thing that’s not mentioned here (and I would be surprised if it’s been studied in any detail) is end user search behaviour. I’ve heard discussions* that some researchers are using social platforms as their starting point for browsing papers. It might be the case that any publication without a social media presence could start to lose out on downloads and citations from end users who are heading in this direction.

*I fully appreciate that this isn’t data, merely idle speculation that would never pass peer review!

Thanks for the comment! If social media played a key role in the discovery process, these RCTs should show a large effect on HTML views and PDF downloads. I don’t doubt that some researchers are using social media platforms for discovery purposes. The data just don’t suggest this behavior is widespread.

Great post! We noticed that our journal’s recent increase in social media efforts led to a change in website traffic: social media platforms now rank among the top referral sites to our journal’s website. It’s unclear, of course, if this relates in any way to citations or downloads, but it was encouraging.

Meghan, it is very interesting to look at the next level down from there. Discover which articles are getting the views; there can be some gems that are worth repromoting. Most often it’s the user community that drives the engagement, more than the publisher’s own channel. It is also worth checking what YouTube is driving people to and what might be the cause. When a video description includes a reference to a research article, you may see a spike and a long tail of readers, especially if the YouTube channel has a large following, the ‘influencers’ out there.

This article is interesting as it goes against what we want to see: that social media can influence readership and increase downloads and dissemination of research. I liked all the links too, and agree that it matters who does the tweeting and the nature of the tweet (visual). Did you happen to see the prospective case-control study in Annals of Surgery from Ibrahim that described a 2.7-fold increase in article visits?

The study concluded that tweeting visual abstracts was more effective than tweeting simple article titles. It did not include a non-Twitter control, so all we can conclude from this study is that if a journal publisher decides to use Twitter as a marketing and promotional tool, it is better to use visual abstracts than plain text. The article did end with a caveat that this added another layer of work for the publisher and that some authors were concerned that a simple visual abstract may oversimplify their results.

Yes, that is a good conclusion. Marketers and authors who wish to reach more potential readers can use social media, and many different approaches work. A visual element will affect reach: visual abstracts, video, animated GIFs, and more will work differently depending on the channel. The people at 2 Minute Medicine may offer a compelling case for visual abstracts. The New England Journal of Medicine has seen explosive growth in its new Instagram channel, pairing striking images with mini-cases and a link to articles. Thinking of the current generation of pre-med and med students: if publishers and authors aren’t thinking about a social strategy, they will lose awareness among a cohort that likes to share and recommend. What is the best channel and method?
We consistently find across market research projects that the younger cohorts seek information across many channels. The approach must suit the medium…and the goal. For example, providing education and awareness via social media about the science of vaccinations among a general audience won’t lead to citations, but it does support publishers’ mission of disseminating research to impact society. There is no doubt social media takes effort. Defining goals up front informs decisions about which metrics matter. More abstract views, new visitors, repeat visitors, new subscribers, submissions, new e-alert subscribers, and longer time on site could all be more appropriate metrics for social channels than citations, since social media really operates at the top of the marketing funnel: awareness.

In many of my interactions with researchers in Korea and Southeast Asia, research promotion and engagement are increasingly seen as necessary for magnifying research impact and awareness. This seems to have some correlation with citations. Many studies show that there is an interplay of social media, citations, journal impact factor, and subject area.
Of course, more research needs to be done on how these factors play out, but there seems to be no denying the correlation between social media and citations.

I don’t deny there is a correlation. What we need to discern is whether this association is causal and the direction of causation. Ice cream sales are strongly correlated with the U.S. murder rate. Does this mean that ice cream causes violence, or that a third cause, heat, increases both irritability AND ice cream sales? Without stronger methods (i.e., randomized controlled trials), we cannot hope to make a strong causal claim between social media and citations.

Yet another article without a single reference to studies or samples which cover the Humanities and Social Sciences. The journal “Scientometrics” is also ignored.
Sorry, but equal representation is needed in the Kitchen also in terms of the subjects which are considered for such articles.

I’ve attempted to cover the key rigorous trials in this blog post. If you can find other similarly rigorous trials from the HSS literature or from Scientometrics, please chime in; otherwise, I can’t take your allegations of bias seriously.

Scholarly fields are communities, and these communities have their own lines of communication. If you work in a particular scholarly field, you know how the people in that field communicate, and you probably follow at least some of those communication channels. The question is not whether Twitter, Facebook, etc. are an effective way to drive dissemination, but whether you are using the channels that researchers and scholars in that field actually monitor or participate in. For example, I bet that if one of the “chefs” on this blog wrote a post about an article published in Learned Publishing, Wiley would see a spike in downloads of that article over the few days after it was posted. Why? Because the Scholarly Kitchen is one of those communication channels that people working in academic publishing follow.

I started an electronic journal with a few colleagues in the mid-1990s that eventually became a well-established journal in the field. When we started, we had no funding, and the only real marketing we did was post a notice on a listserv announcing that a new article had been published in the journal, with the title, authors, abstract, and a link to the article. It worked great. We would get anywhere from a few dozen to several hundred downloads of the article over the next few days after we posted the notice. It worked because, at the time, that listserv was one of the main communication channels in our field.

Hi David. I don’t doubt that there are cases where social media is a dominant factor in the dissemination of research findings and plays a role in the discovery process. The problem is that this link between social media and article-level performance may have been overgeneralized and its effects overstated. Obviously, there are commercial and marketing reasons to promote the social media–performance relationship as strong and causal.

Like the early claims about open access and article performance, it takes years to begin to contextualize and properly measure the relationship. At least with these studies, it’s fair to conclude that if the relationship exists, it is small and may be limited in its scope and application. These results don’t surprise me. Research on mass media throughout history (newspapers, radio, television, etc.) has concluded the same. I’d be more surprised if social media were any different.

There are several good papers in JMIR and Scientometrics! Desieditor has thrown down the challenge of creating a bibliography! Added to my goals for the summer!

Surprisingly, most of the above-mentioned randomized controlled trials do not really look at the effects of the original social media treatment on other social media activity, which might be significant in some cases. This raises another question: are these really randomized controlled trials? The treatment group gets a treatment and the control group does not, but afterwards both groups can receive the same treatment: social media attention from other actors. The conclusions in most of the studies are tied only to the original treatment and are not corrected for this other social media attention. I am pretty sure that there is a higher correlation between total social media attention and download numbers/citations/page views, and I do not really see how it is possible to separate the effect of the original treatment from the other social media attention.

No RCT is perfect. However, they provide much stronger methods than retrospective observational studies, especially in cases where there are known to be multiple, interacting causes. Given that these RCTs report little (if any) social media effect under various conditions, we should be able to conclude that the first observational studies were likely overstating the effects of social media, especially the poorly designed ones.

Dear Phil,

I have done research on this subject, using a set of open access books, compared to a set of books that were not published in OA.

Maybe useful?

Revisiting an open access monograph experiment: measuring citations and tweets 5 years later; Scientometrics
DOI: 10.1007/s11192-016-2160-6

Very thought-provoking piece. Often in news we find that the consumption of headlines goes up and the number of retweets goes up without a corresponding increase in the amount of news consumed on the page. There is a marginal increase, but not as much as the number of likes and shares suggests there should be. However, this leads to a fair amount of visibility for the handle that is sharing the report, which in turn improves Google rankings for the publication. I am guessing, from the data you have referred to, that it is the same for scholarly publishing.

However, given the way Google calculates page rank, which now includes social media sharing, there would be some positive benefit for the paper, albeit in the long tail.
