Prologue: A very brief history of media effects
Even before the development of the Internet and social media tools, the association between media promotion and article performance was well documented.1, 2, 3, 4 What was not fully understood, however, was the underlying cause of this association. Editors and journalists tend to promote what they view as the most important and novel papers. As a result, it is difficult to disambiguate selection effects from dissemination and amplification effects, especially in uncontrolled observational studies. Likely, multiple effects operate in concert. If we want to isolate these effects, we need to rely on a more rigorous methodology: the randomized controlled trial.
Summarizing some key studies:
Attempting to isolate media effects from everything else, a randomized controlled trial of social media exposure involving Facebook, blog, and Twitter promotion in Circulation reported no difference in median 30-day page views.5 Responding to criticism of their experimental design, the authors reran the study with a larger population and a stronger social media intervention,6 again revealing no effect. A similar randomized trial of papers published in the International Journal of Public Health revealed no difference in either downloads or future citations between the social media intervention and control groups.7 A Twitter intervention on older papers published in Academic Medicine revealed a small increase in full-text HTML views (about 5 additional views) within 30 days, but no difference in PDF downloads.8 Last, an uncontrolled social media campaign in pediatric medicine reported an increase in Twitter followers and website views but no increase in Cochrane systematic review downloads.9
Adding more context:
Not surprisingly, it matters who does the tweeting. A randomized controlled trial on papers published in the Journal of the American College of Radiology using two intervention arms (promotion from the journal’s Twitter account versus the Editors’ accounts) reported a significant increase in journal web page visits for the editorial arm, but not the journal arm, in the first 7 days of promotion.10 For articles-in-press, a pre-post intervention study reported that Twitter was responsible for drawing readers to new papers.11 Yet overall monthly page views did not differ between groups, suggesting that Twitter was simply drawing potential readers who would have been reached through other promotional campaigns.
What about social media effects on citations?
Studies measuring tweet frequency and future citation performance have reported positive associations in different biomedical fields.12, 13, 14 Because these studies were observational in nature and lacked appropriate controls and exclusion criteria, it is difficult to conclude that the primary cause of citation was Twitter promotion and not editorial selection or other confounding effects. One of these studies drew intense criticism over its methods and its editorial and commercial conflicts of interest.15
Twitter accounts are not all alike. Some accounts are set to automatically tweet new papers or retweet the posts of other accounts. Some tweets are later modified or deleted. Some researchers16 question whether tweet counts can serve as valid and reliable performance indicators given the pervasiveness of “bots” and other algorithmically generated accounts on Twitter. In other words, tweets may be measuring the efficacy of the marketing and promotion of an article, not the reception and interest of readers.
While adoption of Twitter is increasing among scientists, uptake remains low.17 The most recent survey of researchers, conducted by the Nature Publishing Group, reported that 26% of respondents used the service for professional purposes (most researchers use Twitter for personal purposes).18 Of papers published between 2010 and 2012 and indexed in PubMed, just 10% were mentioned on Twitter.19
Overstated effects but likely not harmful.
While there are many studies exploring the relationships among indicators, most are methodologically weak and may suffer from confounding. The more rigorous trials summarized above report little, if any, effect of social media interventions on readership. Nevertheless, while social media campaigns may have limited effect within the research community, they may provide ancillary benefits to a journal, such as reaching healthcare professionals, communicating directly with the general public, and increasing brand recognition.20
Should we expect anything more?