Amid the cacophony of a year-end media blitz that bombards us with listicles of the greatest scientific discoveries, the top papers, and the cutest animal stories, we seem to have missed an important and serious contribution to our understanding of social media and its relationship to scientific publishing.
I was alerted to this paper the old-fashioned way–in conversation with a cardiovascular researcher–which I found rather odd, because research on social media typically attracts the attention of those who promote social media. Indeed, I can find no mention of it among those who follow bibliometric and sociometric research. Those #altmetrics hash-taggers seem to have missed this paper, as did readers of the SIGMETRICS listserv.
The paper that slipped my attention was “A Randomized Trial of Social Media From Circulation,” which appeared online November 18, 2014 in the journal Circulation.
In order to understand the effect of social media on the readership of their journal, the researchers (all members of the Circulation editorial board) designed a rigorous scientific trial in which half of Circulation‘s original research articles were randomly assigned, upon publication, to be promoted in a social media campaign.
The campaign consisted of a Circulation Facebook blog post and Tweet, both of which contained the main point of the article, a key figure, and a toll-free link to the full-text version of the article. Articles in the control arm of the study didn’t receive any of these interventions.
The researchers measured the total number of page views (abstract + full text + PDF) for each article over 30 days. Did their promotion via social media have any effect? They write:
There was no difference in median 30-day page views (409 [social media] versus 392 [control], P=0.80). No differences were observed by article type (clinical, population, or basic science; P=0.19), whether an article had an editorial (P=0.87), or whether the corresponding author was from the United States (P=0.73).
While their results were resoundingly negative, they are still interesting, as prior studies have all reported positive results. I should note that prior studies were observational in nature, in which researchers searched for relationships between social media attention and some form of impact (downloads or citations). The problem with studying social media this way is that it becomes very difficult to understand what is causing what: Social media may increase the impact of a journal article; alternatively, important articles may simply attract a lot of social media attention.
If this problem sounds familiar, it is the same issue raised in the methodological critique of the RIN/Nature study on the relationship between Open Access and citations.
By randomly assigning articles into the social media and control arm of the experiment, the researchers can be pretty sure that the two groups are similar, in all respects, at the start of the experiment. If differences are observed at the end of the experiment, they can be pretty sure that they were caused by social media.
The fact that the researchers were unable to detect a difference in article downloads between the social media and control group suggests that their social media campaign had little (if any) effect. It also calls into question whether prior studies were successful in isolating and measuring the effects of social media. The true effects of social media may be much smaller than previously reported.
Like a study worthy of publication in a top medical journal, the researchers were careful not to overgeneralize their results. Cardiovascular researchers (and other bench and clinical researchers) are very different from computational biologists, social media researchers, and those who spend their days glued to their chairs and computers. While Circulation had more than 28,000 Facebook followers and nearly 5,000 Twitter followers, these online followers were qualitatively different (younger and predominantly male) than Circulation‘s traditional readers. This is not to say that medical journals can categorically ignore this group of potential online readers, only that they may not be reflective of Circulation’s readership as a whole.
Last week, I asked a room full of cardiovascular researchers if they used Twitter. One man in the audience put up his hand, but qualified that it was to keep up with his favorite soccer teams. When I asked about Facebook, two researchers put up their hands, but explained that it was part of their editorial duties to promote their journal.
Now I understand why I learned of this study by word-of-mouth.
31 Thoughts on "Social Media And Its Impact on Medical Research"
Very interesting, Phil. Were Facebook and Twitter the only social media sites used? What I find interesting about this is that social media has come to be defined by these huge mainstream platforms (also including Pinterest, Instagram, etc.), but what about social networks that are more geared to medical or scientific professionals (Doximity, Sermo, ResearchGate, Mendeley….)? I wonder if there would be any difference if these networks were considered instead of, or in addition to, the mainstream consumer-oriented sites.
Nature reported on a large survey of academics on their use of social media and broke responses down by tool and behavioral use, see Richard van Noorden’s (@Richvn) piece “Online collaboration: Scientists and the social network” http://www.nature.com/news/online-collaboration-scientists-and-the-social-network-1.15711
We did a similar study at Neurology (results soon to be published in Science Editor) and found the same thing. We actively promote our papers using Facebook, Twitter, Google+ and YouTube, and it hasn’t done much to increase our page views. We also found that authors were not big on promoting their own work through their personal social media channels.
Apologies- the Neurology article was published! Science Editor vol 31;3:77-78
I’m interested in this article, but as access to recent Science Editor articles is only available to CSE members, I can’t see the outcomes 🙁
Hi Morgan, Are you sure the article you reference in Science Editor is the Neurology article? It appears to be a different topic than what was described. http://www.councilscienceeditors.org/wp-content/uploads/v31n3p078-080.pdf Thanks for clarifying.
Hi Ken, it hasn’t posted online on the CSE’s site yet. Please contact me at email@example.com and I can send you a copy!
Who has time for social media except those who need to fill time?
I am not a scientist, but how many on this site who are assign a time slot for reading social media?
And how many await an article on Keats?
You do realize that blogs, such as The Scholarly Kitchen, and their comments sections are considered a form of social media, right?
And most of the authors on this site use social media extensively, as was discussed here:
Thanks Phil for pointing out this interesting and well done study.
This is strictly personal experience. The one form of social media (I guess you can call it that) I have found extremely effective in disseminating information about a new publication is the listserv. There are a few listservs that many people in the field subscribe to, at least in the fields I publish in. I’ve found putting out notices on these listservs to be extremely effective, with very large bumps in downloads when the notices go out. Part of the reason may be that it is a push technology delivered to email, which virtually all professionals monitor pretty regularly.
Interesting piece. You’re right to caution against extrapolating to other disciplines. I wonder if this, and the Neurology data, is a symptom that medics tend not to use Twitter, Facebook etc. as it can open all kinds of patient-related cans of worms. This might not be true for other research professions.
A study conducted by Dominique Brossard, Professor and Chair in the Department of Life Sciences Communication at the University of Wisconsin-Madison, and her team presents findings of an effect of social media on impact: Liang et al. Building Buzz: (Scientists) Communicating Science in New Media Environments. Journalism & Mass Communication Quarterly, December 2014, vol. 91, no. 4, 772-791. http://jmq.sagepub.com/content/91/4/772
Looks like an area ripe for additional study.
Not sure we are saying this right. What the study seems to show is that social media campaigns cannot create more interest than naturally exists. But I will wager that an article that gets 6000 tweets also gets a lot of downloads.
This article is fascinating to me, especially when I reflect on my own use of social media. I use FB a lot – but my use is mainly “social” (following friends, family and hobbies) with a smattering of “peer support” for completing my phd. It had never occurred to me to “Like” the journals I usually read as well. To be honest I think it would make my FB feeds even longer and more disparate than they already are. It would blur the lines between my work and personal lives even more than they are at present, and I’m not convinced that’s a good thing. Twitter is slightly different for me, since I mainly read Tweets and rarely post. Still, following only a small number of handles can quickly make for a long, sometimes overwhelming feed. It’s then very easy to miss individual posts / tweets.
To describe what was done as a “social media strategy” is laughable. One tweet and one FB post (which, on average, is only seen by 7% of the people who “like” a page)? Perhaps they should try hiring someone who actually understands how to use social media and then measure the results.
David B, that would be awesome! I know my journal would benefit from having someone who really understood this. When can we expect to receive a check from you to cover the salary and benefits of this new colleague?
Just saying, we’re doing the best we can with what we’ve got.
I think you are overestimating the costs involved. While it would be optimal to have a full-time person on staff to handle all social media, this can be accomplished by spending several thousand dollars on a consultant to help put together a strategy and implementing that strategy with existing staff. The folks at Circulation wasted someone’s time on a poor strategy. Why not utilize that time on something that could make a difference? I’m currently posting content on Pinterest 3 times a week. It takes less than 5 minutes per post. It took 3 of us approximately one hour each to prepare 3 months’ worth of content. If it results in a significant increase in traffic to the site, it will be a success with very little impact on anyone’s time.
While this is a useful study in at least beginning to measure the impact of approaches to social media promotion in a rigorous way, to echo David B.’s point above, I’m not sure how much measuring readership of articles that have appeared in the journal’s Twitter/Facebook feeds really tells us about the effectiveness of social media writ large. The reason for this is that social media is, well, social. There is a difference between a journal highlighting its own articles in its social media feed and, say, an eminent cardiologist or the Editor-in-Chief of Circulation or a prominent author or the president of the AHA or a well-known cardiology blogger writing in his/her own feed that a certain article is particularly interesting or important and why. This study confirms what many folks in marketing departments at publishers have been saying for some time: treating social media as a glorified RSS feed is not terribly effective. The real use case of social media is getting a conversation going, which requires a bit more than was done here.
A better question would be, how does one consistently foster conversation on social media and once a conversation has started, does THAT lead to a significant uptick in readership?
Also, thanks for this write-up Phil. You have, by way of this blog post (a social media channel), started a conversation around this Circulation article, not only prompting some discussion but also increased readership (of at least one additional reader, anyway, as I clicked through and read the article).
I agree that the way social media was operationalized in this study may not reflect what an ideal SM campaign would look like, and therefore may underestimate what “true” SM campaigns can do. However, I think it would be a non-starter to ask the editors to write impassioned editorials about large numbers of random papers in which they may have little interest and knowledge. Tweets and short blog posts may have been the extent of what this group could achieve. Without this, we are left with observational studies, in which the best and most interesting studies attract the most social media AND the most readership, and we are unable to disentangle SM from article quality.
Publishers have invested a lot of staff time and resources into SM and this paper suggests that if conducted in a similar way, SM may result in little pay-off.
“Publishers have invested a lot of staff time and resources into SM and this paper suggests that if conducted in a similar way, SM may result in little pay-off.”
Medical journals are an interesting category of content with regard to social media. Unlike a general interest magazine, physicians have long-established mechanisms for “social” dissemination of research. There are myriad actual, physical meetings where physicians discuss papers in person. Societies often have well-read newsletters or member magazines (print and digital) that alert members to important research in the society’s journals. Many medical societies also have extensive public relations departments that work directly with journalists who cover medical science in newspapers, magazines, radio, and television. There are CME workshops and other educational activities that often discuss current research. There are journal clubs, formal and informal. There are journal reviews like JournalWatch. And, relative to other scientists, physicians tend to read more deeply from a narrower list of journals (which is to say, they may not require social media to “discover” an article from a journal they are regularly scanning anyway). Given medical journals have so many well-established discovery channels relative to other content types, it is perhaps not surprising that social media (used in the way described in this paper) has not made a bigger impact. A journal with the prominence of Circulation long ago cracked the basic “discovery” problem.
It would be interesting to see a similar study conducted by a broad-coverage “mega” journal, where articles do not always benefit from the same discovery apparatus (and relatively well-defined audience) that Circulation and the AHA have long had in place.
There still seems to be a confusion between SM and SM campaigns here. This study is not about the impact of SM, despite the title of this post, nor can a journal control SM. As for quality, the recent SK post on the Altmetric top 100 indicated that quality is not the primary driver of SM, rather human interest is. If journals want to influence SM they should think about how SM actually works and work with it.
This is an interesting post and I’m looking forward to reading the research when I have some time.
It’s not all about downloads, though.
Since launching CQG+ (http://cqgplus.com) last year, readers have reported that the site adds a ‘human element’ to the journal that was lacking before (and posts with pictures are read much more often than those without).
Many of the authors who had papers promoted on CQG+ have returned to the journal and submitted more high quality work.
What if one of the articles being tested happened to have a video associated with it that demonstrates some principle of animal behavior or physics. Let’s say the video shows a cat that is surprised by some event and jumps onto the back of a dog and then the two go running around in circles. Let’s say this comical video goes viral and gets several million hits.
Does this prove that using social media is the new direction for hard science?
It’s an obvious distortion, I know – one end of a continuum of possibilities. Should scientists try to make articles more appealing to the general public to drive up hits? Does an increase in social media traffic only reflect the degree of fickle popular interest in a topic? I question the value of all this.
Craig, I am doing research on the value(s) of SM to science and I tend to agree that spreading hard science is generally not one of them. After all there are just a few million scientists but seven billion people, many of whom are online. SM is a means to spread appreciation and understanding of science to nonscientists. That should be important to science but SM is not a simple extension of scholarly publishing. In particular it is not a way to get the IF up.