The National Academy of Sciences recently held their semi-biennial “E-Journal Summit,” which featured a spirited discussion from attendees representing a wide variety of organizations involved in science publishing.
Those interested can get an overview of what was discussed via the Twitter commentary sent out by a few attendees (though one has to wonder, what’s the etiquette of tweeting from an invitation-only meeting where the host has directly requested that the discussion not be broadcast in this manner?). The thread does show some of the shortcomings of Twitter as a reporting tool. While the reports from the meeting are accurate, there’s so much they missed, either because it wasn’t interesting to that particular Twitter user, or because they were too busy tweeting to catch any follow-up comments. The famed 140-character limit leads to interesting soundbites but misses out on providing context.
One preconceived opinion tweeted by a non-attendee contradicts most published studies, so I thought I’d post my talk here, as I’d rather hear commentary on what I actually said than on what I was assumed to say.
I spoke in a section on “Social Media in STM: How Are the New Tools Being Used?” With five minutes to speak to the publishers of many of the top science journals, I tried to distill the lessons learned about scientists and social media into a short set of “rules of thumb.” I was not surprised at all when the panel of scientists at the end of the day agreed with my conclusions 100%, but was a bit surprised to see how negatively they view Web 2.0 tools. If these up-and-coming lab-heads are typical, then social media has a major image problem to conquer, as they saw it as a distraction and a detriment for their students, slowing progress on their research. And that was the main focus of my talk — trying to find more effective approaches that truly serve the needs of the community, rather than the usual gung-ho cheerleading that online evangelists pay to Web 2.0. My slides are online at Slideshare, and embedded here as well.
Slide 1 was an introduction to who I am and where to find me.
Social Media in Science: Rules of Thumb for a Skeptical Science Publisher
As science publishers, we hear a lot about the potential for new technologies. Often this comes in the form of a pitch from someone looking to sell you on either the technology they’re offering or on their expertise. I want to try to give a brief presentation from the other side of things, from the point of view of a buyer, rather than of a seller, taking a more measured and practical real-world approach.
Scholarly publishing has suddenly become a hot commodity in some ways. There are hordes of startups and established companies looking for high-value online content to exploit. Scholarly publishing is one of the few areas that has made a successful transition from print to online without completely destroying our business model. For the moment at least, readers are still willing to pay for access to our material, and that creates a strong draw for Web 2.0 companies. But there are a lot of parallels to the dot-com era, and we need to carefully examine and understand the behaviors and needs of our market in order to assess which of these offerings are useful and worth pursuing.
In the age of Web 2.0, this can be difficult, as it is an age of self-promotion and salesmanship. It’s important to remember that social networks and media have much in common with pyramid schemes. Both require a threshold level of membership before they become valuable. If you’re selling one, or are a member of one, you have a vested interest in convincing other people to join.
So I’d like to present some skeptical rules of thumb for thinking about social media and its integration in the scientific community.
Social is Not Always the Answer
While there is great value in many social media pursuits, social is not always the answer. There are times where taking direct action may be preferable rather than relying on serendipity. As an example, in our online biology methods product, we figured we’d crowdsource technical expertise, and set things up so if a reader had a problem with a technique, they could leave a question. Presumably, another more knowledgeable reader would provide advice and an answer. These discussion panels have barely been used at all. Thinking about it more, it makes sense: if you’re funded by a grant that will run out at some point, using expensive enzymes that might go bad, paying for daily animal housing charges and with a tenure/thesis committee breathing down your neck, can you afford to sit and wait for months for someone with the answer to stumble across your question? Can you trust their answer with your very valuable reagents and time? Why not just directly contact someone who has a track record of published results using the same method and get your answer in a few minutes?
Understand Your Culture and Create Appropriate Tools
Too many social media endeavors for science are built because the technology to build them exists, rather than because they fill a need. Too many are based on tools created for different situations and cultures. Sites declaring themselves “Myspace for scientists” quickly became “Facebook for scientists,” but they’ve still failed to catch on. Scientists don’t interact the way a band interacts with its fans, the way teenagers experiment with socialization, or the way grandparents show off pictures of their grandchildren, so why should the same toolsets work? Scientists need specialized tools created for their culture and their needs; you can’t hope to shoehorn in tools built for other cultures.
Tools must fit the needs of the community, rather than asking the community to change its culture to fit the tool. Filling an already existing need is a much more likely path to success than hoping that your new tool is so cool that it will change everything. That’s a pretty rare event. The most successful social tools so far have been community-driven, things like Wormbase/book/atlas, Flybase, structural databases. Consider starting within a community and working to meet their needs rather than starting with a technology and trying to convince a community they need it.
Also, I’m not so sure there is any monolithic definition of “science” as a culture. Each subcommunity has its own culture, its own needs. Some specialties of MDs seem to have taken strongly to social networking. Perhaps some of this is cultural: they’re more likely to work in isolation than a postdoc who spends his days in a crowded lab that’s part of a department and a university, where there’s ample opportunity for discussion with peers. MDs are under pressure to cure their patients; they’re not under the same pressure as a scientist to be the very first to publish an observation, so there are different driving forces. As another example, computational research lends itself much better to online collaboration than wet-bench chemistry does. So know your community; what works for some will not work for others.
Listen to Your Users, but Really Listen to Those Not Using Your Product
The NSF says there are 5.5 million working scientists in the US alone, plus 16 million more with science degrees working in related fields. For simplicity’s sake, if we ignore those 16 million, and everyone outside of the US, and your social network has 100,000 users, then at best you’re failing to serve 98% of US scientists. Your users are likely to be outliers, early adopters who are often people very interested in using new technologies because they like new technologies. They may not accurately represent the needs of the greater scientific community.
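For what it’s worth, the 98% figure is simple back-of-the-envelope arithmetic; here’s a quick sketch (the 100,000-user network is the hypothetical from this paragraph, not any real product):

```python
# Back-of-the-envelope: share of US scientists a hypothetical
# 100,000-user network fails to reach.
us_scientists = 5_500_000   # NSF count of working US scientists
network_users = 100_000     # hypothetical network size from the text

unserved = 1 - network_users / us_scientists
print(f"{unserved:.0%} of US scientists not served")  # → 98%
```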
Science blogs, as one example, are dominated by advocates pushing a particular cause, things like defending the teaching of evolution, climate change, open access or open notebook science. The general science community may not be in agreement or have the same level of commitment as these advocates. If you look at the most-used papers on both Mendeley and CiteULike, there’s a bias toward computer science and computational biology. These fields seem more comfortable and more interested in using online reference managers. If you start designing more and more to the needs of this small percentage of scientists while ignoring the needs of the general community you’re trying to serve, you may unnecessarily pigeonhole your tool to a limited set of users.
Perhaps the most important rule of all:
Create Efficiencies, Not Timesinks
The one common thing across all branches of science is that I’ve never met a successful scientist with a lot of spare time.
It’s important to remember that the primary job of scientists is doing science, performing experiments, discovering new things. Most social tools for scientists are, by contrast, designed for communication, for talking about science. No matter how great such a tool is, using it is never going to be as important as doing their “real” work. Scientists learn very early in their training what activities advance their career, and they’re very good at focusing their energy on doing those things.
The best social tools are yet to come, and they’re likely to be directed more toward the actual performance of research: tools for the analysis, aggregation and interpretation of data, rather than for chatting.
The ideal tool either improves the user’s ability to do research, or streamlines the time a scientist needs to spend doing things other than research. Asking someone to devote hours every day to commenting, rating or tagging is a non-starter.
That said, publishers are in the business of science communication, so the way we may want to use these tools ourselves has great potential but is something completely different from expecting our readers to use them in the same way.
Our Business Model
A deliberately blank slide, and nearly always the elephant in the room when it comes to social media. Monetizing social media is often a difficult process.
Is the implementation of this tool really going to lead to increased revenue?
If not, how much effort and money are you willing to spend on it?
That’s where the time ran out and the talk ended. It stimulated a good round of discussion at the meeting, and hopefully will do so in the comments below as well.
27 Thoughts on "Rules of Thumb for Social Media in Science"
Good stuff David. What emerges is a need for market research. At OSTI I describe my little research program as “How do scientists use information?” For example, your point about different communities having different needs. One can already see that different communities use the Web in different ways. Some post most papers while others post almost none. Particle physics versus geology, for example. Why? is a scientific question. So is who needs social media?
Also, can you elaborate on the negative perceptions you heard?
One scientist railed against his younger students and how they’re constantly distracted from their work by their Facebook pages and such. Another compared every student having a networked computer on their desk to every person in the lab having a television constantly blaring a soap opera.
Hilarious. There is an element of truth in the soap opera analogy, but the drama tends to be that of debate. Someone should point out that social media sharpen reasoning skills, which is the hot-button skill in science education these days. For science maybe we need some success stories, where someone makes a breakthrough because of socializing. But breakthroughs tend to take years to be recognized.
Been there. Had that argument in the 1980s with “We can’t have computers in the lab because people might play games on them”. Seems silly now, doesn’t it, painting a technology as bad because of what could go wrong.
I wasted a boatload of time in graduate school playing Solitaire and Earl Weaver Baseball. Luckily for me, we only had 2 computers in the lab so access was limited.
None of the scientists in question are suggesting that computers be eliminated from their labs. They’re just concerned about the distraction from their work that these tools provide. Productivity is important for a graduate student, and if they spend a lot of time fiddling around with something other than their work, no matter what it is, that’s not going to help their academic career.
Computer games are still just as distracting by the way, and it seems perfectly reasonable to me if a PI wants to ban game playing from the laboratory. I believe one panelist mentioned that their institution (NHGRI?) blocks access to Facebook.
You missed my point David. The discussion was not about banning game playing but about banning computers. Now the discussion is about banning social media because people MIGHT waste time. Silly, isn’t it? And of course, doomed to fail. Provide good tools which add value you lazy publishers.
I didn’t miss your point, I just thought it was ridiculous. No one, other than you, has ever mentioned the idea of banning computers from the lab. It’s too absurd to even consider and is a weak strawman.
However, there are institutions and many companies that block access to Facebook, Myspace, Twitter and other social media. This is something that is actually happening because they feel their workers are distracted and wasting time on such things. You may think it’s silly, but the workplace has requirements. Is it okay for a graduate student to spend hours on the phone chatting with friends all day in the lab? Which is more important for a young scientist’s career, getting experiments done or blogging?
As for lazy publishers, why is it solely our responsibility to solve the very difficult problem of creating social media that catches on with scientists? What about you lazy academics?
It is interesting you mention Wormbase and Wormbook – two community-driven initiatives. I’ve noted much ongoing activity in the former but very little in the latter. This may tell us a lot about the sort of online environments scientists wish to ‘interact’ in, as opposed to simply contribute to formally.
In this context, perhaps the most informative part of the NASEJ meeting was a scientist’s response to the question “Do you read blogs?”
His answer: “Yes I read hockey blogs, NY Times blogs, things like that – I never read science blogs…”
As David has pointed out, what we call science blogs are mostly public policy debates over science related issues, like climate change or teaching evolution. The number of people who can debate a specific scientific topic is quite small, like the number of eligible reviewers for a paper on it. Moreover, those myriad little groups already have email listservs, which have at least an element of privacy. Blogs may replace listservs in science but it may take a decade or more, because the “installed base” is huge. Social media types seem oblivious to this email medium, which is already doing what they want for science.
David, I enjoyed your talk at the summit. Just a few notes on your comments about the drawbacks of twitter reporting.
FWIW I understood that tweeters were asked not to tweet the discussion, for the reasons you mentioned. Perhaps I was wrong, but I thought the rules allowed tweeting the presentations themselves. Under these rules, of course there would be gaping holes in the coverage, as the meeting is designed to foster a conversation rather than feed a series of presentations.
BTW Others who were tweeting felt that it was acceptable to tweet the general discussion, but without attributing comments.
For other reasons, I actually think relying on a tweetstream for an event presents lots of problems.
It does raise all sorts of interesting questions. What exactly was meant when we were asked that the discussions remain private? Does one interact with a meeting host the way one interacts as a guest in someone else’s home? Is there an implied level of respect that needs to be honored or does the current state of the connected world make that archaic and unrealistic?
Hi David! Thanks for your nice comments about Mendeley, and apologies for the “tweckling” 😉 I always wanted to do that.
I agree with most of your points, particularly the one about how “Facebook for Scientists” is a failed model that won’t work. I expect that it will continue to be tried and the failures will continue to be used as proof that scientists aren’t into social networking, until someone gets the formula right and it does work.
What I wasn’t able to fit into 140 characters back then was that your link above is not to “most published studies”, but to one study, which used “snowball sampling” to get the results. What this means is that they sent some links to their friends at elite universities, and they forwarded them to their friends. While interesting in a descriptive sense, the results are neither representative of scientists as a whole nor reproducible.
That’s why I was so miffed at your tweet. Knowing the content of my talk, I figured you and I were pretty much in agreement on everything I was saying.
I won’t argue the specific merits of the Berkeley study with you; statistical analysis of sociological studies is not my area of expertise. But their conclusions seem much in line with my anecdotal experience and with the opinions of nearly every scientist I’ve asked. Your tweet did mention that the notion that scientists aren’t professionally using social media has been “disproven over and over,” so I’m curious where that has happened, where to find that proof.
I did agree with most, but what I don’t agree with is the idea that a sampling of academics from elite universities is representative of scientists as a whole. Pardon me for saying so, but it’s not only the Berkeley study that is advancing this idea, but you yourself, every time you talk about your anecdotal experiences and conversations. In many ways, CSHL represents the cream of the crop, but being in such a position does lead you to see the world through an ivory lens, and I think perhaps that’s why you’re so dismissive of the tens of thousands of scientists using Mendeley, thousands of life scientists on Friendfeed, thousands more on Twitter (of which David Bradley’s curated a selection), and countless others who are active in their own communities on the web.
When I suggest that your experience is perhaps skewed due to your exposure to elite universities, I’m doing so in the context of our several-year-long running conversation, which I provided summary links to in a previous post. There has to be an explanation for why you and I, both smart people with knowledge of our field, have come to a different understanding of the dynamics of social networking among scientists. Is it at all plausible that those who’ve historically not had access to the level of exposure provided by elite status are disproportionately making use of social media as a way around their previous marginalization?
William, continuing down here to avoid things getting scrunched….
Okay, good, this is helpful. First, the Berkeley study was not mentioned in my talk at all, just in my write-up reacting to your tweet.
Second, I’m not talking so much about scientists working at CSHL (though they are included in my sample set). We have thousands of scientists who come through CSHL every year to go to the 25-30 meetings and take the 25-30 courses offered (plus the smaller meetings at our Banbury Center). They come from all walks of life, all levels of biology. I also attend multiple off-site meetings throughout the year, ranging from the enormous ones like SFN to more intimate Gordon Conferences. As the editor of a journal, I interact with an international cast of researchers on a daily basis, again, publishing articles from everywhere from Harvard to the World Vegetable Center in Tanzania. I’m also concurrently editing around a dozen books, which range from having single authors to having hundreds of authors. So I’m okay with the breadth of my sampling, though you’re right, it does lean toward high quality scientists doing excellent research (though it’s not limited to just elite institutions). Then again, the activities of the best, most productive researchers are a lot more interesting to me. I want to know what the leaders of science are doing, the people making the big breakthroughs and doing the most interesting work. I’m less interested in the activities of scientists who are less productive, who don’t add as much to our knowledge of the world. I’m looking for best-case behaviors to emulate, trying to understand the workflows of the most productive scientists.
More importantly, I think where our opinions differ is in the scale of what the numbers mean. Let’s look at your examples:
Mendeley: At the meeting, Victor noted they have around 250,000 users. I’m not sure if this means active users, or if it includes idle accounts, spam accounts, etc.
Friendfeed: 1,340 subscribers
Twitter: not sure how to quantify, Bradley lists some 600.
I’ll throw in one more number for you, Technorati lists around 900 science blogs.
Where we differ is in the significance of these numbers. Let’s assume for the sake of argument that all the numbers above are accurate, and that none are abandoned or shell accounts. The NSF, as noted in the talk, lists 5.5 million working scientists in the US alone, with 16 million with science degrees working in related fields. I’ve been given a number of 7.5 million working biologists worldwide by another editor, not sure where it derives from or how accurate it is. Using those numbers, you get a sense of how minuscule buy-in is from scientists to these social networks. If Mendeley is the leading light and we ignore the rest of the world (and the related fields), then at best they’re not being used by 95.5% of US scientists. Factoring in the rest of the world and the numbers get even tinier. I’d be willing to wager that there are at least 10-fold more scientists who knit than scientists who blog, tweet and FriendFeed combined, but I’m not sure I’d say that knitting has seen massive uptake by scientists.
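The percentages follow directly from the figures quoted above; a minimal sketch, taking each count at face value:

```python
# Buy-in for each tool as a share of the NSF's 5.5M working US scientists
us_scientists = 5_500_000
tools = {
    "Mendeley": 250_000,     # users, per Victor at the meeting
    "FriendFeed": 1_340,     # subscribers
    "Science blogs": 900,    # Technorati's count
}
for name, users in tools.items():
    share = users / us_scientists
    print(f"{name}: {share:.2%} of US scientists")
```

Even the largest of these, Mendeley, comes out to roughly 4.5% of US scientists, which is where the "not being used by 95.5%" figure comes from.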
So if anything, these numbers back up the idea that social networking is still a fringe activity for scientists. Even if one posits that my personal impressions are skewed by not seeing the disenfranchised, the reported numbers are hard to argue against. I’d also point out that people who are deeply involved in these networks, who spend lots of time on them, who regularly interact with a set of colleagues who all use them may have a skewed vision of their own. How many scientists do you interact with on a regular basis completely outside of the internet?
Perhaps the difference really is in how much uptake something has to have to be useful. I’m not making any claims about what the totality of scientists are doing, so I don’t think sampling bias is relevant to my observations. They’re simply statements of fact.
What I’m more interested in is why “Scientists aren’t using social networks”, for your or my definition of scientist, is considered a worthwhile or useful message. To whom is this information useful? Is there any action or inaction one could reasonably assume the audience for this message will take?
I’d argue that “scientists are not using social media” is indeed a worthwhile and useful message for many different reasons. By itself, it’s perhaps something of a dead end, but combined with some analysis of why it’s happening and what things would likely help change the situation, it becomes tremendously valuable.
First, it provides a more accurate picture of what’s really happening. The problem with gauging online activity by scientists is that you don’t see the people who aren’t doing it. If you use Google as your research tool, as many seem to do, you see the voices of the 900 science bloggers, the 1300 Friendfeeders, but you don’t see the millions and millions who aren’t active in these ways. I was interviewed a little while back for an article on open notebook science, and I was the closest thing to an opponent the author could find (for the record: I think it’s a superb idea, really the way science should be done, but I do think it will be very difficult to implement with the way our current career structure is built). At one point she asked, “where are the people against open science, why don’t they blog?” Why would they blog? Who wants to write a blog about something you’re not interested in, something you don’t think is valuable? Understanding the actual level of participation is a useful reality check.
That reality check is important because there are lots and lots of companies trying to profit from this market, and lots of potential in the tools themselves for advancing the course of science. For the companies, which range from publishing giants to tiny startups, having a clear picture of user behavior is necessary for wise investment of resources. Every company has to run on a budget, and money spent repeating the same mistakes where others have already failed is money wasted. For the advancement of science, aren’t we better off if companies change their strategies away from proven failures (Myspace for Science) and instead build better, new types of tools that are actually useful?
Accurately seeing the levels of participation is particularly important in this arena because social media requires scale in order to work at all, and it vastly improves as that scale increases. Would Facebook be useful if no one you know was using it? Would Craigslist be useful if there were only 10 ads running?
You ask the question, “Is there any action or inaction one could reasonably assume the audience for this message will take?”
The answer is in the talk above. The action is to stop wasting money on the same, tired, failed pursuits and try new directions. The answer is to actually try to understand the culture of science and scientists and to work within that culture, rather than to try to dictate to it. So many of the companies involved here are run by people from other industries and there’s rarely anyone with a long history of immersion in the actual culture of being a scientist on board. That’s why there are so many tools that are absurdly inappropriate. You’ve said you generally agree with the suggestions I made in the talk above, but why would anyone implement them if they thought there was already great buy-in the way things are? The first step in solving a problem is admitting that the problem exists.