The National Academy of Sciences recently held its semi-biennial “E-Journal Summit,” which featured a spirited discussion from attendees representing a wide variety of organizations involved in science publishing.
Those interested can get an overview of what was discussed via the Twitter commentary sent out by a few attendees (though one has to wonder: what’s the etiquette of tweeting from an invitation-only meeting where the host has directly requested that the discussion not be broadcast in this manner?). The thread does show some of the shortcomings of Twitter as a reporting tool. While the reports from the meeting are accurate, there’s much they missed, either because it wasn’t interesting to that particular Twitter user or because they were too busy tweeting to catch any follow-up comments. The famed 140-character limit produces interesting soundbites but misses out on providing context.
One preconceived opinion tweeted by a non-attendee contradicts most published studies, so I thought I’d post my talk here, as I’d rather hear commentary on what I actually said than on what I was assumed to have said.
I spoke in a section on “Social Media in STM: How Are the New Tools Being Used?” With five minutes to speak to the publishers of many of the top science journals, I tried to distill the lessons learned about scientists and social media into a short set of “rules of thumb.” I was not at all surprised when the panel of scientists at the end of the day agreed with my conclusions 100%, but I was a bit surprised to see how negatively they view Web 2.0 tools. If these up-and-coming lab heads are typical, then social media has a major image problem to overcome, as they saw it as a distraction and a detriment for their students, slowing progress on their research. And that was the main focus of my talk: trying to find more effective approaches that truly serve the needs of the community, rather than the usual gung-ho cheerleading Web 2.0 receives from online evangelists. My slides are online at Slideshare, and embedded here as well.
Slide 1 was an introduction to who I am and where to find me.
Social Media in Science: Rules of Thumb for a Skeptical Science Publisher
As science publishers, we hear a lot about the potential for new technologies. Often this comes in the form of a pitch from someone looking to sell you on either the technology they’re offering or on their expertise. I want to try to give a brief presentation from the other side of things, from the point of view of a buyer, rather than of a seller, taking a more measured and practical real-world approach.
Scholarly publishing has suddenly become a hot commodity in some ways. There are hordes of startups and established companies looking for high-value online content to exploit. Scholarly publishing is one of the few areas that has made a successful transition from print to online without completely destroying its business model. For the moment at least, readers are still willing to pay for access to our material, and that creates a strong draw for Web 2.0 companies. But there are a lot of parallels to the dot-com era, and we need to carefully examine and understand the behaviors and needs of our market in order to assess which of these offerings are useful and worth pursuing.
In the age of Web 2.0, this can be difficult, as it is an age of self-promotion and salesmanship. It’s important to remember that social networks and media have much in common with pyramid schemes. Both require a threshold level of membership before they become valuable. If you’re selling one, or are a member of one, you have a vested interest in convincing other people to join.
So I’d like to present some skeptical rules of thumb for thinking about social media and its integration in the scientific community.
Social is Not Always the Answer
While there is great value in many social media pursuits, social is not always the answer. There are times where taking direct action may be preferable rather than relying on serendipity. As an example, in our online biology methods product, we figured we’d crowdsource technical expertise, and set things up so if a reader had a problem with a technique, they could leave a question. Presumably, another more knowledgeable reader would provide advice and an answer. These discussion panels have barely been used at all. Thinking about it more, it makes sense: if you’re funded by a grant that will run out at some point, using expensive enzymes that might go bad, paying for daily animal housing charges and with a tenure/thesis committee breathing down your neck, can you afford to sit and wait for months for someone with the answer to stumble across your question? Can you trust their answer with your very valuable reagents and time? Why not just directly contact someone who has a track record of published results using the same method and get your answer in a few minutes?
Understand Your Culture and Create Appropriate Tools
Too many social media endeavors for science are built because the technology to build them exists, rather than because they fill a need. Too many are based on tools created for different situations and cultures. Sites declaring themselves “Myspace for scientists” quickly became “Facebook for scientists,” but they’ve still failed to catch on. Scientists don’t interact the way a band interacts with its fans, the way teenagers experiment with socialization, or the way grandparents show off pictures of their grandchildren, so why should the same toolsets work? Scientists need specialized tools created for their culture and their needs; you can’t hope to shoehorn in tools built for other cultures.
Tools must fit the needs of the community, rather than asking the community to change its culture to fit the tool. Filling an already-existing need is a much more likely path to success than hoping that your new tool is so cool that it will change everything. That’s a pretty rare event. The most successful social tools so far have been community-driven: things like WormBase/WormBook/WormAtlas, FlyBase, and structural databases. Consider starting within a community and working to meet its needs rather than starting with a technology and trying to convince a community it needs it.
Also, I’m not so sure there is any monolithic definition of “science” as a culture. Each subcommunity has its own culture and its own needs. Some specialties of MDs seem to have taken strongly to social networking. Perhaps some of this is cultural: they’re more likely to work in isolation than a postdoc who spends his days in a crowded lab that’s part of a department and a university, where there’s ample opportunity for discussion with peers. MDs are under pressure to cure their patients; they’re not under the same pressure as a scientist to be the very first to publish an observation, so the driving forces differ. As another example, computational research lends itself much better to online collaboration than wet-bench chemistry does. So know your community: what works for some will not work for others.
Listen to Your Users, but Really Listen to Those Not Using Your Product
The NSF says there are 5.5 million working scientists in the US alone, plus 16 million more with science degrees working in related fields. For simplicity’s sake, if we ignore those 16 million, and everyone outside of the US, and your social network has 100,000 users, then at best you’re failing to serve 98% of US scientists. Your users are likely to be outliers, early adopters who are often people very interested in using new technologies because they like new technologies. They may not accurately represent the needs of the greater scientific community.
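The arithmetic behind that claim is easy to verify. A quick sketch, using the NSF figure cited above and a hypothetical network of 100,000 users (the numbers are illustrative, not data about any real product):

```python
# Back-of-the-envelope check of the coverage claim above.
us_scientists = 5_500_000  # NSF estimate of working US scientists, as cited
network_users = 100_000    # hypothetical social network user base

# Best case: every user is a US scientist.
served_fraction = network_users / us_scientists        # ~0.018
unserved_percent = round((1 - served_fraction) * 100)  # ~98

print(f"Share of US scientists not served: {unserved_percent}%")
```

Even under the generous assumption that every single user is a working US scientist, roughly 98% of that population is still untouched.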
Science blogs, as one example, are dominated by advocates pushing a particular cause: defending the teaching of evolution, climate change, open access, or open notebook science. The general science community may not agree with these advocates or share their level of commitment. If you look at the most-used papers on both Mendeley and CiteULike, there’s a bias toward computer science and computational biology; these fields seem more comfortable with, and more interested in, online reference managers. If you design more and more for the needs of this small percentage of scientists while ignoring the needs of the general community you’re trying to serve, you may unnecessarily pigeonhole your tool to a limited set of users.
Perhaps the most important rule of all:
Create Efficiencies, Not Timesinks
The one common thing across all branches of science is that I’ve never met a successful scientist with a lot of spare time.
It’s important to remember that the primary job of scientists is doing science: performing experiments, discovering new things. Most social tools for scientists are, by contrast, designed for communication, for talking about science. No matter how great such a tool is, using it is never going to be as important as doing their “real” work. Scientists learn very early in their training which activities advance their careers, and they’re very good at focusing their energy on those things.
The best social tools are yet to come, and they’re likely to be directed more toward the actual performance of research: tools for the analysis, aggregation, and interpretation of data, rather than for chatting.
The ideal tool either improves the user’s ability to do research, or streamlines the time a scientist needs to spend doing things other than research. Asking someone to devote hours every day to commenting, rating or tagging is a non-starter.
That said, publishers are in the business of science communication, so there’s great potential in how we ourselves may want to use these tools, but that’s something entirely different from expecting our readers to use them the same way.
Our Business Model
A deliberately blank slide, and nearly always the elephant in the room when it comes to social media. Monetizing social media is often a difficult process.
Is the implementation of this tool really going to lead to increased revenue?
If not, how much effort and money are you willing to spend on it?
That’s where the time ran out and the talk ended. It stimulated a good round of discussion at the meeting, and hopefully will do so in the comments below as well.