A pair of recent articles (here and here) on the Scholarly Kitchen stirred up some controversy and debate regarding high-volume, author-pays publications. The articles were inspired by a paper in PLoS One that some here saw as deeply flawed, and comments on the problems with its methodology and approach were part of the conversation.
Several commentators suggested that we should also post our criticisms directly on the paper in question.
There’s a certain irony in proponents of decentralization, those who would get rid of authoritative sources and tear down the journal system, requesting that commentary be located at one centralized, authoritative location.
If we are to scatter research reports to the wind, why not do the same with comments and let the same (not-yet-developed) automated and crowdsource-driven systems bring them to our attention as well?
But beyond the obvious contradictions, the near-complete lack of interest from the commentators in repeating their statements at PLoS One points to a fundamental problem at the heart of article commenting, and of social media in science altogether — namely, why bother?
Where’s the incentive for participating?
From Kent or Phil’s point of view, they have a vested interest in not publishing their comments on the PLoS One site. They’re both writers for this blog, and are interested in seeing it succeed, in increasing its traffic and influence. Asking them to drive readership away from their own project is against their interests. And we don’t even sell advertising on the Scholarly Kitchen — imagine if we did have income riding on our traffic levels. Then the idea of giving away work to another website becomes even less appealing.
The paper in question has, at the time of this writing, received only two ratings on the PLoS One site (4 stars and 5 stars, respectively), and it may end up serving as a poster child for why the article-level metrics that PLoS has chosen to measure are unlikely to be meaningful. It’s a paper that’s been blogged about and that’s probably been downloaded and read fairly often. But much of this attention is due to its problematic nature, not because it’s of significantly higher quality than other papers in PLoS One. Should a flawed paper benefit because it’s in dispute?
Also, how meaningful is a ratings system that attracts glad-handers but not critical thinkers? (A quick review of PLoS’ data from March shows 25 times as many articles rated 4 or 5 stars as rated 0 to 2 stars, which further reinforces the inherent flaws of such rating systems.)
As I’ve often said, science is a demanding profession, one that leaves the successful researcher with little spare time. Yet we see venture after venture like this, demanding that scientists donate time and effort for no actual return.
The ideal system is one that offers immediate benefit to the user (something like LabLife, which offers a quick way to streamline the organization of a lab’s grocery list). But what about the more esoteric social activities, things like discussions and critiques of a paper that don’t offer immediate tangible benefits to the user? Are there ways of driving participation, of making it valuable enough to include in one’s busy schedule?
Often such discussions lead to the idea of a “karma” or “whuffie” system, where your level of online participation and the quality of your reputation are rated and displayed. These sorts of systems face many technological hurdles (How can participation across a vast variety of sites be coordinated into one rating? How does one factor in activity in different types of forums, like FriendFeed and Twitter?), and they are ripe for gaming and manipulation. But more importantly, they don’t answer the question of motivation. Why should anyone care what your karma rating is? Does a number like this mean anything to anyone other than social media aficionados?
It’s just yet another inward-gazing solution, an attempt to add meaning to social participation by creating an imaginary number that reflects that participation.
If you want to create a meaningful karma system, or drive participation in general, then there must be a direct link between performance and reward. It’s unlikely that the real drivers of the tangible reward systems in science, funding agencies and tenure/hiring committees, would officially back any sort of social participatory karma system. If anything, the goals of these agencies and committees are not compatible with a heavy time investment in social networking. Funders and institutions want researchers to make discoveries, to perform fruitful experiments. But the higher your karma rating, the less time you’re probably spending doing the thing they’ve funded or hired you to do. Very few researchers have received grants or tenure for the incisive comments they’ve made about the work of others.
Doing science is more important than talking about science.
What about real-world tangible incentives? I spoke with a professor last week who refuses to write any letters of recommendation for her students/postdocs until they’ve fully documented all of their protocols and reagents on the lab’s page on the social network site that they use. This certainly gets the job done, but it’s unlikely to scale widely for any given site. It requires buy-in from professors, something outside the control of those running the actual social network and, again, something that requires motivation. In this particular case, the professor is deeply involved as part of the team creating and running the social site in use, so there’s clear incentive. But for the lab next door, or any other lab, asking that the needs of a website be placed above the needs of the lab members is unlikely to succeed. Asking mentors to commit what amounts to extortion of their students is going to be a tough sell.
Which brings us to the bottom line, pun intended — financial rewards. Since most social media ventures for scientists are being run as for-profit businesses, why not offer a revenue share for participation? You’re asking researchers to do extra work to benefit your company, so why not pay them for that extra work? ScienceBlogs pays their bloggers “a modest monthly compensation based solely on the volume of blog pageviews.” From what I’ve read, the payments are indeed pretty modest, but the bloggers seem very appreciative, and the payments add to their loyalty and sense of partnership in the enterprise. This minor investment has paid off in furthering the sense of community for those creating content, which in turn helps keep them actively involved in the venture.
Paying scientists a small fee for doing work that’s beyond their normal activities is something we’ve been experimenting with at Cold Spring Harbor Protocols as well. Many attempts to create online repositories of biology methods have failed because it’s tremendously difficult to get top scientists to take the time to formally write up their protocols. The reward system for scientists is based (and rightly should be based) around data, rather than the techniques used to produce that data. There are some labs that specialize in driving methodology, but a robust resource needs participation from the whole of the field, not just the technologists. We’ve spent the last few years honing the project to make it more and more rewarding, and hopefully more and more attractive to authors.
We started CSH Protocols as a database, but quickly realized that we were better served by turning the project into a formal journal, one that’s listed in PubMed, since adding a peer-reviewed, PubMed-indexed paper to one’s CV means more to our potential authors than being a contributor to a database. On top of that, we’ve chosen to share revenue with our authors as an additional incentive. Each year a portion of subscription revenue is set aside and distributed to paper authors based on the usage of their articles. We’re just wrapping up the processing of payments for 2009. Our payments this year for individual articles range from $10 (anything under $10 is rolled over to the next year) to $988. The mean payment is around $142, though this number is skewed higher by a small number of highly popular protocols. The median is around $80.
That may not sound like much, but to a graduate student living on a stipend, a check for $80 is a very welcome sight, particularly if it comes in addition to the career benefit of publishing a paper. It’s unclear how much motivation these payments provide, as we have yet to complete our formal surveying and analysis. My gut feeling is that the journal’s lack of page charges or publication fees may play a larger role in drawing in first-time authors. When we start mailing out checks each spring, many authors are unaware that they’re receiving a royalty, at least for their first year’s payout. But we do receive great feedback once the checks are sent out, and we’re starting to see an increasing number of repeat authors.
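To make the arithmetic concrete, here’s a minimal sketch of how a usage-based royalty pool like this might be computed, including the under-$10 rollover rule described above. The pool size, usage counts, and function name are hypothetical illustrations, not a description of our actual accounting system.

```python
def distribute_royalties(pool, usage_by_article, carryover=None, minimum=10.0):
    """Split a revenue pool among articles pro rata by usage.

    Shares below `minimum` are rolled over to the next year's run
    instead of being paid out. All figures here are illustrative.
    """
    carryover = carryover or {}
    total_usage = sum(usage_by_article.values())
    payments, next_carryover = {}, {}
    for article, usage in usage_by_article.items():
        share = pool * usage / total_usage + carryover.get(article, 0.0)
        if share < minimum:
            next_carryover[article] = share  # held until next year's payout
        else:
            payments[article] = round(share, 2)
    return payments, next_carryover

# Hypothetical example: a $50,000 pool split across three protocols.
# "protocol-c" earns only $5.00, so it rolls over rather than paying out.
payments, held = distribute_royalties(
    50_000,
    {"protocol-a": 90_000, "protocol-b": 9_990, "protocol-c": 10},
)
```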
Direct payments like this may not be feasible for everyone. Shareholders may not be keen on revenue going anywhere other than their pockets. And if you’re like most social networking sites, and you’re still searching for a business model, there’s likely no revenue to share. But if your social media is tied to an actual revenue source, it’s worth considering. For journals trying to push participation, there are incentives that can be offered without direct outlay that may pay off in areas beyond your social network. If you want participation in your article level metrics, why not create a karma system that rewards users with discounted fees for author-pays articles? If you want more conversation and blogs on your journal-associated network, why not offer reduced page charges for active members of the community? This not only encourages activity in your networks, it also gives users more incentive to selectively publish their papers in your journals.
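As a sketch of what that last idea might look like in practice, the function below maps a hypothetical karma score to a capped discount on an author-pays publication fee. The tier size, rate, and cap are invented for illustration; any real system would need to set these against actual publishing costs.

```python
def discounted_fee(base_fee, karma, rate_per_100=0.05, cap=0.50):
    """Reduce an author-pays publication fee based on community karma.

    Every 100 karma points earns `rate_per_100` off the base fee,
    capped at `cap` so fees never drop below half price here.
    All thresholds are illustrative assumptions.
    """
    discount = min((karma // 100) * rate_per_100, cap)
    return round(base_fee * (1 - discount), 2)

# A hypothetical $1,350 fee for an author with 450 karma points:
print(discounted_fee(1350, 450))  # 20% off -> 1080.0
```

The design point is simply that the reward is denominated in something researchers already value (cheaper publication) rather than in an abstract score.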
While many still want to claim that it’s too early in the process and that scientists are slow on the uptake of new technologies, I think we’re far enough along to realize that what’s been done so far has failed to work.
There’s not a scientist out there who hasn’t heard of Facebook. They know what a social network is. The problem is not awareness or hesitancy; the problem is that what’s been offered has been judged to be of insufficient value to warrant the extra effort it requires. Rather than continuing to run into the same brick wall, isn’t it time to think instead about strategies for making participation more attractive?
Discussion
15 Thoughts on "Creating an Incentive: Can Social Media Offer Enough Carrots to Entice Scientists?"
The real solution is to figure out how social media can contribute to scientific discovery, enough to justify and reward the effort of participation. From what I can tell, few of the players in the game are interested in solving this problem. Some embrace social media on ideological grounds, as a new form of egalitarianism. To them, science is simply organized wrong, so looking at how science works today is irrelevant. The culture is corrupt and must change.
Others see the big numbers and think there must be money in it. Their view of science is superficial so they clone social models from other realms, such as adolescence.
Both groups then blame science for stubbornly refusing to be distracted. The word “culture” is used pejoratively in this context, as though it were a disposable veneer, when it is actually how the job is done.
Paying people to blog might increase traffic but it is still moonlighting. The question is where and how does science need social media? The answer may be not much, but it is too soon to tell.
As usual, you’ve cut to the heart of the matter. I’m working on a blog posting along the same lines, on the compatibility of science, which is basically a meritocracy, with social media, which instead selects for consensus and popularity. Stay tuned.
This post, though, skirts around the edge of that question and assumes the person creating the network has a reason they think people should be participating (even if that reason is simply personal profit). Can you get participation by offering incentives rather than just (sorry to invoke “Field of Dreams” again) building it and assuming everyone will come flocking?
My example of CSH Protocols does speak to your question though. While we originally wanted a wild, free-form, new type of database, the more we made it conform to what our users wanted (which was more along the lines of traditional science publishing) the better it’s performed.
I totally agree with David: “People learn in response to need. When people cannot see the need for what’s being taught, they ignore it, reject it, or fail to assimilate it in any meaningful way.” http://miningdrugs.blogspot.com/2009/12/social-media-needs-to-support-different.html
In other words, incentives might encourage people for the wrong reasons, which might not support what they actually need. As soon as the incentives stop, they might stop contributing. Worse still, people might expect more and more incentives over time. This is, for me, an unmaintainable encouragement model.
Instead of expecting scientists to contribute to any “science 2.0” service, the key questions for me are always “what do you need?” and “would you be willing to share your needs and solutions with others?”
I guess the question is, “what are the wrong reasons”? Scientists don’t want to write grants but regularly do so for financial compensation. Scientists often receive an honorarium for agreeing to speak at an event or writing a chapter for a book. Are these actions problematic as well?
Also, if the Science 2.0 enterprise is for-profit, as most are, and participation leads to revenue, why would the incentives have to stop?
As with any market research, one needs to get very specific about the need and who needs it, but science is not all that well understood. Your CSH Protocol work sounds like a specific feature of science (and hence a need) that I call “leaping concepts:”
http://www.osti.gov/ostiblog/home/entry/leaping_concepts_and_global_discovery
Certain things, like methods (protocols?) and math, cut across disciplines. This creates a findability problem that is very different from keeping up with one’s local community of peers. It is very much a situation of non-experts looking for experts in strange, distant communities, which may lend itself to social media.
Full-text search is making cross-cutting methods much more findable. Cross-cutting, methods-focused journals and discussion groups may be the natural next step. Science’s new Translational Medicine journal may be another case in point.
There’s another irony about Kent’s earlier post.
It’s surely one of the “most read” and “most commented on” Scholarly Kitchen articles. Judging from the content of at least half the comments, however, I very much doubt their authors believe these metrics in any way correlate with the quality of the article…
I think we are all in agreement that popularity and usage are valuable measurements but do not connote quality. The JCRs are interesting in that they denote acceptance and relevance in connection with the imprimatur of experts and brand authority. It’s the difference between being the Homecoming Queen and being President of the AP Physics Club.
You may be on to something Alix. Blog comments are often about disagreements, but disagreements are very hard to see in the journal literature, which is polite to a fault. A journal article almost never says that someone else is wrong, even when that is the clear implication of the results. Public disagreement only happens in the discussion period after a conference presentation, when things can get pretty hot.
Science is full of disputes at the frontier, as it has to be. Knowing where the big fights are in science, and what they are about, could be very useful. Perhaps a so-called “dialog” (or dispute) blog format is needed, with opposing authors. The incentive would be to garner followers for one’s side.
It may be that ‘scientists’ is too broad a term. In my experience, different scientific disciplines have different cultures around sharing information, especially in arenas as open as the Internet. Consider physicists and physicians, for starters, and I think we’ll find a wide range in their disciplines’ respective tendencies to share information.
Yes Ruth, good point. I tend to use the generic “scientists” more often than I should, lazy writing on my part. Different subcultures use different tools in different ways.
Agreed: “Please let us not forget that there are different ‘scientific personalities'[1] as there exist different ‘information management personalities'[2]”
For references [1] and [2] see http://miningdrugs.blogspot.com/2009/12/social-media-needs-to-support-different.html
It is important to map and understand these differences if we are going to build useful tools. I have done some simple research here, looking at differences among disciplines in authors posting their journal articles on their Websites, as opposed to merely listing them.
The differences can be dramatic. Particle physicists post 60-80% while doctors and geologists post less than 10%. This may be a transient pattern due to the fact that the Web started in particle physics, or it may reflect important differences in how the disciplines operate. We don’t know at this point.
(The reason for the research is that OSTI has a product that aggregates 35,000 author publication pages:
http://www.osti.gov/eprints/)
I think science does need social media, but the reasons and requirements are not so clearly defined that anyone could create the perfect platform yet. Social media works in mysterious ways – look at the way the independent music industry has embraced MySpace, and the same has happened with FriendFeed for the scientific community.
The key is to keep creating solutions, then sit back and wait for the latent problems to define themselves.