A pair of recent articles (here and here) on the Scholarly Kitchen stirred up some controversy and debate regarding high-volume, author-pays publications. The articles were inspired by a publication in PLoS One, a paper which some here saw as deeply flawed, and comments on the problems with the methodology and approaches taken in that paper were part of the conversation.
Several commentators suggested that we should also post our criticisms directly on the paper in question.
There’s a certain irony in proponents of decentralization — of getting rid of authoritative sources and tearing down the journal system — requesting that commentary be located at one centralized, authoritative location.
If we are to scatter research reports to the wind, why not do the same with comments and let the same (not-yet-developed) automated and crowdsource-driven systems bring them to our attention as well?
But beyond the obvious contradictions, the near-complete lack of interest from the commentators in repeating their statements at PLoS One points to a fundamental problem at the heart of article commenting, and of social media in science altogether: why bother?
Where’s the incentive for participating?
Kent and Phil, for their part, have a vested interest in not publishing their comments on the PLoS One site. They’re both writers for this blog, and are interested in seeing it succeed, in increasing the blog’s traffic and influence. Asking them to drive readership away from their own project runs against their own interests. And we don’t even sell advertising on the Scholarly Kitchen; imagine if we did have income riding on our traffic levels. Then the idea of giving away work to another website becomes even less likely.
The paper in question has, at the time of this writing, received only two ratings on the PLoS One site (4 stars and 5 stars, respectively), and it may end up serving as a poster child for why the article-level metrics that PLoS has chosen to measure are unlikely to be meaningful. It’s a paper that’s been blogged about, and that’s probably been downloaded and read fairly often. But much of this attention is due to its problematic nature, not because it’s of significantly higher quality than other papers in PLoS One. Should a flawed paper benefit because it’s in dispute?
And how meaningful is a ratings system that attracts gladhanders but not critical thinkers? A quick review of PLoS’ data from March shows 25 times as many articles rated 4 or 5 stars as rated 0 to 2 stars, which further reinforces the inherent flaws of such rating systems.
As I’ve often said, science is a demanding profession, one that leaves the successful researcher with little spare time. Yet we see venture after venture like this, which demands scientists donate time and effort for no actual return.
The ideal system is one that offers immediate benefit to the user (something like LabLife, which offers a quick way to streamline the organization of a lab’s grocery list). But what about the more esoteric social activities, things like discussions and critiques of a paper that don’t offer immediate tangible benefits to the user? Are there ways of driving participation, of making it valuable enough to include in one’s busy schedule?
Often such discussions lead to the idea of a “karma” or “whuffie” system, where your level of online participation and the quality of your reputation are rated and displayed. These sorts of systems have many technological hurdles (How can participation across a vast variety of sites be coordinated into one rating? How does one factor in activity in different types of forums like FriendFeed and Twitter?), and are ripe for gaming and manipulation. But more importantly, they don’t answer the question of motivation. Why should anyone care what your karma rating is? Does a number like this mean anything to anyone other than social media aficionados?
It’s just yet another inwardly gazing solution, an attempt to add meaning to social participation by creating an imaginary number that reflects social participation.
If you want to create a meaningful karma system, or drive participation in general, then there must be a direct link between performance and reward. It’s unlikely that the real drivers of the tangible reward systems in science (funding agencies and tenure/hiring committees) would officially back any sort of social participatory karma system. If anything, the goals of these agencies and committees are not compatible with a heavy time investment in social networking. Funders and institutions want researchers to make discoveries, to perform fruitful experiments. But the higher your karma rating, the less time you’re probably spending doing the thing they’ve funded or hired you to do. Very few researchers have received grants or tenure for the incisive comments they’ve made about the work of others.
What about real-world tangible incentives? I spoke with a professor last week who refuses to write any letters of recommendation for her students/postdocs until they’ve fully documented all of their protocols and reagents on the lab’s page on the social network site that they use. This certainly gets the job done, but it’s unlikely to scale widely for use by any given site. It requires buy-in from professors, something outside the control of those running the actual social network and, again, something that requires motivation. In this particular case, the professor is deeply involved as part of the team creating and running the social site in use, so there’s clear incentive. But for the lab next door, or any other lab, asking that the needs of a website be placed above the needs of the lab members is unlikely to succeed. Asking mentors to practice what amounts to extortion on their students is going to be a tough sell.
Which brings us to the bottom line, pun intended — financial rewards. Since most social media for scientists are being run as for-profit businesses, why not offer a revenue share for participation? You’re asking researchers to do extra work to benefit your company; why not pay them for this extra work? ScienceBlogs pays their bloggers “a modest monthly compensation based solely on the volume of blog pageviews.” From what I’ve read, the payments are indeed pretty modest, but the bloggers seem very appreciative, and the payments add to their loyalty and sense of partnership in the enterprise. This minor investment has paid off in furthering the sense of community for those creating content, which in turn helps keep them actively involved in the venture.
Paying scientists a small fee for doing work that’s beyond their normal activities is something we’ve been experimenting with at Cold Spring Harbor Protocols, as well. Many attempts to create online repositories of biology methods have failed because it’s tremendously difficult to get top scientists to take the time to formally write up their protocols. The reward system for scientists is based (and rightly should be based) around data, rather than the techniques used to produce that data. There are some labs that specialize in driving methodology, but a robust resource needs participation from the whole of the field, not just the technologists. We’ve spent the last few years honing the project to make it more and more rewarding, and hopefully more and more attractive to authors.
We started CSH Protocols as a database, but quickly realized that we were better served by turning the project into a formal journal, one that’s listed in PubMed, since adding a peer-reviewed, PubMed-indexed paper to one’s CV means more to our potential authors than being a contributor to a database. On top of that, we’ve chosen to share revenue with our authors as an additional incentive. Each year a portion of subscription revenue is set aside and distributed to paper authors based on the usage of their articles. We’re just wrapping up the processing of payments for 2009. Our payments this year for individual articles range from $10 (anything under $10 is rolled over to the next year) to $988. The mean payment is around $142, though this number is skewed higher by the presence of a small number of highly popular protocols. The median is around $80.
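The mechanics of a usage-based split like this can be sketched in a few lines. This is purely illustrative: the actual CSH Protocols formula isn’t published, so the proportional split, the pool size, and the handling of the $10 rollover threshold below are all assumptions.

```python
# Hypothetical sketch of a usage-based royalty split. The proportional
# formula, pool size, and rollover behavior are assumptions, not the
# actual CSH Protocols method.

def distribute_royalties(pool, usage, carryover=None, minimum=10.0):
    """Split `pool` among articles in proportion to `usage` (article -> count).

    Payments below `minimum` are withheld and carried over to the next year.
    Returns (payments, new_carryover).
    """
    carryover = dict(carryover or {})
    total = sum(usage.values())
    payments, rolled = {}, {}
    for article, count in usage.items():
        amount = pool * count / total + carryover.get(article, 0.0)
        if amount >= minimum:
            payments[article] = round(amount, 2)
        else:
            rolled[article] = amount  # held until it crosses the threshold
    return payments, rolled

payments, rolled = distribute_royalties(
    pool=1000.0,
    usage={"protocol-a": 700, "protocol-b": 295, "protocol-c": 5},
)
# protocol-a and protocol-b are paid; protocol-c's $5 rolls over.
```

Passing last year’s `rolled` dictionary back in as `carryover` lets small balances accumulate until they clear the minimum.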
That may not sound like much, but to a graduate student living on a stipend, a check for $80 is a very welcome sight, particularly if it comes in addition to the career benefit of publishing a paper. It’s unclear how much of a motivation these payments are providing, as we have yet to complete our formal surveying and analysis. My gut feeling is that the journal’s lack of page charges or publication fees may play a larger role in drawing in first-time authors. When we start mailing out checks each spring, many authors are unaware that they’re receiving a royalty, at least for their first year’s payout. But we do receive great feedback once the checks are sent out, and we’re starting to see an increasing number of repeat authors.
Direct payments like this may not be feasible for everyone. Shareholders may not be keen on revenue going anywhere other than their pockets. And if you’re like most social networking sites, still searching for a business model, there’s likely no revenue to share. But if your social media is tied to an actual revenue source, it’s worth considering. For journals trying to push participation, there are incentives that can be offered without direct outlay that may pay off in areas beyond your social network. If you want participation in your article-level metrics, why not create a karma system that rewards users with discounted fees for author-pays articles? If you want more conversation and blogs on your journal-associated network, why not offer reduced page charges for active members of the community? This not only encourages activity in your networks, it also gives users more incentive to selectively publish their papers in your journals.
While many still want to claim that it’s too early in the process and that scientists are slow on the uptake of new technologies, I think we’re far enough along to realize that what’s been done so far has failed to work.
There’s not a scientist out there who hasn’t heard of Facebook. They know what a social network is. The problem is not awareness or hesitancy; the problem is that what’s been offered has been judged to be of insufficient value to warrant the extra effort it requires. Rather than continuing to run into the same brick wall, isn’t it time to think instead of strategies for making participation more attractive?