[Image: Jeremy Bentham, by Henry William Pickersgill. Via Wikipedia.]

Crowdsourcing is supposed to provide a virtually no-cost way for your audience to do work that would be very difficult to accomplish through a centralized approach.

But what happens when centralized curation is still in the mix?

In September 2010, University College London’s Transcribe Bentham project launched with the goal of crowdsourcing the transcription of nearly 40,000 of Jeremy Bentham’s manuscripts, many of which have never been transcribed. Just six months later, the project is scaling back, its grant having ended. It appears to have satisfactorily transcribed perhaps 600 of the 40,000 manuscripts (1.5%).

And what did this grant pay for?

. . . for computer programming, photography, and research associates who vet the quality of volunteers’ submissions.

The project launched last fall with much fanfare, but there’s nary a hint in the early coverage that expenses might be its undoing. Instead, there was the normal bluster about amateurs vs. experts, quality controls, and the like.

Financial concerns may not be the first things that occur to journalists covering social media. That probably should change.

The Chronicle of Higher Education interviewed Philip Schofield, the project’s director, who said:

I don’t envisage Transcribe Bentham ever disappearing from the Web. It’s the backup we can give it which is in danger of disappearing toward the end of the year—that active involvement and relationship with users which the research staff has built up.

The project overall probably succeeded in propelling Bentham scholarship forward a good distance in a short period of time. It’s to be commended for that, at the very least.

But here is more proof that technology initiatives are always significantly about people and the expenses they create — expertise, craft, skill, time, and attention are all valuable commodities that need to be paid for if the results of an effort are going to be worth consuming.

It’s another reminder that even when the labor is free, the expenses incurred to coordinate and manage it well can be significant. And it’s a bitter testament to the fact that grant money comes and goes, so it is not a dependable source of revenue for a going concern.

(Thanks to DC for the pointer.)

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.



14 Thoughts on "Even Crowdsourcing Can Get Too Expensive"

I think situations like this present a tremendous opportunity for scholarly publishers. Money seems readily available for starting up community projects like this, but there is almost no support for continuing them once they’ve reached some level of success. There’s also the problem that projects like this need dedicated employees to drive participation — there’s usually a flush of excitement at the beginning, but interest wanes over time.

Scholarly publishing houses have the infrastructure necessary to support such projects already in place. We have hosting platforms and talented editorial and production staffs who can take on many of the tasks necessary. That means an enormous cost savings over starting from scratch. It also means that a project can live on after the initial funding runs out: our platforms will continue to exist to support our other businesses, so there’s no need to take things down.

The question for publishers, then, is how to monetize the process. It could be done merely as a community-building exercise around a journal or book project, presumably with grant funding to cover costs — a marketing exercise that is effective yet costs the publisher nothing. Or there could be other, less obvious business models that make these sorts of resources pay for themselves (selling print editions of the material that’s created, as one example).

The ultimate solution here is to raise enough money to create an endowment, the way the Stanford Encyclopedia of Philosophy is doing. It is indeed much easier to get initial grant funding to launch a project than it is to find the money to sustain it over time. Just look at the record of Mellon-funded startups, some of which have proved sustainable (Muse, JSTOR) while others (Gutenberg-e) have not.

“It’s another reminder that even when the labor is free, the expenses incurred to coordinate and manage it well can be significant.”

I would have thought that the “even” in the above sentence would better be replaced with “especially”.

Let’s not forget that the Bentham grant covered not merely the crowdsourcing project itself, but also software development to build the tool, scanning of the remaining manuscripts, and traditional editorial work (albeit using the transcriptions as inputs) as well. Without a more detailed accounting, we can’t really judge how expensive the community management/moderation portion of the Bentham project was.

Agreed, but that’s what’s being shifted away. Obviously, it wasn’t sustainable at no cost.

You’re right, but in order for that crowdsourcing to work, the infrastructure had to be built to support it. That’s why I suggested above that there’s a role for publishers in these sorts of projects: we already own these sorts of infrastructures, which saves greatly on having to build them from scratch. More of the grant could then go to the actual activity rather than to infrastructure for supporting the activity.

Or, in the case of cash-poor university presses, the infrastructures exist in libraries, and it makes more sense to team up with your campus library than to build one at the press.

Again, I feel like we need more information on the interplay between the terms of the grant and the long-term needs of the project. The original announcement put the grant amount at £262,673.00, which seems like it should cover website hosting and occasional check-ins by a volunteer liaison. On the other hand, a fixed, single-year term seems entirely inadequate for a project of this sort — one that we would expect to grow organically as passionate volunteers discover the tool.

I’m entirely sympathetic to the idea that large-scale crowdsourcing projects can be well managed in libraries or presses (or outside the academy entirely in the case of my own work). However, I still think that it’s far too soon and our information is too incomplete to declare the downfall of the Bentham project, much less identify its causes.

Sorry, I don’t follow (perhaps because grants and their proposals are foreign to me) — what do you see as being confirmed?
