Entrance to Field of Dreams Movie Site
Image by J. Stephen Conn via Flickr

The UK’s Research Information Network (RIN) recently released a report it commissioned on the use and relevance of Web 2.0 tools for researchers.

Our regular readers will be amused to note that the report’s title employs the standard misquote from the movie Field of Dreams, asking the question, “If You Build It, Will They Come? How Researchers Perceive and Use Web 2.0.”  The report’s answer to that question agrees with other recent studies:

. . . frequent or intensive use is rare. . . . We found that current levels of take-up are relatively low . . . while there are some variations between disciplines, web 2.0 tools are for the most part not considered to be particularly important. . . . Overall, there is little evidence at present to suggest that Web 2.0 will prompt in the short or medium term the kinds of radical changes in scholarly communications advocated by the open research community . . . for most researchers, the established channels of information exchange work well; and, critically, they are entrenched within the systems for evaluating and rewarding researchers for their work.

If you’ve been paying attention to actual researcher behavior rather than listening to the online hype, this should come as no surprise.  The results of the study are well in line with everything we’ve seen over the last few years (see previous columns here, here, here, and here).  Predictions that new online technologies will revolutionize the way scientists and other researchers work have so far failed to come true.  If the landscape is to change over time, a new toolset will be needed to spur that change, as the academic community has judged the current offerings uninteresting or of low value.  Creating the right toolset is apparently a higher hurdle than expected.

RIN’s report was commissioned from a research group at the University of Manchester and the University of Edinburgh.

It’s worth reading, though I do have some concerns about the methodology that was used.

Their first round of data collection came from an online survey, followed by in-depth interviews with those surveyed and a set of case studies looking at various websites.

To my mind, the use of an online survey likely biases the results toward respondents comfortable with, and willing to use, online resources.  Those who didn’t respond to the survey may represent an unseen “Luddite” fraction of the population, so the usage numbers, low as they are, may in fact be inflated.  The case studies would also have been much more informative had they included actual user demographics and usage data rather than a broad overview of the sites’ offerings.

Respondents came from a variety of research fields in the UK, including Medical Sciences, Biological Sciences, Physical Sciences, Computer Science & Maths, Engineering, Economics & Social Sciences, and Arts & Humanities.

With those caveats, a quick summary of their results:

  • 13% of respondents use Web 2.0 tools in their work frequently, 45% occasionally, and 39% not at all.
  • Frequent usage is much higher among computer scientists and mathematicians and much lower in the medical and life sciences.
  • One major set of barriers to uptake is the difficulty of keeping up with frequently changing tools, along with market fragmentation that has produced an overload of sites offering similar services. This favors late movers willing to wait over early adopters.
  • The other major barrier revolves around perceptions of trust and quality:

“. . . they do not trust what has not been subject to formal peer review . . .”; “reader comments and ratings would be so open to abuse that it’s hard to imagine that people would interpret them as a valid [indicator] of the paper’s worth.”

  • Traditional journals are still considered the most important route of dissemination (70% of respondents thought so, though there’s a surprising 14% drop-off between print-based journals and online-only journals, which are less valued).
  • Blogs and wikis are generally seen as a waste of time or even “dangerous”:

“[Blogs] are not taken very seriously, even blogs based on Nature.  [Colleagues] find it time consuming and not very credible.  Interesting, yes, but . . . as a piece of entertainment first and potentially useful almost serendipitously.”

  • Doing science is more important than talking about science:

“I’d rather spend the time thinking about what I’m going to do next rather than spend it telling others what I’m doing.”

  • Only 20% of respondents expect open access author-pays models to predominate.
  • Very few researchers (5% of respondents) are active in “open research” efforts, openly publishing their outputs and work in progress using blogs and other tools; others consider such practices a waste of time, or even a risk of bringing “anarchy in science.”

None of this is particularly shocking.  But there were a few interesting results to come out of the study, starting with their look at participation and interest in new tools based on the age of the respondent.  We’ve been told ad infinitum that there’s some mythical group of “millennials” or “digital natives” who have grown up with online tools and that they’re going to dominate the world and demand that everything work like the social networks they used in high school.  RIN’s report provides further evidence that this is simply not true:

. . . while there are some statistically-significant variations between different demographic groups, high usage is positively associated with older age groups and those in more senior positions, not with their younger or more junior colleagues.

Digital anthropologist danah boyd hit the nail on the head when she disputed the notion of the “digital native,” noting that we’re all digital natives, coping with new technologies and finding new ways to do things.  Young people are no better at this than older people.

There is no hostility toward Web 2.0 in the research community — 86% of respondents were neutral or enthusiastic about the use of new technologies.  The tools’ failure so far is that they don’t offer clear benefits to the user, and the cost of adoption is far too high.  Researchers “do not see them as comparable or substitutes for other channels and means of communication, but as having their own distinctive role for specific purposes and at particular stages of research.”

The report also notes the danger of spending too much time focusing on the early-adopting heavy users:

. . . dependence for feedback and ideas on a small number of heavy users can create tension between serving what might be the complex and sophisticated needs of core enthusiasts, and engaging with occasional users (and potentially new ones), who might have different needs.

These are key lessons for publishers and anyone else interested in building social media for scientists or integrating its functionality into publications.  Make the advantages of use clear and easy to understand, and make the barriers to entry nonexistent.  Don’t try to replace an existing system with a redundant one that doesn’t offer the same level of accepted career benefit.  Offer something new, something that saves time and effort rather than demanding it.

Most importantly, as seen in this clip around the 4:28 mark, “a man who made a baseball pitch in his garden for ghosts” is probably not a good role model for an aspiring business.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

11 Thoughts on "The RIN Report on Researchers and Web 2.0: If You Build It . . . Well, You Know the Rest"

Make the advantages of use clear and easy to understand, make the barriers of entry nonexistent.

This does not sit very easily with web 2.0 product development though.

It is well known that the web 2.0 landscape relies on bringing products to market as quickly as possible and ensuring that a significant number of early adopters guide the continuing development in the right direction. As the product evolves, core functionality is built upon, not removed.

Previous innovations such as FriendFeed didn’t show any real career benefit at the outset and demanded significant investment, but as the number of users reached a critical mass, the market started to realise that there were other measures of usefulness than time saving or career advancement.

It does seem like a conundrum. The standard way of building a web 2.0 product is indeed iterative. Build it, fail, re-build it better. But does this work for an audience that’s hyper-busy and extremely over-scheduled? My impression is that most scientists seem to judge a tool quickly and either accept it or move on, never to return.

It’s also fairly evident that scientists are extremely good at focusing their professional efforts solely on activities that do provide career advancement. How many scientists use FriendFeed? What percentage of the total number of scientists is that? Without any career advancement benefit, can one expect to see mainstream use of it as a tool?

If you’re an entrepreneur looking to make a living from running that tool, or a publisher looking for ROI on building a big site, you need that mainstream buy-in.

True, but on the other hand, web development is not really about completely controlling your product anymore. As Twitter have shown, create a useful API and the developers will certainly come.

This modular approach makes it very cheap and easy to experiment and see what might work – and if it doesn’t, well, who cares anyway? It only took you a few hours to write, and the knowledge gained was worth that time.

As the business owner you are free to focus on the core product and generate a ROI somehow, and let the developer community create applications that are useful to niche markets.

I think that approach could certainly work if enough core STM firms were to open up their services with APIs.
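To make the “open API” idea concrete, here is a minimal sketch in Python of the sort of thing a third-party developer might build against a publisher’s open endpoint. The endpoint, payload, and field names below are invented for illustration — no actual STM API works exactly this way — but they show how little code a niche mash-up needs once structured data is exposed:

```python
import json

# A hypothetical JSON payload that an open publisher endpoint
# (e.g. GET /articles?subject=dev-bio) might return. The field
# names and DOIs here are invented for illustration.
sample_payload = """
[
  {"doi": "10.0000/example.1", "title": "Axon guidance in zebrafish", "citations": 12},
  {"doi": "10.0000/example.2", "title": "Notch signalling review", "citations": 48}
]
"""

def most_cited(payload: str, n: int = 1):
    """Parse the payload and return the n most-cited articles."""
    articles = json.loads(payload)
    return sorted(articles, key=lambda a: a["citations"], reverse=True)[:n]

top = most_cited(sample_payload)
print(top[0]["title"])  # prints "Notch signalling review"
```

A few hours of this kind of glue code is the whole investment — which is the commenter’s point about letting a developer community serve niche markets while the owner focuses on the core product.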

Attracting developers and attracting users are different issues. Crowdsourcing that kind of labor is indeed cheaper than doing it yourself, and can lead to new directions you wouldn’t have come up with on your own. On the flipside, not controlling your own product can lead to issues as well (see Twitter’s recent efforts to regain control of many of the variants that have been created in order to facilitate their own monetization efforts). It’s not a reliable business model though, as you have to sit around and hope someone turns your efforts into a viable product. If you have a vision, you may be better off driving it yourself.

Regardless, the developers still need to come up with something useful, something with concrete benefits for the users. So far that’s been the sticking point: lots of science Web 2.0 products have open APIs and people are playing around with them. But none have immediate appeal to the mainstream of science, at least so far.

“As the business owner you are free to focus on the core product and generate a ROI somehow”

That “somehow” is also a major unanswered question.

If you have a vision, you may be better off driving it yourself.

I agree that this is probably the most reliable approach for a mainstream product, but science is so granular these days – do you think a one-size-fits-all tool is going to work?

No, absolutely not. But a vision can be aimed at a smaller community, and can be customized for each community interested. COB’s The Node is better than the Nature Network because it is focused on the developmental biology community rather than all of science. I’d take it a step further though, focus on the sub-communities within that smaller community. But that can be done by the owner of the site if the vision is in line with what the various communities need. What’s nice about opening things up though, is that the communities might provide an answer you wouldn’t have come to on your own.

This issue cries out for some serious modeling and measurement. For example, among the 13% who are heavy users, what activities have they given up in order to use web 2.0 tools? What is the replacement function? Then too, these tools perform very different functions, from discussion to news (and publicity). Which functions are being used how much?

By the same token, the revolutionary hype is based on a simplistic model of science, which sees communication as an undifferentiated activity. In fact there are many different flows of information and ideas in the science system, each with its own dynamics. Some are very slow moving, not because of slow technology but intrinsically. A given result may not become important for years. The real time efficiency of web 2.0 is of no help in such cases.

More specifically, if the primary functions of web 2.0 tools are discussion of, and news about, journal articles, then these tools are not going to replace those articles. But on the other hand, it often takes decades for a new technology to take over and web 2.0 is very new by that metric. One wants to know not just what the distribution of usage is, but how it is changing? Is web 2.0 gaining on us?

Comments are closed.