Culture trumps technology. This is the main message of a five-year study into the values, motivations, and communication behaviors of scholars and their associates at research institutions across the United States.
“Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines” is the result of 160 in-depth interviews across 45 U.S. research institutions, focusing on seven academic fields: archaeology, astrophysics, biology, economics, history, music, and political science.
The 728-page report was released last month and is freely available from the project website. Supported by a grant from the Andrew W. Mellon Foundation, the report was authored by Diane Harley, Sophia Krzys Acord, Sarah Earl-Novell, Shannon Lawrence, and C. Judson King, all from the Center for Studies in Higher Education at UC Berkeley.
The recurring theme in the report is that academia is a highly conservative system, largely determined by disciplinary norms and organized around external peer review and assessment. Starting from this premise, the resultant lack of scholarly engagement in radically new forms of publishing should not be that surprising.
The advice given to pre-tenure scholars was consistent across all fields: focus on publishing in the right venues and avoid spending too much time on public engagement, committee work, writing op-ed pieces, developing websites, blogging, and other non-traditional forms of electronic dissemination (including online course activities). (p.ii)
What is surprising is that so many publishers and new commercial ventures have jumped into the Web 2.0 space hoping they could do for scholars what Facebook did for teenagers, relying on a Zeitgeist of “build it and they will come.” Yes, teenagers become adults, but they often drop their teenage habits through the socialization of the classroom and enculturation into academic culture. Harley writes:
There is ample evidence that, once initiated into the profession, newer scholars—be they graduate students, postdoctoral scholars, or assistant professors—adopt the behaviors, norms, and recommendations of their mentors in order to advance their careers. (p.iii)
Yet the report stops short of suggesting that scholarly communication is fixed and immutable. Academic culture changes, albeit slowly, and scholarship evolves with it. New technologies come to market, although most fail or are largely ignored. A few survive because they address a disciplinary need, manage to attract enough early adopters, and, through a combination of persistence, marketing, and sheer luck, end up becoming standard practice.
It’s not an easy road to success. What’s more, the developers of new tools that ignore the core values of disciplines and the reward systems embedded within them should start rethinking their products before they burn through all their venture capital.
Experiments in new genres of scholarship and dissemination are occurring in every field, but they are taking place within the context of relatively conservative value and reward systems that have the practice of peer review at their core. (p. iv – v)
Harley and her co-authors employ a grounded theory approach to their work, listening to academics, administrators, and librarians until common patterns emerge and general statements can be made. It’s the approach of an anthropologist, which is not that surprising, given that this is Harley’s background.
Few readers will take the time to plow through the entire 728 pages of this report. Luckily, it’s prefaced with a full Executive Summary. Chapter 1 contains a good description of the methodology, main findings, and conclusions, followed by seven detailed case studies, one for each discipline. The last chapter is reserved for a bibliography of relevant literature.
While the authors demonstrate a clear understanding of the relevant literature, reaching beyond journal articles to include reports, position papers, conferences, newspapers, and blogs, the report conspicuously lacks the core literature from the sociology of science, a field dedicated to understanding science as a social system.
Nevertheless, this report deserves to sit on the shelf alongside similar important works of academic sociology such as Warren Hagstrom’s The Scientific Community (1965), Robert Merton’s The Sociology of Science (1973), Diana Crane’s Invisible Colleges (1972), Jonathan and Stephen Cole’s Social Stratification in Science (1973), and Bruno Latour’s Laboratory Life (1986) and Science in Action (1987).
Assessing the Future Landscape of Scholarly Communication is a landmark work deserving of scholarly as well as professional recognition. The fact that it was self-published online rather than by, say, the University of Chicago Press, is somewhat telling.
Perhaps scholarly communication is changing . . . at least a bit.
19 Thoughts on "Culture Trumps Technology: The UC Berkeley Scholarly Communication Report"
The academic community may be conservative by Berkeley standards, but it is perfectly capable of rapid technological change. The Web went from invention to saturation (everyone uses it) in a decade, which is extremely fast for such a fundamental change.
This list from the report provides a better explanation: “public engagement, committee work, writing op-ed pieces, developing websites, blogging, and other non-traditional forms of electronic dissemination (including online course activities).” Blogging, etc., are activities, not technologies. These activities do not do the job that journal publishing does, so they cannot replace journal publishing. And since journal publishing is the core of the academic evaluation system, they cannot compete with it either.
The mistake seems to have been thinking that journal publishing is just another form of information dissemination. But the content of journal articles is highly specialized, lengthy, and detailed, and citation is the basis of evaluation.
In fact, social networking and blogging have succeeded where they are needed, in large part for purposes of evaluation. Teenagers are evaluated on the basis of the groups they belong to. Pundits are evaluated on the basis of the attention they can evoke (like this comment).
In addition some people seem to have thought, or hoped, that evaluation would go away. This is unrealistic.
Much of the push for Web 2.0 technologies in science is something of an attempt to change the ground rules for evaluation and success. Science, as it is, selects for achievement, though achievement is often controversial and difficult to determine. While much of what is being suggested is an attempt to improve this evaluation, a lot of it misses the mark. Much of it would rank scientists by social networking skills rather than actual scientific achievement. If we’re giving career credit for how well one gets along with friends online and how big a loyal network one builds, then we’re no longer measuring scientists on the actual discoveries their labs are turning up.
Much is well-intentioned, searching for methods that are more meaningful than the impact factor, but most of the suggestions miss the mark. Although I do suspect that there’s at least a small contingent attempting to do an end run around having to actually produce to climb the ladder, to convince everyone that the thing they’re good at is what matters most.
What I find surprising is that in fields where it is compulsory to share data as soon as they are available (I’m thinking structural genomics here, but there must be other large consortia with similar open access values), researchers aren’t using a web 2.0 approach to flag up the information. Papers, if they are written at all, tend to follow a while after the data are made public. In this case the constraint isn’t one of secrecy but of a culture where communication isn’t seen to be as important as generating the data in the first place. There are places where this information could be made available – Proteopedia, for example – but the take-up seems low.
This sounds promising, but what do you mean by “flag up the information”? If the content is similar to that in the journal article there could be an IP problem. Some journals (unfortunately) do not want article content posted before they publish. On the other hand if, as you suggest, communication is not part of the process then there is no problem to be solved.
I think you’re right that there are two issues here. One, the intention to write a journal article at some point would put people off annotating their data completely, whether in a wiki, database, or pre-print server. Funding agencies can make you deposit your data, but they can’t make you deposit your ideas. Two, I agree that the lack of take-up of web 2.0 could come down to scientists seeing no reason to communicate their results.
There’s a difference, though, between collecting data and doing an experiment. Sequencing a genome is not an experiment; there’s no hypothesis being tested. It’s in the analysis of that sequence data that you’re actually doing the experiment and coming up with the publishable results. That’s probably why you don’t see a lot of that type of analysis on deposited sequence data before the paper is published. Publication is still the measure of success in many ways, and there are many reasons why, in a competitive field, one would want to keep that analysis close to the vest, at least until after publication.
—Two, I agree that the lack of take-up of web 2.0 could come down to scientists seeing no reason to communicate their results.—
I don’t think it’s that they see no reason to communicate, it’s that they see no reason to communicate in this manner. The Web 2.0 tools don’t provide the career-building rewards that other methods do, yet are often more time-consuming. Also, in many fields, so few people are using Web 2.0 tools that more traditional methods can be more efficient for reaching the audience you want to reach.
When making the comparison with Web 2.0 services such as Flickr, YouTube, Slideshare, etc., a factor often overlooked and seriously underestimated is that these provide services for content for which there was previously no outlet. That is not the case for scholarly papers. So services that seek to enhance scholarly communication – and in this respect I am a great supporter of open access, for example – must recognise that they are marginal (even if by a large margin) in terms of additional dissemination, and make the case accordingly. This would explain why, on scholarly communication, the conservatism of the academic system identified in this UC Berkeley report prevails and it isn’t easily persuaded to change. The case for change has to be better and more focussed, and where that has happened, e.g. open access policies, it has been more successful.
I read into the beginning of the methodology and was stopped by the description of how they sourced their interviews. They used “snowball sampling,” which essentially means they got initial contacts to refer them on to others. What this means in terms of the results is that they have a snapshot of what elite academics thought a few years back.
Aren’t elite academics most likely to embrace the traditional culture in which they’re ensconced and eschew new technology?
Yes, their report did focus on academics from “elite” institutions, although that does not limit the interpretation of their findings, because:
1) Researchers at these elite institutions generate the vast majority of the published literature. This means that academic publishing is largely driven by the needs and desires of this group, and
2) Second-tier institutions often follow the same reward models (albeit in a more relaxed version) as these elite institutions.