This month, rather than a specific topic, the question for the Chefs was a bit more of a Rorschach test:
“What’s the biggest elephant in the room?”
An “elephant in the room” is an obvious truth or condition that is being ignored or not addressed, or a risk nobody wants to discuss. We dance around these elephants, try to avoid them, and negotiate uncomfortably in their presence, but they never quite go away.
By asking a question to which there is no wrong answer, I wanted to elicit the free-floating anxiety each Chef is sensing in the community, in the midst of our ideas, or feeling personally, and get them to talk about those things that aren’t talked about enough.
Joe Esposito: The question is, What is the biggest elephant in the room? This implies many elephants, a veritable herd, which seems right. There are so many things that should compel our attention right now, but we are distracted by relatively minor things like SOPA, the RWA, and FRPAA. The biggest elephant, of greater significance than open access mandates and library budgets, is consumerization. For those involved with scholarly communications, this seems like an irrelevancy, as the academy is its own place, with its own mission and network of vendors and institutions. But increasingly the devices people use to read scholarly content come from consumer tech companies like Apple and Google, discovery takes place on Facebook and Twitter, and purchasing is the realm of Amazon. This in itself would not be so bad except that these companies impose harsh restrictions on how their devices and platforms can be used. Try to sell books through Apple’s iBookstore, or try to develop an e-book strategy that does not involve Amazon’s proprietary Kindle, and you will soon discover how hard it is for scholarly publishers to control their own operations. Nor can these consumer companies be ignored. What does a librarian do when a patron wants to read a monograph on a Barnes & Noble Nook? How does a journals publisher fit a PDF on the screen of an iPhone? The core problem facing scholarly publishing today is that these consumer companies have inordinate influence over the sale and distribution of scholarly content, yet little interest in the concerns, not to mention the content, of scholarly publishers.
Rick Anderson: The higher education establishment enjoys many advantages, perhaps chief among them a halo effect that has, historically, proven nearly untarnishable. Yes, the system has its critics, and always has: in recent decades, commentators like Allan Bloom, Richard Arum, Josipa Roksa, and, more recently, Benjamin Ginsberg and Stefan Collini have criticized various aspects of higher education from a range of social and political perspectives. But none of their attacks has seriously undermined the magical, almost incantatory effect of the phrase “a college education” among the general public, and these writers have mostly been arguing that we aren’t doing higher education the right way. What is new — or at least seems to me to be increasing in intensity — is the amount of energy and print now being invested in the argument that college may not actually be worth the cost and the trouble, even if done “right.” To some degree, such talk is the natural result of financial crisis: as resources dwindle, we start asking harder questions about our spending priorities. The most obvious and proximate cause of the rancorous debate about the future of higher education in Great Britain, for example, is fiscal pressure; that country simply can no longer afford to do higher education the way it always has. But the debate — both there and in the U.S. — is no longer limited to issues of resource allocation, campus radicalism, or proper pedagogical philosophy. There now seems to be a growing and increasingly serious debate about whether we need traditional institutions of higher education at all. As publishers, scholars, librarians, students, researchers, or simply people with a vested interest in the future of the scholarly enterprise, I think we ignore this particular elephant at our peril.
Tim Vines: The Journal of Irreproducible Results is classic science ephemera and a good wheeze. Yet by rights, the JIR should be the biggest journal of them all — the results in most scientific papers cannot be repeated because the authors supply none of their data, and even when the data are there, it seems that the results in the paper cannot readily be re-obtained in a significant proportion of cases (~30% in a small study I’m involved with). So, the biggest elephant in the room is that most of the papers that underlie modern science cannot currently be reproduced, even though the gold standard for scientific progress is that science is a body of knowledge that is built on stable and reproducible results.
Does this mean that we’re currently in a Wile E. Coyote off-the-cliff phase, where we’ve been running on thin air for a while? Are we in for a very hard landing when we realise that much of our current science isn’t actually correct? This has been the case in a few fields, where retractions of key publications have invalidated much of the subsequent work. However, it isn’t broadly true, for the simple reason that “scientific knowledge” isn’t simply right or wrong. Rather, it exists as a huge number of bits of data and a series of ever broader extrapolations from those data. The narrowest extrapolations are very likely to be correct, and the broadest the least likely, particularly if they contradict current understanding. That understanding is itself the product of many, many congruent observations and thus is unlikely to be wrong (it is certainly incomplete, but that’s why we keep doing science). Unimportant papers are the ones that safely establish narrow extrapolations; bad papers are those that claim to support broad conclusions but don’t; and the truly great papers rigorously support far-reaching conclusions that change how we think about the world.
Even if science is robust to the results of individual papers not being reproducible, we should still be striving to make research open to verification and retesting by archiving the underlying data. Achieving near-universal data archiving will require a concerted effort by funding agencies, researchers, and a broad swathe of scientific journals, but hopefully the momentum is now building to get it done.
Kent Anderson: To me, the biggest elephant in the room is what I am beginning to call “the new serials crisis,” which is far different from the serials crisis of old. The first serials crisis was a crisis in an era of scarcity and intermediaries, so it affected the purchasers of scholarly material, mainly librarians who found the combination of more titles, higher prices, and shrinking budgets justifiably frustrating. (The elephant in the room regarding the first serials crisis was that university administrators, while courting more researchers and science funding, were simultaneously reducing the share of budget devoted to library resources — but that’s an elephant of old.) Today, the serials crisis isn’t really a purchaser/scarcity crisis but a user/abundance crisis.
The flood of papers into the literature is simply overwhelming. Between 2000 and 2010, MEDLINE-indexed journals published approximately as many papers as they had in the previous 30 years (1970-2000). In the past few years, open access and mega-journal experiments have accelerated that growth even further. Meanwhile, the tools for sorting papers are nowhere near as robust, the latency of citations hasn’t improved, commenting and rating tools are anemic at best, and authors are using the publication process to cynically buff their h-indices. We are creating a literature brimming with unread, uncited papers. The incentives have shifted so strongly that frequent publication is the game. For publishers, the focus is shifting to satisfying authors by providing author services, while readers are overwhelmed and under-served.
In dealing with the serials crisis of old, a new author-pays funding model emerged. This model is creating a new serials crisis, but this time, it’s about the utility, reliability, and robustness of the scientific record. It is arguably a more profound and intractable crisis than the purchaser’s crisis of old.
David Wojick: The biggest elephant in the room is the prospect of change. The room is full of blind gurus feeling the elephant. Each guru then reports their findings to the waiting throngs outside. No two reports agree. The people seem not to notice this; some rush back and forth in all directions, while others are immobilized. Moral: decide what to do, then do it, and ignore the elephant. The elephant is a Siren.
David Crotty: Really, we’re dealing with a herd of elephants filling a mansion full of rooms. But I’ll choose one that’s been on my mind of late — intellectual property.
The Bayh-Dole Act, adopted in 1980, has had a profound impact on the way we do science, and in particular, the way researchers and institutions are rewarded for discovery. Essentially the Act gives US researchers and their institutions intellectual property rights to any discoveries that may arise from federally-funded research. The Act has been widely praised, with the Economist calling it, “perhaps the most inspired piece of legislation to be enacted in America over the past half-century. . . . More than anything, this single policy measure helped to reverse America’s precipitous slide into industrial irrelevance.”
But as we move further into an era where data sharing and open access are proposed as the new norm, the Act creates some worrisome contradictions, both philosophical and legal.
We regularly hear arguments that taxpayers own everything their taxes fund, hence it is inherently wrong to put access to scholarly papers behind subscription paywalls. But no one seems to follow the logic of that argument any further, to the often highly-profitable IP generated from taxpayer-funded research. If the taxpayers own the paper that the research generated, what about the actual results themselves?
Unsurprisingly, researchers and university administrators never seem to mention freeing up these resources when they talk about scientific progress and taxpayers’ rights. The University of California system alone made more than $90 million from technology transfer in 2010. Most researchers dream of making a significant breakthrough, one that will lead to a spin-off company bringing in more funds (both for research and for personal gain). If we ask institutions to do without these funds, and strictly limit the reward offered to researchers, what kind of negative impact does that have on research and recruitment?
But the problem goes further than these abstract philosophical arguments about what the taxpayer may or may not “deserve.”
If the researcher and institution fully own the IP from the work, can the federal government (or a publisher, for that matter) demand the full release of that IP through data deposit mandates? This seems a matter that a court would have to decide, and problems will arise either way. If the IP must be released, does that destroy the progress and incentive created by Bayh-Dole? If the courts rule that researchers can’t be compelled to give up their profitable trade secrets and IP, is any federal data mandate toothless and doomed to a lack of compliance?
Another interesting question is whether the research paper itself should be considered IP resulting from federally-funded research. If the researcher owns and can assign copyright, where do open access and PubMed Central deposit mandates fit into the legal scheme of things?
If these mandates are indeed the future of research, then it seems that further legislation is going to be needed to clarify these questions. There are clear cases where release of data is necessary for the public good. But at the same time, we want to continue to offer a high level of incentive and reward to researchers. This may prove a difficult balance to achieve.
Really, this is part of a larger elephant that’s also worth considering. Our society apparently places a low value on scientific research, and the career path it offers has become increasingly unappealing. Does a future of sharing, altruism, and working toward the public good conflict with the notion of science as a compelling career choice? If we continue to take away rewards and recognition for individual achievement, does that further drive the best minds away from academia?
David Smith: What are we for? Just what purpose is publishing journal articles serving at this time and into the future? I keep coming back to these questions mostly due to the whole OA debate. The thing that really gets me about it is why publishers are on the receiving end of so much of the vitriol. Consider this workflow:
- Scholar gets funding for research
- Scholar does research
- Scholar undertakes a process whereby they attempt to maximise the value of the research they’ve done by getting as many papers out as possible, whilst simultaneously getting as much tenure/funding credit as possible for the same body of work (these things tend to trend against each other, and you’ll note that there are two different definitions of value wrapped up there)
- Scholar selects journals in which to publish the work
- Publisher places successful works out for greater dissemination
- Fortune and glory follow (or not).
Now if you don’t like this system, note that the publisher is right at the end of that chain. Publishers don’t control the impact factor and its use; scholars do — specifically the ones who sit on tenure committees and research grant awarding bodies. The thing is, the business of getting research money and tenure seems to be decoupling from the business of sharing research. The purpose of the article as a mechanism for conveying information appears to have been subverted. It may be anecdotal, but I keep hearing from colleagues still in the research business (so to speak) that they publish in the best journals they possibly can, but don’t rely on the best journals to keep up with the latest developments in the field. Situational awareness — of who’s got a good line of research going, or who is on the same discovery track but further ahead — seems to be happening in other venues. And that strikes me as a powerful disconnect in terms of what our wares are being used for. Or maybe that’s just the company I keep. Right now we are selling our wares; we are making money. So it’s certainly not the End of Days. But . . .
We are also starting to see some interest in other uses of journal articles. Data mining springs to mind. The article is frankly a very poor container for data mining purposes regardless of what fancy format you want to put it out in. I saw a hilarious talk recently where the speaker (a scholar) observed that the databases of research pre-publication and post data-mining didn’t match up very well. He wanted to improve the mining process. . . . I was thinking of a more obvious solution (cough — publish the data). If you want to go data-mining, then you want to be using a data-friendly search and discovery environment and a data-friendly container for the things you want to make use of. And that just isn’t the article. So you’ve got novel uses for research outputs which likely are best served by new containers at least alongside the version of record, and you’ve also got a system whereby at least some researchers are not finding the current system much use in helping them keep on top of the information they know is out there. But they are locked into it because they understand all too well the downside to choosing an alternative dissemination system (and there’s nothing like the threat of poverty to focus the mind). Understandably, that leads to a fair amount of frustration (and opinion pieces in major international newspapers). But here’s the thing: we aren’t therapists. We are publishers, and unlike Clay Shirky and others, I firmly believe that we serve a much-needed function. But I do think we need to climb into the skins of our many users and walk around for a while. We do that, we’ll be OK. We don’t? Well, one person’s threat is another’s opportunity.
Michael Clarke: The elephant in the room is print-centric thinking. Publishers continue to approach their content and manage their portfolios within the context of print paradigms.
One example of this phenomenon is the millions upon millions of dollars that STM and scholarly publishers continue to spend on printing journals. Only a small handful of clinical medical journals that continue to boast strong print advertising programs make any money on printed journals. Even the online editions of journals and books remain simulacra of print. Journals by and large remain PDF delivery services — essentially digital replicas of the print issue. While some organizations are experimenting with online-centric functionality such as semantically generated content relationships, article-level metrics, and data visualization and integration, such initiatives remain the exception, not the norm.
There are many reasons for this state of affairs, including the inherent conservatism of the academic community and the role of traditional scholarly publications in career advancement, which creates strong headwinds for innovation. A less discussed factor, however, is talent. Executive leadership and boards of directors are often composed of individuals who built their careers developing print product lines or publishing print books and journals. They are often not heavy users of online and mobile applications and do not have digital application development as a primary orientation. This print orientation manifests itself in the structure of the organizations they run, with divisions organized around product silos (how many organizations still have “journals” divisions and “books” divisions?) rather than around customer needs. It also manifests itself in who gets hired, promoted, and otherwise rewarded, and for what activities. What skill sets are being recruited? How many digital product development staff does the typical society publisher have? How many organizations can answer the question, “What is in your digital product development pipeline?” How many can answer the same question without referring to a new book or journal that happens to have an online edition?
Success in the next decade depends on delivering increasingly sophisticated digital products that meet the ever more complex needs of the professionals we collectively serve. How many organizations are truly prepared for this challenge?