
This month, rather than a specific topic, the question for the Chefs was a bit more of a Rorschach test:

“What’s the biggest elephant in the room?”

An “elephant in the room” is an obvious truth or condition that is being ignored or not addressed, or a risk nobody wants to discuss. We dance around these elephants, try to avoid them, and negotiate uncomfortably in their presence, but they never quite go away.

By asking a question to which there is no wrong answer, I wanted to elicit the free-floating anxiety each Chef is sensing in the community, in the midst of our ideas, or feeling personally, and get them to talk about those things that aren’t talked about enough.

Joe Esposito: The question is, What is the biggest elephant in the room? This implies many elephants, a veritable herd, which seems right. There are so many things that should compel our attention right now, but we are distracted by such relatively minor things as SOPA, RWA, and FRPAA. The biggest elephant, of greater significance than open access mandates and library budgets, is consumerization. For those involved with scholarly communications, this seems like an irrelevancy, as the academy is its own place, with its own mission and network of vendors and institutions. But increasingly the devices people use to read scholarly content are from such consumer tech companies as Apple and Google, discovery takes place on Facebook and Twitter, and purchasing is the realm of Amazon. This in itself would not be so bad except that these companies impose harsh restrictions on how their devices and platforms can be used. Try to sell books through Apple’s iBookstore or develop a strategy for e-books that does not involve Amazon’s proprietary Kindle, and you will soon discover that it is hard for scholarly publishers to control their own operations. Nor can these consumer companies be ignored. What does a librarian do when a patron wants to read a monograph on a Barnes & Noble Nook? How does a journals publisher fit a PDF on the screen of an iPhone? The core problem facing scholarly publishing today is that these highly influential consumer companies have inordinate influence over the sale and distribution of scholarly content, but they have little interest in the concerns, not to mention the content, of scholarly publishers.

Rick Anderson: The higher education establishment enjoys many advantages, perhaps chief among them a halo effect that has, historically, proven to be nearly untarnishable. Yes, the system has its critics, and always has: commentators like Allan Bloom, Richard Arum, Josipa Roksa, and, more recently, Benjamin Ginsberg and Stefan Collini have all criticized various aspects of higher education from a range of social and political perspectives in recent decades, and those are only a few of the more influential and most recent commentators. But none of their attacks has seriously undermined the magical, almost incantatory effect of the phrase “a college education” among the general public. But these writers have mostly been arguing that we aren’t doing higher education the right way. What is new — or at least seems to me to be increasing in intensity — is the amount of energy and print now being invested in the argument that college may not actually be worth the cost and the trouble, even if done “right.” To some degree, such talk is the natural result of financial crisis: as resources dwindle we start asking harder questions about our spending priorities. The most obvious and proximate cause of the incredibly rancorous debate about the future of higher education in Great Britain, for example, is fiscal pressure; that country can simply no longer afford to do higher education the way it always has. But the debate — both there and in the U.S. — is no longer limited to issues of resource allocation or campus radicalism or proper pedagogical philosophy. There now seems to be a growing and increasingly serious debate about whether or not we need traditional institutions of higher education at all. As publishers, scholars, librarians, students, researchers, or simply people with a vested interest in the future of the scholarly enterprise, I think we ignore this particular elephant at our peril.

Tim Vines: The Journal of Irreproducible Results is classic science ephemera and a good wheeze. Yet by rights, the JIR should be the biggest journal of them all — the results in most scientific papers cannot be repeated because the authors supply none of their data, and even when the data are there, it seems that the results in the paper cannot readily be re-obtained in a significant proportion of cases (~30% in a small study I’m involved with). So, the biggest elephant in the room is that most of the papers that underlie modern science cannot currently be reproduced, even though the gold standard for scientific progress is that science is a body of knowledge that is built on stable and reproducible results.

Does this mean that we’re currently in a Wile E. Coyote off-the-cliff phase, where we’ve been running on thin air for a while? Are we in for a very hard landing when we realise that much of our current science isn’t actually correct? This has been the case in a few fields, where retractions of key publications have invalidated much of the subsequent work. However, this isn’t broadly true, for the simple reason that “scientific knowledge” isn’t either wrong or right. In fact, it exists as a huge number of bits of data and a series of ever broader extrapolations from those data. The narrowest extrapolations are very likely to be correct, and the broadest the least likely to be correct, particularly if they contradict current understanding. This current understanding is itself the product of many, many congruent observations and thus is not likely to be wrong (current understanding is certainly incomplete, but that’s why we keep doing science). Unimportant papers are the ones that safely establish narrow extrapolations, the bad ones those that claim to support broad conclusions but don’t, and the truly great papers rigorously support far-reaching conclusions that change how we think about the world.

Even if science is robust to the results of individual papers not being reproducible, we should still be striving to make research open to being verified and retested by archiving the underlying data. Achieving near-universal data archiving is going to require a concerted effort by funding agencies, researchers, and a broad swathe of scientific journals, but hopefully the momentum is now building to get it done.

Kent Anderson: To me, the biggest elephant in the room is what I am beginning to call “the new serials crisis,” which is far different than the serials crisis of old. The first serials crisis was a crisis in an era of scarcity and intermediaries, so it affected the purchasers of scholarly material, mainly librarians who found the combination of more titles, higher prices, and shrinking budgets justifiably frustrating. (The elephant in the room regarding the first serials crisis was that university administrators, while courting more researchers and science funding, were simultaneously reducing the share of budget devoted to library resources — but that’s an elephant of old.) Today, the serials crisis isn’t really a purchaser/scarcity crisis but a user/abundance crisis.

The flood of papers into the literature is simply overwhelming. Between 2000 and 2010, MEDLINE-indexed journals published approximately as many papers as had appeared in the previous 30 years (1970-2000). In the past few years, open access and mega-journal experiments have swelled the number of published papers even further. Meanwhile, tools to sort them have not kept pace, the latency of citations hasn’t improved, commenting and rating tools are anemic at best, and authors are using the publication process to cynically buff their h-indices. We are creating a literature brimming with unread, uncited papers. The incentives have shifted so strongly that frequent publication is the game. For publishers, the focus is shifting to satisfying authors by providing author services, while readers are being overwhelmed and under-served.

In dealing with the serials crisis of old, a new author-pays funding model emerged. This model is creating a new serials crisis, but this time, it’s about the utility, reliability, and robustness of the scientific record. It is arguably a more profound and intractable crisis than the purchaser’s crisis of old.

David Wojick: The biggest elephant in the room is the prospect of change. The room is full of blind gurus who are feeling up the elephant. Each guru then reports their findings to the waiting throngs outside. No two reports agree. The people seem not to notice this, as some rush back and forth in all directions, while others are immobilized. Moral: decide what to do, and then do it, and ignore the elephant. The elephant is a Siren.

David Crotty: Really, we’re dealing with a herd of elephants filling a mansion full of rooms. But I’ll choose one that’s been on my mind as of late — intellectual property.

The Bayh-Dole Act, adopted in 1980, has had a profound impact on the way we do science, and in particular, the way researchers and institutions are rewarded for discovery. Essentially the Act gives US researchers and their institutions intellectual property rights to any discoveries that may arise from federally-funded research. The Act has been widely praised, with the Economist calling it, “perhaps the most inspired piece of legislation to be enacted in America over the past half-century. . . . More than anything, this single policy measure helped to reverse America’s precipitous slide into industrial irrelevance.”

But as we move further into an era where data sharing and open access are being proposed as the new norm, this act creates some worrisome contradictions, both philosophical and legal.

We regularly hear arguments that taxpayers own everything their taxes fund, hence it is inherently wrong to put access to scholarly papers behind subscription paywalls. But no one seems to follow the logic of that argument any further, to the often highly-profitable IP generated from taxpayer-funded research. If the taxpayers own the paper that the research generated, what about the actual results themselves?

Unsurprisingly, researchers and university administrators never seem to mention freeing up these resources when they talk about scientific progress and the taxpayers’ rights. The University of California system alone made more than $90 million from technology transfer in 2010. Most researchers dream about making a significant breakthrough, one that will lead to a spin-off company that will bring in more funds (both for research and personal gain). If we ask institutions to do without these funds, and strictly limit the reward offered to researchers, what kind of negative impact does that have on research and recruitment?

But the problem goes further than these abstract philosophical arguments about what the taxpayer may or may not “deserve.”

If the researcher and institution fully own the IP from the work, can the federal government (or a publisher, for that matter) demand the full release of that IP through data deposit mandates? This seems a matter that a court would have to decide, but problems will arise whichever way the decision goes. If the IP must be released, does this destroy the progress and incentive created by Bayh-Dole? If the courts rule that researchers can’t be compelled to give up their profitable trade secrets and IP, then is any federal data mandate toothless and doomed to a lack of compliance?

Another interesting question is whether the research paper itself should be considered IP resulting from federally-funded research. If the researcher owns and can assign copyright, then where do open access and PubMed Central deposit mandates fit into the legal scheme of things?

If these mandates are indeed the future of research, then it seems that further legislation is going to be needed to clarify these questions. There are clear cases where release of data is necessary for the public good. But at the same time, we want to continue to offer a high level of incentive and reward to researchers. This may prove a difficult balance to achieve.

Really this is a part of a larger elephant that’s also worth considering. Our society places an apparent low value on scientific research, and the career path offered has become increasingly less appealing. Does a future of sharing, of altruism, and working toward the public good conflict with the notion of science as a compelling career choice? If we continue to take away rewards and recognition for individual achievement, does that further drive the best minds away from academia?

David Smith: What are we for? Just what purpose is publishing journal articles serving at this time and into the future? I keep coming back to these questions mostly due to the whole OA debate. The thing that really gets me about it is why publishers are on the receiving end of so much of the vitriol. Consider this workflow:

  1. Scholar gets funding for research
  2. Scholar does research
  3. Scholar undertakes a process whereby they attempt to maximise the value of the research they’ve done by attempting to get as many papers out as possible, whilst simultaneously getting as much tenure/funding credit as possible for the same body of work (these things tend to trend against each other and you’ll note that there are two different definitions of value wrapped up there)
  4. Scholar selects journals in which to publish the work
  5. Publisher places successful works out for greater dissemination
  6. Fortune and glory follows (or not).

Now if you don’t like this system, note that the publisher is right at the end of that chain. Publishers don’t control impact factor and its use; scholars do — specifically the ones that sit on tenure committees and research grant awarding bodies. The thing is, the business of getting research money and tenure seems to be decoupling from the business of sharing research. The purpose of the article as a mechanism for conveying information appears to have been subverted. It may be anecdotal, but I keep hearing from colleagues still in the research business (so to speak) that they publish in the best journals they possibly can, but don’t rely on the best journals to keep up with the latest developments in the field. Situational awareness — of who’s got a good line of research going or who is on the same discovery track but further ahead — seems to be happening in other venues. And that strikes me as a powerful disconnect in terms of what our wares are being used for. Or maybe that’s just the company I keep. Right now we are selling our wares; we are making money. So it’s certainly not the End of Days. But . . .

We are also starting to see some interest in other uses of journal articles. Data mining springs to mind. The article is frankly a very poor container for data mining purposes regardless of what fancy format you want to put it out in. I saw a hilarious talk recently where the speaker (a scholar) observed that the databases of research pre-publication and post data-mining didn’t match up very well. He wanted to improve the mining process. . . . I was thinking of a more obvious solution (cough — publish the data). If you want to go data-mining, then you want to be using a data-friendly search and discovery environment and a data-friendly container for the things you want to make use of. And that just isn’t the article. So you’ve got novel uses for research outputs which likely are best served by new containers at least alongside the version of record, and you’ve also got a system whereby at least some researchers are not finding the current system much use in helping them keep on top of the information they know is out there. But they are locked into it because they understand all too well the downside to choosing an alternative dissemination system (and there’s nothing like the threat of poverty to focus the mind). Understandably, that leads to a fair amount of frustration (and opinion pieces in major international newspapers). But here’s the thing: we aren’t therapists. We are publishers, and unlike Clay Shirky and others, I firmly believe that we serve a much-needed function. But I do think we need to climb into the skins of our many users and walk around for a while. We do that, we’ll be OK. We don’t? Well, one person’s threat is another’s opportunity.

Michael Clarke: The elephant in the room is print-centric thinking. Publishers continue to approach their content and manage their portfolios within the context of print paradigms.

One example of this phenomenon is the millions upon millions of dollars that STM and scholarly publishers continue to spend on printing journals. Only a small handful of clinical medical journals that continue to boast strong print advertising programs make any money on printed journals. Even the online editions of journals and books continue to be simulacra of print. Journals by and large remain PDF delivery services — essentially digital replicas of the print issue. While some organizations are experimenting with online-centric functionality such as semantically generated content relationships, article-level metrics, and data visualization and integration, such initiatives remain the exception rather than the norm.

There are many reasons for this state of affairs, including the inherent conservatism of the academic community and the role of traditional scholarly publications in career advancement, which creates strong headwinds for innovation. A less discussed factor, however, is talent. Executive leadership and boards of directors are often composed of individuals who built their careers developing print product lines or publishing print books and journals. They are often not heavy users of online and mobile applications and do not have digital application development as a primary orientation. This print orientation manifests itself in the structure of the organizations they run, with divisions organized around product silos (how many organizations still have “journals” divisions and “books” divisions?) as opposed to around customer needs. It also manifests itself in terms of who gets hired, promoted, and otherwise rewarded, and for what activities. What skill sets are being recruited? How many digital product development staff does the typical society publisher have? How many organizations can answer the question, “What is in your digital product development pipeline?” How many can answer the same question without referring to a new book or journal that happens to have an online edition?

Success in the next decade is dependent on delivering increasingly sophisticated digital products that meet the ever more complex needs of the professionals we collectively serve – how many organizations are truly prepared for this challenge?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

21 Thoughts on “Ask the Chefs: ‘What’s the Biggest Elephant in the Room?’”

Many thanks for this — a fascinating selection of insights.

Rick Anderson wrote: “The most obvious and proximate cause of the incredibly rancorous debate about the future of higher education in Great Britain, for example, is fiscal pressure; that country can simply no longer afford to do higher education the way it always has.”

Actually, I think we in Britain can afford to do HE the way we always have. The problem is the expectation that an ever-increasing proportion of kids will go to university. When half of all age-eighteen school-leavers expect to go to university rather than a quarter of them, obviously the cost will double. So while 30 years ago university entry in Britain was determined almost exclusively by academic ability, now there’s a much stronger ability-to-pay component. I am not a fan of this change.

Kent Anderson wrote: “For publishers, the focus is shifting to satisfying authors by providing author services.”

Actually, that’s always been what scholarly publishing has been about. But the once-ubiquitous subscription model concealed the reality of what was going on. One of the many benefits of Gold OA is that it places the revenue right where the costs are, and yields a more transparent, efficient market.

David Crotty wrote: “But no one seems to follow the logic of that argument any further, to the often highly-profitable IP generated from taxpayer-funded research. If the taxpayers own the paper that the research generated, what about the actual results themselves?”

Agreed. I would be in favour of this, and really can’t see what the supposed advantage of the Bayh-Dole Act might be. If it’s intended to act as an incentive for people to go into the academic sciences, then I don’t see the necessity: David says that “our society places an apparent low value on scientific research, and the career path offered has become increasingly less appealing”, yet we continue to mint many more Ph.Ds than there are academic jobs for. And I doubt many people go into academic science research for the financial rewards!

David also wrote: “… whether the research paper itself should be considered IP resulting from federally-funded research? If the researcher owns and can assign copyright, then where do open access and PubMed Central deposit mandates fit into the legal scheme of things?”

I don’t see any problem here. Accepting a grant is like any other contract: you get something, and in exchange you give something. When an OA mandate is in force, part of what a researcher gives in exchange for the money is the right not to post to PubMed Central. Researchers who object to that requirement (if we can imagine such an animal) are quite at liberty to reject the contract, refuse the federal money, and seek funding elsewhere.

David Smith writes: “The business of getting research money and tenure seems to be decoupling from the business of sharing research.”

That is a very apposite and troubling observation.

But what I found most fascinating in this article was the things that were NOT mentioned. Eight opinions here on what the elephants in the room are for scholarly publishing, and not one mentioned any of the following: arXiv, PLoS, F1000 Research, FigShare, or even blogs. These are things that are eating the scholarly-publishing lunch; a little of it yesterday, more today, and all of it tomorrow. I understand that the SSP doesn’t have an actual plan for dealing with these things, but I am pretty shocked that they’re not even on the table as things that need discussing. Is the elephant too big to see?

Mike,

My main response is to your point that “SSP doesn’t have an actual plan for dealing with these things.” Nor will it ever. SSP is not a planning organization in that regard, providing strategic planning services to its members or anything like that. It’s an organization that helps its members learn, share, and network. Or, to quote its mission statement, displayed on this blog and elsewhere, “[t]o advance scholarly publishing and communication, and the professional development of its members through education, collaboration, and networking.”

There are members of SSP who work at PLoS, run blogs (hello?), work at F1000, etc. All of these things are part of SSP’s ecosystem as a networking organization. I think those weren’t named because we talk about them a lot. They are known to be in the room, and aren’t the elephants.

I’ll now respond to your “take” on my elephant. Having worked at two major journals during the emergence of OA, I can tell you that subscription publishers of major journals were not focused on authors. They were focused on the market, on subscriptions (readers), and on strategies to maximize both. They still are, but now are more focused on authors as a supply-side requirement feeding the other two, not as a central service tenet. I would counter that one of the risks of Gold OA (in many ways — COI, sustainability, scalability) is precisely what you say — it places the revenues where the costs are. That has not yielded a more transparent system, either, as plenty of companies use Gold OA to seed the literature with less disclosure than they’d have to make otherwise.

As Kent has mentioned, an “elephant in the room” is something huge that goes unmentioned. The things you’re describing are all regularly mentioned, discussed and dissected throughout the publishing world.

To answer your comments on my portion:

I would be in favour of this, and really can’t see what the supposed advantage of the Bayh-Dole Act might be. If it’s intended to act as an incentive for people to go into the academic sciences, then I don’t see the necessity

You might feel differently if technology transfer revenue was making up a significant portion of your institution’s budget, and if getting rid of it meant massive cuts in staffing and services offered, making it much harder to do your job. You might also feel differently about the incentives offered if the revenue from your patents or your startup company was paying your mortgage or for your children to go to school.

yet we continue to mint many more Ph.Ds than there are academic jobs for. And I doubt many people go into academic science research for the financial rewards!

And you don’t see either of these situations as problematic? We’re training generation after generation of scientists then throwing them away, rather than letting them continue to contribute to scientific research. Is it better for society for someone with a PhD in Bioinformatics to continue trying to solve disease or for them to work in a bank?

And given the value that research offers to society, how much of our economies it drives, shouldn’t the rewards for this level of contribution be greater than that offered to those who offer no contribution? We’re driving the best and brightest minds out of science because of the poor career prospects and limited rewards offered. I’d rather see those top minds stay in basic research than taking lucrative jobs on Wall Street that do nothing positive for society.

I don’t see any problem here. Accepting a grant is like any other contract: you get something, and in exchange you give something. When an OA mandate is in force, part of what a researcher gives in exchange for the money is the right not to post to PubMed Central. Researchers who object to that requirement (if we can imagine such an animal) are quite at liberty to reject the contract, refuse the federal money, and seek funding elsewhere.

The problem is that the federal government is offering conflicting rights to researchers. Bayh-Dole specifically grants researchers that IP and specifies that they don’t have to give it up to accept federal funds. Because of Bayh-Dole, such a requirement won’t stand up in court and at best can only be a suggestion.

Point taken that arXiv, PLoS, F1000, FigShare and blogs are too well known to count as “elephants in the room” rather than being overlooked.

On “really can’t see what the supposed advantage of the Bayh-Dole Act might be”: I meant for the government, not for the institutions.

On “Yet we continue to mint many more Ph.Ds than there are academic jobs for. And I doubt many people go into academic science research for the financial rewards!” ← “And you don’t see either of these situations as problematic?”

I certainly don't see it as problematic that people don't go into academic science for the money. For people picking a career on the basis of the money they can make, academic science is a stupid choice and always has been. That's as it should be. Science research is playing in the biggest, best sandpit in the world. Sometimes I am amazed that anyone ever gets paid to do it. (I don't.) People who go into science to get rich have other options than government-funded research.

And, yes, it is a problem that we make more Ph.Ds than we need — or, to put it another way, that we need fewer Ph.Ds than we make. My point was that if the inability to produce private profit from publicly funded research discourages some people from entering Ph.D programs, then the tendency of that discouragement is in the direction of balancing the situation rather than further unbalancing it. Sorry not to have been clearer.

The advantage of Bayh-Dole for the government is to drive research which drives the economy to avoid the “industrial irrelevance” mentioned by The Economist. How much money are technology and pharmaceutical companies contributing to the US economy? How many jobs do they offer? How much of this would be possible without basic research?

The ideal situation for anyone is to find a career they love that offers a tremendous level of remuneration. Right now, you’re right, you’d be an idiot to choose a career in science for the pay. But why shouldn’t we reward the people who are doing important work to improve our health, our economy, our future? Why shouldn’t we make science as attractive a career as possible? Why must science be the realm of the hopeless romantic, the ascetic monk willing to sacrifice all comfort and happiness for the cause?

I think we need all the Ph.D.’s that we make, at least in the process of making them. They’re the ones actually doing all the research. The problem is what to do with them after they have their Ph.D. If we offered better levels of funding for scientific research and placed a higher societal premium on the benefits it offers, we’d find a place for them. That’s perhaps a point a bit peripheral to Bayh-Dole, but part of the larger picture where we make science a really unattractive career for most people.

You might feel differently if technology transfer revenue was making up a significant portion of your institution’s budget

I looked for such an institution a while back, and could find only NYU, which makes oodles from Remicade (an arthritis drug I think). As best I can tell, patents are at best a wash (make no more than they cost to pursue) for nearly all universities.

My data were hardly a Cochrane review, of course. It would be nice to see some good evidence regarding Bayh-Dole’s actual effect on universities.

There is a good discussion of this in Paula Stephan’s new book, “How Economics Shapes Science.” Not only is it an “oversimplification” in her words to attribute this all to Bayh-Dole (there were court cases, including a Supreme Court case that made life forms patentable, preceding the law), but the trend had been developing for a long time.

In any event, there are many examples offered in her book, including the Cohen-Boyer patent for recombinant DNA (Stanford), the Emtriva patent (Emory), the Lyrica patent (Northwestern), and Taxol (Florida State). Not all are patented. There’s Gatorade (University of Florida) and Everyday Mathematics (University of Chicago). And don’t forget Google (Stanford).

As I understand it, the revenue is concentrated among a fairly small number of top schools. Some schools are better at turning research into patented, profitable IP than others. For those schools, it is significant revenue. Stanford made around $67 million in 2010 while spending $7.5 million (http://otl.stanford.edu/about/resources/about_resources.html).

You’d have to look on a school by school basis to see if actual reports are released, but the National Academy did a recent report trying to look at, among other things, the impact of Bayh-Dole (http://www.nap.edu/openbook.php?record_id=13001&page=R1) and you may find some of what you’re looking for in there.

It’s also important to remember that Bayh-Dole only applies to research funded by the US federal government. Research done with funding from other sources may have completely different sets of rules regarding IP.

Gee, I thought we were supposed to write a paragraph, but all the other Chefs wrote a page. In any case they exemplify my point about every guru having a different view, so I hereby incorporate all by reference. My recommendation to management is this: step back and ask how much time we are going to allocate to thinking about all these change issues. Establish a budget of attention and stick to it. Otherwise change issues will eat you alive.

Joe rightly notes how three big gorillas (Apple, Amazon, and Google) have been shaping the market for books, but not in ways that necessarily serve the interests of scholarly book publishers. As small players, scholarly publishers tend to get jerked around by the gorillas, who are focused mainly on the trade consumer market. But there is another way in which scholarly books are being marginalized, and that is in the discussion surrounding OA. OA advocates mostly don’t talk about books at all because they assume that authors of books have an economic stake in them that does not exist for journal articles. (This is a mistake: some authors of journal articles have become rich from their share of royalties on the reprinting of their articles in anthologies.) But erecting a “digital divide” between books and journals in the discussion of OA is as potentially harmful as the siloing of book and journal content was before projects like UPeC finally got under way. Administrators, librarians, and faculty may get exercised about RWA and FRPPA, but they completely ignore the need to discuss OA for books as well, and what funding structures must be in place for books to go OA. Only a very few universities have done much of anything to advance the cause of OA for books. And part of this discussion also needs to be whether the “book” in the digital age should be more than just an electronic facsimile of print. Jim O’Donnell has recently suggested on Liblicense that we may be heading toward the age when the “post-book app” becomes the dominant mode of conveying scholarly research.

I think David Wojick’s point is well taken; we would all do well to beware the advice of pundits.

Having said that, allow me to add my perspective. Once again I see that Kent has hit upon important trends, both in his original post (the comment about the proliferation of scientific literature on MEDLINE) and in his comments about Paula Stephan’s book on economics and science. But Kent, I think you have missed an important trend, and hence the biggest elephant of all within that data. I think it worthwhile to dig a little deeper and ask what the catalysts behind this proliferation of data are. Many people attribute the increase in literature output to new technologies; you alluded to this in your original post. While I believe that is true, it is only a partial explanation.

The elephant in the proliferation of publication is not in the total number of articles but in the geographic origin of those articles. I have been doing an ongoing analysis of the PubMed database over the past few years. I use 1997 as my base year, as it was the year before an important trend began to emerge: the rise of Asia as a source of medical information. In 1997 PubMed archived 451,993 articles. Authors from the ten largest western countries (US, UK, Canada, France, Germany, Italy, Sweden, Netherlands, Switzerland and Spain) accounted for 255,370 (56.5%) of those articles. Authors from the six largest Asian countries (Hong Kong, Japan, Taiwan, China, South Korea and India) accounted for 45,945 (10.2%) of those articles.

By 2011 things had changed dramatically. In 2011 PubMed archived 984,284 articles (experience has taught me it will be another 12-16 months before PubMed has fully recorded all articles published in that year). This is a dramatic increase in global output (approximately 118%). Output from the top western countries also experienced an impressive increase: the total number of articles published from the top ten western countries in 2011 was 505,870 (an increase of 198%). However, the statistics from the Asian six are even more impressive. In 2011 PubMed archived 186,280 articles from authors in these six countries. That is a fourfold increase over output from these countries in 1997. These six Asian countries now account for almost 20% of all the articles archived on PubMed.

These data have profound implications for western publishers. We all grew up in an age where medical science was dominated by research institutions in Western Europe and North America. All of the systems we have in place to monitor and record scientific advancement are based on a shared cultural heritage and the expectation that English would be the language of science.

This is where economics comes in; the relative economic advancement of Asia compared to Europe and North America would dictate that the gravitational center of science will continue to shift from the Atlantic to the Pacific. It would be hubris to expect that the systems we have built over the past sixty years to monitor and record the advancement of science can easily accommodate this profound change. We are used to a bi-polar world where science is dominated by authors from Europe and North America. That is already no longer true. The scientific world is now tri-polar (Europe, North America and East Asia). On current trends, in twenty years’ time, the scientific world will once again be bi-polar, with the literature dominated by researchers from East Asia and North America.

We may even be seeing the beginnings of the rise of Mandarin as a language of scientific publication. English, and English-language publishers, might have to make some room in the front seat for their Mandarin-speaking colleagues.

Good point, Mark. However, the incentives around those new papers are poorly understood, and even malfunction at times. It’s a key trend, you’re right.

My point exactly, Kent. There are many cultural aspects of research in China that are not clear to western publishers. Having lived in China, I have observed the miscommunication that constantly occurs between western publishers and Chinese researchers. I think it is an issue of grave concern. My worry is that the systems western publishers have in place for vetting and reviewing manuscripts may not be effective in an Asian context.

I see I did make one error in my post. The annual output of papers from the western ten countries has increased by 98% not 198% as I wrote in my original post.
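For what it’s worth, the corrected figures fall straight out of the counts already quoted in this thread. A quick arithmetic sketch (using only the numbers cited above, not a fresh PubMed query):

```python
# PubMed article counts cited above: (1997 count, 2011 count).
totals = {
    "all articles": (451_993, 984_284),
    "western ten": (255_370, 505_870),
    "asian six": (45_945, 186_280),
}

def pct_increase(before, after):
    """Percentage growth from `before` to `after`."""
    return 100 * (after - before) / before

for group, (y1997, y2011) in totals.items():
    print(f"{group}: {pct_increase(y1997, y2011):.0f}% increase")

# Share of the 2011 total contributed by the Asian six:
print(f"asian six share of 2011 output: {186_280 / 984_284:.1%}")
```

Run, this confirms the thread’s numbers: roughly a 118% increase overall, 98% for the western ten (not 198%), and a better-than-fourfold rise (about 305%) for the Asian six, which now produce about 19% of PubMed’s 2011 output.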

“Executive leadership and boards of directors are often composed of individuals that built their careers developing print product lines or publishing in print books and journals.”

I made a similar point on a recent SK blog so glad to see this being raised more. But in some way Michael’s arguments, as cogent and relevant as they are, miss a deeper point.

These executives not only have careers built on success in a paper environment; they are now drawn into a budgeting cycle that reinforces the status quo. They have no respite from forecasting, budgeting, reforecasting, and so on, all based on current business structures. When they can get away to talk to the troops, they do acknowledge that things have changed.

But they are then drawn back into the same cycle: “What is our book revenue this year? What do we budget for journals next year? What’s the three-year prognosis for xyz that we’ve always done?”

In this environment, where is the opportunity for execs to really, truly consider the future when all their time is spent accounting for the recent past and using it to predict the short- and medium-term future?

As I said in that earlier comment, and Michael seems to echo here, in a relatively benign and static environment this doesn’t matter.

In a turbulent environment it perhaps does.

(And incidentally, another elephant: the argument often surfaced here that the article is a highly evolved “thing.” It surely is. But in a turbulent environment, the highly evolved often suffer, as they have lost the adaptability of, for want of a better phrase, the mongrel generalist.)

This is the main theme of a book by business consultant Geoffrey A. Moore titled “Escape Velocity: Free Your Company’s Future from the Pull of the Past” (HarperCollins, 2011), which was assigned reading for a recent strategic retreat of a board of directors on which I serve.

This is a good point, Martin. I’ve found that organizational budgeting processes seldom align with strategic planning. I can’t tell you how many times, at the numerous organizations I’ve worked at or with, I’ve seen months of strategic planning thrown under the bus as soon as the budgeting process starts and everything reverts to the status quo. One method I’ve seen for escaping this trap is for the organization to set aside a substantial “venture fund” to be used only for new product development. Proceeds from new products then accrue back to the fund at 100% until the “loan” is repaid, and at some fixed percentage thereafter (15%, for example) to cover those products that are not successful. We did this when I was at the American Academy of Pediatrics, and it proved very effective.
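The venture-fund mechanism described above can be sketched as a simple repayment waterfall. The figures and the 15% ongoing rate below are purely illustrative, not the AAP’s actual terms:

```python
def fund_accrual(loan, annual_proceeds, ongoing_rate=0.15):
    """Yield the amount returned to the venture fund each year:
    100% of product proceeds until the development 'loan' is repaid,
    then a fixed percentage of proceeds thereafter."""
    outstanding = loan
    for proceeds in annual_proceeds:
        if outstanding > 0:
            repay = min(proceeds, outstanding)
            outstanding -= repay
            # Any surplus in the payoff year flows back at the ongoing rate.
            yield repay + (proceeds - repay) * ongoing_rate
        else:
            yield proceeds * ongoing_rate

# A hypothetical product: $250k development loan, five years of proceeds.
returns = list(fund_accrual(250_000, [100_000, 150_000, 200_000, 200_000, 200_000]))
print(returns)
```

In this hypothetical, the fund recoups its full $250k over the first two years, then receives $30k (15% of $200k) in each subsequent year to subsidize the products that never pay off.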

A corollary problem is the budget timeline. Budgets are usually approved months in advance of the fiscal year, and the organizational budget process typically starts 3-6 months before that, meaning that budgets can be developed well over a year before funds are spent. Given the pace of change today, that process is simply untenable. An organization cannot be nimble enough to thrive in today’s information ecosystem when funds are committed a year in advance. Organizations that budget too granularly or that have too many silos in place will find the problem exacerbated.

I felt that this post left out a really big elephant that I couldn’t quite articulate, but today’s post, “The Emergence of a Citation Cartel,” does it by example.

The “elephant” is the out-of-control, myopic self-interest that is creeping into everything … not creeping, rushing into everything. Our economic system is now dominated by the extreme maximization of nominal profit. In “The Emergence of a Citation Cartel,” we see the same algorithm screwing up science. Anyone who chooses not to play the game becomes irrelevant, and therefore the game is dominated by those who play the system.
