[Image: a barometer pointing to "change." Photo from iStockphoto.]

Looking back on 2009, there was one particular note that seemed to sound repeatedly, resonating through the professional discourse at conferences and in posts throughout the blogosphere: the likelihood of disruptive change afoot in the scientific publishing industry.

Here in the digital pages of the Scholarly Kitchen, for example, we covered John Wilbanks’ presentation at SSP IN and Michael Nielsen’s talk at the 2009 STM Conference. They were both thoughtful presentations and I agree with many of the points raised by both speakers. I think Wilbanks is right when he says that thinking of information in terms of specific containers (e.g. books, journals, etc.) presents an opening to organizations in adjacent spaces that are able to innovate without the constraints of existing formats. I also agree with Nielsen’s point that acquiring expertise in information technology (and especially semantic technology)—as opposed to production technology—is of critical importance to scientific publishers and that those publishers who do not acquire such expertise will fall increasingly behind those organizations that do.

It has occurred to me, however, that I would likely have agreed with arguments that scientific publishing was about to be disrupted a decade ago—or even earlier. That we are speculating on the possibility of the disruption (here we are talking of “disruption” in the sense described by Clay Christensen in his seminal book The Innovator’s Dilemma) of scientific publishing in 2010 is nothing short of remarkable.

Lest we forget (and this is an easy thing to do from the vantage of the second decade of the 21st century), the World Wide Web was not built for the dissemination of pornography, the sale of trade books, the illegal sharing of music files, dating, trading stocks, reading the news, telecommunications, or tracking down your high school girlfriend or boyfriend. As it turns out, the Web is particularly good for all these activities, but these were not its intended uses.

When Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research. Put another way, the Web was designed to disrupt scientific publishing. It was not designed to disrupt bookstores, telecommunications, matchmaking services, newspapers, pornography, stock trading, music distribution, or a great many other industries.

And yet it has.

It is breathtaking to look back over the events of the last 18 years since the birth of the Web. It has grown from an unformed infant, to a promising child, to a sometimes-unruly teenager. In that time we have witnessed vast swaths of the global economy reconfigured as new industries emerged and old industries were upended. New modes of communication have transformed the workplace—and the home lives—of hundreds of millions of people. From the vantage of 1991, it would have been impossible to predict all that has happened in the last 18 years. No one would have believed that much could change that quickly.

And yet it has.

The one thing that one could have reasonably predicted in 1991, however, was that scientific communication—and the publishing industry that supports the dissemination of scientific research—would radically change over the next couple decades.

And yet it has not.

To be sure, many things have changed. Nearly all scientific journals (and an increasing number of books) are now available online. Reference lists are interconnected via digital object identifiers (DOIs). Vast databases such as GenBank and SciFinder have aggregated and parsed millions of biological sequences and chemical structures. Published research is more accessible than ever via search tools such as Google Scholar, PubMed, and Scopus. New business models, such as open access and site licensing, have emerged. And new types of communication vehicles have emerged, such as the preprint server ArXiv, video journals such as JoVE and the Video Journal of Orthopaedics, and online networks such as Nature Network, Mendeley, and (most recently) UniPHY—to name just a few innovations. To be sure, scientific publishers have not ignored the Web. They have innovated. They have experimented. They have adapted. But it has been incremental change—not the disruptive change one would have predicted 18 years ago.

Looking back, the publishing landscape of 1991 does not look dramatically different from today’s, at least in terms of the major publishers. The industry has been relatively stable. And one would be hard-pressed to characterize the number of mergers and acquisitions that have occurred as particularly numerous relative to other industries. Moreover, these mergers and acquisitions are more likely to be explained by the rise of private equity and the availability of cheap capital than by technological innovations related to publishing.

The question, then, is not whether scientific publishing will be disrupted, but rather why it hasn’t been disrupted already.

In examining the reason for this surprising industry stability, I think it is useful to start by looking closely at the functions that journals—still the primary vehicles for the formal communication of research—serve in the scientific community. Why were journals invented in the first place? What accounts for their remarkable longevity? What problems do they solve and how might those same problems be solved more effectively using new technologies?

Initially, journals were developed to solve two problems: Dissemination and registration.

Dissemination. Scientific journals were first and foremost the solution to the logistical problem of disseminating the descriptions and findings of scientific inquiry. Prior to 1665, when both the Journal des sçavans and the Philosophical Transactions were first published, scientists communicated largely by passing letters between each other. By 1665, however, there were too many scientists (or, more accurately, there were too many educated gentlemen with an interest, and in some cases even an expertise, in “natural philosophy”) for this method to be practical. The solution was to ask all such scientists to mail their letters to a single person (such as, in the case of the Philosophical Transactions, Henry Oldenburg) who would then typeset, print, and bind the letters into a new thing called a journal, mailing out copies to all the other (subscribing) scientists at once.

While the journal was a brilliant solution to the dissemination problems of the 17th century, I think it is safe to say that dissemination is no longer a problem that requires journals. The Internet and the World Wide Web allow anyone with access (including, increasingly, mobile access) to the Web to view any page designated for public display (we will leave aside the issue of pay walls in this discussion). If dissemination were the only function served by journals, journals would have long since vanished in favor of blogs, pre-print servers (e.g. ArXiv), or other document aggregation systems (e.g. Scribd).

Registration. Registration of discovery—that is to say, publicly claiming credit for a discovery—was, like dissemination, an early function of journal publishing. Ironically, the Philosophical Transactions was launched just in time to avert the most notorious scientific dispute in history—and failed to do so. The Calculus Wars were largely a result of Newton, who developed his calculus by 1666, failing to avail himself of Oldenburg’s new publication vehicle. By the time the wars ended in 1723, Newton and Leibniz can be credited with doing more to promote the need for registration than any other individuals before or since. Oldenburg could not have scripted a better marketing campaign for his invention.

As enduring as journals have been as a mechanism for registration of discovery, they are no longer needed for this purpose. A preprint server that records the time and date of manuscript submission can provide a mechanism for registration that is just as effective as journal publication. Moreover, by registering a DOI for each manuscript, an additional record is created that can further validate the date of submission and discourage the possibility of tampering.
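
To make the mechanics concrete, here is a minimal sketch (in Python, and not how any particular preprint server or DOI registration agency actually works) of what a tamper-evident registration record might contain: a fingerprint of the submitted manuscript, a hypothetical DOI, and a UTC timestamp.

```python
# A minimal sketch of the registration idea; the DOI and manuscript below are
# hypothetical, and real preprint servers and DOI agencies work differently.
import hashlib
import json
from datetime import datetime, timezone

def register_manuscript(manuscript_bytes: bytes, doi: str) -> dict:
    """Build a simple, tamper-evident registration record for a submission."""
    return {
        "doi": doi,  # hypothetical DOI assigned at submission
        "sha256": hashlib.sha256(manuscript_bytes).hexdigest(),  # fingerprint of the file
        "submitted_utc": datetime.now(timezone.utc).isoformat(),  # time of registration
    }

# The record can be deposited with a third party (or published alongside the
# preprint) so the claimed submission date can later be checked against the
# fingerprint of the archived file.
record = register_manuscript(b"Full text of the submitted manuscript ...", "10.9999/example.0001")
print(json.dumps(record, indent=2))
```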

While journals are no longer needed for the initial problems they set out to solve (dissemination and registration), there are 3 additional functions that journals serve that have developed over time. These later functions—comprising validation (or peer review), filtration, and designation—are more difficult to replicate through other means.

Validation. Peer review, at least in the sense most journals practice it today, was not a common function of early scientific journals. While journal editors reviewed submitted works, the practice of sending manuscripts to experts outside of the journal’s editorial offices for review was not routine until the last half of the 20th century. Despite the relatively late provenance of peer review, it has become a core function of today’s journal publishing system—indeed some would argue its entire raison d’etre.

Schemes have been proposed over the years for decoupling peer review from journal publishing, Harold Varmus’ “E-Biomed” being perhaps the most well-known example. There have additionally been several experiments in post-publication peer review—whereby review occurs after publication—though in such cases, journal publication is still attached to peer review, simply at a different point in the publication process. To date, no one has succeeded in developing a literature peer-review system independent of journal publication. One could imagine a simple online dissemination system, like ArXiv, coupled with peer review. And indeed one could make the case that this is precisely what PLoS One is, though PLoS considers PLoS One to be a journal. It is perhaps not an important distinction once one factors out printed issues, which I don’t think anyone would argue are central to the definition of a journal today.

Filtration. In 1665 it was fairly easy to keep up with one’s scientific reading—it required only 2 subscriptions. Over the last few centuries, however, the task has become somewhat more complicated. In 2009 the number of peer-reviewed scientific journals is likely over 10,000, with a total annual output exceeding 1 million papers (both Michael Mabe and Carol Tenopir have estimated the number of peer-reviewed scholarly journals at between 22,000 and 25,000, with STM titles being a subset of this total). Keeping up with papers in one’s discipline, never mind for the whole of science, is a challenge. Journals provide important mechanisms for filtering this vast sea of information.

First, with the exception of a few multi-disciplinary publications like Nature, Science, and PNAS, the vast majority of journals specialize in a particular discipline (microbiology, neuroscience, pediatrics, etc.). New journals tend to develop when there is a branching of a discipline and enough research is being done to justify an even more specialized publication. In this way, journals tend to support a particular community of researchers and help them keep track of what is being published in their field or, of equal importance, in adjacent fields.

Second, the reputations of journals are used as an indicator of the importance to a field of the work published therein. Some specialties hold dozens of journals—too many for anyone to possibly read. Over time, however, each field develops a hierarchy of titles. The impact factor is often used as a method for establishing this hierarchy, though other less quantitative criteria also come into play. This hierarchy allows a researcher to keep track of the journals in her subspecialty, the top few journals in her field, and a very few generalist publications, thereby reasonably keeping up with the research that is relevant to her work. Recommendations from colleagues, conferences, science news, and topic-specific searches using tools such as Google Scholar or PubMed, might fill in the rest of a researcher’s reading list.

Still, filtration via journals leaves a lot of reading for scientists. This has prompted a number of developments over the years, from informal journal clubs to review journals to publications like Journal Watch that summarize key articles from various specialties. Most recently, Faculty of 1000 has attempted to provide an online article rating service to help readers with the growing information overload. These are all welcome developments and provide scientists with additional filtration tools. However, they themselves also rely on the filtration provided by journals.

Journal clubs, Journal Watch, and Faculty of 1000 all rely on editors (formally or informally defined) to scan a discipline that is defined by a set of journals. Moreover, each tool tends to weight its selection towards the top of the journal hierarchy for a given discipline. None of these tools, therefore, replaces the filtration function of journals—they simply act as a finer screen. While there is the possibility that recent semantic technologies will be able to provide increasingly sophisticated filtering capabilities, these technologies are largely predicated on journal publishers providing semantic context to the content they publish. In other words, as more sophisticated filtering systems are developed, they tend to augment, not disrupt, the existing journal publication system.
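
As a toy illustration of that dependence (the article records and concept tags below are made up), a filtering tool can only rank articles as finely as the semantic context the publisher attaches to them:

```python
# A toy sketch, not a real service: rank made-up article records against a
# reader's interests using publisher-supplied concept tags. The filter is only
# as fine-grained as the tags the publisher provides.
articles = [
    {"title": "Kinase inhibition in glioma models", "journal": "Journal A",
     "concepts": {"oncology", "kinase", "glioma"}},
    {"title": "Sediment transport in tidal estuaries", "journal": "Journal B",
     "concepts": {"geoscience", "hydrology"}},
    {"title": "CRISPR screens for kinase targets", "journal": "Journal C",
     "concepts": {"genomics", "kinase", "screening"}},
]

reader_interests = {"kinase", "oncology"}

def relevance(article: dict, interests: set) -> int:
    """Count how many publisher-supplied concepts match the reader's interests."""
    return len(article["concepts"] & interests)

ranked = sorted(articles, key=lambda a: relevance(a, reader_interests), reverse=True)
for article in ranked:
    score = relevance(article, reader_interests)
    if score:
        print(f"{score}  {article['title']} ({article['journal']})")
```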

Designation. The last function served by scientific journals, and perhaps the hardest to replicate through other means, is that of designation. By this I mean that many academic institutions (and other research organizations) rely, to a not insignificant degree, on a scientist’s publication record in career advancement decisions. Moreover, a scientist’s publication record factors into award decisions by research funding organizations. Career advancement and funding prospects are directly related to the prestige of the journals in which a scientist publishes. As such a large portion of the edifice of scientific advancement is built upon publication records, an alternative would need to be developed and firmly installed before dismantling the current structure. At this point, there are no viable alternatives—or even credible experiments—in development.

There are some experiments that seek to challenge the primacy of the impact factor with the aim of shifting emphasis to article-centric (as opposed to journal-centric) metrics. Were such metrics to become widely accepted, journals would, over time, cease to carry as much weight in advancement and funding decisions. Weighting would shift to criteria associated with an article itself, independent of publication venue. Any such transition, however, would likely be measured not in years but in decades.

The original problems that journals set out to solve—dissemination and registration—can indeed be handled more efficiently with current technology. However, journals have, since the time of Oldenburg, developed additional functions that support the scientific community—namely validation, filtration, and designation. It is these later functions that are not so easily replaced. And it is by looking closely at these functions that an explanation emerges for why scientific publishing has not yet been disrupted by new technology: these are not technology-driven functions.

Peer review is not going to be substantively disrupted by new technology (indeed, nearly every STM publisher employs an online submission and peer-review system already). Filtration may be improved by technology, but such improvements are likely to take the form of augmentative, not disruptive, developments. Designation is firmly rooted in the culture of science and is also not prone to technology-driven disruption. Article-level metrics would first have to become widely adopted, standardized, and accepted, before any such transition could be contemplated—and even then, given the amount of time that would be required to transition to a new system, any change would likely be incremental rather than disruptive.

Given these 3 deeply entrenched cultural functions, I do not think that scientific publishing will be disrupted anytime in the foreseeable future. That being said, I do think that new technologies are opening the door for entirely new products and services built on top of—and adjacent to—the existing scientific publishing system:

  • Semantic technologies are powering new professional applications (e.g. ChemSpider) that more efficiently deliver information to scientists. They are also beginning to power more effective search tools (such as Wolfram Alpha), meaning that researchers will spend less time looking for the information they need.
  • Mobile technologies are enabling access to information anywhere. Combined with GPS systems and cameras, Web-enabled mobile devices have the potential to transform our interaction with the world. As I have described recently in the Scholarly Kitchen, layering data on real-world objects is an enormous opportunity for scientists and the disseminators of scientific information. The merger of the Web and the physical world could very well turn out to be the next decade’s most significant contribution to scientific communication.
  • Open data standards being developed now will allow for greater interoperability between data sets, leading to new data-driven scientific tools and applications. Moreover, open data standards will lead to the ability to ask entirely new questions. As Tim Berners-Lee pointed out in his impassioned talk at TED last year, search engines with popularity-weighted algorithms (e.g. Google, Bing) are most helpful when one is asking a question that many other people have already asked. Interoperable, linked data will allow for the interrogation of scientific information in entirely new ways (a toy sketch of such a cross-dataset query follows this list).
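
The sketch below is a toy example of that last point, not a description of any existing service: it assumes the third-party rdflib library, uses made-up URIs and predicates, and merges two tiny "datasets" so that a question spanning both can be asked in one query.

```python
# A toy sketch of a cross-dataset question over linked data. Assumes the
# third-party rdflib library (pip install rdflib); all URIs are made up.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# "Dataset" 1: which compound each paper reports on.
g.add((EX.paper1, EX.reportsCompound, EX.compoundA))
g.add((EX.paper2, EX.reportsCompound, EX.compoundB))

# "Dataset" 2: a measured property of each compound, from a separate source.
g.add((EX.compoundA, EX.meltingPointCelsius, Literal(121.5)))
g.add((EX.compoundB, EX.meltingPointCelsius, Literal(87.0)))

# A question neither dataset answers on its own: which papers report compounds
# that melt above 100 C?
query = """
PREFIX ex: <http://example.org/>
SELECT ?paper ?mp WHERE {
    ?paper ex:reportsCompound ?compound .
    ?compound ex:meltingPointCelsius ?mp .
    FILTER (?mp > 100)
}
"""
for row in g.query(query):
    print(row.paper, row.mp)  # -> http://example.org/paper1 121.5
```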

These new technologies, along with others not even yet imagined, will undoubtedly transform the landscape of scientific communication in the decade to come. But I think the core publishing system that undergirds so much of the culture of science will remain largely intact. That being said, these new technologies—and the products and services derived from them—may shift the locus of economic value in scientific publishing.

Scientific journals provide a relatively healthy revenue stream to a number of commercial and not-for-profit organizations. While some may question the prices charged by some publishers, Don King and Carol Tenopir have shown that the cost of journals is small relative to the cost, as measured in the time of researchers, of reading and otherwise searching for information (to say nothing of the time spent conducting research and writing papers). Which is to say that the value to an institution of workflow applications powered by semantic and mobile technologies and interoperable linked data sets may exceed that of scientific journals. If such applications can save researchers (and other professionals that require access to scientific information) significant amounts of time, their institutions will be willing to pay for that time savings and its concatenate increase in productivity.

New products and services that support scientists through the more effective delivery of information may compete for finite institutional funds. And if institutions designate more funds over time to these new products and services, there may be profound consequences for scientific publishers. While it will not likely result in a market disruption as scientific journals will remain necessary, it will nonetheless create a downward pressure on journal (and book/ebook) pricing. This could, in turn, lead to a future where traditional knowledge products, while still necessary, provide much smaller revenue streams to their publishers. And potentially a future in which the communication products with the highest margins are not created by publishers but rather by new market entrants with expertise in emerging technologies.

The next decade is likely to bring more change to scientific publishing than the decade that just ended. However, it will likely continue to be incremental change that builds on the existing infrastructure rather than destroying it. It will be change that puts pressure on publishers to become even more innovative in the face of declining margins on traditional knowledge products. It will be change that requires new expertise and new approaches to both customers and business models. Despite these challenges, it will be change that improves science, improves publishing, and improves the world we live in.

Michael Clarke

Michael Clarke is the Managing Partner at Clarke & Esposito, a boutique consulting firm focused on strategic issues related to professional and academic publishing and information services.

Discussion

102 Thoughts on "Why Hasn’t Scientific Publishing Been Disrupted Already?"

It’s in the “Designation” function that the role of scholarly publishing has changed the least, and none of the new technologies cited here really addresses that function.

Back in July, I argued that a potential threat to Scholarly Publishing would be a change in the Designation process, and that the technology that could do that was social networking.

The other threat would of course be a collapse of library budgets; that would reflect a larger societal change rather than a technological change, but if the economy languishes, it could happen.

Hi Eric – I did not touch on social media/professional networking in this post and I should have. Though I mainly see the impact of social media on filtration. Twitter and FriendFeed are already being used as filtration tools and I think this trend will continue and will become increasingly sophisticated.

A lot would need to happen for professional networks (e.g. Mendeley, UniPHY) to have a disruptive impact on designation. The only way I could see that happening is if these networks contributed some type of article level metrics that could be standardized and that were not proprietary to any one network. It will be very interesting to see what happens if any of these networks really catch on.

I don’t think institutions can allow their library budgets to collapse as their faculty have to publish somewhere (to say nothing of the difficulty in conducting research without current information). If this were to happen, we would have much bigger problems on our hands than changes to designation.

I was thinking of how the number of Twitter followers, though completely stupid and meaningless, seems to be used as a measure of “designation” for marketing types. I can imagine how the number of followers a scientist has on a well-designed scientific social network might become a proxy for their significance in a community, and thus relevant in a tenure review.

PLoS is experimenting with a variety of metrics along these lines, some of which, in my opinion, are about as meaningless as Twitter followers.

I disagree — I think the validation function is the key one, but also inseparable from designation. Without validation, science would degenerate in the best case into art and in the worst case into something more corrupt and politically steered than it already is. The designation of the journal largely determines how much time reviewers spend on the manuscript, and so determines the quality of the articles in more ways than one.

The importance of filtering is underestimated — good creative editors are a wonderful thing — but honestly that is the closest of the three to having a technological AND cultural surrogate. We will always read papers by the people we’ve been impressed with at talks & conferences, so we will always have social network filtering as well as automated search.

Joanna – Just to clarify, my reply to Eric’s comment was circumscribed to the potential influence of online social networks (e.g. Mendeley, UniPHY, Nature Network, etc.). So I completely agree with your assessment that filtering is the function closest to having a technological and cultural surrogate — and that social networks can play a role in this.

The point I was trying to articulate is that I don’t see social networks as having a particularly strong role in validation or designation. I would love to hear your thoughts on how they might if you think differently.

Sorry, Michael, I agree with you! I was just disagreeing with Eric 🙂

I am really glad you have articulated the validation / designation issue. If you don’t mind me saying so, I’d like to see you publish that point more concisely now you’ve worked it out, in one of those designated journals.

Mike
An excellent article and one that will help those submitting comments to OSTP.

Excellent, thought-provoking post. If you want to scare a bunch of editors, suggest that they should get together with some folks from Google and come up with a “journal” system completely outside of the box, something envisioned from scratch. Designation is the one function that keeps the current system around, and most of the established “publications” do not want to lose what they have.

Michael – wonderful article. On your list of things scientific journals do, there’s one more that perhaps seems obvious, but that really is slightly peculiar in the context of other types of publication: uniqueness. This is complementary to, but distinct from, your criterion of Registration. No reputable scientific publication re-prints articles that have been published elsewhere: each article is supposed to be original and unique. The condensed output of a particular scientific episode, experiment, theoretical effort, story or whatever you want to call it, appears in only one, unique place, and is expected to be always cited by that fixed representation. There are some variations on this theme by type of article (a letter, a regular article, and a review paper may cover the same research) and certainly some authors publish multiple articles that are not terribly far apart in their content, but the extraordinary sanctions journals place on authors who try to violate it (submitting work whose full substance they’ve already published elsewhere) suggests an importance beyond copyright or the foibles of editors.

In contrast a journalistic article, review or opinion essay printed in one newspaper or magazine may with little fear of reprisal be sold to countless others as well; syndicators and press agencies exist to spread good writing around without bothering to re-write. Short stories find themselves reprinted in many disparate collections. Books often come out in many editions with large portions intact from one version to the next.

So why must the scientific article be unique in its published journal-based citable representation? I think it’s a good thing, but I’d like to better understand why.

“In 1665 it was fairly easy to keep up with one’s scientific reading — it required only 2 subscriptions.” But what of the huge amount of letter-writing and letter-answering needed, which for many seems to have required four or more hours every morning…?

Sounds like about the same amount of time I spend on e-mail. Perhaps we need to introduce a law of conservation of correspondence?

With any medium, the two components, carrier and carried (content), evolve differently. Semantic-based tagging involves only the carrier (the container). Likewise, the list of technologies at the end of your post involves only the carrier.

IT has been very unaware of culture, so the cultural purposes of journals have a long way to go before culture/content/carrier will be disrupted. At its core, a journal is contained by its peer review committee.

One traditional function of the journal that Michael did overlook was that of archiving, which, in the traditional print world, meant preservation of the document (usually in public libraries) for future reference and posterity.

The preservation of a single, immutable record (what the NISO report calls the “Version of Record”) is still very important in the electronic environment and perhaps even more so in a future of multiple versions existing simultaneously.

Phil – I agree, that is an oversight. I would add “curation” to the list of “-ations” above. It goes closely with registration (and I initially thought it would fit there) but it is a different enough function that it deserves its own category.

New technologies have created as many problems as they have solved on this front. Certainly there are advantages to having digital copies that can be easily stored in multiple locations, thus protecting against damage or loss. But on the other hand, technology has created the problem of format migration and has introduced an element of the ephemeral into publishing (articles can be instantly retracted, Web sites can vanish). Those, such as yourself, with a library background can no doubt expand this list of challenges.

On the driving purposes behind HTML and the Semantic Web, you are talking about an AI guy who really wants to distribute and allocate recognition functionality, so that machines get smarter. Letting humans do the markup, aka recognition, speeds up the creation of a corpus over which machines can reason.

Legal and financial publishers’ business models have not been disrupted very much either, because like science publishers they efficiently deliver important and complicated content that is of great value to wealthy institutions, and they continue to innovate in workflow tools that save time and analytical tools that add value to the information. I don’t see that going away anytime soon, but if someday a faster/cheaper solution comes along that matches or beats current solutions the customers will flock to it. Even in this circumstance, science publishers have a special advantage in that their information comes from thousands of institutions that gain a financial benefit from not opening it up, while legal data comes from relatively few sources (and those are already public, and Google Scholar is already tracking them), and likewise financial-market data is relatively open. Thanks Michael Clarke for this very interesting and thought provoking article and commentary.

Nice review. I remember in 1991 when there was a commonly held belief that all publications would soon be available world-wide online for free and that libraries would no longer exist. Remember that commercial that shows someone in a library in Tokyo on a computer bringing up an online book from a library in the US? I think this is a good lesson that things do not happen the way we think they will nor as quickly as we can imagine them. Probably a good thing. Thanks.

Michael,

Agreed that publishers can add value with “semantic technologies”, but will customers pay for it?

If customers pay $X for a regular information product, how much more will they pay for the semantically enabled version of the same product? How can publishers monetize “semantic enablement” in the real world? Or is it simply a case that they will go out of business if they don’t do it?

Richard.

Hi Richard – I don’t think it is a case of trying to convince customers to pay more for semantically enabled versions of the same products. I think rather that semantic technologies will enable the creation of products that do not currently exist. Whether or not and how much customers will pay for them depends entirely on the usefulness of the products and the ability of others to replicate them.

Richard & Michael:

I agree with Michael. I have found that taking a traditional journal run and then applying semantic technologies creates a huge amount of interest from customers, but doesn’t necessarily add new revenue. However, ‘huge customer interest’ is not a bad thing either. As Michael says, creating new products is the ultimate way to go.

Michael Clarke’s excellent article plus the responses should be widely distributed. One important point that might be lost as it is split between two is the unholy alliance between “Designation” (and its use in the metrication of career and funding for scientists) and the commercial publishing business. The publishers are in the business of making money for their shareholders; the public funders want to assure their Ministers that they are deeply into the “accountability” of the scientists they fund. Neither wishes to have discussed the current farce of so-called “peer-review”, “impact factors”, “citation indices”. The victims are the scientists who are trying to maintain funding to pursue their pet obsessions, and they dare not speak out individually for fear of the consequences. Peer review is held up as the gold standard but is often far more effectively run by the less prestigious journals than, for example, the top weeklies. The latter do NOT peer-review most of the submitted papers; they have a junior editor (of unknown qualifications) read them through and, if doing his/her job correctly, decide whether or not the submitted article will enhance the readership of the journal (i.e., cash), so most papers are rejected, not peer-reviewed. The problems with Impact Factors and Citation Indices are well known, but no one has the courage to admit they actually harm science, which is now directed as much by the junior editors who guard the portals of the ‘prestigious weeklies’ as it is by anyone else.
The simplest way in which some form of sanity could be restored would be to admit that the long-term worth of a particular bit of science cannot be judged in the short term, and that all the funding agencies can do is to check that someone is working and producing papers, whose long-term worth is unknown, and therefore all papers should have equal weight in career/funding decisions. The present fashion-conscious nonsense should be relegated to the “Emperor’s new clothes” bin of the Danny Kaye song.

Not all publishers are “in the business of making money for their shareholders”, many of us work for not-for-profit scientific societies and research institutions.

And if you think scientists and others “dare not speak out” against the use of impact factors, you clearly have not been paying attention. Start here, here, here, here and here. That’ll get you going, and a little time spent on Google will bring up the massive loads of criticism the metric is attracting.

Fair comment David – my bit was far too sweeping, but I am disappointed that so few comments are made, that ‘conflicts of interest’ are largely ignored nowadays, and that there is tacit support for the present system from academic bureaucrats. The victims seem now to have become divided and ruled. In Australia, with the exception of the Funnelled Web WWWsite, I rarely hear anyone speaking on behalf of working academics – they seem to have become the lowest trophic level of an increasingly complex food web.

Well, there is a very vocal contingent who are very active in pointing out the conflicts between the business of publishing and the greater good of science, mostly congregated around the open access and open science movements. PLoS is conducting an interesting experiment in trying to find new measures to replace things like the impact factor which is controlled by a commercial interest.

We’ve certainly had our run-ins with Bentham here in the Scholarly Kitchen. But don’t most scholarly journals take money in exchange for publishing articles, often in the form of color or page charges? There’s a moral balance that must be carefully calculated, and it’s not limited to open access journals.

Wait a second – if you distribute it, then you lose uniqueness.

The truth is that uniqueness, except for the purposes of archiving, is a DEFECT and not a FUNCTION. It hinders the progress of research.

How many times have we heard of something being rediscovered 30 years after it was published in some obscure journal?

A better system would be the journals being ranked into tiers (as they are now anyway) and sometimes the higher-level journals republishing something that was published in a more obscure journal.

The reason this defect of uniqueness started and persists is because the editors of the journals want there to be something new in them, because that raises the value of the journal. Of course if something was sufficiently obscure – published in some journal in Australia, say – it *is* new to the vast majority of the journal’s readers, so something else must be at work here. I could say maybe, in part, they want to encourage people to submit articles to the highest-ranked publication, and NOT to the one that will publish them fastest, so they PUNISH articles and authors that would appear elsewhere – some journals, particularly in the medical field, punish any kind of announcement anywhere else. This can only SLOW progress. Uniqueness is a defect, not a benefit. And for purposes of archiving you could cite either the first place it appeared or the most prestigious place it appeared – or whatever the author said – and doing so within a short period of time, not many years later, should be standard.

started because

One of the best things on the broad topic that I’ve read in ages. The key concept, that the most important functions of journals are culturally driven and not technologically driven, is one that is, unfortunately, missed by many of my librarian colleagues.

An excellent article, but it (and the subsequent comments) did not address, except in passing, the key problem, namely the effect that all the recent changes are having on the direction of science, and on the opportunities that scientists have to work.
The publishers, whether commercial or not-for-profit, are competing with one another for trade. Various devices are used to capture the market – F1000, citation indices, impact factors, etc. There is little evidence that any of these measure the long-term value/quality of the work; however, increasingly the journal industry is using them to control the process – “our reviews are by invitation only,” I was told recently by return from a ‘leading’ journal, whose WWWsite said otherwise.
The public funding agencies are desperate for some easy metrics for helping them dole out the cash, so they have adopted and embroidered those provided by the journals rather than really investigating how to get the best research done.
So there is an unholy alliance between the publishers and funders to support the current trend. Working scientists have to play along with the time-consuming nonsense and have not been involved in its construction; the committees that foist ERAs etc on working scientists are mostly made up of ex-academics, who have become bureaucrats.
So it is my contention that this unholy alliance using metrication and fashion for different reasons is slowly but inexorably throttling science, and assuming control of its direction.
Publishers should just publish, and lubricate the spread of information. That’s their role, not organising beauty competitions.

Of course, you’re absolutely right – and what this means is that DESIGNATION is a DEFECT and not a BENEFICIAL FUNCTION. People are not using their own judgement.

Whenever the function of designation becomes delegated to some person or institution, especially an institution, it deteriorates with time.

This is the sort of thing that happened a number of times in the financial area.

Corporations were required by law to issue financial statements that were certified by accounting firms. When this started in the 1930s these firms were auditing for investors and they had to be good. By the year 2000 the certification by CPAs was worse than useless.

Many places that invested money were required by law to invest only in securities that were rated highly by certain pre-selected rating companies. The ratings were in the end not only somewhat arbitrary – rating things lower or higher based on irrelevant factors – but flat-out wrong – rating as top grade some (new) things that should not have been. (It’s easier when there’s no precedent.)

Banks made loans based on the value of houses as rated by appraisers. This was more a legal requirement than anything else. The appraisals were ALL too high, based on formulas that assumed prices could not drop very much and could not go below what they’d been for years.

You have this problem any place there is no INDEPENDENT and INDIVIDUAL rating of the worth of the rater.

Not just law, but also custom, or an intertwined network of circular credentialing, can do this.

It is very simple. Think about who generates scientific content and who consumes scientific content. It is the same group of people, and their numbers cannot be drastically increased by a 100X reduction in the cost of distribution resulting from use of the Internet.

The Internet is a great leveler. Everybody gets the same chance at publishing content. The rewards are measured in terms of popularity … not necessarily the best measure for scientific content. Would a professor who has struggled for 10 years before getting tenure want to be judged by some layman? Hell, NO. Most professors think the content on the Internet is garbage (yes, there is lots of it, but there is lots of cutting-edge stuff as well). Content on the Internet CANNOT even be acknowledged without drawing major criticism.

Until the scientific content put out on the old media becomes COMPLETELY irrelevant, old-world scientific publishing will not change. Strict hierarchies don’t deal well with change.

undergirds?

concatenate is a *verb*

On the topic though…

Changes will come when we realise the ego of a few is causing monstrous problems in law and progress. Copyright of easily reproducible digital information is untenable; ideas should not be ‘owned’. Trust and credibility will remain essential.

Scott – “concatenate” can be used as either a verb or an adjective. In this case it is being used as an adjective modifying the word “increase,” which is being used as a noun (see Webster’s).

I’m not sure what your question is about “undergirds” so I’ll trust you can resolve that one on your own.

This article did not touch on copyright so I don’t follow how that is “on the topic.” Nor do I see how copyright is relevant to the functions of journals described above. There are many journals that do not require authors to transfer copyright (instead using exclusive licensing agreements or creative commons licenses) and they serve the same dissemination, registration, validation, filtration, and designation functions as those that do.

While copyright and patent law are evolving fields—and ones where, in some cases, we have perhaps not found the right balance as yet—to say that “ideas should not be owned” is an extreme position and one that I don’t think is defensible unless you want to revert to a pre-industrial economy. Ownership of ideas is central to our information economy (even more so than it was to the industrial economy). Without it there would be no incentive to invent anything and most of us would be working in fields with a life expectancy of less than 40 years, eventually dropping dead due to some illness that would likely be treatable today with antibiotics or identified earlier with an MRI. Moreover, there would be no computers, no telephone (let alone smartphones), no Internet, and therefore no blogs—and where would we be without those?

There are many drugs that are not developed because

1) Now the cost of getting approval by the FDA has risen to astronomical levels. And now even a new thing has arisen – some generics have become monopolies or near-monopolies because of the extreme expense of even getting the manufacturing approved – and of course actually the need for some expertise in getting started.

2) A drug that is not patentable cannot be developed – because it is just not worth the money to anybody to spend the millions and take the time to get it approved. If a vitamin could be of great value, the research isn’t done, unless the vitamin can be combined with something that is patentable in a single product, like I think Merck is trying to do with niacin. If a drug has multiple uses the company will get only one approved and not bother to get the others, because it isn’t worth the money and risk to get approval so that they can ADVERTISE it.

3) If someone gets an idea but they do not own the patent – it is abandoned.

Companies now don’t look for what might be the best drug scientifically. They look for something on which they might own the patent, which also hopefully has gone at least part way through the 20-or-more-year-long drug approval process, and try to see which, if any, of them might do something.

If something that is in the public domain could be the best medicine, they try to combine it with something that they own, even if the drug or vitamin would be quite valuable without it.

And I think they even sponsor carefully contrived studies – no data fraud but specially contrived situations – to undermine the case that certain vitamins or other things in the public domain might be useful in combating or preventing illness. For instance they might sponsor a study that combined two different vitamins, or use beta-carotene in place of Vitamin A.

Sorry, you’re right about the language. I can dream of a world where concatenate is just a verb though, right? 🙂

No incentive without ownership? Are you kidding? The fact that companies don’t think the research is worth the effort unless they can get a ‘monopoly’ on the product also stems from this flawed psychology – obviously, it would help us make progress if we could all access all the information. Profitability would then depend on the ability of the company to find an edge with their processes, skills, reputation, etc. Marketing doesn’t belong in that list, as it distorts trust distribution – at least as it is generally practised today. We just need the playing field to be even – and trust to be valued appropriately.

When companies know how valuable their credibility is, when they know how much they can trust other companies, and when consumers also understand the value and appropriate distribution of trust, then it does not make sense to reason that ‘it’s not worth my while to do this research ’cause someone else will just come along and profit from the knowledge without the expense’. I believe we’re in the very early stages of a new ‘trust based’ economy. Journals are very important agents in that economy.

Ownership of ideas is hindering progress, not helping it. Yes, it is central to our information economy, and so deeply entrenched that most don’t think it questionable, but it is the crux of many of our current issues. It divides the effort involved from the pay-off received. Technology is driving that situation toward a ludicrous state – putting old ladies in gaol for sharing files?!? There’s plenty of other examples.

There can be attribution without ownership – an important factor in terms of the credibility of information, and not insignificantly a way to preserve important ego-based motivation to do the work. Scientific journals facilitate both factors. Ownership of ideas hurts the journals too – e.g. when companies keep their IP secret.

Validation, filtration and designation are functions performed even by those I choose to follow on twitter. Of course that’s not very scientific, but highlights the fact that reputation – i.e. credibility – is becoming a much more widely understood commodity. It also highlights what we can achieve when we work together cooperatively and share information, rather than by competing and trying to keep the information to ourselves.

Publishing in general is being disrupted – and like you I am surprised that scientific publishing has not yet been affected more drastically. But I think it’s happening, and will happen quickly.

Scott: No incentive without ownership? Are you kidding? The fact that companies don’t think the research is worth the effort unless they can get a ‘monopoly’ on the product also stems from this flawed psychology…

With drugs we have another problem. The drug approval process is premised upon people investing a lot of money.

The money is so much, and the time so long, that absolutely nobody can or will do it without a chance to make a profit.

So far Bill Gates is not sponsoring drug research at a loss and that’s what is needed.

In addition we have a lot of censorship and self-censorship on these questions.

It is not that research does not get done – some sort of research is done – but mostly only what you could call “basic research.” There people still feel free to talk and communicate freely.

Not applied research that would result in anything being used.

People are wary even of recommending anything not approved.

There is almost a conspiracy to hide from the public how bad and broken the system is.

Sometimes newspapers and television will tout a “new” development that is actually only the latest little development in 15 year old research.

Nowadays it often comes with a warning to patients that this is not available – but maybe might be in a year or two.

The reason for all of this is that the drug approval process is so expensive and time consuming nobody really can do it – or those who can would rather spend their limited funds on something they would get a return on.

If some small organization does research on something it is usually when they have a potential patent = monopoly. They will often eventually sell it after half a dozen years or more to a big company since they haven’t got the capital to push it through the drug approval process.

It is then hostage to the fortunes and politics of that company.

I think there is some sponsorship of research for drugs intended to be used JUST in underdeveloped countries.

Let me tell you something else. In such cases, they deliberately AVOID competition even when there are no legal issues.

Take the issue of cheap, adjustable, glasses for myopia.

There are several organizations trying to do this. They cannot get it started on a large scale.

http://www.nytimes.com/2010/01/02/business/global/02glasses.html

Here is a description of one of them. Someone commented asking how long it would be till it was made illegal. I didn’t know; maybe it already is.

http://www.samizdata.net/blog/archives/002697.html

They could use millions of them – all that has been distributed is a couple of tens of thousands of pairs.

But…

If they STARTED selling them in the United States and Europe it would then be easy to ALSO, not much later, distribute them in Africa.

It seems to me that people are avoiding competing.

I guess you might have the problem that regulatory agencies do not like the idea of people buying glasses without a prescription.

Partly this is the idea that people are too stupid to pick the right glasses, and partly it is the idea that that way people will go and get tested for glaucoma and other things.

Perhaps they get money from eyeglass companies, or perhaps they are afraid that obstacles will be put in their way should they try to sell them in the United States and Europe. I don’t know. But there’s got to be a reason for this not happening.

Now all of this is not discussed because of the tremendous censorship and self-censorship on medical treatments and cures. The people who speak most freely are often touting quack remedies. People with real remedies have hopes of getting them used … eventually, and they don’t realize how many years it may be till something is used.

Nobody knows the problem – nobody can discuss what to do. And all we hear are stories of when something was approved that shouldn’t have been. Of course, since something needs to be patentable, it is somewhat more likely to have some side effects than something the body is used to.

The problem is people don’t realize how the whole drug development system is broken – it started to be broken around 1950 with the advent of “double blind” studies, it got more broken with the need to prove effectiveness (not just safety) before trying, AND IT HAS GOTTEN PROGRESSIVELY MORE BROKEN WITH TIME.

Now anything new HAS TO cost a large amount of money, regardless of what the science might be.

And that is why there were no new antibiotics for a long time. The system almost eliminates drugs with certain characteristics. Nothing not patentable. Nothing not owned. Not much that would not be used over and over again, as opposed to a short course of treatment, like antibiotics – and now consider also that doctors want to AVOID using a new antibiotic as much as possible.

Vaccines needed special legislation to encourage research. They got immunity from being sued, so when they *are* sued, lawyers concoct a false theory about mercury hoping they can collect somehow – because if the damage is from the vaccine itself – say an autoimmune reaction in some children – there would only be a small payoff, not worth it to big-time lawyers.

We actually know of course how to manufacture vaccines much faster than we do now but this is all in an approval nightmare. And some other things are too – sniffers for detecting explosives at airports.

The problem can occur with *anything* that needs to be certified to be used.

Now when things go ahead without needing certification – like new surgical ideas – you get problems but maybe not as bad in the long run as what we have now.

The best thing would be to have something that puts in caution, but not too much, and get rid of approval for effectiveness, which really slows things down.

I agree ownership of ideas really does hinder progress. In the very short run ownership may promote it, but not in the long run. People need to develop things for their own use. And for one reason or another may donate things to the public domain.

A prescription is still needed – less than 2 years old. This is probably for legal reasons. The FAQ by the company apparently deliberately obscures this, and they even throw in something about updating your prescription – I am sure for legal reasons. If they *SAID* it was legal reasons, probably fewer people would do it, and they’d get into regulatory trouble maybe.
11. Do I need a special prescription for TruFocals?
Not at all. To make your custom pair of TruFocals, all we need is your regular prescription.

11. Why do you require prescriptions be no more than one year old?
Eyeglass prescriptions are usually valid for two years. When you order TruFocals we recommend that you update your prescription. That way you can enjoy all the benefits for the full life of your prescription.

12. What happens when my prescription changes?
TruFocals’ adjustable focus capability typically accommodates small changes in prescription; if the change is dramatic, however, you should replace your TruFocals. That’s one reason you should update your prescription when you order TruFocals.

13. Can I buy TruFocals from my regular eye care professional?
Yes. But, because TruFocals are a new technology, your eye care professional may not yet have heard of them. Send, fax, or email us your prescription and we will alert your eye care professional that you want to purchase TruFocals from him/her.

The web site does not contain A CLUE about the cost – meaning it is probably well over $100. Of course, it hardly makes any sense to require a prescription for a pair of glasses that you can automatically adjust, since you are changing what it does all the time. But that is apparently the law, and I don’t even know when this went into effect – maybe the 1980s. If this could be freely sold without a prescription it would be less, and if the patent had expired this would go for under $30 or even under $15 in stores. Legal reasons are also probably keeping this off eBay.

Excellent article, and it’s high time semantic intelligence was applied across all areas of science (esp clinical medicine). Peer review could be overhauled with a commitment to developing an online reputation system to assess information quality (see: http://jopm.org/index.php/jpm/article/view/11/21)
Technology, openness, and an overhaul of the obsolete tenure system wedded to outmoded publishing models would go a long way to helping science promote the public interest.

Finally, the author writes: “Nearly all scientific journals (and an increasing number of books) are now available online.” True, but most are not FREELY available, so their real impact is becoming increasingly irrelevant in competing for mindshare on the Web — even among scientists.

Peter – Thanks for providing the link to your article in the inaugural issue of the Journal of Participatory Medicine (I had read much of the issue but had not got to your article as yet). Your and Richard Smith’s proposal to develop an online reputation system for science and medicine is intriguing. Geoffrey Bilder at CrossRef has done some of the best thinking on this topic that I am aware of. I think a conference on the subject is an excellent idea.

Your comment on “mindshare” is also very interesting. The question of whether or not freely available (open access) scientific publications result in higher readership or citation is one that my fellow Scholarly Kitchen blogger Phil Davis has done some first-rate work on, concluding that free accessibility does result in increased access but not increased citation. A more recent study by Gargouri et al. concludes that free accessibility (in the form of institutional archiving) does provide a citation advantage, though as discussed here in the Scholarly Kitchen there is some question as to whether this is due to free accessibility or other factors.

Leaving that issue aside, journal citation (as you suggest) is not an ideal measure: it does not capture clinical impact and is only a rough measure of “mindshare.” Open access effects may be greater than can be measured by citation in the traditional literature. It would be interesting to look at the citations (links) in blogs and other online media to open access versus subscription content to see if there is any bias towards freely accessible content.

I sometimes wonder how many of the people who contribute to these discussions are practising scientists. Most scientists I know would dearly like to see the likes of Elsevier go out of business. At a time when universities are in dire financial trouble thanks to incompetent bankers, that would provide a great saving of money (number two priority, after firing 50% of HR, and 100% of bibliometrists).

Sadly, publishers have been smart enough to latch on to the wish of innumerate politicians and university administrators to rank individuals by various sorts of numerical measures that serve mostly to promote spivery and dishonesty in science.

I see little option for the future other than online self-publication with ALL the raw data made available and comments encouraged.

One aspect that I would like to understand better is the role of editors in the peer-review process. I know of a few examples where the editor of a journal has decided against the advice of the reviewers, either publishing a “rejected” article or rejecting an “approved” one. And usually, there is little chance to appeal an editor’s decision.

That is, IMHO, where peer review is (curiously?) coupled to journal publishing. Why is that so, and is there any prospect of change in the future?

Peer-reviewers are usually viewed as advisory, and editors take their advice seriously. But, in some cases, the editor may know something, sense something, or understand something uniquely, and make a unilateral decision after consulting with outside advisors. The best editors seek plenty of advice.

This is a very good analysis. Journals are built into the institutional structure of science, and for good reasons. (Not to brag but this is an example of the point I made in my 1973 essay “The Structure of Technological Revolutions,” which is that the hard part is the institutional changes necessary to implement the revolution. The car took 60 years to transform society.)

I would, however, use the term “aggregation” rather than “filtration” for one of the 3 core institutional journal functions. Each journal collects or aggregates papers related to its specialty; it does not filter out everything else in the world. Aggregation and validation then go together, because peer reviewers are asked whether the paper is an important contribution to the community served by the journal. If anything, peer review is the filter. The third function, “designation,” merely means that those researchers who get ranked as most important get rewarded for that achievement. It is all of a piece, a system.

In short it is a Darwinian system, or a marketplace of ideas, if you like. Competition is intense, because there are many more good ideas than can be read or funded. As Clarke points out, the Web does not (yet) change this equation. This is not to say that this structure can’t be disrupted by technology we have yet to see used.

In a perfect world all of what you are saying is true. In the pre-Internet era there was a requirement for aggregation and filtration because:

1) space in a publication was limited (bandwidth)
2) the cost of distribution was not zero

BUT the Internet has fundamentally changed both:

1) space and bandwidth are effectively unlimited
2) the cost of distribution is effectively zero

Even with a 100X change in these two variables, the fact that scientific publication has not been disrupted means that either it is not a functioning market or those variables do not matter to the current setup.

In many fields the web does change things: anybody can come up with an idea, write a proposal, and possibly submit it for funding. Since even this market has not been disrupted by the Internet, it again indicates that the funding ecosystem is either not a functioning market or a very protected one.

Change is always hard, and the biggest players in a market almost always tend to lose when there is a 10-100x change in the variables that affect that market.

I disagree that there is a market failure. The essential role of the journal is to collect and publish the most important recent work in the community served. Bandwidth and cost of distribution are largely irrelevant. Subscribers are paying for intelligence, the judgement of importance, which the Web does not change.

The web was designed for exchanging scientific data? Then why didn’t HTML support such data? Have you tried writing math in HTML? Not so good.

Now there is MathML, but how many browsers support it (Firefox, SeaMonkey, and to some extent Opera and Amaya)? What about Internet Explorer?
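
For a sense of what is being asked of authors, here is a minimal, purely illustrative sketch of a single equation (the quadratic formula) written in presentation MathML, assuming a MathML-capable browser such as Firefox; the point is simply how verbose the markup is:

<!-- the quadratic formula, x = (-b ± sqrt(b^2 - 4ac)) / 2a, in presentation MathML -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>±</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>

In TeX the same equation is a one-liner, x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, which is roughly the gap between what scientists routinely write and what early HTML could express.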

The W3 Consortium talks a good game about web standards, but how good is it at getting them implemented?

HTML simply tells the computer to display a block of text as a paragraph. It was designed to display scientific articles, not proofs, derivations or lengthy calculations. It is really very simple stuff. The miracle is that we have been able to make it look like magazines, include APIs, etc., but that has not helped science. The colossal irony is that the Web does not work for science because it has been swamped by popular uses. See my little essay on this:
Making the Web work for Science
http://www.osti.gov/ostiblog/home/entry/making_the_web_work_for
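
To make the contrast concrete, here is a rough, made-up sketch of the level of structure early HTML did express; the title and file name are invented for illustration, and nothing in it is specific to science:

<!-- roughly the kind of markup early HTML was built around: headings, paragraphs, links -->
<h1>Title of the article</h1>
<p>First paragraph of the article, displayed as a block of text.</p>
<p>Second paragraph, with a link to the <a href="references.html">reference list</a>.</p>

Everything beyond that, equations included, typically had to be approximated with plain text or images.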

“It was designed to display scientific articles, not proofs, derivations or lengthy calculations.”

As long as those scientific articles lacked mathematical notation. Once you allow individual equations with mathematical markup, organizing them into proofs, derivations or lengthy calculations is easy. Do high-energy physicists not require math?

You would have to ask Tim B-L, but my understanding is that the line spacing needed to handle math notation was too hard to do in HTML. Are you claiming that this somehow proves that HTML was not intended to handle scientific articles? What use do you think he had in mind? What evidence do you have for this historical claim?

I don’t know if Tim B-L (or anyone else) could have done that in 1991.

What I am saying is that intent is not design. While he may have intended that the Web be used as a medium of scientific exchange, with HTML as scientific markup, how was that intent embodied in the design? What is it about HTML (as opposed to Tim B-L’s thoughts of HTML) that allows for scientific discourse? As a math guy, I may be exaggerating the amount of math that scientists need in their papers, but then, what features of HTML are specifically scientific?

Another divergence might be between a vision of the Web as confined to large institutions as opposed to being available to individuals.

Your criticism is well founded. Unfortunately you introduced it by questioning Clarke’s statement about intent, not design, so perhaps we got off on the wrong track, as it were. It is a good question how much this design failing may have retarded the use of the Web for scientific communication.

David,

There’s no reply link under your previous post, so I’m responding here. I wouldn’t say that the original HTML was badly designed as such. It was probably about as good as it could have been in 1991 (aside, perhaps, from a lack of extensibility). I believe a deeper problem may have been the W3 Consortium’s emphasis on promulgating standards over devising implementations.

Ummm… publishing has consolidated continuously since the ’90s, and the technology and the mergers have been disruptive. It’s not as dramatic as in trade or newspapers, but that might be because the advertisers do not pitch to regular consumers and their household budgets. The market is highly educated and typically well compensated. The student market is captured because the products are necessary for their credentialing. Professional publishing has always been less glamorous and less volatile than trade, so that might be one reason why it appears unchanged to outside observers.

“Looking back at the publishing landscape in 1991, it does not look dramatically different from today, at least in terms of the major publishers. The industry has been relatively stable. And one would be hard pressed to characterize the number of mergers and acquisitions that have occurred as particularly numerous relative to other industries. Moreover, these mergers and acquisitions are more likely to be explained by the rise of private equity and the availability of cheap capital than by technological innovations related to publishing.”

MB – Who were the leading professional publishers 20 years ago? Wiley, McGraw-Hill, Elsevier, Wolters Kluwer, Springer, etc. Who are the leading professional publishers today? Wiley, McGraw-Hill, Elsevier, Wolters Kluwer, Springer, etc. I’m not saying there have not been consolidations. Blackwell merged with Wiley. Taylor & Francis has been acquired by Informa. Lippincott Williams & Wilkins is now owned by Wolters Kluwer. But there have been mergers and acquisitions in many industries in the last 20 years. As you point out, trade and newspaper publishing have seen much more industry consolidation. Look at any other information industry: bookstores, software, travel agencies, recruitment, etc. Technological innovation has transformed these industries and given rise to Amazon, Salesforce.com, Expedia, and LinkedIn, among many others. The list of leading firms from 20 years ago versus today is going to look very different.

My point is that the mergers and acquisitions in professional publishing have not, by and large, been technology-driven. Blackwell did not merge with Wiley due to Wiley’s disruptive technology. Informa does not have a disruptive technology that led to the acquisition of Taylor & Francis. These mergers and acquisitions are nothing unusual and are part of the ebb and flow of any industry. If anything, they have more to do with the historically low cost of capital than with anything going on in the industry itself.
