Cover of "Smart People"

I recently started to watch a Michael Nielsen TEDxWaterloo talk, but stopped early on, because his first story hit me all wrong. It’s similar to the story Clay Shirky tells about a mathematician with a blog who was trying to find a combinatorial proof of the P versus NP problem. And it reminded me of a recent talk Tim O’Reilly gave touching on “open science.” In talks like theirs on this topic, single, carefully selected examples are leveraged into a cry of “Open science now!”, as Nielsen’s talk is titled.

I did finish watching Nielsen’s talk after absorbing my reaction to what he was doing — which seemed akin to using one washcloth to cover a football field. One anecdote = one anecdote.

But back to “open science,” which isn’t very easy to define, I’ve found. Generally, in its best form, it speaks to activities like SETI@home, where a lot of participants help get science done. But there’s another, broader version afield. For instance, the definition given at openscience.org is:

  1. Transparency in experimental methodology, observation, and collection of data.
  2. Public availability and reusability of scientific data.
  3. Public accessibility and transparency of scientific communication.
  4. Using web-based tools to facilitate scientific collaboration.

Stepping carefully through this four-part definition, the pros and cons of the “open” approach become a bit clearer:

  • the first item prohibits a placebo group or double-blind trial
  • the second item prohibits proprietary research like the kind that yielded the light bulb or the Manhattan Project
  • the third item argues that paying for finished research reports isn’t good (nor, if you fight the buzzword “transparency” with light sarcasm, is printing them on opaque paper)
  • the fourth and final item prohibits counting as “open” experiments using real people in physical settings

The rhetoric of “open” seems at times to be the new equivalent of “free” or “pure.” If the Internet didn’t make everything free or pure, at least it can make it open, an emerging logic seems to aver.

“Open science” that leverages the broader population’s infrastructure or data exhaust can make very good sense — as a trial design option, not as social policy. It’s an interesting trial design when deployed well, but “open” in the social policy sense is a double-edged sword.

Yet advocates of “open science” often speak as if there are no potential downsides.

Science is a competitive field, and that’s a good thing by and large. Competing for grants, tenure, position, and prestige all help to bring out the best in people, the best in their ideas, and the best in their execution of study and research protocols. But it also means that there is a certain circumspection to science, a certain level of secrecy needed.

Peer-review is protected as a confidential peer activity, and is one of the most important parts of getting the right papers to the right audience in the right shape. “Open” peer-review experiments have failed for a variety of reasons, including nervousness on the part of the participants and an unwillingness to have assessments published openly — not to mention that “open” peer-review is a solution looking for a problem. Peer-review actually works very well when what needs to be said can be said without fear of embarrassment or recrimination.

In medicine, the potential downsides of “open” have perhaps expanded the farthest. “Open science” arguably led to the autism-vaccine scandal, in which high-powered scientific studies were drowned out by misguided conspiracy theories further fueled by overblown celebrity anecdotes and more recently by feckless politicians. These people have no expertise, yet the “open science” atmosphere gives their half-cocked notions full throat. The scientific discourse was all done “in the open” — policy statements from major organizations were freely available; news coverage was widely available; and patients, physicians, and vaccine companies were thrown into turmoil. If “open” works, then why did this dangerous trend take more than a decade to finally stop? And why did an investigation shrouded in secrecy prove to be the most effective tool in thwarting the misinformation?

What was needed was the voice of authority, of scientists telling non-scientists the difference between evidence and anecdote, between proof and paranoia, and doing it in dramatic fashion with legal enforcement. “Open” created chaos that authority couldn’t contain. Legitimate, well-earned, and prudent authority became just another voice in an “open” sea.

“Open” doesn’t always make sense in the case of governance or government, either. Most tricky solutions and complex compromises are worked out behind closed doors, outside the limelight, in private. What if the most effective path to solutions is private and full of things best not observed?

How “open” is “open”? Are we going to have webcams tracking every beaker spilt or pipette snapped, with commentators denouncing feckless scientists as clumsy oafs wasting their research funds on unnecessary equipment costs? Will society become the groundlings in the process of science just as they are in the process of 24/7 media circus politics? Or is that too “open”?

If you want solutions, you have to tolerate a certain amount of privacy, secrecy, and discretion. If you want paralysis, you make sure everything is known, everyone is accountable, and nothing goes on out of sight.

Even Google acknowledged through actions that spoke louder than its words that it values private innovation over open innovation, scuttling Google Labs shortly before having Google X, its top-secret research and development facility, exposed in the New York Times. “Google is so secretive about the effort,” the Times reported, “that many employees do not even know the lab exists.” It seems that even a company built on the open Web understands the value of working in private.

“Open science” is a poorly defined term — sometimes it’s used to mean crowd-sourced science, sometimes open access publishing, and sometimes it’s just not clear what it means.

Science should do what’s best for science and those science seeks to help. “Open” is about style points, not results, at least in the cases where it does not meet the criteria above. Michael Nielsen celebrates the fact that “anyone” can download the human genome thanks to how open the Bermuda Principles were. So what? If I download it, what good does that do me? I don’t have the time, materials, or expertise to do anything with it. (To Nielsen’s credit, he acknowledges the inherent conservatism of scientific conduct and academic culture. Unfortunately, he uses this as a contrast to the wondrous beauty of “open,” and his infatuation with “open” seems complete.)

“Open” accounting doesn’t make you an accountant; “open” long-haul trucking doesn’t make you a long-haul trucker; and “open” science doesn’t make you a scientist. It takes more than permission to become competent.

Instead of “open” science, we need “funded” science and “more productive” science.

The budget stalemate in Congress has slowed down science funding, and the anti-science movement in the US is as potent as ever — in fact, I wonder if this notion that “science is for everyone” has fed into it. Evolution is now just another theory, as is creationism or intelligent design. The message rumbling beneath the surface of “open science” seems to have been misinterpreted culturally as, “We can play, too — no special skills required, just a hypothesis.”

Where might this all lead? If “open” promotes science style over science substance, it may lead to stymied science, and a culture that has even less expertise at its helm. As The Onion published in a parody that hits a little too close to home, “Nation Finally Breaks Down and Begs Its Smart People to Just Fix Everything”:

Acknowledging they lacked the know-how to put anything together without it all falling apart again in a matter of seconds, millions of ordinary Americans implored the nation’s skilled individuals to just use their knowledge to end the financial crisis, manage the health care industry, determine which human beings are actually fit to hold political office, teach the nation’s children, and enact overarching policy decisions that serve the greater good.

In a post from last year, Phil Davis summarized this careful balance, one that “open science” ignores with significant peril:

The open secret here is that science depends on a balance of competitive drives and cooperative culture. Pushing it too far in either direction will likely cause inadvertent harm.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

11 Thoughts on "The ‘Open’ Prescription — Why It Doesn’t Always Make Sense"

I read your arguments carefully and I applied them to software development. It is now clear to me the methodology used for Apache, Firefox, MySQL, and the Internet will never produce anything.

Please read carefully. I said that “open” makes sense in certain settings — a lot of sense. But it doesn’t make sense all the time, so it’s not a prescription to be applied without thought. Open source software makes sense in a lot of cases, but not all. Because software is what it is, especially information purveyance software, there are many good candidate projects in that space. The challenge is not to over-generalize.

Although I agree that everyone’s definition of open science varies, this piece entirely misses the mark in just about every other respect. For instance, the autism “controversy” as described here was driven by media hype and entertainment personalities – not the availability of some data in a database. The Lancet, where the now-retracted Wakefield study was published, is not even an open access journal!

Furthermore, I think you would find few open science advocates who would argue that openness trumps genuine ethical concerns. Choosing an example from my own field, I would never advocate for the unfettered release of GPS coordinates for rare vertebrate fossil localities (it’s illegal in some cases and bad for site security), but I have no problem arguing that the raw data underlying morphometric or phylogenetic analyses should be available by default. To tie in another claim made above, just because you or I don’t have the expertise to use a data set doesn’t mean someone else doesn’t. It is ridiculous to claim that data (which in the vast majority of cases cost little or nothing to provide) should only be made available to a select few. Who judges who is worthy of seeing the data? The scientist who doesn’t want a potential competitor to evaluate whether the proper statistical test was used in a published paper? Should I not be able to access the character codings for a phylogeny of theropod dinosaurs because my primary expertise is in ceratopsians? This last example is a little ridiculous perhaps, but it’s effectively what was argued in the above essay.

In addition to being a publishing scientist, I am a science educator at the high school associated with my museum. I deal with some pretty bright kids, many of whom go on to STEM-related professions, but it is an eternal battle to break the fallacy that science is a set of facts handed down from experts. Science is a way of thinking and a process of learning, and the best way to show this is to get down and dirty with the literature and with the raw data. Believe it or not, some of our students actually participate in peer-reviewed research (one co-authored a paper appearing in the current issue of Journal of Vertebrate Paleontology), and some of them do quite well at it. While not every student will be able to do publishable research, every student should have the opportunity to work with real datasets and grasp what professional scientists do. Telling someone that they aren’t qualified to read the literature, access the data, or do science doesn’t protect science – it just creates another anti-science advocate.

(I should clarify that the last sentence isn’t to avoid hurting the feelings of people who genuinely can’t do science or want to do so in a dishonest way – it’s to promote the culture that being a good scientist isn’t just a matter of getting a Ph.D.)

I’m an advocate of humility. I think we need more of it. People need to be OK saying, “I don’t know.”

It’s true that everyone puts different things into the ‘open science’ basket, but there’s a world of difference between doing the research in private then making the data available upon publication, and having the entire scientific process be open, such that anyone can stand at your shoulder while you work.

I think Kent is taking aim at the idea that all science should be like the latter, which for many fields just doesn’t work. As you note, giving out the location of a fossil and inviting everyone to come down with a shovel would not be an improvement over having it excavated by a single professional. On the other hand, publishing a paper describing the fossil and making the morphological data available would be ‘open science’, but would ensure that a trained eye was responsible for collecting and analysing the data, and that the findings were vetted by peer review.

Nielsen stumbles over the ‘open after publication’ approach in his talk, but then dusts himself down and hurries off in another direction. The bioinformatician he mentions is probably ‘hoarding’ their data because they are not yet satisfied that it is robust and free from major errors — making it available right now might mean that errors are discovered more quickly, but might also lead to a cascade of incorrect conclusions from other scientists incorporating the faulty data into their own research. Why not wait until the checking is finished and then publish it in a peer reviewed journal? Releasing the data before then is akin to post publication peer review, which doesn’t work: having results skimmed over by 100 people is no substitute for having them put through the wringer by three.

Kent, I have to hand it to you: you really are willing to take on tough issues. I applaud you for your bravery.

I can’t help but think that this issue is related to the discussion last week about technology. Both CONSORT and the ICMJE Uniform Requirements require that medical research be open in the sense that researchers should report their methodology in enough detail to allow other researchers to review and replicate the original author’s results. However, it is very important that the original researchers should be able to do their research in private and not be forced to reveal their data until their work is completed. Before the internet, there was no choice; we had to wait until articles were published before work could be reviewed. I think that sometimes people confuse immediate access with open access to study results and datasets. Just because technology allows us more immediate access does not mean that this is a desirable outcome.

Yes, research results and datasets (with de-identified medical data) should be open, but researchers should be free to pursue their research in privacy and have some (not total) control over the timing of its release.

One of the fundamental issues here is the difference between voluntary and mandatory openness. The obvious benefits of collaboration do not justify the cost and burden of a massive new regulatory regime, which is what a lot of openness advocates are really calling for.

It’s been my experience that most researchers respond with either a blank stare or derisive laughter when I mention the concept of “open science” (meaning “open notebook science” where all your data is posted as you collect it). In a world with limited jobs and limited funding, the resulting competition makes idealistic approaches like this impossible for most hoping to sustain their careers.

This is — sadly, I think — a very good post. A few things I’d add:

The reason that it is more difficult to be enthusiastic about open science than open source software obviously has a lot to do with the academic reward system. It’s no surprise, then, that I don’t know a single open science advocate who isn’t deeply embedded in some offshoot of altmetrics. By the same token, while we can say that competition is a good thing in its purest form (symbolized by Rocky and Apollo Creed jogging alongside one another on a beach), the reality we are talking about has everything to do with funding. As for Nielsen’s enthusiasm, I think it goes without saying that only a crazy person would claim that all science should be completely “open” bar none; the hyperbolic example of the Manhattan Project certainly does the job here. Sometimes, however, we need evangelism. For this particular cause, this may as well be one of those times.

The failure to discriminate between when “open” is good and when “open” can be harmful is at the root of the controversy over WikiLeaks.

Comments are closed.