I recently started to watch a Michael Nielsen TEDxWaterloo talk, but stopped early on, because his first story hit me all wrong. It’s similar to the story Clay Shirky tells about a mathematician with a blog who was trying to find a combinatorial proof of the P versus NP problem. And it reminded me of a recent talk Tim O’Reilly gave touching on “open science.” In talks like theirs about this topic, single, carefully selected examples are leveraged into a cry to “Open science now!”, as Nielsen’s talk is titled.
I did finish watching Nielsen’s talk after absorbing my reaction to what he was doing — which seemed akin to using one washcloth to cover a football field. One anecdote = one anecdote.
But back to “open science,” which isn’t very easy to define, I’ve found. Generally, in its best form, it speaks to activities like SETI@home, where a lot of participants help get science done. But there’s another, broader version afield. For instance, the definition given at openscience.org is:
- Transparency in experimental methodology, observation, and collection of data.
- Public availability and reusability of scientific data.
- Public accessibility and transparency of scientific communication.
- Using web-based tools to facilitate scientific collaboration.
Stepping carefully through this four-part definition, the pros and cons of the “open” approach become a bit clearer:
- the first item prohibits a placebo group or double-blind trial
- the second item prohibits proprietary research like the kind that yielded the light bulb or the Manhattan Project
- item the third argues that paying for finished research reports isn’t good (nor is printing them on opaque paper, if you use light sarcasm to fight the buzzword “transparency”)
- the fourth and final item prohibits counting as “open” experiments using real people in physical settings
The rhetoric of “open” seems at times to be the new equivalent of “free” or “pure.” If the Internet didn’t make everything free or pure, at least it can make it open, an emerging logic seems to aver.
“Open science” that leverages the broader population’s infrastructure or data exhaust can make very good sense — as a trial design option, deployed well, not as social policy. But “open” in the social policy sense is a double-edged sword.
Yet advocates of “open science” often speak as if there are no potential downsides.
Science is a competitive field, and that’s a good thing by and large. Competing for grants, tenure, position, and prestige all help to bring out the best in people, the best in their ideas, and the best in their execution of study and research protocols. But it also means that there is a certain circumspection to science, a certain level of secrecy needed.
Peer-review is protected as a confidential peer activity, and is one of the most important parts of getting the right papers to the right audience in the right shape. “Open” peer-review experiments have failed for a variety of reasons, including nervousness on the part of the participants and an unwillingness to have assessments published openly — not to mention that “open” peer-review is a solution looking for a problem. Peer-review actually works very well when what needs to be said can be said without fear of embarrassment or recrimination.
In medicine, the potential downsides of “open” have perhaps expanded the farthest. “Open science” arguably led to the autism-vaccine scandal, in which high-powered scientific studies were drowned out by misguided conspiracy theories further fueled by overblown celebrity anecdotes and more recently by feckless politicians. These people have no expertise, yet the “open science” atmosphere gives their half-cocked notions full throat. The scientific discourse was all done “in the open” — policy statements from major organizations were freely available; news coverage was widely available; and patients, physicians, and vaccine companies were thrown into turmoil. If “open” works, then why did this dangerous trend take more than a decade to finally stop? And why did an investigation shrouded in secrecy prove to be the most effective tool in thwarting the misinformation?
What was needed was the voice of authority, of scientists telling non-scientists the difference between evidence and anecdote, between proof and paranoia, and doing it in dramatic fashion with legal enforcement. “Open” created chaos that authority couldn’t contain. Legitimate, well-earned, and prudent authority became just another voice in an “open” sea.
“Open” doesn’t always make sense in the case of governance or government, either. Most tricky solutions and complex compromises are worked out behind closed doors, outside the limelight, in private. What if the most effective path to solutions is private and full of things best not observed?
How “open” is “open”? Are we going to have webcams tracking every beaker spilt or pipette snapped, with commentators denouncing feckless scientists as clumsy oafs wasting their research funds on unnecessary equipment costs? Will society become the groundlings in the process of science just as they are in the process of 24/7 media circus politics? Or is that too “open”?
If you want solutions, you have to tolerate a certain amount of privacy, secrecy, and discretion. If you want paralysis, you make sure everything is known, everyone is accountable, and nothing goes on out of sight.
Even Google acknowledged, through actions that spoke louder than its words, that it values private innovation over open innovation, scuttling Google Labs shortly before having Google X, its top-secret research and development facility, exposed in the New York Times. “Google is so secretive about the effort,” the Times reported, “that many employees do not even know the lab exists.” It seems that even Google knows secrecy has its place.
“Open science” is a poorly defined term — sometimes it’s used to mean crowd-sourced science, sometimes open access publishing, and sometimes it’s just not clear what it means.
Science should do what’s best for science and those science seeks to help. “Open” is about style points, not results, at least in the cases where it does not meet the criteria above. Michael Nielsen celebrates the fact that “anyone” can download the human genome thanks to how open the Bermuda Sequence Policies were. So what? If I download it, what good does that do me? I don’t have the time, materials, or expertise to do anything with it. (To Nielsen’s credit, he acknowledges the inherent conservatism of scientific conduct and academic culture. Unfortunately, he uses this as a contrast to the wondrous beauty of “open,” and his infatuation with “open” seems complete.)
“Open” accounting doesn’t make you an accountant; “open” long-haul trucking doesn’t make you a long-haul trucker; and “open” science doesn’t make you a scientist. It takes more than permission to become competent.
Instead of “open” science, we need “funded” science and “more productive” science.
The budget stalemate in Congress has slowed down science funding, and the anti-science movement in the US is as potent as ever — in fact, I wonder if this notion that “science is for everyone” has fed into it. Evolution is now just another theory, as is creationism or intelligent design. The message rumbling beneath the surface of “open science” seems to have been misinterpreted culturally as, “We can play, too — no special skills required, just a hypothesis.”
Where might this all lead? If “open” promotes science style over science substance, it may lead to stymied science, and a culture that has even less expertise at its helm. As The Onion published in a parody that hits a little too close to home, “Nation Finally Breaks Down and Begs Its Smart People to Just Fix Everything”:
Acknowledging they lacked the know-how to put anything together without it all falling apart again in a matter of seconds, millions of ordinary Americans implored the nation’s skilled individuals to just use their knowledge to end the financial crisis, manage the health care industry, determine which human beings are actually fit to hold political office, teach the nation’s children, and enact overarching policy decisions that serve the greater good.
In a post from last year, Phil Davis summarized this careful balance, one that “open science” ignores with significant peril:
The open secret here is that science depends on a balance of competitive drives and cooperative culture. Pushing it too far in either direction will likely cause inadvertent harm.