
Back in 2006, I had what I thought was a brilliant idea. From the pulpit of my semi-regular column in Against the Grain, I invited contributions to what I hoped would be a whole new column in that publication: a series of “How We Done It Bad” reports from libraries.

Here was the logic behind my proposal — everyone publishes reports or does conference presentations on things they’ve accomplished in their libraries, whether the accomplishment is the creation of a popular new service, a successful reorganization, the redesign of an important workflow, or even the elimination of a task formerly considered essential. Such presentations and articles are commonly referred to as “How We Done It Good” pieces. Vendors and publishers do the same thing, offering conference presentations and lunch meetings that focus on new and emerging products or the refinement of old ones.

As great and often useful as such pieces and presentations are, it occurred to me that what might be even more useful (or at least useful in a different way) would be cautionary tales, or “How We Done It Bad” pieces.

  • Did your library attempt to eliminate a no-longer-essential task, only to find (gulp) that it was still essential after all?
  • Did you introduce a new service, only to find that it actively enraged your patrons and you had to publicly and speedily backpedal?
  • Did you reorganize a library division only to find that the new structure failed in amazing ways you could never have anticipated?

Tell us about it — not for our amusement and schadenfreude (well, not entirely), but so we all can learn from each other’s mistakes without repeating them.

What was the response to my invitation? Crickets. If I recall correctly, I may have gotten one tentative message from a potential contributor, and maybe a couple of private emails saying “Hey, great idea!” — but no actual contributions. A year later, one library blogger was asking whatever happened to my proposal. Five years further along, the answer is still the same — either no one has made any mistakes worth mentioning, or no one is willing to mention them publicly.

Not that I’m shocked by the lack of response. It’s embarrassing to admit we’ve failed, and for some members of the scholarly information stream (like publicly traded companies) it may not be possible to talk about failure in a public forum. It’s one thing for a librarian to stand up and say, “We tried eliminating an entire functional unit but had to reinstate it; oh well, lesson learned”; it’s another thing for a company with shareholders to make a similar public statement.

On the other hand, maybe companies are letting their fears get in the way of success. The other day, I came across an article in the online magazine Knowledge@Wharton. It quotes consultant Daniel Zweidler, who believes pharmaceutical companies could save billions of dollars each year if they publicly shared information about failed experiments in the early phases of research. He argues that a “failed” experiment is really just “an experiment that gives us negative information,” and that if 90% of drug experiments fail and those results are whisked away into a vault of secrecy, then drug companies are wasting 90% of the useful information their research could yield.

Assuming that Zweidler is right, are drug companies and scholarly publishers similar enough in the right ways for this line of argument to apply to publishers?

Possibly. On the one hand, publishers don’t compete with each other in quite the same way that drug companies do. Drug companies tend to sell products that are roughly substitutable — if you have hay fever, you can buy Claritin from Schering-Plough, or Zyrtec from Pfizer, or a generic version of one of those products from some other company. Claritin and Zyrtec aren’t the same drug, but they fill the same function. The same is true for drugs that treat high blood pressure, diabetes, and any number of other ailments; there are multiple options, each formulated differently but each treating the same problem.

The same isn’t true with scholarly books and journal articles, because copyright holders have true monopolies. You may be able to get articles on the general topic of molecular biology from several different publishers, but those articles are all unique and each treats a different “problem”; you can’t get the information contained in an Elsevier article from any publisher other than Elsevier. So it’s difficult to imagine the public disclosure of error affecting sales to customers.

On the other hand, publishers do compete for authors, and authors tend to care about how well a journal or a publishing house is run. A potential author who hears that Publisher X has been conducting crazy-sounding experiments in platform design or manuscript processing does have choices as to publishing venue, and might easily choose to go with a publisher who seems more stable, whether because that publisher genuinely takes fewer risks or because it simply doesn’t discuss its risk-taking publicly.

The bottom line, I think, is that it’s probably not realistic to expect profit-seeking (or even revenue-seeking) entities to parade publicly the dirty laundry of their failures and misbegotten experiments. But we in libraries have every reason to do so, and should do it much more freely. And I plan to put my money where my mouth is in my next Scholarly Kitchen posting, in which I’ll discuss the lessons (some of them bitter) that we’ve learned so far from purchasing an Espresso Book Machine.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Discussion

16 Thoughts on "Let Us Now Praise Failed Experiments"

An early job of mine in publishing was at a place that was adamant about experimentation, but measured the outcomes of its experiments mercilessly. We’d launch publication after publication (on the pretty defensible theory that launching into the market was both an effective way to test the idea and cheaper than a lot of pre-launch market research). Then we’d have weekly meetings to check on each launch’s success. There was a point in the response chart called the “doubling point” we’d watch with great interest — usually it arrived about six weeks after the first orders. If we saw that inflection point and had half or more of the orders we’d anticipated, we had a success. If not, we had a failure. Hiring, firing, new plans, and so forth would all flow from that. I had a pretty good track record, but not a spotless one. Because of that, I learned to rip off the band-aid with editors and others (“Hi, sorry, but it didn’t work. We’re going to be closing this down. Your last issue is the one you’re working on.”), and to accept failure along with success.

Generalizing either success or failure is fraught with problems. But learning from failures is vital to learning to do things better. I learned a lot more from the failures than from the successes, and have always felt that you learn more from spilling a glass of milk than you do from drinking one.

Because “failure” is such a stigmatized idea, I think there are two big downsides — either we hold onto initiatives and ideas too long, or we don’t try anything that might fail. There is a price to the former, which is essentially being hostage to an old bad idea instead of being free to pursue a new idea with at least the potential to be good. And there is a price to the latter, which is stasis, fear, internal focus, and a meta-failure that makes any pointed failure pale by comparison.

Thanks, Rick. I touched on this a few weeks ago at collaborationista.org. The value of failure continues to be of great interest in many circles, but Kent’s point about the enduring stigma attached to failure has to be addressed. So I think this requires a cultural shift before you’ll find folks queueing up to tout their failures. In my post I mention one very creative idea gleaned from a tweet chat (I wish I could remember whom to credit, but it was from an association professional): a failure line item in the budget. Establishing this fund would set a tone of acceptance and encourage risk-taking. I think it’s brilliant and wanted to pass it along.

Looking forward to your Espresso Book Machine exposé!

My essay “A Post-Mortem for Gutenberg-e” (Against the Grain, Jan. 2009) was just such an analysis of a failed experiment, though I had no personal stake in that project (other than having been a member of the advisory board that Robert Darnton set up to advise him when the project was still just a vision in his mind). I agree that one can learn a lot from analysis of such experiments. In Gutenberg-e’s case, among the lessons learned was the crucial importance of developing templates so as not to create each new work de novo.

Rick, you are right that it’s human nature not to want to admit failure. But heck, if you’re not trying out something new, that inaction can sometimes be the worse failure. I have not-so-fond memories of a few of these along the way. Anyone remember AcqTalk? I also recall that, as a newbie serials librarian, I was given the funds to purchase a very less-than-robust piece of stand-alone decision-making software to help determine serial cancellations. It did not have the chops to do what I wanted it to do, but we were only out $50, if I recall. And of course, reorganizations: I could go on and on about that (but I won’t). Maybe some day somebody will have the guts to do it. In the meantime, I am really interested to hear about your experience with the EBM!

I think one of the greatest as-yet-unrealized benefits of the ease of publishing on the Web is the ability to publish ALL results, not just successful ones. The commenters so far have focused on the benefits of accepting, encouraging, and learning from failures within their own organizations. I completely agree. But I think a far greater societal benefit will be the “we already tried that, and here’s what we found out” dimension.

In the print world, we had to be selective about what was published, so the results of failed investigations rarely made it into print. That means some other investigator may blithely set out on the same path, not knowing that their brilliant idea has already been tried. What an enormous waste! And guess what — when _their_ investigation “fails,” it disappears into the same black hole. If investigator #2 really knew what investigator #1 did, she might elect not to pursue the investigation — or, even better, she might try a different slant, knowing what didn’t work the first time. My hope is that the present trend toward opening up publication, including funders mandating open access, will eventually lead to a culture that says “no experiment goes unpublished.” I can imagine a world (though I admit I don’t have a clear picture of how to get from here to there, and I am very aware of publishers’ concerns!) where we leave the process of selection for _after_ publication, not _before_. Is that failed result the most important thing to publish? Of course not. Except to the person who is about to repeat the mistake.

The problem with this, at least in the world of bench science, is that often it’s impossible to know why a particular experiment failed. If I propose that gene X is active in pathway Y and I do experiment Z to test it, and I get a negative result, is it because gene X is not part of pathway Y, or is it because I’m an idiot and accidentally mixed my cell culture medium incorrectly, drastically altering the conditions I thought I was measuring? Let’s say it was the latter, and I publish this failed experiment. Now no one else will ever investigate the role of this gene in this pathway, which, in my extreme hypothetical example, is the key to curing cancer.

And so discovery is thwarted by my incompetence. There’s no real way to tell the difference between the two possibilities, since my publication won’t include the error I made; I’m unaware of it myself. The question must be asked: how much value is there in publishing a literature that can’t be trusted? And since you’re asking the author to write up something that won’t count toward career advancement, how much time do you expect them to spend documenting their failures? Wouldn’t they be more likely to spend that time doing new experiments that might actually work?

Science requires a certain level of redundancy. Sometimes it’s important for others to repeat your experiments, even if they’re failures.

My thoughts exactly.

I love it when a potential client comes in and says, “We tried that and it didn’t work.” That tells me exactly where to start solving their problem, and they sure are surprised to find out that it could work.

Perhaps naively, I assume that bench scientists don’t mix up their cell cultures. Or if they do, they ain’t bench scientists for very long.

You might be surprised…

It’s probably worth remembering that those who are scientists for very long tend to move away from the bench. Once you have your own lab, you’re not doing experiments any more; you’re mentoring students and writing grants. Since the majority of the work is done by graduate students and postdocs, there are lots and lots of people who are not bench scientists for very long. So the work published out of a 40-year veteran’s lab may have been done by a first-year graduate student who is about to drop out.

While libraries are not in the revenue-seeking business, a chief function of university librarians is to demonstrate to their deans and provosts how valuable and forward-thinking they are. In addition, university librarians can be very competitive with their peers. Often these individuals are at the end of their career trajectories and wish to be known for all the positive changes that took place under their watch, not for their failures. This culture of signaling success trickles down through the layers of management and becomes the general modus operandi of library culture. Those who are not publicly and positively promoting the values of the library are asked to restrain themselves.

If you are able to begin, recruit contributors to, and sustain such a public dialog focused on library failures, more power to you. As someone with deep roots in university librarianship, my sense is that this will be viewed as counter-cultural.

Over the 23 years I’ve been working in libraries, I’ve seen a very wide spectrum of attitudes toward experimentation and degrees of openness to failure. I think you’re right that many librarians (and perhaps especially those in key administrative positions) tend to be reluctant to discuss failed experiments publicly; hence my piece. However, I’ve been encouraged lately by what I think is a trend in the opposite direction. One example is my boss, Joyce Ogburn, who, when she came to the U of Utah a few years ago, instituted a program of “innovation grants” that give library personnel funding to pursue new and even risky experimental projects. It’s explicitly understood that many of these will fail, and discussing them publicly would not, I’m confident, be punished in any way, formal or informal.

Nor do I think that public discussion of specific failures is dissonant with “publicly and positively promoting the values of the library.” On the contrary, I think it’s explicitly consonant with those values.

That said, I think I’ve also demonstrated pretty clearly my willingness to risk being seen as “counter-cultural.” 🙂

“publishers don’t compete with each other in quite the same way that drug companies do”

This is literally true, but I think there’s more competition than it suggests. Although books and journals are not truly interchangeable, when it comes to balancing the budget they must be treated as if they were. No budget is limitless, and there are always more unique resources that could be acquired, so comparisons get made and choices follow as to which resource better suits our needs. The result is very real competition among databases, journal packages, individual book titles, and so on, all vying for the same budget dollars. Sometimes two resources that are not even on the same level are competing, such as a large arts & humanities/social sciences index and a smaller psychology index.

It depends on how broadly you wish to define “compete,” I guess. In the sense you’re suggesting, plumbers are in competition with grocers — after all, my family budget is limited, and if I’m having a tight month I might have to decide whether to repair a water pipe or buy groceries. But plumbers compete with each other differently than they do with grocers: if I’ve decided I’m going to spend money to have my pipe repaired, I have to pick the best plumber. That implies a very different definition of “competition.”

It would be tempting to see physics journals as being like plumbers, each one offering the same service and jockeying to be seen as giving the best value. But there’s a big difference: no two journals actually offer the same content. Instead, each one offers a unique set of content, which greatly complicates the issue of competition. (None of this is to say that journals don’t compete with each other in any sense, but I was careful not to make that claim; I said only that they “don’t compete with each other in quite the same way that drug companies do.”)

But in some sectors, especially K-12 and college textbook publishing, publishers do compete in a very straightforward way. Having to satisfy state standards, as here in Texas, helps make this kind of competition very direct indeed.

That’s certainly true in some sectors of publishing, particularly in textbooks. But I’m assuming that the primary context of our discussion is scholarly publishing. Two different introductory biology texts compete directly against each other in a way that two monographs on Oscar Wilde do not; a library that needs one monograph on Wilde will likely need the other one as well, because each offers unique content.
