Author’s note: Today we revisit a 2010 post about the concept of ‘soundness’ as it pertains to a scientific paper. When I wrote this post, I was a postdoc in the Department of Communication at Cornell. I had just finished reading a book by the cognitive linguist George Lakoff on how our beliefs are strongly influenced by metaphors. While political scientists are likely familiar with Lakoff’s work, he is not well known in science, where terms and phrases are assumed to be descriptive and objective, not rhetorical or manipulative. I was intrigued by how the term ‘open access’ was used to stand for a multitude of positions and beliefs, and how the phrase ‘level playing field’ was used to justify using library collection funds to pay for open access publication charges. In this post, I attempted to unpack and analyze the phrases ‘sound science’ and ‘sound methods.’

There are a few details that have changed over the last six years: The journal information page at PLOS ONE no longer includes a denunciation of mainstream editorial decision-making. The social media aggregator FriendFeed was shut down, and Bora Zivkovic moved from PLOS to Scientific American, where, in 2013, he resigned after a sexual harassment incident.


Can a scientific paper be methodologically sound but not report any significant or meaningful results?

The answer to this question seems rather obvious.  But before accepting what would appear to be a truism in science, I’d like to explore what “methodologically sound” science (and its variants) means and what it implies.


In recent years, a number of publishers have launched open access journals designed to reside at the bottom of their peer review cascade.  These journals will accept papers that may not report novel results, as long as they contain a sound methodology.

Manuscripts considered for acceptance in PLOS ONE, for example, are not required to advance a field, but are required to be “technically sound.”  The scope of BMC Research Notes is exceptionally broad, requiring little beyond that a paper be “scientifically sound.”  And BMJ Open’s criterion for acceptance is somewhat more positively worded, although still conspicuously vague, requiring that studies be “well-conducted.”

These acceptance criteria wouldn’t be so contentious if they were viewed only in isolation, as a way to promote the values of the journal.  But they are not.  They are often used as a denunciation of mainstream journals and are clearly dismissive of those who decide the fate of manuscripts.  This perspective is best expressed on the information page for PLOS ONE:

Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).

What makes a paper “technically sound” is much more nuanced and much less clear than it appears.  In fact, you will not find a discussion of what makes a methodology sound in any methods textbook.  So, I asked several of my colleagues (in the social sciences, biomedical sciences, and information sciences; two of whom teach methods courses) what “sound methodology” means to them. According to these researchers, a paper may be sound if:

  • it uses techniques that are appropriate to the question asked
  • it does what it purports to do — in other words, if the researchers claimed they ran a Western blot, there must be some evidence that it was conducted, like an image of the gel
  • it treats its data correctly and runs the appropriate analysis
  • its conclusions don’t overstate its results

Three of my colleagues provided much broader, gestalt-like answers:

  • “It’s complicated.”
  • “You have to look at the entire paper.”
  • “It all depends upon the context. You can’t be expected to run real-time PCR in the jungle.”
  • “Appropriate methodology is what your community accepts as appropriate methodology.”

Judging from these answers, evaluating methodology is not a binary decision — right or wrong, sound or unsound — but requires context specific to the field.  No method is perfect or ideal, although some are certainly more appropriate than others. And making that decision requires expertise, which is the very raison d’être of editorial and peer review.

This is why I have a problem with coupling the word “sound” with methodology, technique, or science.

The word “sound” implies that something is safe, strong, and  secure, like the foundation of a building, the very structure upon which a whole edifice is built.  Sound foundations are solid,  stand firm, and resist direct attacks, while weak foundations crumble over time or cannot withstand the assault of a competing theory or contradictory piece of evidence.

Presidents make frequent use of the “sound foundation” metaphor when talking about the economy during recessions because it gives people hope that, when the building appears to be crumbling — lost jobs, high unemployment, stagnation or deflation — a new economy can be rebuilt upon a strong foundation.

“Sound” also implies that something is healthy and vibrant — science that spawns new hypotheses and directions for further research.  Unsound research is weak, lacks fitness, and is unable to thrive.

Neither of these interpretations of “sound” can be applied to scientific methodology.  Articles reporting negative, confirmatory, or ambiguous results don’t get challenged.  They sit vacant, crumbling and decaying with the passage of time.  Nor is the sound-as-health interpretation a valid comparison: only articles challenging established dogma or reporting, at minimum, positive results are capable of spawning new hypotheses and advancing science.

In sum, the connection made between “sound” and “methodology” creates mental frames that simply do not coincide with how researchers actually evaluate methodology.

But there is more that is bothersome.  By accepting the “sound methodology” metaphor, the only difference between articles published in top journals and those appearing in archival journals is, to paraphrase PLOS ONE, what an editor thought was interesting and would attract readers.  Or, to quote PLOS ONE‘s community organizer, Bora Zivkovic, during one of his regular public rants:

When they say “quality science” they don’t mean “well-done science” like we do, they mean “novel, exciting, sexy, Earth-shaking, mind-boggling, paradigm-shifting science”, i.e., the kind that gets published in GlamourMagz. Then they do a circular logic trick: what is published in GlamourMagz is good science. When they say “peer-review” they don’t mean checking the quality of research, they mean keeping non-sexy research out. When they say “selective” they mean “elitist” and “keeping the non-sexy proletarian science out”

Rationalizations like this may help rally the troops or provide some solace for a rejected author, but they do a disservice to science by promoting an unrealistic view of the scientific method and a corrupted public image of the editorial and peer-review process.

“Sound methodology” suggests an ideal match to a scientific question that never quite exists in empirical science.  For all that the phrase implies, it should be replaced with something much more accurate, like “appropriate” or “persuasive” methodology.  Granted, neither term connotes the same trust and confidence as the word “sound,” although either describes the process more accurately and honestly.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

12 Thoughts on "The Fallacy of ‘Sound’ Science"

Hi Phil

I agree with you in principle. The problem for publishers and the pub/perish community is that, if these standards were adhered to, the number of accepted articles would drop precipitously, including the many that take one piece of research and create “n” articles, or the converse, where it takes many data points, and thus significant time, to fill out a picture rather than publishing early and often.

The impact would ripple through the entire community, from funding agencies to promotion/tenure decisions to the economics of the STEM/STM journals. ’Tis a consummation devoutly to be wished.

This is where things like preprints can come in really handy. Publish a preprint early, make the research transparent, get some feedback from the community, make the necessary adjustments, all BEFORE publication. Publishing “interesting” science early and often that later turns out to be less than solid hurts the entire research process. Research that is intriguing, survives transparent review and remains interesting, now that’s a win.

An interesting article, thanks. A few points to add, if I may:

– PLOS ONE also uses the term “scientifically rigorous”, Scientific Reports says “scientifically valid”; I am not that bothered by precisely which term is used, and don’t think the connotations of soundness are as important as all that.

– I think some of the points made by Collins and Pinch in The Golem are relevant here too: in particular, they argue that it’s no longer possible to get simple agreement on what constitutes sound science when you are dealing with a controversial scientific problem, as competing theories will interpret the same data in different ways. (I’ve probably over-simplified this to a ridiculous level.)

– Speaking as a PLOS ONE academic editor (and, of course, giving my own opinion only, not speaking for the journal in any capacity whatsoever), I think that the journal’s aim of moving away from subjective decisions about paper importance/impact can still let in subjectivity at the back door. In particular, there is a potentially subjective decision to be made as to when a study is complete enough to justify its conclusions; I think there will always be a grey area, where some might argue that extra experiments are needed to achieve scientific soundness, and others would argue that they were in fact increasing impact.

Thanks Jake. You make a good point about how a dispute can arise over methodology, especially if the claim is novel, controversial, or purports to upend the dogma of a field. Multidisciplinary OA megajournals are not attempting to compete for these kinds of papers, but provide a market for negative, confirmatory, and ambiguous studies, what Kuhn and the field of Science and Technology Studies refer to as “normal science.”

Nice essay. Now if we can just get rid of the term “settled science” that impedes new viewpoints.

First of all, what qualifies as “science”? Is psychology a science? Is economics a science? Is cosmology a science (is the Anthropic Principle science or philosophy)? In psychology, is some psychology science and other psychology not? Would a Freudian or Jungian accept a follower of Piaget or Skinner as doing science, and vice versa? A neoclassical economist is not likely to think that a Marxian economist is a scientist. Karl Popper famously thought both Marxian economics and psychoanalysis to be pseudo-science. Political science calls itself a science (though some college departments, as at Princeton and Harvard, more modestly call themselves “Politics” or “Government” departments). Surely, there can be more or less objective judgments about the use of statistical methods, no matter what interpretive framework is used in their deployment. But where do methods end and theories begin? For philosopher/physicists like Pierre Duhem, science is holistic, and the boundaries between observation, method, and theory are fluid and interdependent. I think the whole effort to apply a peer-review approach that assesses only the soundness of an article is philosophically suspect.

Sandy, I couldn’t agree more. The idea of “sound science” harks back to an Enlightenment view that this newfound tool, the scientific method, could be used as a dispassionate and objective measure for understanding the natural world. In reality, empirical science has guidelines but few hard rules.

Phil, I am interested in understanding how the roles of journal publishers and staff differ from those of editors and the editorial board in making these decisions and policies. As suggested by a number of comments, it’s not only a question of whether certain disciplines are “science” but of whether certain researchers consider, or can be considered, exemplars of that domain. Sandy’s examples are excellent in raising this issue. And trying to fit within a journal’s perspective has led, in more than one instance, to the establishment of what might be considered contrarian journals, and even to acrimonious divisions of faculty within a department, not only over methodology but over the validation of theories and even the interpretation of results, in all areas from the hard or natural “sciences” to the social “sciences,” and edging into the humanities.

How many publishers have reviewed the boxes in which they have placed their journals and asked whether the constraints those boxes create need to be assessed, even as more journals see content drift and overlap, and whether this calls for a reassessment of the very idea of journals as boxes for knowledge distribution?

“Sound science” is a term thrown about in environmental policy debates, often to sow doubt, with the term “junk science” often occurring nearby. I happen to be on a work group attempting to update a society position on ‘what is sound science?’ The first point of debate was whether we were on a fool’s errand, trying to define the undefinable. However, alternatives like “good science” or just “science” (as in, it either is or isn’t) seemed to have their own baggage, and the whole point was to have something in response to vague invocations of “sound science.” We hadn’t thought of Phil’s alternative, “contextually appropriate science,” but somehow I don’t think it has the sound-bite ring to it.

Thank you, Chris, for posting this comment. At the heart of defining what is good/sound science is an assumption that there is just ONE SCIENCE. Personally, I believe that there are MANY SCIENCES, each with a set of methods and techniques that are deemed acceptable to its community of researchers and practitioners. So instead of trying to define what is good/sound science, it would be more fruitful to frame the question as “What are acceptable methods and techniques for THIS field of study?” This is still a little rigid, however, as particular questions are better suited to particular methods and techniques. This makes good/sound science a reflection of context and community norms, not an objective criterion that one finds in a textbook. Quite simply, it just ain’t there.
