Author’s note: Today we revisit a 2010 post about the concept of ‘soundness’ as it pertains to a scientific paper. When I wrote this post, I was a postdoc in the Department of Communication at Cornell. I had just finished reading a book by the cognitive linguist George Lakoff on how our beliefs are strongly influenced by metaphors. While political scientists are likely familiar with Lakoff’s work, he is not well known in science, where terms and phrases are assumed to be descriptive and objective, not rhetorical or manipulative. I was intrigued by how the term ‘open access’ was used to stand for a multitude of positions and beliefs, and how the phrase ‘level playing field’ was used to justify using library collection funds to pay for open access publication charges. In this post, I attempted to unpack and analyze the phrases ‘sound science’ and ‘sound methods.’
A few details have changed over the last six years: The journal information page at PLOS ONE no longer includes a denunciation of mainstream editorial decision-making. The social media aggregator, FriendFeed, was shut down, and Bora Zivkovic moved from PLOS to Scientific American, where, in 2013, he resigned after a sexual harassment incident.
Can a scientific paper be methodologically sound, but just not report any significant or meaningful results?
The answer to this question seems rather obvious. But before accepting what would appear to be a truism in science, I’d like to explore what “methodologically sound” science (and its variants) actually means and what it implies.
In recent years, a number of publishers have launched open access journals designed to reside at the bottom of their peer review cascade. These journals will accept papers that may not report novel results, as long as they contain a sound methodology.
Manuscripts considered for acceptance in PLOS ONE, for example, are not required to advance one’s field, but are required to be “technically sound.” The scope of BMC Research Notes is exceptionally broad, requiring little more than that a paper be “scientifically sound.” And BMJ Open’s criterion for acceptance is somewhat more positively worded, although still conspicuously vague, requiring that studies be “well-conducted.”
These acceptance criteria wouldn’t be so contentious if they were viewed only in isolation, as a way to promote the values of the journal. But they are not. They are often used as a denunciation of mainstream journals and are clearly dismissive of those who decide the fate of manuscripts. This perspective is best expressed in the information page for PLOS ONE:
Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).
What constitutes a “technically sound” paper is much more nuanced and much less clear than it appears. In fact, you will not find a discussion of what makes a methodology sound in any methods textbook. So I asked several of my colleagues (in the social sciences, biomedical sciences, and information science; two of whom teach methods courses) what “sound methodology” means to them. According to these researchers, a paper may be sound if:
- it uses techniques that are appropriate to the question asked
- it does what it purports to do — in other words, if the researchers claimed they ran a Western blot, there must be some evidence that it was conducted, like an image of the gel
- it treats its data correctly and runs the appropriate analysis
- its conclusions don’t overstate its results
Several of my colleagues provided much broader, gestalt-like answers:
- “It’s complicated.”
- “You have to look at the entire paper.”
- “It all depends upon the context. You can’t be expected to run real-time PCR in the jungle.”
- “Appropriate methodology is what your community accepts as appropriate methodology.”
These answers suggest that evaluating methodology is not a binary decision — right or wrong, sound or unsound — but requires context specific to a field. No method is perfect or ideal, although some are certainly more appropriate than others. And making that decision requires expertise, which is the very raison d’être of editorial and peer review.
This is why I have a problem with coupling the word “sound” with methodology, technique, or science.
The word “sound” implies that something is safe, strong, and secure, like the foundation of a building, the very structure upon which a whole edifice is built. Sound foundations are solid, stand firm, and resist direct attacks, while weak foundations crumble over time or cannot withstand the assault of a competing theory or contradictory piece of evidence.
Presidents make frequent use of the “sound foundation” metaphor when talking about the economy during recessions because it gives people hope that, when the building appears to be crumbling — lost jobs, high unemployment, stagnation or deflation — a new economy can be rebuilt upon a strong foundation.
“Sound” also implies that something is healthy and vibrant — science that spawns new hypotheses and directions for further research. Unsound research is weak, lacks fitness, and is unable to thrive.
Neither of these interpretations of “sound” applies to scientific method. Articles reporting negative, confirmatory, or ambiguous results don’t get challenged. They sit vacant, left to crumble and decay with the passage of time. Nor is the sound-as-health interpretation a valid comparison: only articles challenging established dogma or reporting, at minimum, positive results are capable of spawning new hypotheses and advancing science.
In sum, the connection made between “sound” and “methodology” creates mental frames that simply do not coincide with how researchers actually evaluate methodology.
But there is more that is bothersome. By accepting the “sound methodology” metaphor, the only difference between articles published in top journals and those appearing in archival journals is, to paraphrase PLOS ONE, what an editor thought was interesting and would attract readers. Or, to quote PLOS ONE’s community organizer, Bora Zivkovic, during one of his regular public rants:
When they say “quality science” they don’t mean “well-done science” like we do, they mean “novel, exciting, sexy, Earth-shaking, mind-boggling, paradigm-shifting science”, i.e., the kind that gets published in GlamourMagz. Then they do a circular logic trick: what is published in GlamourMagz is good science. When they say “peer-review” they don’t mean checking the quality of research, they mean keeping non-sexy research out. When they say “selective” they mean “elitist” and “keeping the non-sexy proletarian science out”
Rationalizations like this may help rally the troops or provide some solace for a rejected author, but they do a disservice to science by promoting an unrealistic view of the scientific method and a corrupted public image of the editorial and peer-review process.
“Sound methodology” suggests an ideal match to a scientific question that never quite exists in empirical science. For all that the phrase implies, it should be replaced with something more accurate, like “appropriate” or “persuasive” methodology. Granted, neither word connotes the same trust and confidence as “sound,” but each describes the process more accurately and honestly.