Scientists, by nature, are a skeptical bunch. They demand that studies follow rigorous methodology, are written dispassionately, and are often subjected to rounds of external critique and revision. No other truth-seeking institution exposes one's writing to a group of peers whose responsibility it is to look for faults. The end result is a new piece of knowledge that functions as a truth-claim. When enough consistent truth-claims accumulate and a consensus forms, science makes progress. Western democracies that spend billions of taxpayer dollars to fund science do so largely because this form of truth-seeking helps to define standards of care, fuel commerce, and guide government policy.
Or at least this is the way it is supposed to work.
In their opinion piece appearing in the July issue of Learned Publishing ("The Access Question"), Alice Meadows, Bob Campbell, and Keith Webster argue that when it comes to scientific publishing, we are operating with a double standard.
They review five large surveys dealing with access to the scientific literature and point out gross inadequacies in their methodologies, including fundamental problems such as the inability to specify the target population. They go deeper, exposing how question bias can radically alter how scientists respond to a question.
For example, consider the question, "There is NO problem with access to scientific publication." Does the emphatic capitalization of the word "no" change the meaning of the question? For many respondents, it does.
Also consider the leading, double-barreled question posed in the SOAP Survey to gauge what scientists think about open access publishing:
Do you think your research field benefits, or would benefit from journals that publish Open Access articles?
It should not be surprising that different surveys, designed and conducted in different ways, produce radically different results, as any introductory text on survey methods will tell you. What should be surprising is that these surveys are used to guide national science policy. Meadows writes:
The poor methodology (including loaded questions) of most of the surveys described above indicates a motivation to achieve a certain result rather than sound evidence for informing policy. This is ironic as the ultimate objective is to develop a better system for access to the peer-reviewed outcome of research. Perhaps such surveys in the future should themselves be properly peer reviewed. Until then those that quote figures from these surveys should do so with care and understanding of how these findings were produced.
If we care that science is conducted with rigor, shouldn't we demand the same of science policy?