
Scientists, by nature, are a skeptical bunch. They demand that studies follow rigorous methodology, are written dispassionately, and are often subjected to rounds of external critique and revision. No other truth-seeking institution exposes one’s writing to a group of peers whose responsibility it is to look for faults. The end result is a new piece of knowledge that functions as a truth-claim. When enough consistent truth-claims accumulate and a consensus forms, science progresses. Western democracies spend billions of taxpayer dollars to fund science largely because this form of truth-seeking helps define standards of care, fuel commerce, and guide government policy.

Or at least this is the way it is supposed to work.

In their opinion piece appearing in the July issue of Learned Publishing (“The Access Question”), Alice Meadows, Bob Campbell, and Keith Webster argue that when it comes to scientific publishing, we are operating with double standards.

They review five large surveys dealing with access to the scientific literature and point out gross inadequacies in their methodologies, including fundamental problems such as the inability to specify the target population. They go deeper, exposing how question bias can radically alter how scientists respond.

For example, consider the statement, “There is NO problem with access to scientific publication.” Does capitalizing the word “no” for emphasis change how respondents read the item? For many, it does.
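Wording effects like this are empirically testable with a split-ballot design: randomly show each respondent one of the two wordings and compare agreement rates. Here is a minimal sketch in Python; the response counts are entirely hypothetical, invented for illustration rather than taken from any of the surveys discussed here.

```python
# Minimal split-ballot sketch: did the wording shift agreement rates?
# All counts below are hypothetical, purely for illustration.
from statistics import NormalDist

# Respondents randomly shown one of two wordings of the same item:
#   A: "There is no problem with access to scientific publication."
#   B: "There is NO problem with access to scientific publication."
agree_a, n_a = 210, 500   # hypothetical agreements / respondents, wording A
agree_b, n_b = 155, 500   # hypothetical agreements / respondents, wording B

p_a, p_b = agree_a / n_a, agree_b / n_b
p_pool = (agree_a + agree_b) / (n_a + n_b)

# Two-proportion z-test: is the gap larger than sampling noise explains?
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"wording A: {p_a:.1%} agree; wording B: {p_b:.1%} agree")
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

With these made-up counts the test would flag a clear wording effect; the point is that survey designers can, and arguably should, run exactly this kind of check before fielding a loaded item.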

Also consider the leading, double-barreled question posed in the SOAP Survey to gauge what scientists think about open access publishing:

Do you think your research field benefits, or would benefit from journals that publish Open Access articles?

It should not be surprising that different surveys, designed and conducted in different ways, produce radically different results, as any introductory text on survey methods will tell you. What should be surprising is that these surveys are used to guide national science policy. Meadows and her co-authors write:

The poor methodology (including loaded questions) of most of the surveys described above indicates a motivation to achieve a certain result rather than sound evidence for informing policy. This is ironic as the ultimate objective is to develop a better system for access to the peer-reviewed outcome of research. Perhaps such surveys in the future should themselves be properly peer reviewed. Until then those that quote figures from these surveys should do so with care and understanding of how these findings were produced.

If we care that science is conducted with rigor, shouldn’t we demand the same of science policy?

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

10 Thoughts on "Why Do We Allow Poor Science to Guide Policy?"

It is not clear that science policymaking can or should be more scientific, just because science is the subject. Should business policymaking be more businesslike? Art policymaking more artistic? Policy is made by the political system, where advocacy is the method. There are advocacy polls and objective polls, and serious players know the difference. Scientists may despise advocacy as unscientific, but they cannot wish it away. Advocacy is central to democracy.

It is pretty clear to me that policymaking can be more scientific. For a start, the introduction of randomised controlled trials would vastly improve current public policy: http://www.cabinetoffice.gov.uk/sites/default/files/resources/TLA-1906126.pdf. Whether we should displace the current system depends on the results a scientific approach produces.
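As a concrete illustration of the mechanism this comment is describing, the sketch below shows the core of a randomised controlled trial in a few lines of Python: random assignment, then a difference-in-means estimate of the effect. Everything here is simulated; the “true effect” and all outcome values are stipulated for the demo, not drawn from any real trial.

```python
# Minimal RCT sketch: random assignment plus a difference-in-means estimate.
# All data are simulated; TRUE_EFFECT is a stipulated demo parameter.
import random

random.seed(0)

TRUE_EFFECT = 2.0                 # hypothetical effect of the intervention
population = list(range(200))     # 200 hypothetical units (people, schools, ...)

# Random assignment is what licenses a causal reading of the outcome gap.
treated = set(random.sample(population, k=100))

def outcome(unit: int) -> float:
    """Simulated outcome: noisy baseline, shifted if the unit was treated."""
    baseline = random.gauss(10.0, 3.0)
    return baseline + (TRUE_EFFECT if unit in treated else 0.0)

results = {u: outcome(u) for u in population}
treat = [results[u] for u in treated]
control = [results[u] for u in population if u not in treated]

estimate = sum(treat) / len(treat) - sum(control) / len(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

The estimate recovers the stipulated effect only because assignment was random; with self-selected groups, baseline differences would be confounded with the treatment, which is precisely the flaw the loaded surveys above cannot escape.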

Also, it’s not just scientific policymaking, but policymaking in general that should be made more scientific (including the arts). The author was simply pointing out that the tools of science, having been unparalleled in their success for certain domains of knowledge-generation, should be implemented on a wider scale (in this case, to help improve current research into science policy).

You seem to have missed my point, which is that science policymaking is no different from policymaking in general, and not more scientific in any case. In fact you seem to agree. As for making policymaking in general more scientific, how do you propose to do that?

No, I don’t think I missed your point. But you also said, “It is not clear that science policymaking can or should be more scientific.” I was responding with the point that policymaking can and should be more scientific (irrespective of the subject). Also, I already said how I would go about doing this (well, tentative first steps): see the link above and read up on randomised controlled trials.

James and David, I think you are arguing two very different (but related) points. If I understand your positions, James is arguing that the EVIDENCE used to support a particular policy decision needs to be based on rigorous methods (e.g. randomized controlled trials). David is arguing that politics is the process of assembling RHETORICAL TOOLS that advance a particular policy outcome, and that science is merely one, but not the only one, of those rhetorical tools.

It turns out that we are talking about two different levels of policymaking. RCTs can indeed be used to test specific interventions. The US Dept of Education is doing something like this, testing teaching systems, many of which, it seems, do not work. I am talking about policymaking at the Congressional level, funding fusion research for example, where RCTs do not seem applicable.

There is also a fairness problem with RCTs. We could test a 4-month OA mandate by picking a random set of journals and hammering just them to see if they survived. Would that be fair?

I have been an advisor to the SCISIP program, but NSF does not take applications from small businesses, only academics, for most programs. That is an unfortunate message.
