It’s the responsibility of a competent survey researcher to disclose potential sources of bias and to detail how they may distort one’s findings. A professor of mine once taught me that it was better to reveal your own biases before others did it for you. It’s one of those cautionary tales that most social scientists know deeply, and a lesson worth reiterating here.
The report, “Highlights from the SOAP project survey. What Scientists Think about Open Access Publishing” has been trumpeted recently with much fanfare, and there is much to admire. The survey reflects the cooperation of several publishers, foundations, and research organizations working together on a joint project. It presents results from over 40,000 respondents on the perceptions and behaviors of active scientists with respect to publishing. It extends the dialog on how to support the production and distribution of freely accessible scientific literature.
SOAP’s main conclusions are uncontroversial: Scientists are generally supportive of uninhibited access to the research of other scientists and view access to funds and the lack of high-quality open access journals as barriers to publishing. I’m less comfortable, however, with how the researchers got to these conclusions.
First, the survey was based on a convenience sample, achieved by sending requests to listservs and by directly emailing potential subjects. If you’re like me, when this email came through, you spent your 20 minutes doing something more meaningful than answering another Web-based survey. The profile of the respondents is therefore strongly suggestive of both sampling bias and non-response bias:
> The sources of the largest amount of responses, are, respectively, those of SOAP partners SAGE, Springer and BioMed Central, with 800k, 250k and 170k addresses. The fourth largest mailing was run through Thomson Reuters to 70k authors in fields where, after the first three months of the survey live-time, a relatively low response rate was observed.
Indeed, the majority of active scientists responding to this survey indicated that they have published open access articles, confirming the skewed demographic profile of the respondents. The survey also includes questions that are leading and may invoke acquiescence and social desirability biases. In the case of Q9, the question is also double-barreled:
> Do you think your research field benefits, or would benefit from journals that publish Open Access articles
Not surprisingly, 89% of respondents gave a resounding “yes.” Given the issues I just mentioned surrounding this question, the researchers never pause to question this result, but instead use the factoid to construct a definitive conclusion:
> The most relevant findings of the survey are that around 90% of researchers who answered the survey, tens of thousands, are convinced that open access is beneficial for their research field, directly improving the way the scientific community work. At the same time, our previous study found that only 8-10% of articles are published yearly in open access journals. The origin of this gap is apparently mostly due to funding and to the (perceived) lack of high-quality open access journals in particular fields.
Luckily, the researchers made their dataset available, and I was able to tabulate responses to several questions not covered in their report. I was particularly interested in what factors their respondents felt were most important when selecting a journal for publication. Below is a ranked list of the factors authors rated as either “important” or “extremely important”:
Q 13. What factors are important to you when selecting a journal to publish in?
- Prestige (94%)
- Relevance for community (90%)
- Impact Factor (84%)
- Likelihood of acceptance (79%)
- Positive experience (79%)
- Speed of publication (79%)
- Importance for career (75%)
- Absence of fees (67%)
- Recommendation by colleagues (57%)
- Open Access (45%)
- Copyright policy (36%)
- Organisation policy (36%)
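For readers who want to check these figures against the released dataset, the tabulation itself is straightforward. Here is a minimal sketch in Python; the field names, rating labels, and sample records below are hypothetical placeholders, since the real SOAP data uses its own column coding:

```python
def percent_important(records, factor):
    """Share of respondents who rated `factor` as important or extremely important.

    `records` is a list of dicts with hypothetical keys "factor" and "rating";
    the actual SOAP export encodes these differently.
    """
    ratings = [r["rating"] for r in records if r["factor"] == factor]
    if not ratings:
        return 0.0
    hits = sum(1 for v in ratings if v in ("important", "extremely important"))
    return 100.0 * hits / len(ratings)

# Toy records standing in for rows of the released dataset.
responses = [
    {"factor": "Prestige", "rating": "extremely important"},
    {"factor": "Prestige", "rating": "important"},
    {"factor": "Open Access", "rating": "not important"},
    {"factor": "Open Access", "rating": "important"},
]

print(percent_important(responses, "Prestige"))     # 100.0
print(percent_important(responses, "Open Access"))  # 50.0
```

The same per-factor percentages, computed over the full response file and sorted, reproduce a ranking like the one above.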
Like similar studies of the publishing priorities of scientists, “Prestige,” “Relevance,” and “Impact Factor” sit at the top, while “Open Access,” “Copyright,” and “Organisation policy” occupy the last places. You’ll also note that “Absence of fees” was listed as “important” or “extremely important” by two-thirds of submitting authors, a detail highlighted in last year’s faculty survey by Ithaka S+R. Scientists who self-selected to take a survey on open access publishing seem very much like scientists in general: Everyone wants free access. No one wants to pay or to be told how and where to publish.
And yet the inclusion of Q13 changes the interpretation of the study quite significantly, since it adds a dimension of scientists’ priorities that did not make it into the narrative of the report. While respondents were overwhelmingly supportive of other researchers making their articles freely accessible (the entire field, if possible), they showed little interest in open access and copyright issues when it came to their own articles.
Given the scores of individuals involved in creating, promoting, and analyzing this survey, and the voluntary participation of 40,000 scientists, the researchers missed a great opportunity to contribute valid and generalizable findings to a field that is woefully lacking in objective data. While one should not dismiss the SOAP survey out of hand, we should be critical of what it measures, what the data mean, and how we can conduct better surveys in the future.