A notable science journal implements a new author service: For $5,000, the publisher will make that article freely available (OA) to the world upon publication. A substantial number of authors purchase this service, either by choice or because they were required to do so by their funding agency, and the program is deemed a success.


Several years later, the publisher is interested in knowing whether those articles made freely available performed any better than articles moving through the traditional publication route–in this case, access by subscription. The publisher hires a consulting firm, gathers citation data at a particular point in time, and makes a simple comparison between the OA group and the subscription group.

The results show with remarkable clarity that the OA group significantly outperformed the subscription group. Does that mean that OA was the cause of the difference?

Maybe. But before OA is considered to be a cause, the analyst needs to rule out other potential explanations for the effect. Authors elected to pay $5,000 for the OA service, and that choice alone may signal that there is something very different about the OA group compared to the subscription group. Specifically, the OA article group may:

  1. Represent researchers with better access to research and publication funds than those in the subscription group
  2. Represent research funded by an organization that mandates–and pays for–OA publication
  3. Represent researchers located at institutions (or labs) that generate high-profile work, or
  4. Represent research performed by high-profile researchers

There are many attributes of papers that are hard to quantify, like novelty, relevance, and impact. These are characteristics that may only reside in the minds of readers, but they are associated with things that we can measure, such as funding and location and prior publication history.

Last week, the Nature Publishing Group (NPG) released a report on the citation and download performance of research articles published in Nature Communications–an NPG journal that provides authors with the option of making their article open access (OA) for a fee. The research was commissioned by NPG from the British Research Information Network (RIN), and the report was posted on Figshare. Nature widely promoted the study and even gave the RIN analyst her own blog post.

The results confirm what many uncontrolled observational studies have reported since 2001, when Steve Lawrence published his provocative letter in the journal Nature–that there is an association between freely accessible research and citations. The cause of the association is not clear, but it has been widely assumed that free access increases readership, which leads to an increased chance of discovery and citation. The RIN researchers report:

Overall, articles published OA appear to show a higher number of citations, though the effect is small, and the data provided does not allow us to control for possible confounding effects such as the posting of articles in repositories, the number and location of authors, and the possibility that authors are selecting their ‘best’ papers to publish on OA terms. Similarly, any effect of OA on the timing of citations appears to be small, and we have not been able to control for possible changes such as increased awareness of the journal on the part of both readers and authors. But although the impact on citations is small, the impact of open access publication on HTML views and PDF downloads is large and significant, suggesting increased visibility for the open access papers.

After more than a decade of research on this topic, why is the research community no farther along in settling the access-performance question? Why are publishers still drawn to this question, yet remain unable to provide much evidence or insight clarifying the causal nature of the relationship?

The problem with answering this question is largely methodological. The vast majority of these studies are observational, meaning that the researcher does not attempt to provide an intervention (such as assigning free access to one group of articles) but relies on authors to make their own choices. As a consequence, the researcher is fundamentally unable to distinguish OA effects from self-selection effects. In the RIN/Nature study, OA may provide no beneficial effect at all; the $5,000 fee may simply stratify authors into those who are able and willing to pay and those who are not.

There is a methodological solution to dealing with multiple and confounding sources of causation, and that is the randomized controlled trial. By randomly assigning papers into an intervention (free access) group and a control (subscription access) group, you can ensure that the groups are largely equal, in all respects, at the beginning of the study, with the exception of the intervention. I conducted such an experiment on hundreds of papers published in dozens of journals and, while not a perfect study, it gets us much closer to isolating and measuring the OA effect.
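
For readers unfamiliar with the mechanics, random assignment is simple to implement. Here is a minimal Python sketch; the article identifiers and the 50/50 split are illustrative assumptions, not details of the actual experiment:

```python
import random

def randomize_papers(paper_ids, seed=42):
    """Randomly split a list of paper identifiers into a treatment
    (free access) group and a control (subscription access) group."""
    rng = random.Random(seed)          # fixed seed so the assignment is reproducible
    shuffled = paper_ids[:]            # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    treatment = shuffled[:midpoint]    # articles to be made freely available
    control = shuffled[midpoint:]      # articles left behind the subscription wall
    return treatment, control

# Example: ten hypothetical article identifiers
papers = [f"article-{i}" for i in range(10)]
oa_group, subs_group = randomize_papers(papers)
print("Free access:", oa_group)
print("Subscription:", subs_group)
```

Because chance alone decides which papers become freely available, author prestige, funding, and institutional profile are, on average, spread evenly across the two groups.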

Even without attempting an intervention, there were ways for the RIN analyst to check whether the OA citation advantage could be explained by other causes. She could have looked at the funding sources associated with each paper, the location of the research team, or the h-index of the first and last author. These data are available in the Web of Science–the source of the citation counts used in this study. The analyst could have asked NPG to indicate which articles were rejected by Nature (or the Nature specialist journals) and referred to Nature Communications. As the sample size was small, a simple Google search would have turned up personal, departmental, and lab pages where an article was made freely available, and whether it was deposited in PubMed Central or an institutional repository. Without attempting to gather any additional information, we are left scratching our heads as to whether the citation difference was caused by open access publication or merely associated with open access publication. We are no closer to knowing the answer than we were in 2001.
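
To make the idea concrete, here is a minimal sketch of the kind of balance check I have in mind. The records and the h-index covariate are hypothetical; the point is only that the OA and subscription groups should be compared on measurable covariates before the citation gap is attributed to OA status:

```python
import statistics

# Hypothetical per-article records; the real study gathered no such covariates.
articles = [
    {"oa": True,  "last_author_h_index": 34},
    {"oa": True,  "last_author_h_index": 41},
    {"oa": False, "last_author_h_index": 22},
    {"oa": False, "last_author_h_index": 28},
]

def group_median(records, oa_status, field):
    """Median of a covariate within one access group."""
    return statistics.median(r[field] for r in records if r["oa"] == oa_status)

print("Median h-index, OA:  ", group_median(articles, True, "last_author_h_index"))
print("Median h-index, Subs:", group_median(articles, False, "last_author_h_index"))
```

If the two groups differ substantially on covariates like these, a simple OA-versus-subscription citation comparison tells us little about cause.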

Still, those promoting this study use language that suggests that open access is the cause. For example, RIN executive director Michael Jubb stated for the Times Higher Education that the results added to “the growing body of literature showing that open access is good for article citations and, especially, online visibility.” NPG’s own press release makes a breathlessly bold claim followed by a hint of hesitation:

It’s clear to see that the effect of open research on citations impacts all levels of research positively. We realise that this doesn’t definitively answer the question of whether open access articles are viewed and cited more than subscription articles, but we think this contribution adds to the debate.

After 13 years of research on this topic, are we really still at the debate stage, or is “debate” just a positive spin on a study that promulgates confusion and ambiguity on the topic?

Apart from the marketing surrounding this study, there are other details in the report that concern me. First, the analyst makes a simple comparison based on just one citation observation (total citations, accrued in April 2014) for all articles, despite the differences in their ages. The analyst acknowledges that a more “fruitful and accurate” analysis would have compared articles at a fixed point in their publication lifecycle (e.g. at one and two years), but makes no effort to gather these data [Note: you can extract yearly citation counts from the Web of Science]. Similarly, the analyst was provided with the exact publication date for each article, but ignores these data in favor of simple categorical breakdowns. I’m not even confident that the analyst understood how to code the data, the test she used, or how to interpret its results.
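
For what it is worth, an age-matched comparison is not hard to compute once yearly citation counts are in hand. The sketch below assumes each record carries a publication date and Web of Science-style citation counts by calendar year; the field names and sample records are made up, not the actual Nature Communications data:

```python
from datetime import date

# Hypothetical records: publication date plus citations received per calendar year.
articles = [
    {"id": "paper-A", "oa": True,
     "pub_date": date(2011, 3, 15),
     "citations_by_year": {2011: 2, 2012: 7, 2013: 9, 2014: 4}},
    {"id": "paper-B", "oa": False,
     "pub_date": date(2013, 6, 1),
     "citations_by_year": {2013: 1, 2014: 3}},
]

def citations_in_first_n_years(article, n):
    """Sum citations accrued in the publication year plus the next n-1 calendar
    years, so articles of different ages are compared over the same window."""
    start = article["pub_date"].year
    return sum(count for year, count in article["citations_by_year"].items()
               if start <= year < start + n)

for a in articles:
    group = "OA" if a["oa"] else "Subs"
    print(a["id"], group, "2-year citations:", citations_in_first_n_years(a, 2))
```

Comparing articles over a common window removes article age as a source of noise; comparing raw totals at a single date does not.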

It appears that the analyst got the OA and subscription-access papers mixed up. Figure 5 (should that be Table 5?) reports median citation differences between OA and subscription-access (Subs) papers. Overall, OA papers received 11 citations compared to just 7 citations for Subs, a difference of about 4 citations in favor of OA. The OA citation advantage is consistent across all subject types. However, in Figure 6, the analyst reports negative z-scores for each subject, suggesting that the effect benefits Subs over OA. The study also reports an effect size of -.16, suggesting that OA papers performed worse than subscription-access papers.

I’m not sure why the analyst, the project coordinator at RIN, or those commissioning the study at NPG didn’t pick up on the disparity between the text and the tables, or why the effect size goes in the wrong direction. You don’t need to be a professional statistician to spot the apparent contradiction.

Similarly, the main result of the study–that the citation effect is small–is not supported by the data. The effect is rather large, in fact: about 4 citations across all papers in the study (11 vs. 7 median citations), or nearly 60% more citations in favor of the OA group.

This study relied not on a sample of articles from Nature Communications, but on the entire population of articles–every single research article published from 2010 through 2013. With the entire population of articles, it is not necessary to make a statistical inference from the data. All that was really necessary were some descriptive statistics–like median and interquartile comparisons. Unfortunately, the analyst put more confidence in the statistical test than in the simple comparison. When a statistical test returns wacky or counter-intuitive results, there is usually something wrong with the data or with the statistical model. This should have raised red flags with the analyst or with those who reviewed the report.
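
A descriptive comparison of this kind takes only a few lines of code. The sketch below uses Python's standard library and hypothetical citation counts, not the report's data:

```python
import statistics

def describe(citations):
    """Return the median and interquartile range of a list of citation counts."""
    q1, q2, q3 = statistics.quantiles(citations, n=4)  # quartile cut points
    return {"median": q2, "iqr": (q1, q3)}

# Hypothetical citation counts for the two groups (not the report's data)
oa_citations = [11, 15, 8, 22, 9, 13, 7, 30]
subs_citations = [7, 5, 9, 12, 4, 8, 6, 10]

print("OA:  ", describe(oa_citations))
print("Subs:", describe(subs_citations))
```

With the whole population in hand, differences in these descriptive summaries are the finding; there is no sampling error to test away.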

I appreciate that NPG and RIN agreed that the data should be openly available for debate, but this seems like a feeble excuse for a poorly analyzed study. Those at NPG who commissioned this study should have demanded a lot more from the RIN; as it stands, they missed another opportunity to tackle the access-citation question. While this report was not treated as a peer-reviewed journal article, perhaps it should have been, and held to the rigorous standards for design and reporting that NPG demands of any study worthy of its brand.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

21 Thoughts on "Is Open Access a Cause or an Effect?"

Hi Phil

I’d like to refer your readers to Ellen’s blog (http://blogs.nature.com/ofschemesandmemes/2014/07/30/investigating-the-open-access-citation-advantage) which explains the methods used and the fact that because of the way in which SPSS runs the statistical test, it will only ever produce a negative z value: this tells us nothing about the direction of the effect. The effect size, by the way, is a standardised measure allowing comparison within and across studies: this is what we used to determine whether effects were small, medium or large.

As previously discussed, much of the additional data you say RIN should have investigated we’re not able to provide easily – for example, it would be breaching the understanding of confidentiality between author and publisher to reveal whether an author had been rejected from Nature but accepted by Nature Communications.

We realise that this research doesn’t definitively answer the question of whether open access articles are viewed and cited more than subscription articles, but by releasing the data on Figshare (http://figshare.com/authors/Nature_Communications/598818), we are making it possible for anyone to build on and improve our analysis, and we hope to start a conversation. Already, we’re discussing what more we can do to research this topic, and we appreciate your perspective.

Kind regards

Amy
Corporate Communications Manager
Nature Publishing Group/Palgrave Macmillan

Let me make sure I understand. Because of confidentiality policies, we cannot release the right data. Therefore we will release the wrong data???!!

I’m not sure I would call it “the wrong data”, but it’s definitely inadequate data to answer the question this study is claiming to examine, no matter how much they hedge on the conclusions.

When I download the figshare file I see around 75% of the rows (articles) have a value of #N/A in the download figures. Is this correct? Are the #N/A values equivalent to zero values?

If I assume the #N/A values are equivalent to zero values, then there is not a single article with anything between 1 and 66 downloads after 30 days (or between 1 and 187 after 180 days). That strikes me as an unusual distribution.

Amy, thanks for your response. I’m curious why the communications manager of NPG is defending the statistics used in this study and not the analyst at RIN. If we’re going to have a discussion about the rigor of this study, we need to have a response from the RIN analyst.

Jacob Cohen’s (1988) Effect Size calculation is meant to serve as a rule-of-thumb. Cohen’s d (not “r” as reported in the report–r is usually reserved for Pearson’s Correlation) is calculated as follows:

(Mean of treatment group - Mean of control group) / Standard deviation of the control group

As the treatment mean is larger than the control mean, the result should be positive. In addition, the reason the effect size was so small is that the standard deviation was so huge in this dataset. Why? Because the analyst worked from a single observation taken for all papers at just one point in time. Papers that were three-and-a-half years old were measured at the same time as papers that were just a few months old.

A better measure of effect would be the Multiplicative Effect. For example, the multiplicative effect of OA on paper citations was 1.6 (or about 60% increase for the OA group), which is no small effect.
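
As a rough illustration, the two measures could be computed as follows. The 11 vs. 7 medians come from the report's Figure 5, but the per-article counts below are made up purely to keep the example runnable:

```python
import statistics

def effect_size(treatment, control):
    """Standardized mean difference as described above:
    (mean of treatment - mean of control) / standard deviation of the control group.
    A positive value indicates the treatment group outperformed the control group."""
    return (statistics.mean(treatment) - statistics.mean(control)) / statistics.stdev(control)

def multiplicative_effect(treatment_median, control_median):
    """Ratio of medians; 1.6 means roughly 60% more citations for the treatment group."""
    return treatment_median / control_median

# Made-up per-article citation counts, used only so the example runs
oa = [11, 14, 9, 20, 8, 12]
subs = [7, 6, 9, 11, 5, 8]
print("Effect size:", round(effect_size(oa, subs), 2))
print("Multiplicative effect (medians):", round(multiplicative_effect(11, 7), 2))  # ~1.57
```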

This lack of rigor in bibliometric studies continues to baffle me. Why is it that when you do a study on the scientific literature, it seems acceptable to the field to throw the scientific method in the garbage and to ignore the notion of doing controls?

Without adequate controls, an experiment is meaningless. I can’t tell you how many times I’ve heard a researcher tell me that they spend OA funds on their top papers, but don’t bother spending the money on their mediocre ones. At this year’s Council of Science Editors meeting, UTSA’s Kay Robbins clearly stated this in her talk in the OA session as just one example. What this study does, like every other study claiming OA citation advantage, is show that labs that have more money to spend do work that gets cited more than those with less money to spend, and that when researchers select their top papers for OA status, those papers usually draw more citations than their lesser papers. No surprises there.

As noted in the post, the one study that actually controlled for this, that made a random selection of papers OA, showed no citation difference whatsoever (though enormous differences in usage):
http://www.fasebj.org/content/25/7/2129
It’s important to recognize this. In science, majority does not rule–the truth rules. There’s a growing body of poorly designed, inconclusive studies on OA and citation, and one definitive answer that outweighs them all.

It’s discouraging though, to see normally rigorous scientists touting these sorts of studies because they fit in with their preconceived notions and ideologies, taking an “I’ll see it when I believe it” approach rather than the other way around.

I urge all to consider reading the new edition of David Glass’ book on experimental design–I edited the first edition, and it offers a superb treatise on the concept of controls, why they’re so necessary and how to perform them:
http://cshlpress.com/default.tpl?action=full&cart=14072364846620379&–eqskudatarq=1020&typ=ps&newtitle=Experimental%20Design%20for%20Biologists%2C%20Second%20Edition

I agree that “Without adequate controls, an experiment is meaningless”. But this was not an experiment – it was an observational study. Do you believe all of epidemiology is meaningless? Can we say nothing about smoking and lung cancer unless we induce people to take up smoking? This study replicated a very well-established and uncontroversial association. Yes, it doesn’t establish causation, but then they went to almost comical lengths to point that out.
An example of an actual, meaningless experiment “on the scientific literature” without a control group is Bohannon’s OA sting. But when that was pointed out, you said that controls were “Useful, but not necessary. If I do an experiment in zebrafish, must I also replicate the same experiment in goldfish?”
http://scholarlykitchen.sspnet.org/2013/11/12/post-open-access-sting-an-interview-with-john-bohannon/#comment-116598
Apparently that study was a better fit with your “preconceived notions and ideologies”.

Fair enough, you make a good point. This is not an experiment, it is indeed an observational study. But observational studies also need controls, and proper experimental design. In this case we’re talking about a Case-control Study (https://en.wikipedia.org/wiki/Case-control_study), in which, “two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute.”

When one does these sorts of experiments, great care must be taken in order to select the control group so the results are meaningful.
http://ebp.uga.edu/courses/Chapter%207%20-%20Observational%20and%20quasi-experimental/3%20-%20Cohort%20and%20case%20control%20studies.htm

Case-control studies have two potential biases that do not exist to the same extent in a cohort study. First, you have to pick controls. If the controls are different from the cases (i.e. older, larger, different lifestyle, different habits), that introduces bias. Second, you are always looking backwards in time (retrospective design) to determine prognostic factors. While in some cases you may have good records and little bias (for example, descriptions of the surgery performed and the amount of radiation given), other variables may be subject to significant “recall bias”.

In this study, the control group is poorly chosen. The many variables that differ between the control group and the experimental group here make it very difficult to give much weight to any observational conclusions.

And as you note, observational studies can only show us correlation; they say nothing of causation.

So statements like this are thus problematic:
http://www.nature.com/press_releases/ncomms-report.html
“It’s clear to see that the effect of open research on citations impacts all levels of research positively.”
“This study adds to the growing body of literature showing that open access is good for article citations”
“…we’re confident that the analysis shows that open access has positive effects for both authors and readers.”

All of the above imply or openly state that open access is the causative factor here and that’s going beyond what the data tell you.

It’s even more problematic to do an observational study like this after a definitive experimental study (with rigorous controls against selection bias) has been done to rule out this particular causation (http://www.fasebj.org/content/25/7/2129). This observational study then, is the equivalent of looking at people who carry zippo lighters and concluding that zippos cause lung cancer, despite all the existing laboratory evidence pointing toward smoking as the culprit.

To make things worse, as detailed in the post above, the statistical analysis was inappropriate, inadequate, and incorrectly performed.

As for the Bohannon paper, where he did an experimental manipulation rather than an observational study, my earlier point was that any conclusions that can be drawn from what he did are limited by the controls he employed. His study says nothing about the reliability of the OA journals he tested as compared with subscription journals. It would be wrong to imply (as some mistakenly did) that anything could be understood in that area from Bohannon’s work. However, strictly limited conclusions can be drawn from the experimental population used and the controls performed. The study is not as meaningful as it could have been, but it has some limited meaning.

People wanted to use it to show that OA has poor quality control as compared with subscription access journals, and that is not supported by the data. Other people then tried to dismiss the actual conclusions the data does support by complaining that it did not address the question above. Neither side is correct.

This is discussed in detail in our initial coverage of the effort (http://scholarlykitchen.sspnet.org/2013/10/04/open-access-sting-reveals-deception-missed-opportunities/):

The lack of a control means that it is impossible to say that open access journals, as a group, do a worse job vetting the scientific literature than those operating under a subscription-access model. The study does reveal that many of the new publishers conduct peer review badly, some deceptively, and there is a geographic pattern in where new open access publishers are located.

I would just like to point out how conversations about OA tend to equate scholarship with the hard sciences, and leave out considerations of how the social sciences and humanities fit into this. My suspicion is that OA (author pays) favors the powerful and well-funded, and David’s comment confirms that. Is OA only about NIH- or NSF-funded studies?

It would be nice to see someone do a study of the effects of OA in monograph publishing in the humanities and social sciences.

So what you’re saying is, the best authors are the ones who choose to go OA?

No. I’m saying that the RIN analyst cannot rule out self-selection as an explanation in her study. $5,000/paper for OA services is quite a barrier to participation. The analyst should have compared the OA and Subscription groups to make sure they were comparable before making a simple comparison.

I am expressing my opinion, focused purely on biomedical journals, as a retired information professional with a career across three major biopharma corporations.

Most OA journals are free (subscription, PDF download, reprints, etc.). You get what you pay for. Most of the premier and top-tier biomedical journals (e.g., JAMA, NEJM, Lancet, BMJ) are NOT free, even though some of their articles are open access.

Furthermore, visibility, impact, and peer-reviewed journals are tightly intertwined. Speaking from my last job with a leading HIV drug firm, the most important journal articles are published in NEJM, then JAIDS/AIDS, then the next tier (the third ones I am not going to name).

The targeted readership is HCPs (health care professionals) with decision-making or influencing power over prescriptions. These include doctors, PAs, pharmacists, PBMs, and insurance company formulary board members.

It is simply unthinkable, unacceptable, and unprofessional to cite or show a copy of most open access journal articles to prove a point (efficacy or safety) for a drug.

I sense that things are changing, and I am beginning to see a few decent articles on biomedical subjects in a small number of OA journals. But it will be a LOOONG way to go before any OA biomedical journal would be considered topline.

On a separate note, I wonder how many top-tier universities would place equal weight on articles published in a premier journal and those published in a second-tier OA journal in the tenure-granting process for a professor. What about the same process for a grant application decision? What about the same process for employment decisions?

I respectfully submit that OA journals may be OK as a place to have one’s article published, but they are currently not good enough from the perspective of impact, influence, or importance.

All the analyses dating back to the ISI (Institute for Scientific Information) Impact Factor are indirect measures based purely on statistical manipulation of numbers. Best authors focusing too much on OA journals are misguided and misinformed. OA journals should only be considered as an adjunct or supplementary proof. Fancy numbers do not translate to TRUE VALUE.
