Editor’s Note: Today’s post is co-authored by Chef Roger Schonfeld and Dylan Ruediger, Program Manager for the Research Enterprise at ITHAKA S+R.

Researchers write research articles for a primary audience of peer researchers and clinicians. This is not new — and author practice does not appear to be changing, given that it is driven by a strong set of academic incentives. But the actual distribution of these research articles has changed substantially. Where once they were readily available only in the major research institutions that subscribed to the print journals in which they were published, today they are increasingly distributed online at no cost to the reader. The result is a steadily growing mismatch between intended audience and actual distribution. This shift, which has been building over time, is insufficiently considered by editors and publishers in their editorial practices and product strategy.

In this piece today, we provide a framework for understanding this mismatch and we discuss some of the challenges that it has fostered. In a subsequent piece, we will review the product landscape intended to bridge parts of this gap. 

Image: photograph of a man reading a book on a subway train. David Nitzsche, “Subway Reading”, CC BY-ND 2.0.

The Actual Distribution Has Expanded Significantly

The actual distribution of scholarship has expanded in at least two important ways over the past two decades. First, as a result of the reduced marginal costs of digital provisioning over the internet (as compared with traditional print distribution), far more educational and research institutions are able to subscribe to far more publications. Second, as a result of open access (OA) and other free access initiatives, a growing share of works of scholarship are now far more widely distributed to the global public, making them available beyond the walls of the academy. This latter effect has changed not only the size but also the nature of the audience.

Policy-makers have very intentionally advanced open access, in particular, with this second objective in mind. Take, for example, last year’s OSTP policy guidance, the Nelson Memo, which repeatedly emphasizes the importance of free public access, with particular call-outs to the rights of taxpayers and access “to the people.” The Nelson Memo emphasizes the extent to which even embargoed public access has increased the readership of the scholarly literature, even as it sets the stage for greater circulation by eliminating embargoes on publications derived from federally funded research. As funders and policy-makers continue to expand public access to scholarship, the mismatch with the intended audience that we discuss in greater detail below could reasonably be seen as their responsibility.

The Mismatch Is Value Neutral

Ultimately, open access policies, business models, and free access initiatives have expanded the distribution of scholarly works dramatically. This may well have produced many of the benefits foreseen by proponents. Still, the resulting mismatch between intended audience and actual distribution poses real challenges as well. Some of these challenges are not the result of any ill intention.

It can be deeply confusing for even the most well-educated non-expert to try to wade through the scholarly literature outside their own field. An illustration can perhaps help. If, over the past decade, you or a loved one have grappled with a serious medical condition, perhaps one that could require surgery, a first port of call has undoubtedly been Google. Such a search can produce a wealth of hits, including from hospitals trying to market their services. It will also bring up research articles studying the advantages and risks of mainstream treatments and their alternatives, including small-sample reports from the field. These studies are a valuable contribution to the scientific literature, serving as building blocks for more powerful research studies, meta-analyses, or systematic reviews. But when a patient or their loved one encounters such a study, it can reinforce their own hopes for alternative treatment or produce doubts that are not supported by anything approaching a scientific consensus.

Clearly, the potential confusion of lay readers is no reason to hit the brakes on open access. The answer here is to consider how to provide a substantially expanded audience with access to trustworthy science that is effectively translated for its needs, and who should bear responsibility for facilitating that translation.

The Mismatch Can Also Be Exploited

Where there is goodwill, translation services may make a substantial difference. But as widespread academic fraud and other factors increasingly erode public trust in science, the mismatch can create an effective vector for the malicious introduction of misinformation. That vector is complicated, but it works roughly as follows. A misleading or false piece of research is introduced into the scholarly record, perhaps through a preprint service or through a formal publication (sometimes in a predatory journal). Because it is published open access, anyone who discovers it can read it. Then, a public actor, such as a prominent politician, points to that item of research as supporting a point of view they are trying to advance. The false piece of research need not have been placed in coordination with the public actor for this vector to be worrisomely effective. While these forms of deceit are not yet endemic, it takes little imagination to recognize their potential for harm.

While publishers cannot control all the factors at play, they are increasingly investing in identifying and controlling fraud, for example through the “integrity hub” initiative as well as through efforts at individual publishers. It is less clear whether and how they can minimize the risk of deliberate misuse of the scholarly record.

Whose Responsibility? 

Rebuilding trust in the scientific record, and by extension in science itself, will rely on addressing the mismatch between intended audience and actual distribution, which has been largely absent from these discussions and initiatives. To date, what dialogue has taken place typically blames authors, rather than the structure of the system, for the resulting challenges. 

For example, there is a steady stream of articles like this one — “The Needless Complexity of Academic Writing.” And Twitter has brought us ironic gems like this one: “what if paywalls are to protect the public from academic jargon”? But while some observers may mock academics, the actual incentives they respond to are clear. Promotion and tenure standards and professional norms encourage authors to address specialist audiences and can discourage writing in accessible prose. Funders likewise incentivize work that is intended, at most, for an audience of peers. One analysis found that jargon is actually quite helpful in grant applications: “A writing style that is structurally complex with fewer common words, but is written like a story and expresses more scientific certainty, correlates with receiving more money from the NSF.” 

Looking Ahead 

Even as the actual distribution of articles is expanding, open access publishers are considering readership and audience less than ever before. They are instead prioritizing authors as a result of the shift to Gold open access business models that many of them are pursuing. One result is that the value of science to the general public is not being emphasized anywhere in the value chain. And trust in science is being pursued as a problem to be solved by addressing shortcomings in research integrity rather than also by ensuring that high-quality, trustworthy, understandable translations of science are available “to the people.”

So, it is possible that the present mismatch will grow only deeper as a result of the new set of public access policies. That said, there are also a growing number of initiatives interested in translating academic research beyond an audience of peers to something accessible by a wider public. In a follow-up post, we will profile some of these services and ask questions about their place in the scholarly communication system — and their sustainability.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Dylan Ruediger

Dylan Ruediger is Program Manager for the Research Enterprise at ITHAKA S+R, where he focuses on exploring research practices and communities.

Discussion

15 Thoughts on "Intended Audience and Actual Distribution: A Growing Mismatch?"

If researchers write for “a primary audience of peer researchers and clinicians,” why do so many papers begin with a paragraph or more spelling out to that audience the general importance of the work? The “primary audience” already knows that, say, understanding malaria is immensely important. Yet the papers begin with descriptions of world death rates, especially among children.

Of course, the authors know that the granting agencies get much of their funds from public coffers and the agencies implicitly demand that authors keep in mind the need to market to the politicians who reflect public perceptions. The latter are much influenced by quick-fix advertisements in the non-science media and wily politicians who, for example, make uninformed pronouncements on COVID-19 therapies.

You mentioned that “Second, as a result of OA and other free access initiatives, a growing share of works of scholarship are now far more widely distributed to the global public, making them available beyond the walls of the academy. This latter effect has changed not only the size but also the nature of the audience.”
Would you be able to share any data showing that OA content is accessed more by laypeople/the general public, compared with paywalled content?
As someone who works full-time on OA journals, I’m not convinced that members of the public who are interested in research are reading articles directly on publishers’ sites. It seems more likely that they get this content distilled through a third party – e.g., a blog post or social media.

However, I’d be very happy to be proven wrong!

Nice to read this article, but there is also a question of access. Why do we mandate that open access publishing be without a subscription? Not a paywall, a subscription. Why is it not allowed to set a subscription at a reasonable price for laypeople/the general public? This should not be compared with paywalled content. The researcher/institution would arrange some kind of subscribers to open with the publisher.
Professional magazines are doing just that, and it is a good way to share content.

Subscribe to Open path for researcher/institution.

Just to correct an autocorrect error in my original comment.

If someone is really interested in research, she/he reads articles directly on the source’s site, i.e., the publisher’s site, a preprint platform, an OA repository, etc. Distilled content can be enough for those interested in news about science, I guess.

Perhaps the upcoming profile of initiatives interested in translating research will include ones that help readers better learn the language of science. Increasingly there seems to be an opportunity for another strategy to address the gap between readers’ readiness to understand the available presentations of science and the content provided in scholarly communications written for those conducting and disseminating research. Does not higher education have a responsibility to raise society’s understanding of scientific reasoning – science literacy, logic, and communication – beginning within the fuller academic community that includes graduates and employees who may remain part of that unintended audience? I look forward to further reflections from the Kitchen on its well-framed topic of the impact of the mismatch.

Hear, hear, Roger. It’s funny: I clicked through to read your post right after finishing a webinar* about interpreting and promoting research relating to the Sustainable Development Goals. I think initiatives like the SDGs have in part been responsible for some recent growth in the “science curious”. It may also derive from the higher expectations of younger generations that they will not be excluded, i.e., people increasingly feel it is their right to be able to understand developments in science (indeed, this is Article 27 of the UN’s declaration of human rights).

Beyond the points you make here, I think the real challenge is a legacy “build it and they will come” mindset. It’s not enough just to add a summary to an article abstract page (or whatever). Broader audiences have never heard of most publishers and certainly aren’t actively going to publisher websites (or institutional repositories, or aggregators, or even Google Scholar) to look for content. It pains me to see time, money and effort going into writing beautiful summaries, creating infographics, videos etc – without commensurate effort put into getting that content to the audiences it is supposed to benefit.

At Kudos we’ve spent 10 years collecting hundreds of thousands of plain language summaries of research. But the really important (unique?) thing we do is to publicize this content, whether as individual summaries, curated collections around topical issues etc – lots of targeted promotional activities to make sure the science curious can easily find and explore research, explained directly by researchers (rather than having been interpreted through the lens of any particular media organization’s agenda, for example).

This is probably part of what you plan to cover next, but I just wanted to flag that it’s not only about the language in which research is shared – it’s also about how and where it is (or isn’t) promoted. As a sector, we need to tackle these problems together.

*https://info.growkudos.com/landing/knowledge-cooperatives-benefits-for-publishers-dl if anyone wants to watch / read!

I disagree with the basic assumptions of the article and find it very speculative – the claim that “lay people” are now getting lost in the OA forest in a big way, and perhaps coming to harm, should be better evidenced rather than just asserted.

What is more obvious, after all:
– Not only has the number of (accessible) scientific publications increased, but so has the number of scientifically trained people. They use publications for professional purposes, as engaged citizens, or for research in less formalized settings than universities. “Peers” are thus found to an ever greater extent outside universities, and “the wider public” should not be seen as poor relations who only ever half understand and therefore need help through “translation”.
– Not only humans read texts, but also machines, whether in science, in business, or at NGOs. These machines do not care about jargon; they use the texts as data.
– Funding of research, especially from the government, is increasingly addressing all these dimensions and no longer just traditional science.

The problems with disinformation and fraud are hardly due to too much access to research, but – among other things – to two points:
– Perverse incentive systems in science that reward crisp theses and correspondingly “optimized” measurement data.
– The inability of journals to adequately reflect today’s data-driven science – the path from lab to article has become too long.

Especially in data-driven fields, science would need an infrastructure more similar to a “Science GitHub” than to publishing products. That would really advance quality-assured research and would at least make the misuse of, e.g., preprint servers for politically motivated forgeries more difficult.

But then the question of “translation” would presumably also arise anew for everyone who is not directly involved in the research process.

It’s fascinating to me how often commentators, like the chefs here, worry about potential imaginary harms caused by too much access to knowledge (especially in the hands of–gasp!–the public), when the most devastating case of academic fraud in our time came from a prestigious, highly selective, subscription journal with full peer review. The Lancet published Andrew Wakefield’s fraud in 1998, 4 years before the OA movement began. This wasn’t a pre-print, nor was it particularly accessible outside subscriptions – but neither fact inhibited its use by politicians, anti-science movements, and other bad actors.

I doubt anyone can find a preprint, OA journal article, or ‘predatory’ publication responsible for anything remotely comparable to Wakefield’s catastrophic impact on global public health. If anything, we should be speculating on the potential harm caused by implicitly trusting the work of elite paywalled journals, given how the reputation of the Lancet (& Elsevier) validated and amplified Wakefield’s fraud.

Fostering uncertainty and doubt around access to knowledge does nothing to address the big, systemic issues facing academia and society as a whole, like research fraud, mistrust in science/expertise, or mis- and dis-information.

Hi Matt,
Speaking for myself, I want to emphasize that I in no way feel that open is the only issue here. Rather, I see it as part of a web of changes to trust in civic institutions, expertise, and the scientific record that we need to address in a variety of ways. I’m glad you raised the Lancet/vaccines/autism case in particular, which I agree was a singular failure, although I suspect that some of the cases during the pandemic came close to it, or perhaps even surpassed it, in terms of their own misinformation, politicization, and catastrophic outcomes. That said, I cannot agree with you that we should look past the effects of free/public/open access as one important ingredient in the changing environment. It is part of a new normal that creates a societal obligation to improve how scientific information is used by a broader general public.

By the way, in case you haven’t seen it, I’ve written more broadly about the questions of whether our current approaches to scientific communication are fit for purpose (including the Lancet case which garners a brief mention):
https://scholarlykitchen.sspnet.org/2021/11/01/is-scientific-communication-fit-for-purpose/

Thanks,
Roger

Ah yes, the Lancet is a singular failure. Just like every RetractionWatch entry for one of the big established journals is a singular failure. Just like the many coincidental singular failures which show a correlation between impact factor and retraction rate (see https://doi.org/10.3389/fnhum.2013.00291 & other work by Brembs, et al). Failures in OA publishing, of course, are not singular confluences of unfortunate events – they are systemic issues justifying caution and concern whenever OA initiatives show signs of pushing towards change.

Sarcasm aside, I cannot agree that expanding access to knowledge is itself a problem, nor that it is somehow a driver of political misuse or your other speculative harms. Take your healthcare example. Yes, some patients might find fringe research or alternative information that misrepresents their condition – but most problematic information wasn’t paywalled to begin with. (See also: legitimate newspapers erecting paywalls around their “good” journalism, while misinformation purveyors share widely and freely.) If anything, the changes you worry about should *improve* the ratio of valid-to-fringe medical information available to a patient – unless you’re arguing that subscription research is less valid?

Similarly, a patient might find scholarly information that leads them to question their doctor – but in the U.S. healthcare system, medical professionals and insurers routinely minimize or ignore the experiences and pain of people whose bodies differ from the (white, male, able-bodied) default. The information problem faced by patients is NOT greater access to medical research – it’s healthcare experts who lack the training, incentive, and/or desire to engage with patients as equals.

The reality is that the public can access all kinds of information already – the internet genie is out of the bottle. The result of subscription gatekeeping is merely keeping a portion of that information out of circulation – a portion which, according to the gatekeepers, is the most carefully-vetted, reliable, and important new knowledge. It is rank elitism to pretend that fostering ignorance in this way actually benefits the people left outside the gates.

Hi Matt, I’m sorry but I think I’ve lost the thread of what we are disagreeing about. You said that the Lancet case was the “the most devastating case of academic fraud in our time.” I agreed that it was a “singular [exceptionally good or great; remarkable] failure.” And I also think we are both agreeing that the world has changed dramatically and that open access has done lots of good. Indeed, Dylan and I specifically emphasized that to the extent open access is a contributing factor it is still “no reason to hit the brakes on open access.” I think we agree on that also. Where we seem to differ is that in this piece we argued that more work needs to be done to help translate the work of academic science/medicine to a broader audience and that this work is more important than ever given the expanded audience to these materials. Am I understanding that correctly?

Of course, the information problem that we mentioned in this piece is not the only one that patients face, nor did we suggest it is. But it does exist, in my own personal experience, for well-educated and privileged patients and I know for many others as well. And I can’t understand why we would want to ignore that.

Apologies for the “singular” confusion – I read a different sense of “singular”, as “unusual” or “a special case”, in contrast to the way individual instances of failure in OA publishing are routinely used to indict entire publishers or publishing models. (Peer review failure in an Elsevier journal? An unfortunate accident in the review process. Peer review failure in PLOS or Frontiers? Can’t trust the entire publisher or OA publishing in general.)

I am happy to agree with you that OA efforts are leading to more people having access to academic work, what you describe as a “mismatch” in distribution. Where I disagree is with your proposition that this “mismatch” is a cause or driver of the issues you raise. In paragraph 6 you worry about healthcare misinformation, and in paragraph 8 about political misuse of research – but as the Lancet issue demonstrates, *neither scenario requires or is particularly exacerbated by open access.*

Put every OA article behind a paywall, roll back every single funder mandate . . . and neither of your problems will improve. Patients will still be using the internet and finding misleading or false information (crystals cure cancer!), frequently designed to look or sound like scholarly work; they just won’t have access to any actual research studies, of any quality. Similarly, politicians will still find lots of mis/dis-information and think-tank justifications for harmful policies (“we found no link between cigarettes and lung cancer”) – and the few journalists who still try to fact-check political claims with other research will run into paywalls.

My position is that the problems you describe are the result of the internet revolution, and a side effect of disappearing barriers to producing and sharing information across all of society. If OA is anything in this context, it is an opportunity for traditionally-respected experts to compete with the mis- and dis-information already circulating.

So what I take away when I read this piece, despite your careful apophasis, is the following argument: OA initiatives are causing a mismatch in research distribution, and that mismatch is causing problems; therefore, OA initiatives are causing problems. It follows that as OA initiatives grow, the problems will grow too – so if we don’t want these problems to worsen, what is the natural conclusion? Why, perhaps we should stop hitting the gas on OA – maybe we should even hit the brakes!

That’s why I keep saying that OA is not the problem. Increased access to academic information (your “mismatch”) is not the problem. Suggesting that these problems arise because the wrong kind of people have greater access to scholarly work is, in my view, both incorrect and elitist.

Hi Roger. I love the topic here, but find myself sort of echoing Matt’s sentiments. This topic is worth exploring much more than it has been explored, and getting a bead on it is important to the future of science communication. But fundamentally, I think your argument needs a stronger evidentiary foundation. Much more research is being produced today than 10 years ago, and 10 years before that—we all know the trends. And of this, much more is being made available via open access. But beyond this, the numbers get fuzzy. Is the uptake of research in each field increasing? Hard to say. Maybe we measure this by citation rates? Are open access works being cited more by researchers? The answer appears to be no. Is the public accessing these articles at a significantly higher rate? I haven’t seen any data to suggest this (maybe it’s out there?); it stands to reason that more people outside research are finding articles, but are they reading these articles at a higher rate than before? The historical snapshots in various research articles point to about 75% of research paper uptake being from researchers and students, and 25% from governments, industry, educators, and the general public.

To your point, though, you’re arguing (if I’m reading your essay correctly) that if more people are finding these papers in the wild and attempting to use them, then we should make more of an effort to make these papers understandable. Maybe AI will have a role in helping to make complicated biotech and astrophysics articles more understandable to the lay public, but researchers won’t do this type of writing on their own—they have no incentive to do so (and are more often penalized for “dumbing down” their prose). Furthermore, if the purpose here is to head off misunderstanding and misuse, this simplification alone won’t solve the problem. This discussion thread has mentioned the Wakefield study, but in fact the history of science communication is really a nonstop contest of wills between what scientists are debating and what the public wants to fit into its narrative. When Darwin’s work was published in the late 1800s, it spawned 50 years of horrifying work—both in science and in the public mainstream—on eugenics, justifying everything from restrictive social laws to mass sterilization to mass murder. The public has always interpreted science in its own way to fit its own narratives, as we continue to see today (anti-vaxxers, climate change deniers, weight loss gurus, etc.). Complicating this dynamic is the fact that, at least over the last generation, an entire industry of think tanks has evolved which is dedicated to undermining public confidence in science by not just misunderstanding science, but deliberately misrepresenting it (see, for example, Merchants of Doubt, where paid industry “experts” threw a wrench into efforts to curtail smoking, acid rain, and more).

There’s a huge challenge out there, to be sure, but the answers will probably have less to do with improving public literacy through more readable journals, and more to do with science communication 101 stuff—improving K-5 science education, reducing the partisanship in science policy (by enlisting social networks in the science communication effort, for example), changing the tone of science communication from lecturing to partnering, and even curtailing (by legislation? by withdrawing tax-exempt status?) disinformation factories. Journals are just a small cog in the scicomm wheel—important to researchers, but probably not a linchpin in the broader conversation. Continuing to improve journals is important, I agree, but for whom and to what end are some really meaty questions that need a lot more exploration. Thanks again for raising this issue.
