Science is a process. We tack towards discovery, towards truth, because the process encourages curiosity, critical thinking, experimentation, correction, and, at least in recent years, competition. When it runs properly, the process as a whole, over the course of time, is trustworthy. To be sure, individual scientists misbehave and individual scientific works are riddled with problems, but the process seeks truth.

Tacking towards truth through iteration and error correction is a workable model when scientists are talking only to one another. But in today’s environment, openness brings individual scientific works far more readily into the public discourse. And public discourse is intensely politicized, with science serving in turn as an enemy, a scapegoat, a virtue signal, or a vector for misinformation. I believe our sector is overdue for a conversation about whether our model for scientific scholarly communication is fit for today’s environment, or whether it is increasingly leading to an erosion of public trust in science.

Five Points was a slum on the Lower East Side of Manhattan. Declared in 1858 in the New York Herald a “nest of drunkenness, roguery, debauchery, vice, and pestilence,” the neighborhood was home to a combustible mix of New York’s poorest citizens: recently arrived (predominantly Irish) immigrants, unskilled laborers, and African Americans. Highlighting the district’s renowned chaos and vulgarity, the figures in the painting fight, flirt, and generally misbehave amid dilapidated buildings. Although the artist has not been identified, this image is well known, having been reproduced as a lithograph in an 1855 guide to New York City.
The Five Points, The Metropolitan Museum of Art.

Openness

In earlier eras, the distinction between scientific communication with peers and public communication of science was greater. Over the past two decades, one of the underlying rationales of the open access movement has been that the general public should have ready and free access to the scientific record. As we begin to better understand the second-order consequences of openness, we must grapple more systematically with how they can be addressed. 

One of the most notable effects of the open movement has been an array of mechanisms that provide unmediated access to reviewed — and unreviewed — work. As a result, scientific communication with peers is now not only publicly available but widely exploitable by those who wish to foster misinformation and public discord. A consequence is that the scientific research enterprise — and in particular journal editors and publishers — is becoming responsible not only for facilitating peer-to-peer communications but also for public access. As such, they are grappling with the upstream exploitations and downstream public communications and misinformation that were previously squarely outside their remit. And publishers are also getting cut out of the loop altogether, as scientific findings are increasingly communicated first by press release.

Politicization

Many Americans despair about whether science is taken seriously enough in public life. Certainly, the pandemic has brought home not only the enormous challenges of public communication about science but also the serious consequences of our failures in this respect. We have faced significant challenges in public communication about, and trust in, science — on topics from masks to snake-oil treatments — that have exacerbated the crisis. The chaos around vaccines is especially demoralizing, given that sizable public investments in scientific research, including vaccine science, have provided pharmacological interventions that are halting a new pandemic’s spread while it is still underway. Likewise, hesitancy to dig fearlessly into the origins of the coronavirus is yet another case of politics interfering with what should be societal and scientific goals to prevent a future pandemic. For all the advantages that science has given us in fighting this pandemic, politics has made a mess of it.

To be fair, the divisiveness of scientific communication over the past two years and the abject failures in public communication on scientific issues must be situated in the broader decline in public engagement with and trust in civic institutions. It is related to the crisis of democracy we have seen emerge in a number of North American and European countries in recent years. As the federal government itself warned in the lead-up to last year’s election, efforts to target scholarly publishers are in some cases designed to sow public discord. The challenges of generating trust in science exist within an environment where actors are intentionally working to divide us. Strategies are needed to resist this division, some of which should be led by the scientific sector itself, including our publishers and libraries.

Self-Policing

The problem here is that science itself is not always trustworthy, especially in the role that it is now playing in our society. There are too many examples of scientific misconduct and fraud, and too many failures to prevent them, to ignore. Much of science, perhaps the vast majority of it, is not of concern. But too often, incentives are misaligned with the goal of scientific quality: competition has tremendous benefits in producing excellence, but its downside is that it produces fraud as well. The result is that, taken as a whole and given its role in our society, scientific practice and communication are insufficiently trustworthy. The consequences of this failure extend beyond the current pandemic to other global imperatives such as climate change.

One category of problems is scientific misconduct and fraud, which, it is important to note, is perpetrated by scientists themselves. This category includes scientists who use fraudulent data, inappropriately manipulate images, and otherwise fake experimental results. Publishers have been investing increasingly in blocking bad contributions at the point of submission through editorial review, and more is almost certainly needed, likely a combination of automated and human review. Another form of misconduct is the failure to disclose conflicts of interest, which, notwithstanding efforts by publishers to strengthen disclosure guidelines, continue to be disclosed “too little, too late.”

Beyond individual misconduct, there are also organized and systematic challenges. We are seeing “organized fraud” and “industrialized cheating” to manipulate the scientific record in the service of self-interest. These choreographed efforts include citation malpractice, paper mills, peer review rings, and guest editor frauds. And, even if it does not rise to the level of misconduct, we have seen the use of methods and practices that make substantial portions of at least some fields impossible to reproduce and therefore of dubious validity. Whether individual, organized, or systematic, all of these are threats to scientific integrity.

Overall, it is clear that science is failing to police itself. Some observers hope that “open science” will minimize misconduct and fraud, and, as much as it may help, it seems unlikely to be sufficient. Indeed, a number of cases have been discovered by an “image detective” who has been profiled not only in Nature but also in the New Yorker. Some egregious misconduct is investigated at a university, funder, or national level. What none of this does, however, is prevent misconduct. This is all after-the-fact detection. The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively.

When the editorial process fails to detect fraud or other serious problems in submissions, these submissions are issued publicly and in many cases formally published. Preprint services were not prepared for their role as vectors of misinformation, generating a series of preprint disappointments that have been extensively chronicled in The Geyser. Peer review has failed on too many occasions, with major journals publishing articles about COVID treatments that turned out to be unsupportable. The effect on public trust in science has been just as corrosive as the vaccines-cause-autism scandals of years past, in whose shadow we still shiver. And of course, we also have the cases of journals that appear to provide relatively little review, or none at all, including those that are sometimes termed predatory.

Once fraudulent or otherwise inappropriate materials are published into the scientific record, a retraction is in one sense a good outcome because the official record is corrected — but in another sense it is evidence of failure. There have been important efforts to improve the retraction process in recent years, such as this proposal recently covered in the Scholarly Kitchen, but ultimately I maintain that we need an external body like an airline accident investigation board to investigate these failures and make public recommendations for process and policy improvements.

While the specific issues vary, and specific solutions therefore do as well, at the highest level we must ensure that the right incentives are provided for all participants in scientific scholarship and scholarly communication. Some observers excuse these failures as limited only to a small portion of science. Others will point to them as the understandable consequences of “scholarship as conversation” in which hypotheses are advanced and tested and rebutted and accepted over time — as if scientists are only speaking to their peers and there is no risk of misunderstanding, misinformation, or politicization. As Ed Yong recently wrote in The Atlantic, “Pundits have urged people to ‘listen to the science,’ as if ‘the science’ is a tome of facts and not an amorphous, dynamic entity, born from the collective minds of thousands of individual people who argue and disagree about data that can be interpreted in a range of ways.” And, if science has flaws in its trustworthiness, it is no surprise that it is challenging to generate public trust in its findings.

Our Sector Must Engage 

Let me be clear: The problems I discuss here are caused by a mismatch between the incentives that drive the practice of science and the ways in which openness and politicization are bringing science into the public discourse. While this set of problems is not caused by the publishing sector, in the end scholarly publishing has a responsibility to provide the basis for trust in science. 

In recent years, several major publishing houses have aligned much of their public-good effort around support for the UN’s Sustainable Development Goals. Without taking anything away from the SDGs, which represent some of the most urgent and important thematic focus areas for scientists, this alignment does not address the issue of trust in science.

Another question for universities, funders, and publishers to ask is how they can contribute to trust in science, especially if existing models for scientific scholarly communication are no longer fit for the broader role they now find themselves needing to play. Here are several proposed priorities:

  1. To provide a scientific record so completely trustworthy that it contributes to rebuilding public trust in the institution of science. This would require an array of changes, including incentives that dramatically reduce incidents of scientific fraud and misconduct. 
  2. To ensure the continuing prioritization of scientific openness even as the research security imperative grows more pronounced. This would require more than just hopefulness about “open science” but rather a realistic appraisal of how the split with China and other geopolitical priorities may affect scientific collaboration and communication. 
  3. To provide a user experience that is so seamless and value-added that users choose validated sources of information. This would require that publishers expand their thinking about piracy beyond its place as a business risk and address it as a strategic challenge in ensuring a trusted information environment. 

There is opportunity here for the scholarly publishing sector to take strategic leadership for its role in science and scholarship. Doing so would require a vastly different kind of engagement with academia, working arm in arm with senior research officers, policy makers, funding bodies, and libraries. This in turn would require the major publishing houses to speak with a single leadership voice on topics that have thus far been elements of the competitive landscape. The long-term benefits to science and the public would pay substantial returns.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Discussion

Thanks, Roger – an important call to arms for all of us and I agree with your assertion that publishers must engage with these issues. There is most definitely more that we can and should be doing. That said, I don’t think that we should underestimate the connection between declining trust in science, lack of trust in expertise of all kinds, and the growth of populist authoritarianism around the world. This wave of anti-rationalism has been growing for a long time, particularly in the US where the founding principles of liberty and egalitarianism naturally lead to a resistance to intellectual authority (Tom Nichols writes about this in his excellent OUP book, The Death of Expertise). In other words, I don’t think we can address these issues successfully without understanding the broader social, political and media ecosystem that we’re operating in (and especially the profit-driven social media algorithms). That’s not to let us off the hook as an industry, more to recognize the overwhelming power and challenge of the systems we operate within.

Totally agree, Alison — and I think it’s also important to bear in mind that the wave of anti-rationalism you mention is by no means uniquely connected to the current rise in authoritarian Right-wing populism. For decades, the academic Left has been pushing the ideas that reality itself is socially constructed, that there’s no such thing as historical fact, that “objectivity” is a pretense, that trying to establish “truth” is an inherently racist endeavor, etc. Anti-rationalism seems to be part of a broad social dynamic that has infected thinkers, politicians, and the public across the political spectrum since the early 20th century, and it’s terrifying. (Of course, it’s more terrifying the more power its adherents have, which is why the Trump presidency — and Trump’s continued influence — was, like Lysenkoism before it, more scary than college sociology classes that teach the social construction of reality.)

Thanks, Alison, and thanks, Rick, for picking up on what I was writing about the decline in public engagement with and trust in civic institutions, which I believe is highly related to the crisis of democracy that many countries have faced in recent years. I agree entirely that the larger social, political, and media ecosystems are at the heart of the challenges we face. That said, I think those forces, when they work against us, make it all the more incumbent on the field of science to ensure that it is trustworthy.

Absolutely, Rick. While it’s largely right-wing populists in the ascendancy now, this is a problem across the political spectrum, and rejection of expertise is not limited to Trump voters. The examples you cite frustrate me just as much!

I think right-wing populists are in the ascendancy as a percentage of the general public (generally overestimated), but left-wing ideology has a much greater influence because it has infected academia (generally underestimated).

Excellent post Roger. Skip Lupia, now completing his term as Assistant Director at NSF, gives a fabulous lecture related to the themes you raised. Here’s a version: https://www.youtube.com/watch?v=hLMsylnIOOE . I highly recommend it. Perhaps your post will inspire some movement toward a shared, collaborative vision in this community.

It has appeared to me that the enormous jump in scholarly communication fraud has coincided with the enormous increase in English-language publishing by Chinese scholars, and anyone following the notices in Retraction Watch can’t help but notice the large number of Chinese names in those notices. While “publish or perish”, an invention of the West, has always created a perverse incentive towards fraud, the requirements of the Chinese government on its researchers are so extreme that one really can’t blame the researchers much for doing what they have to do in order to keep their jobs and continue to do good research. Recovering the reputation of “science” has to start with some kind of universal condemnation of the Chinese government’s policies that are polluting the scholarly output of the world. https://www.sixthtone.com/news/1003146/publish-or-perish-the-dark-world-of-chinese-academic-publishing https://www.mdpi.com/2304-6775/4/2/9 https://www.tandfonline.com/doi/abs/10.1080/07294360.2021.1971162

However, the Chinese govt may have started to solve this problem on their own: https://www.mpiwg-berlin.mpg.de/observations/1/end-publish-or-perish-chinas-new-policy-research-evaluation but we have not had enough time to see if it really solves the problem or if the mindset is too baked into an entire generation of scholars now.

As you note, addressing research fraud by “gating” content dissemination does not seem economically viable or realistic.

However, simple steps such as transparent attribution of research contributions offer a more practical and realistic way for journals and preprint servers to immediately improve the quality of their research communication.

From a research communication integrity point of view, it’s shameful that most journals and preprint servers disseminate content with long author lists where individual contributions are unspecified, and most individuals are only identified by an ambiguous text string.

Thanks for the prompts to our industry Roger, this is an interesting read from multiple angles. One practical takeaway that I took from the session you facilitated at the National Academies of Science Summit is that publishers can be doing more to work with research officers at universities around research integrity and publishing ethics. I know COPE has been working to build bridges across institutions and publishers but there is more we can do cross-industry.

Thanks for your important piece Roger. I’d like to add some context from three angles, for what it’s worth—there may be more to discuss here.

First, it’s important to note that science communication really has two main buckets of activity: internally-focused work like grant writing, journal articles, etc., whose main purpose is to communicate science to other scientists; and externally-focused work like blog posts, press releases, SciAm articles, etc. These two buckets are typically viewed as totally separate undertakings, but they’re all science communication. Open solutions straddle this border—more internal work is made available via open today, and more external work as well (including not just preprints, but over 50% of today’s journal articles are published with an open license). Are these solutions as valuable and influential internally and externally? Absolutely. However, the exact dynamics are a work in progress. The fact that science misinformation is picked up by the press may have more to do with the press than with science—i.e., the fact that newsworthy science is the kind that makes bold claims (e.g., hydroxychloroquine). Add to this the fact that Twitter is a huge vehicle for promoting science today (good and bad), and that traditional science journalism collapsed over the last decade, and you have a toxic recipe for more misinformation about science making it into the public discourse.

The second angle is public trust in science. According to data from the University of Chicago’s GSS Data Explorer (https://gssdataexplorer.norc.org), US public confidence in the scientific community hasn’t wavered much between 1972 and 2018, with about 40% (plus or minus 2.4%) of respondents expressing a “great deal” of confidence in science and 46.4% (plus or minus 3%) expressing “only some” confidence. Lest we lament the 40% figure, it’s important to note that only the military enjoys higher confidence numbers; the science numbers are higher than for medicine, the Supreme Court, organized religion, education, and more. We do indeed seem to be witnessing a lot of anti-science noise and amplification at the moment, but time will tell whether this is an actual decline in public confidence in science or whether it’s just feedback from the mic of social media.

And third, science on the whole is remarkably good at self-policing. Only a tiny fraction of the total body of published scientific work is retracted annually. The fact that retractions happen is good—a sign that the community cares about quality. And of course, keeping track of what’s being retracted is important lest our science head off down the wrong track. But I think it’s unfortunate we view this activity as a sign that science is broken and untrustworthy. Again, looking at the numbers, it’s a sign that mistakes were made (and sometimes outright fraud was committed), and that corrections are being implemented. More important than focusing just on retractions, I think, is working to give science the tools to ensure quality, and also to reduce pressures to publish more and produce breakthrough science as opposed to conducting boring replication studies. Improving transparency, replication and reliability, and reducing fraud all lead to better science, and in this effort, open solutions can help. They won’t be a cure-all, but they will help ensure that there’s actual data behind findings, that fake images aren’t used, that hypotheses aren’t retrofitted to align with findings, and that we give more credit for sharing data.

Glenn, I’m not sure we can conclude science is remarkably good at self-policing by looking only at the post-publication retraction rate. Sure, the work doesn’t always make it “into print,” but in some cases this “not in print” work is now widely available, preserved for long-term access with no labeling indicating it was rejected from entering the formal publication mode, and sometimes is changing policy before it ever even gets considered for print … sometimes from the press release alone. It isn’t to say that everything is broken, but in no way should the scholarly communication system rest on its laurels.

Are you referring to preprints Lisa? I agree that we have more work to do to tone down the amount of press attention these papers receive. But regarding the formally printed stuff, there’s a really interesting Oct 2018 Science article on this topic at https://bit.ly/2ZKDTGf. This article is an analysis of Ivan Oransky’s Retraction Watch data, which is now publicly available. According to the article (and Ivan’s data), only about 1 in 10,000 formally published papers are retracted (for a variety of reasons), and this rate has remained roughly steady over the last 10 or so years. To the extent the raw number of retractions has increased, this is both a reflection of the greater number of papers being published and also greater vigilance on the part of publishers (especially prestige journals). Also skewing the data is that a large number of retractions are concentrated with a small number of authors, and about half of retractions happen for reasons other than fraud or plagiarism. I agree with you that science communication shouldn’t rest on its laurels, but I also think our concern over this issue can get misinterpreted by the general public. To wit, our quest for improving the system may sound to the non-science community like proof that science doesn’t know what it’s talking about and is rife with fraud, neither of which is even remotely true, of course.

I would argue that the ratio isn’t what matters, but the absolute number of retracted papers. Think about how many deaths have been caused by just that one paper about vaccines and autism. The ratio would only matter if it were so extreme that the maleficent actors couldn’t find the bad science buried in the mass of good, but clearly they can, at least at the 1:10,000 ratio.

Sure—I agree. One bad paper can inflict a hellish amount of damage to science and society. But realistically, we are never going to eliminate all the fraud, error, and flawed analysis in science. For one, publishers simply aren’t set up (in terms of personnel numbers or expertise) to fact check every article they publish. And even more fundamentally, we rely (and will always rely) to a huge extent on the good will of scientists to be truthful, first and foremost. Teaching and enforcing the norms of proper scientific conduct is beyond the scope of our current science communication efforts.

More to the point of Roger’s article, though, we rely on our science communication norms and systems to report science facts honestly, accurately and without hyperbole. That the Wakefield study resonated with so many parents is tragic but understandable given the failure of this system—a fraudulent study hyped for 12 years (before it was retracted) by the press and anti-vax community. This study “lives” on, and other flawed studies (hydroxychloroquine, horse dewormer, etc.) will also get their 15 minutes of fame, in part due to bad science, but also due to bad science communication.

But, improving the communication that comes out of science gets us only partway to a solution. We also need to focus on improving the ability of the press, public and policy makers to understand how science operates and realize that knowledge advances as more evidence comes to light. Some people have called this science literacy; others have called it critical reading; Carl Bergstrom and Jevin West refer to it in their book as “calling bullshit.” I prefer to call it “regaining control of the science brand.” Our society has co-opted the name “science” and applied it to just about everything, from pet food to exercise bikes, so we create many false equivalencies between these “sciency” commodities and the latest research on climate change. “My science-based shampoo didn’t do squat for my hair volume, so why should I believe what scientists say about vaccines?” Working to educate the public about what science is (and isn’t) may eventually help combat science misinformation—not counterpunching with more facts (which never works), but training people how to think more critically about the world around them, which, after all, is what science brought to society 400 years ago.

I’m referring to preprints, conference papers, conference posters, press releases, etc. All the ways that science is communicated and, dare I say, published.

Scientific communication should never be geared towards individuals who don’t have trust in science. We shouldn’t dumb-down our scientific methods, nor avoid researching things that aren’t popular. Though we do have a lot of vaccine hesitancy here in the U.S. (mainly because of social media and politics), it is in no way connected with folks’ overall views of science, nor the public support for science.

I don’t think anyone is arguing about changing the scientific methods and research choices, but rather about the channels of dissemination for the research. OA and preprints have meant a heck of a lot of good, but parting the paywall curtain for the ignorant public to peek in has a definite downside. The question is whether the good outweighs the harm. It’s not an easy question to answer.

One angle that hasn’t been covered in this discussion: the role of management and employers in the process of scientific communication. Today, for most researchers in universities, their management and employers seem to sit in the background, outsourcing the responsibility for both the reviewing and the dissemination of research to publishers/repository managers et al. If one of their staff publishes something that is later shown to be incorrect, the damage to scientific integrity has been done – as has, by extension, damage to their institution’s brand.

I contrast this with that part of the research community which is largely ignored by the scholarly communications industry* – the research done by IGOs, NGOs and think tanks. The research from these organisations is rarely outsourced to journals, books and repositories; it tends to be published on their websites. This means responsibility for the publication sits squarely with the organisation itself; it has not been outsourced to a third party. (The OECD, where I used to work, even goes to the extent of stating, in the front matter, where responsibility for a publication lies – and it is never the author, it is either an OECD Committee or the OECD’s Secretary-General.)

The point is this: the responsibility sits with whoever employed the author(s), so quality control becomes a management responsibility. And the incentives for management to take this responsibility seriously are strong. Firstly, they want their brand to be influential because they want the messages from the research they conduct to cut through. Secondly, they want to win future funding. Plainly, if they put out poor-quality research their brand will be damaged and they’ll fail on both counts.

Roger argues for publishers to have “a different kind of engagement with academia . . . senior research officers, policy makers, funding bodies and libraries”. But I don’t think that’s enough. I think it’s time for management to step up to the plate and assume a leadership role in the responsibility for what their staff publish and how it is disseminated (and, before you start, no, I don’t think this is incompatible with academic freedom).
*ignored until now, of course. At Coherent Digital we’re bringing the research from IGOs, NGOs and think tanks into the mainstream with Policy Commons (you knew I’d have to get that in somewhere!)

A fascinating idea Toby. How would this NOT impact academic freedom, though? In my stints with non-profit research agencies and IGOs, it’s customary for division heads and sometimes higher to sign off on publications. If university deans or some other administrative officer were henceforth asked to sign off on research (is this what you’re suggesting?), what if they did so with a heavy hand and required changes that materially affected the paper’s conclusions? Not that they would, but imagine a situation where such an official thought that a particular paper was too controversial and would cast the university in a bad light; or, in the other direction, imagine a situation where the university wanted to exaggerate a finding (or its role in a finding) in order to make the university look better. Even the spectre of having university officials (who weren’t conducting peer review) audit and/or edit the work of university researchers would probably be met with howls of protest. And yet without some sort of review and input like this, it’s not likely the university’s legal offices will be willing to put their official imprimatur on research papers. So all this said, how do you see this working in the university environment?

My idea isn’t that management does the signing off (plainly, conflicts of interest etc. would be an issue), rather that they take responsibility for ensuring proper reviewing has been done. They could, for example, appoint an independent review group (or groups for different disciplines). Universities could even come together to appoint review groups. The point is that management gets involved and assumes responsibility for something that they ought to be taking seriously. Note, my suggestion covers both reviewing and dissemination. I find it astonishing that universities are prepared to pay journals thousands of dollars to publish papers when they could self-publish for less. Scale is, of course, needed, so, again, universities could collaborate around publishing platforms. However, this would only be possible if management did another thing: start looking at *what* their researchers have published and stop looking at *where* they’ve been published when considering career advancement. (And ditto for funders).

Apologies for veering off on a tangent but I’m always on the lookout for cool new ideas and I think you have one here, Toby. If I hear you right, the essence of your idea is that participating universities would sign on to some sort of “academic integrity” compact wherein the universities would pledge to ensure that published articles meet some minimum standards such as peer review, data inclusion, accessibility, and other attributes they deem to be important. They would become vested in ensuring that their research is being responsibly conducted, disseminated and preserved—and can share best practices in this regard and compete for the mantle of “most responsible.” This approach would also put a dent in predatory publishing. If universities start paying attention, they certainly won’t support the decision of researchers to publish research in disreputable journals—maybe a small squeeze on academic freedom but a worthwhile one. Is this on the right track? I think this approach dovetails nicely with Roger’s thesis, which calls for engaging publishers and funders in this effort as well. Happy to continue thinking through this directly by email.

I thought it would be quicker but I agree this is the likely future:
“This manuscript submission and management product will be sold to institutions as improving productivity of their researchers by decreasing the effort to manage manuscript submission and review and as improving institutional control over institutional quality metrics (which are heavily dependent on publication metrics). It is even possible to imagine that institutions might centralize the management of this process in ways parallel to the central review and management of grants and the centralization of researcher profile management (e.g., with PURE).” (https://lisahinchliffe.com/2017/02/06/elsevier-predictions/)

I agree that science communication is currently in big trouble. But I am a bit confused about the solutions you propose.
On one side I see the push toward more open and decentralized science (which I totally agree with), and then I see terms like “validated sources of information” and “a single leadership voice” mentioned (which definitely sound like more centralisation).
Could you maybe clarify what you mean by “open science”? We might have a different understanding…
You also mentioned “piracy”, so I’m curious what you think about Sci-Hub. IMO it is a huge benefit to science.
