Editor’s Note: Samantha Green is Director of Product Marketing at Silverchair.

When something scares me, I don’t hide — I learn. These days, artificial intelligence is that monster under the bed — unsettling, powerful, and poised to reshape our world. It scares me for multiple reasons: because change in general is hard and uncomfortable, because seismic shifts like this have a historical pattern of leaving too many people behind, and because, on a human level, I value connection, creativity, and curiosity.

Fear isn’t just a warning; it’s a prompt for deeper investigation. The research community is built on turning uncertainty into discovery — a model we need now more than ever.

History shows us that fear, when paired with curiosity and discovery, can drive meaningful progress. I’m reminded of the Luddites — a group often misunderstood, but whose story offers a powerful lesson for our AI moment.

Remember the Luddites

I’ve been hearing the term Luddite applied to people who are overwhelmed by AI and unsure how to use it, or who perhaps share my fear of its risks. It suggests someone who is reactive, who buries their head in the sand at the prospect of change.

But that’s not what a Luddite is.

During the Industrial Revolution, as factories and machines automated many aspects of textile labor, the Luddite movement fought for the rights of workers (see more from Kyle Chayka or the UK National Archives). They fought for better conditions and for progress that centered on the ability of workers to earn a living wage and support their families. Rather than being reactive to change, the Luddites were proactive: they saw the impact these industrial machines would have, not just on economic progress, but on the lives and well-being of their communities.

Luddites were protesting changes they thought would make their lives much worse, changes that were part of a revolution being celebrated as progress. They were fighting for economic justice, not against progress itself.

Today, many of the world’s largest technology companies are forecasting further layoffs as AI investment grows. The numbers are staggering: nearly 150,000 staff members were let go from tech companies in 2024 as those companies realigned around AI.

Elsewhere, and perhaps closer to home for many of us in scholarly publishing, copyright is up for debate. A federal judge in San Francisco ruled that using copyrighted books to train AI systems constitutes fair use. Meanwhile, in Washington, D.C., a federal appeals court confirmed that artwork produced by artificial intelligence without human involvement is not eligible for copyright protection under U.S. law, affirming that only works with human authors qualify for such protection.

Like the Luddites, we stand at the threshold of a technological revolution. Their fight wasn’t against progress, but for progress with a conscience — a reminder that every leap forward should be guided by the question: not just can we, but should we?

Image: Jeff Goldblum in front of a Jurassic World logo. Used under CC BY license from Heute.

Can vs. Should

Thinking about these conflicts, I am reminded of the classic Jurassic Park observation: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” It’s a strong reminder to support funding in the humanities, but it’s also a reminder of the power of critical thinking.

Innovative, blue-sky thinking (the can) is critical to the scientific process, but we need to balance it with applied thinking (the should).

This balance — between curiosity and rigor — is exactly where the research ecosystem excels. Built on rigorous inquiry and ethical reflection, it’s uniquely positioned to lead the global conversation toward an AI future that serves humanity and our environment, not just markets.

Ethical AI Leadership

Today’s moment calls for thoughtful, evidence-based leadership: AI will reshape our world, but who steers it matters. Tech giants race to deploy AI, yet the research ecosystem — built on ethics, collaboration, and accountability — is uniquely equipped to guide this transformation.

Universities and scholarly publishers are built for accountability and collaboration. Their rigorous peer review and interdisciplinary culture stand in stark contrast to tech’s “move fast and break things” ethos. If we want AI applications that uplift humanity, not just markets, we must let the research community lead — challenging, questioning, and guiding technology every step of the way.

Crucially, research thrives on interdisciplinary collaboration. By uniting science and the humanities, economists and ethicists, technologists and sociologists, the academy creates holistic solutions that balance technical prowess with human values. The research model is inherently self-correcting: diverse experts challenge conclusions, refine methods, and share findings openly. This ongoing debate stands in stark contrast to corporate development.

On top of that, AI’s rapid growth carries a heavy environmental price: AI data centers consume more power each year, and each AI query is roughly 7 to 10 times more energy-intensive than a Google search.

AI’s future hinges not on raw capability but on wise application. Adding the Luddite lesson: meaningful progress demands that we challenge technologies that threaten our communities, even as we harness them for good. The research industry has the structure, the ethical backbone, and the collaborative spirit to bridge what we can do with what we should do as AI advances. Rather than chasing profit and prestige, AI can instead empower our ability to flourish.

Just as fear can spark discovery and history can guide our choices, our collective future with AI depends on who leads — and how. Industry initiatives like the Committee on Publication Ethics (COPE) offer a blueprint for the commitment needed to take ownership and address systemic flaws. We can also see examples of community-owned infrastructure, and a focus on transparency and accountability, in Crossref and the Research Organization Registry (ROR). The rigor and responsibility required for ethical AI innovation already exist within the framework of this industry. Ultimately, if we want technology to reflect our highest values, empowering the research community to set the pace and the principles, rather than ceding them to competing industries, is our best bet.

Samantha Green

Samantha Green is the Director of Product Marketing at Silverchair, where she is responsible for communicating the impact of the Silverchair Platform, as well as ScholarOne Manuscripts and Conferences. Before joining Silverchair, she held various roles over the last decade at organizations including Wiley and Morressier.

Discussion

4 Thoughts on "Guest Post — Fear, Learning, and Luddites: Opportunities to Lead the AI Revolution"

It’s also worth remembering _why_ we have this distorted historical view of the Luddites. Who controlled the narrative of how the Luddites are remembered? It was the factory-owning class, the technocrats of their day, who saw a challenge to their bottom line and did everything they could to discredit it, using the tools at their disposal – a fledgling media industry owned and controlled by that same ruling class.

Thoughtful — and essential — piece on a topic that affects us all, with smart recommendations for strategies to effectively manage the flood of changes AI will bring. Well done!

This is a very rosy view of the Luddites. There are plenty of historical documents in the UK National Archives giving eye-witness accounts of mobs of hand-loom weavers breaking into factories in Yorkshire and smashing the power-loom equipment which threatened their livelihoods.
Perhaps if there’s an inaccuracy in calling today’s AI sceptics Luddites, it’s because they are unlikely to break into a cloud data centre and smash the servers. Yet.

There’s an interesting and overt bias in this article: that the research community is represented only by universities, and that corporations are merely for-profit tech outfits who “move fast and break things” but do not do research. It follows, in this framing, that only universities and academia can provide the ethical balance we need moving forward.

This thinking is widespread. I believe we need a much bigger tent. The economy can be separated into three sectors: not-for-profit, for-profit, and government. Looked at this way, money flows from for-profit organizations to the government through taxes, and the government then allocates it to many not-for-profits, including universities, through grants and contracts. Not-for-profits also earn their own income directly through memberships, publishing, meetings, and a creative mix of many other activities, but they do not support the tax base.

This thinking ignores the incredible amount of research and publication that is done within other organizations but never shared beyond the organization itself. Internal corporate research powers much of the innovation and application in our world economy. Some of this work is classified on behalf of government agencies; much more is proprietary to the laboratories of pharmaceutical, chemical, and software and hardware (tech) companies, think tanks, and many other organizations, large and small, that have not shared their findings outside their own walls. It is primarily university researchers, driven by the pressure of tenure, who share research as part of their regular work.

It was not always so. A scant 40 years ago, academia, government, and corporations worked together through grants and partnerships to produce research and develop potentially implementable technology.

It may be time again to encourage and foster working together. Instead of depending on government grants, we should look for ways to work with tax-paying organizations. And corporations, rather than sending their research and development offshore, could find willing departments at universities to do that research.

This rearrangement would require new thinking and accommodation. University overhead is very high. Awareness of industrial research done in for-profit organizations is low in academia because those researchers don’t publish often. But they are not second class; they just have different drivers to measure their success.

The only way to ensure AI is adopted and utilized healthily is through collaboration among all sectors of our society.

It is also interesting to note that the writer is employed by a for-profit organization that has recently deployed several AI initiatives to its clients.
