On September 20, 2024, the MIT Press convened a workshop, Access to Science & Scholarship: An Evidence Base to Support the Future of Open Research Policy, funded by the National Science Foundation and hosted by the American Association for the Advancement of Science. The workshop was driven by these questions:
As the January 2026 implementation date for the Office of Science and Technology Policy’s new public access requirements approaches, how can we ensure that current and future policies lead to the most effective and trustworthy ways to share research? Can the research community develop and fund a research agenda to understand how these changes will affect the research process, and thereby learn how to most effectively advance the open science agenda?
I had the honor of participating in the workshop as a panelist and reflecting on what such a research agenda might look like. Following the release of the workshop report, I interviewed Amy Brand, Director and Publisher of The MIT Press, who is leading this project.
Could you tell us a bit more about the workshop? What were the primary objectives and what led you to develop this project?
The workshop evolved out of an informal working group of faculty and others at MIT that met regularly during the spring and summer of 2023 to discuss the current state of research publishing and bring in outside experts to share their insights.
It had become clear to me in my many conversations with researchers at MIT and elsewhere that most of them did not understand the changes happening in journal publishing, or why. For example, they knew there were now various ways to publish their work open access, and they may have perceived differences in publishing norms from field to field, but they didn’t know how and why new business models like pay-to-publish had come about, or what the larger implications were for publishing and the research enterprise.
That group decided the most useful contribution we could make was to identify open questions in need of more research. The idea was that the research ecosystem would be far better served by publishing policies and practices informed by real data and knowledge: which models work and which don’t, which produce the most transparency and equity, what different models cost and who pays — in short, what is actually best for the overall future of research. Once we had produced our initial white paper featuring a group of open questions about research publishing, we decided it would be helpful to socialize the work and broaden the conversation, which is how the NSF-funded workshop came about.
Speaking personally, I was also invested in educating faculty at MIT about how precarious the situation is, and has been, for non-profit journal publishing, including university presses and smaller scientific societies. That motivation got somewhat lost in the mix of our discussions and the eventual workshop, but the undercurrent remains, for example in helping grow awareness of how large commercial publishers have reaped the most benefits from open access publishing thus far.
The workshop was framed with the concept of open research policy and many of the sessions focused on open access and publishing. Could you share your thoughts on how policy and publishing are connected from your perspective?
Ultimately, these policies drive and shape publishing models, economics, and workflows. When a policy dictates that federally funded research must be publicly accessible, then publishers shift to business models that fulfill those public access requirements. They also experiment where they can to land on models and workflows that ideally sustain their operations and varied business objectives. For reasons of bandwidth, our discussions focused exclusively on US policy, as did the workshop.
What were some of your key take-aways as you reflected on the presentations and discussions at the workshop?
There are many rich takeaways, and I was overall very pleased with the quality of discussion at the workshop. Most of us felt we really needed a full week, not just one day, to dive into these topics.
One key takeaway is that we clearly need more rigorous economic study of open data, along the lines of Johns Hopkins’ RADS (Realities of Academic Data Sharing) Initiative, which examined the costs of public access to research data and found that their researchers spend an average of $30K per grant fulfilling open data requirements. We need much more economic study of this kind because, although everyone agrees a world of open and reusable scientific data would be wonderful, achieving it is in fact extremely complex and expensive, especially when it comes to making data truly useful and meaningful for other researchers, but also with respect to the basic infrastructure needed to preserve data at scale. This is also a great example of what happens when policies are announced before we’ve done the work of modeling their implementation and consequences.
I was also struck by how central peer review is to all of these ongoing changes, and by the challenge of getting peer review right amid so many new publishing models, such as the move to preprint publishing. We all accept that peer review is imperfect and that the system is currently under strain. At the same time, we all agree that improving and modernizing peer review to fit current authoring practices is critical to publishing and to research integrity. There were also some interesting side conversations about appropriate ways to use AI to upgrade the peer review system and lighten the burden on referees.
Although we didn’t talk as much about it, I also found myself thinking throughout the day about how much the research community would benefit from better cross-institutional collaboration on community infrastructure. There’s a lot we could learn from studying the successes and failures of the past in shared scholarly infrastructure in order to build more resilient partnerships to support non-commercial platforms and solutions going forward.
The report from the workshop summarizes the key points from each workshop session and identifies research questions related to the topics discussed. How were the research questions developed and prioritized?
After several months of meeting, discussing, and drafting back in 2023, our working group had initially identified a set of 50 or so priority open questions (as listed in the original report). We wanted this workshop to bring in expertise that could surface additional questions and get us closer to identifying the questions stakeholders consider highest priority and most actionable, whether to study directly or to test through pilots. We left it to the session leads to select their own panelists and shape the conversations as they saw fit. In the end, we distilled the questions you see in the workshop report from the recorded panel discussions, doing our best to focus on the most actionable items.
What immediate actions do you anticipate for advancing the research agenda? Will you be commissioning research projects? Coordinating investigations? How will the workshop outcomes be communicated to policymakers, funders, and other key stakeholders?
At this juncture, we are releasing the report and the workshop videos into the world and publicizing them as widely as possible, with targeted outreach to policymakers, funders, and scholarly communication researchers. Our hope is that they will serve as a blueprint to spur the kind of investigation, piloting, and scenario modeling we point to in the report while also – crucially – attracting needed research funding for these efforts. Our working group does not intend to own or direct this work, though we will continue our engagement, in part through the MIT Libraries Center for Research in Equitable and Open Access, which has the kind of research capacity that the MIT Press itself does not. We’ve also had some conversations with faculty who would like to involve graduate students in projects that address some of these open questions.
I truly hope this effort serves as a catalyst for the broader community. When I look back on various community projects I’ve been involved in over the course of a long and varied career in scholarly communications, it is especially satisfying to have played a role in seeding and shaping projects that then grow in others’ hands.
How will the success of this workshop and its report be measured over time?
Always a good question. And in this case, I think the answer is straightforward. Success will be evident in an increase in research on these critical questions, and by an increase in funding for this research. These are both things that can be measured. To a lesser extent, success will be measured by how this work impacts stakeholder conversations about the future of open research publishing and policy.
I think it’s always a good sign that you’re onto something important when what you thought was a lone voice turns out to be a chorus of voices, and maybe some of those voices started vocalizing the same concerns or ideas at the same time without having coordinated their efforts. I’m seeing more papers now basically saying the same thing – that it is imperative that we build this kind of evidence base and that we urgently need a scientific approach to the future of research publishing and open science policy.