The scientific community studies the impact of variables on processes, plans and runs experiments, and critiques the work of its peers, so it is surprising that there isn’t a more rigorous understanding of what yields successful scientific outcomes. Since the 1940s, when massive investments in the US research community led to significant military breakthroughs, such as radar, the Manhattan Project, and improvements in weather forecasting, the government has continued to provide significant support for basic and applied science. Society has benefited tremendously from these investments, through both anticipated and unexpected applications of the resulting discoveries. Yet for all these investments and the associated successes, our understanding of which environments or approaches most reliably yield success is limited.

This month saw the inaugural International Conference on the Science of Science and Innovation (ICSSI), hosted at the National Academies in Washington, DC, and attended by 150-200 people — primarily researchers and scholars focused on studying how science itself works. Having spent much of my time over the past decade focused on library assessment, altmetrics, and the assessment of new forms of scholarly outputs, I found this meeting refreshing for two reasons. First, the meeting was primarily of and for researchers; second, it was not primarily focused on bibliometrics. Analyses and perspectives were shared through a variety of lenses, including those of economists, funders, researchers, lawyers, and administrators, with each group bringing its own view of what success means.

Unfortunately, because of a conflict with the NASIG conference, I was unable to attend the whole program, but what I did see was compelling. The meeting began with a panel on Innovation and Growth, moderated by Heidi Williams of Stanford University, which was a terrific start to the conference. The panelists’ thoughts ranged from Vannevar Bush’s ideas on the role of science in society, to the productivity and impact of research and development investments, to the impact on scholarly assessment and the barriers to wide adoption of new science.

Dashun Wang (Northwestern U.) welcomes participants to the first ICSSI conference

The US invests roughly 2.5% of its GDP in research and development, according to a report by Melissa Flagg at Georgetown University. While the overall total has been roughly consistent for the past few decades, the proportion provided by federal government funding has been steadily declining. This leads to a troubling situation in which an increasingly large portion of the research conducted in our economy is privately funded, and those results aren’t necessarily disseminated as widely as they likely would be under a publicly funded grant. Those results might also be captured (economically speaking) and constrained by the patent and license rights of those making the investments. If, as Benjamin Jones described during the opening panel, every dollar invested in R&D returns $5.00 to society, then a great deal of value is captured when that research isn’t publicly funded, supported, and eventually released to the world in a way that benefits everyone, though notably not everyone equally. Of course, one should receive the benefits of one’s investments, and corporate investment isn’t necessarily a negative, but the challenge for society grows as knowledge is increasingly held in private hands or, worse, never publicly shared.

Interestingly, the US share of global research investment declined from 2000 to 2019, according to the State of U.S. Science and Engineering, 2022 report from the National Science Foundation. This is not because the US is investing less — it has increased its R&D investments by roughly 7.1% annually over that time — but because the rest of the world is investing more and more. As a percentage of global R&D investment, the US share declined from 25.1% in 2000 to 21.2% in 2019. Importantly, this US growth came not from increased public investment but from significant growth in corporate R&D expenditure. While US government investments in R&D are significant and can have tremendous impact, as exemplified by the discovery and roll-out of the COVID-19 vaccines, the US government is no longer the dominant player in driving the direction of scientific discovery. In a previous Scholarly Kitchen post, I wrote about the implications of these disturbing trends in scientific investment; they show little sign of abating. A side effect of this declining federal share is the lack of a cohesive strategy on national research goals — and, importantly, limits on the power to implement such a strategy if one existed. As Jones stressed, the benefits of this private R&D investment do not accrue to society as a whole, but rather are retained by the institutions making the investment.
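
As a quick gut-check on those figures, consider what they imply about the rest of the world. The short calculation below is mine, not the NSF’s, and uses only the numbers already cited above.

```python
# Back-of-the-envelope: if US R&D grew ~7.1% per year from 2000 to 2019
# while the US share of global R&D fell from 25.1% to 21.2%, what annual
# growth rate does that imply for global R&D investment overall?
us_growth = 1.071                      # ~7.1% annual US growth (NSF figure)
share_2000, share_2019 = 0.251, 0.212  # US share of global R&D (NSF figures)
years = 19

# The share ratio satisfies:
#   share_2019 / share_2000 = (us_growth / global_growth) ** years
global_growth = us_growth * (share_2000 / share_2019) ** (1 / years)
print(f"implied global R&D growth: {global_growth - 1:.1%} per year")  # ~8.1%
```

In other words, even at 7.1% annual growth, the US lost ground because worldwide investment compounded roughly a percentage point faster each year.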

During a panel of funders on the second day, Arthur “Skip” Lupia asked participants to reflect on this point: the average STEM research grant is roughly two times the median US household income. When you submit your federal grant, think about why this project is worth every penny a family earns in a year (or two families’ income, actually). Are you using those resources in the best possible way? What would those families say about the results of your work, knowing that the equivalent of their annual income went to serve your research goals? Relatedly, Rush Holt questioned whether there was a ‘meaningful measure of the output of research funding’, or whether one could be discerned. His concern also focused on the value produced by scholarly research and the growing chasm between the scientific community and the broader society that both funds and benefits from the research being undertaken.

Stepping a bit deeper into the science-of-science weeds, the program also reported on studies of the process of scientific discovery itself, including what appears to make a measurable impact. Some of this research highlighted areas where we could collectively do better at identifying and addressing systematic bias in how research is shared. For example, it was clear from multiple presentations that outputs from female researchers are less likely to be recognized or to achieve the same level of impact, an inequity that must be addressed.

Other interesting research presented included (with apologies to the presenters for my brief descriptions):

1)    Gender-diverse teams generate significantly better results than same-gender teams – presented by Brian Uzzi. Using machine processing of the text of papers to determine novelty, Uzzi and his team analyzed the demographic makeup of the teams producing those results. They found a 9.1% lift in novelty for gender-diverse teams, regardless of the size of the group; a same-gender team must double in size to achieve the same novelty level as a gender-diverse team.

2)    Brian Uzzi also described his team’s work on discerning embedded innovation networks through machine-learning analysis of patent applications. Here, Uzzi and his team trained a machine-learning model to discern which applications would match previous patent-granting and denial decisions. The team also found a network effect in patent applications: generally, stand-alone patents are less successful, whereas patents building on networks of related, supportive technologies are more successful. Uzzi’s team identified patterns that can be used to predict the potential success of a patent application.

3)    The process of scientific discovery is littered with failures, and Yian Yin described the dynamics of repeated failure before a researcher ultimately achieves success. Yin and his colleagues’ model identifies stages of progression or stagnation and describes the dynamics that lead to eventual success or failure. He tested this model against datasets of NIH grant applications, venture-funded startups, and (fascinatingly) terrorist attacks. He found that, in the research process, results didn’t follow a normal distribution, because progress was typically truncated by some deadline.

4)    Misha Teplitskiy presented on the impact of conference presentations on idea dissemination. Using attendees’ noted presence at conference sessions (through services like Sched), Teplitskiy identified the impact a presentation had on whether a participant would eventually circle back to read the associated paper.

5)    Partisan use of scientific results in policy discussions was reported by Alexander Furnas. It might seem obvious that, in our political environment, there are differences in how research is used and applied, with a significant gap in how the political sides use scientific literature to buttress their arguments. Still, seeing this described by Furnas was compelling, and it adds to the body of knowledge about the differences in our political environment. Interestingly, however, this same distinction doesn’t apply equally outside the US.

6)    A study of how productivity differs between similar scientists based on their institutional affiliation was presented by Sam Zhang. Two similarly credentialed researchers at two different institutions, one well-resourced and the other comparatively less so, produce output at different levels, likely due to the availability (or not) of postdoctoral and student labor.

7)    The Turnover Replacement and Career Age Distribution study was discussed by Clara Boothby, who noted that, although we are in an era of demographic shifts, the career path for most scholars has remained relatively stable in most domains for the past 30 years. Boothby studied turnover through the publication records of 3.5 million researchers to discern career patterns across different domains, finding the most active churn among early career researchers.

8)    Work on migration and its impact on research interests was presented by Christian Chacua, who found that, when researchers from Colombia moved to a new location, the focus of their research interests also changed. This was true even in fields that are notionally culture-independent, such as geology.

9)    Intersectionality in scientific output was discussed by Dani Bassett, who began by reviewing a variety of literature showing differential citation rates for papers authored by people with “male” or “white” names. They and their team showed that reference lists tend to include more papers with a white person as first and last author than would be expected if race and ethnicity were unrelated to citation. To address this, their team created a tool that analyzes the predicted gender and race of the first and last authors in a paper’s reference list, in BibTeX format, prior to submission (a minimal sketch of that parsing step follows this list).
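
For readers curious what that reference-list analysis looks like mechanically, here is a minimal sketch of just the parsing step, in Python. The file name refs.bib and the bibtexparser dependency are my illustrative assumptions, not details from the talk, and the name-based gender and race prediction models that Bassett’s team pairs with these names are not shown.

```python
# Sketch: extract first- and last-author given names from a BibTeX file,
# the inputs a tool like the one Bassett described would feed into
# name-based gender/race probability models. Assumes the bibtexparser
# package (1.x API) and a file named refs.bib.
import bibtexparser

def given_name(name: str) -> str:
    """Return the given name from 'Last, First' or 'First Last' forms."""
    parts = name.split(",", 1)[1].split() if "," in name else name.split()
    return parts[0] if parts else ""

with open("refs.bib") as bibfile:
    database = bibtexparser.load(bibfile)

for entry in database.entries:
    # BibTeX joins multiple authors with the literal separator " and "
    authors = [a.strip() for a in entry.get("author", "").split(" and ") if a.strip()]
    if authors:
        # Citation-bias analyses focus on the first- and last-author slots
        print(entry.get("ID", "?"), given_name(authors[0]), given_name(authors[-1]))
```

The first and last author positions are singled out because those are the slots the citation-bias literature, and Bassett’s analysis, focus on.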

This list is refreshingly not dominated by studies of bibliometrics, citation analysis, or impact factors. Not that these are necessarily problematic approaches to these questions; rather, the study of science has arguably been focused on them for too long. Citations have dominated discussions of the scientific study of research because of their simplicity and ubiquity: we often measure what is measurable, not what is important. Citations, and derivative measures such as the Impact Factor and the h-index, have become the dominant approach to measuring a scholar’s output. Collectively, we have used these metrics to determine the quality of a publication, the potential impact of a paper, or, at their worst, the success or failure of a career.

This is not to diminish the value that citation metrics may provide; they can be valuable in their own context. But advances in machine reading, natural language processing, and computational analysis, as described in some of the sessions during the ICSSI meeting, can lead us to use the actual text of research outputs to discern quality, rather than relying on imperfect proxies. Collectively, we could use some innovative thinking about what it means to produce advances in science. If the ICSSI conference has advanced that thinking in novel ways, the meeting will have been a success. If the event is carried forward, and if it continues to advance thinking on these questions, it could be transformative.

NOTE: A shout-out to Christina Pikas for her detailed public notes of Day 1, Day 2, and Day 3 and her summary of the conference, and for the help they provided in preparing this post, as I couldn’t participate in every session.

Todd A Carpenter

Todd Carpenter is Executive Director of the National Information Standards Organization (NISO). He additionally serves in a number of leadership roles across a variety of organizations, including as Chair of the ISO Technical Subcommittee on Identification & Description (ISO TC46/SC9), founding partner of the Coalition for Seamless Access, Past President of FORCE11, Treasurer of the Book Industry Study Group (BISG), and a Director of the Foundation of the Baltimore County Public Library. He also previously served as Treasurer of SSP.
