Artificial intelligence (AI) has been creeping into our lives, first taking on complex but narrow tasks like optical character recognition or games like chess and Jeopardy. In recent years, the science of AI has advanced to the point where computers are assuming more meaningful and significant roles, like personal assistants on our phones and self-driving cars.
A similarly game-changing claim was made recently at the Frankfurt Book Fair by Meta, a Canadian startup based in Toronto. Its product, Bibliometric Intelligence, proved far better than seasoned science editors at identifying and selecting the best manuscripts, Meta claimed in an October 17 joint press release with Aries Systems:
Large-scale trials conducted by Meta in partnership with industry demonstrated that Bibliometric Intelligence out-performed tens of thousands of human editors by a factor of 2.5x at predicting article-level impact for new manuscripts, prior to publication. It also performed 2.2x better than the same group of editors at identifying “superstar articles” – those that represent the top 1% of high-impact papers, prior to publication.
If Meta’s claim is true, the traditional model of editorial selection and peer review is ripe for massive disruption. As a senior editor quipped to me by email, “Now I can fire all my editors and run everything by algorithm!”
Intrigued by this significant claim, I wanted to read the results in more detail. The press release contained no link to the study, so I contacted Meta and Aries to request a copy of the paper. My request was denied on the grounds that the company was in the process of publishing the study and that I could get a copy once it was published. In its place, they sent me a brochure.
Coming from a company heavily invested in selling its services to biomedical journal publishers, this response was more significant than the claim to upend the traditional editorial and peer review system. Why? Because scientific statements are supposed to be backed by evidence, not the promise of future evidence.
Meta could have dealt with this issue by releasing a manuscript of its findings, something Academia.edu knew to do when claiming that its commercial repository increased the citation performance of uploaded papers. Nature knew this as well when it commissioned a citation study of open access papers published in Nature Communications. While I publicly disputed both studies, these organizations understood that they could not make bold claims in this market without a public document. No document, no claim. It’s that simple.
Biomedical publishers know this rule intimately, have given it a name (the Ingelfinger Rule), and often go to great lengths to time the release of a major study to coincide with a conference presentation and press release. If Meta was eager to announce its results at the Frankfurt Book Fair, it could have deposited the paper in a preprint server like bioRxiv or arXiv. It has never been easier or cheaper to get results out quickly.
Meta may indeed be conducting groundbreaking AI research that will transform the way scientific papers are selected and evaluated. Nevertheless, by following a marketing-first, results-later approach, the company has signaled to the scientific publishing community that it fundamentally does not understand scientific publishing.
To me, this was a major oversight.