I recently spent the better part of a work day reading Richard Poynder’s 87-page treatise on the current status of open access. Even as I printed it out, so as to protect myself from any digital distraction while reading, I wondered whether reading the full text was in fact the best use of my time. Was there an executive summary that might suffice? Could I skim it and just pick up the general gist of his argument? Truthfully, the answer to both questions turned out to be no. It was a substantive piece, thoroughly documented via footnotes as well as embedded links. Clearly, a thorough reading was going to require attention and time. Did I have either?

I was not the only person who reacted to the length of Poynder’s “ebook.” Others were having to make the same decision about whether the time spent reading it would be well invested. Although I hadn’t realized something was in the works at the Scholarly Kitchen, Rick Anderson, Associate University Librarian at the University of Utah, had already done some of the heavy lifting of evaluation (see here). On Twitter, a researcher asked for the TL;DR version, and Poynder quickly referred him to Digital Koans, where a single concluding paragraph had been selected to sum up what he felt the meaty essay covered.


But even so, someone else tweeted that, no matter how worthwhile the content, they could hardly hand an 87-page document to their provost and expect them to read something of that length. The time commitment required to consume such dense material would not seem justifiable unless the topic was one with which the provost was already deeply concerned.

This gives me pause, because how we view the task of reading, how much time we allocate to it, and the criteria for determining what is worthy of being read continue to be challenges, ones with which many professionals wrestle on a daily or weekly basis, as Scholarly Kitchen Chef Karin Wulf notes here and here.

There was the librarian sighing over the need to read and review two articles on his Friday evening and wondering whether there weren’t better ways to wade through the process. There was the diverse set of practices and attitudes among a group of eight academics invited by a reporter from the Times Higher Education (paywalled article) to discuss how they managed their reading. Given the demands of real life, how much reading is feasible? The group included Verity Archer of Federation University in Australia, who referenced the concept of “time privilege”: the fact that those with the greatest flexibility in their schedules are usually the most privileged when it comes to reading. The early career researchers who most need the time to read and absorb the literature are generally the ones most weighted down with teaching and administrative tasks. Women who are primary caregivers outside of the office tend to push the work of reading into their evening or weekend leisure hours. If reading is part of one’s day job, in Archer’s view, then the available hours in the workday should allow for it.

A chemist noted that, while he tried to keep a weekly reading commitment in order to stay current, he’d largely given up searching databases and preprint servers, though he did praise the American Chemical Society’s mobile app, which gave him access to personalized content that he would then read during his daily commute. Others referenced irregular reading habits unless faced with a grant or syllabus deadline, at which point they would do a spell of binge-reading. The hesitation associated with that practice was summed up well by David A. Sanders of Purdue when he wrote, “We should…resist the urge to promote research results that we have not personally evaluated.”

A humanist quoted in the Times Higher Ed piece wrote that the question of determining what to read was “now infinitely more complex in the age of digital and computational possibilities”.  

As a related point, in May 2019, Science Magazine published a piece critical of scientists who delegate the work of peer review to their research assistants. Those research assistants were essentially serving as readers and ghostwriters, producing reviews of manuscripts that had been sent to a higher-ranking academic or principal investigator (PI) for assessment, because the PI apparently had no time to read them.

One is forced to ask, with a certain degree of both humor and exasperation, whether anyone is doing any reading at all these days. Is it all “TL;DR”? And, with all the discussions of artificial intelligence in 2019, might AI be a reasonable way to bridge the gap?

The Danish AI company UNSILO recently reported results from its 2019 survey on the acceptance and usage of AI in academic publishing. It found that publishers have hitherto focused on how AI might solve their own problems rather than those of the research community. As noted on page 8 of the report, “The primary perceived benefit of AI was that it could save time. This could be seen as evidence of a new realism among publishers, since the thinking is presumably to apply AI tools to relatively straightforward processes that could be completed faster with the aid of a machine, such as the identification of relevant articles for a manuscript submission, or finding potential peer reviewers who have authored papers on similar topics to a manuscript submission.” While I see this as a sensible use of AI by content and platform providers, the pragmatic reality suggests an uncomfortable possibility. There is no magic solution. AI isn’t currently up to the task.
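To make the report’s example concrete: matching a manuscript to relevant articles, or to reviewers who have written on similar topics, is at its simplest a document-similarity problem. Below is a minimal sketch in Python using TF-IDF and cosine similarity. The reviewer names and abstracts are invented for illustration, and this is in no way a description of UNSILO’s (or any vendor’s) actual system; it is only the simplest version of the underlying idea.

```python
# Hypothetical sketch of reviewer matching via text similarity.
# Names, abstracts, and approach are invented for illustration;
# this does not represent any real publisher's or vendor's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "Catalytic activity of novel zeolite frameworks under mild conditions."
prior_papers = {
    "Reviewer A": "Zeolite synthesis and characterization of catalytic acid sites.",
    "Reviewer B": "Deep learning approaches to protein structure prediction.",
    "Reviewer C": "Framework catalysts: acid-site density and measured activity.",
}

# Build one TF-IDF vocabulary over the submission plus all candidate abstracts.
texts = [submission] + list(prior_papers.values())
matrix = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Cosine similarity between the submission (row 0) and each prior paper.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(prior_papers, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Real tools layer much more on top of this (citation networks, conflict-of-interest checks, expertise modeling over time), but ranking candidates by textual similarity is the basic intuition behind “finding potential peer reviewers who have authored papers on similar topics.”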

In the earlier instance of the librarian reading for purposes of peer review, there was a quick response from one of the founders of Scholarcy, an application offering researchers summaries of full-text articles. The company’s tagline is blunt: “Read less, learn more.” It springs from the founders’ own frustrations in trying to handle the volume of content to be read during the PhD process. Among other functionalities noted in its marketing text, Scholarcy will highlight the important findings in a paper, eliminating the need for the reader to print out and laboriously highlight critical segments or sentences. The reader can customize specific aspects: the number of words, the level of highlighting, and the level of language variation (this last makes it easier to cite the finding in your own paper). Scholarcy will navigate the user to Google Scholar, to arXiv, and to other open access material referenced in the paper. There are additional functionalities, and Scholarcy invites visitors to its site to engage with its demo, a worthwhile use of 15 minutes. The tool is recommended for researchers, librarians, publishers, students, journalists, and even policy wonks.

For my own test, I pulled out a 43-page report (available here), written in 2003 as a final output associated with a government grant. Scholarcy parsed some of the findings appropriately but categorized other elements less successfully. As an example, none of the clips assigned by Scholarcy to appear under the heading of Funding actually referenced research funding or grants in any way. 

As a second test, I ran Scholarcy against the PDF version of Richard Poynder’s piece. This may not have been entirely fair, as Scholarcy appears to be targeted at those working in the life sciences and relies primarily on extractive summarization to do its work. Whether due to the nature of the content or the document’s structure, the Scholarcy summarization did not appropriately convey the gist of the piece. It did not even pick up the concluding paragraph that Poynder himself had selected for inclusion over on Digital Koans.
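For context, extractive summarization selects and stitches together sentences that already appear in the source text rather than generating new prose. A toy version, assuming nothing about Scholarcy’s actual pipeline, scores sentences by the frequency of the words they contain:

```python
# Toy extractive summarizer: score each sentence by the average
# document-wide frequency of its words, then keep the top n sentences.
# This illustrates the general technique only; Scholarcy's actual
# pipeline is certainly more sophisticated.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Average rather than sum, so long sentences don't win on length alone.
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("Open access has transformed scholarly publishing. "
       "Funders increasingly mandate open access. "
       "Yet the goals of open access remain contested. "
       "Publishing models continue to shift in response.")
print(extractive_summary(doc, n_sentences=2))
```

A frequency-based selector favors sentences dense in the document’s most common terms, which helps explain the failure here: in a long, discursive essay, the paragraph that best synthesizes the argument may share little surface vocabulary with the body’s most frequent terms, so an extractive tool passes it over.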

Scholarcy is not a unique product. Other tools, such as iris.ai and Smmry, work along similar lines (although, using my test documents, neither managed to do even as well as Scholarcy). In asking “Should We Use AI to Make Us Quicker and More Efficient Researchers?”, an academic looking at Paper Digest wondered whether tools aimed at improving the efficiency of reading were in the best interest of the scholar or of the institution. Particularly in the context of extractive summarization tools, artificial intelligence might well cut off the flow of the very sentences most conducive to fueling creative thought.

No matter which professional hat a member of the information community may be wearing, figuring out what to read, as well as finding the time needed to read it, continues to be a human problem. It can’t yet be delegated to the machine.

Reading with concentration is work. Give it the time needed. Stop when you hit overload.   

Jill O'Neill


Jill O'Neill is the Educational Programs Manager for NISO, the National Information Standards Organization. Over the past twenty-five years, she has held positions with the commercial publishing firms Elsevier, Thomson Reuters, and John Wiley & Sons, followed by more than a decade serving as Director of Planning & Communication for the National Federation of Advanced Information Services (NFAIS). Outside of working hours, she manages one spouse and two book discussion groups for her local library.

Discussion

3 Thoughts on "TL;DR (That is, Too Long; Didn’t Read)"

If librarians do not have time to read Poynder’s ebook, then why do they subscribe to Big Deal and OA Big Deal bundles that researchers do not have time to read?

Thanks for taking Scholarcy for a spin. We’re currently focused on accurate summarisation of research papers; the ability to summarise any article of any size is some way off yet. Researchers around the world use and value our features, including reference parsing and linking, figure and table extraction, and identification of study participants and statistical tests.
