Along with recent hair-pulling about fake news has come renewed awareness of the concept of “filter bubbles,” as many of us acknowledge the risk of political information “bubbles” following the US presidential election. Where we once bemoaned “filter failure” – the challenge of sifting out relevant, quality content in the face of information overload – we now find ourselves considering the concept of the bubble as “fully functioning city state” (perhaps on a sliding scale of seriousness).

With all this talk lately about the global state of information – and worries about information curation, misinformation and information abundance – I’m left wondering what this means for scholarly information professionals, and our collective responsibilities around learned resource discovery.

Do we, as academic publishers, librarians, and technologists, owe our readers the filters and curated databases they demand to efficiently seek and retrieve information within the topical spheres defined by institutions of higher learning and research? Or do we instead withhold tools that potentially keep readers within their learned “bubbles”?

Mainstream search results and news feed alerts trade on demands for immediacy, serving up bite-sized pieces of information at the point of need. Retail information systems are largely focused on providing simple and fast answers, often with little regard for the original source, let alone its values or reputation. There is no longer one standard set of “true” Google search results, as our search experience is iteratively personalized based on our individual usage – a major conduit for the information bubble we surround ourselves with.

That leaves us with the rather chilling thought that we really only see what we want to see – and, logically, this would extend to our academic information practices. Are today’s students entering their academic information experiences expecting to live in a scholarly filter bubble?

Librarians and publishers have struggled with how best to facilitate the discovery of authoritative scholarly content – how far do we comply with open-web indexing rules, despite recent evidence of how Google can distort our information-seeking pathways? Should our own search engines emulate the Google experience, and to what extent? Do the likes of subject browse pages and recommendation features run the same risks for academic publishing as personalized search results and social media algorithms do for journalism?

In our collective responsibility to facilitate the education and research processes, are we obliged to encourage researchers to consider the implications of their work when viewed from the perspective of other fields? If so, how is this done in practice? Is academic serendipity achieved with semantically related recommended readings from TrendMD? Or interactive concept graphs from Google or Yewno, sitting alongside classic full-text search results with subject filters?
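For the curious, here is a minimal sketch of one generic way “semantically related” readings can be computed – a simple TF-IDF and cosine-similarity approach in Python. To be clear, this is an assumption-laden illustration, not a description of how TrendMD, Google, or Yewno actually generate their recommendations, and the sample abstracts are invented:

# Illustrative sketch only: a generic TF-IDF / cosine-similarity recommender.
# This is NOT the actual algorithm behind TrendMD, Google, or Yewno.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sample abstracts, invented purely for demonstration.
abstracts = [
    "Filter bubbles and personalized search in scholarly discovery.",
    "Peer review practices at university presses.",
    "Personalized news feeds and filter bubbles in online media.",
]

# Represent each abstract as a TF-IDF vector, then score every pair.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(vectors)

# Recommend the article most similar to the first one (excluding itself).
best_match = similarity[0, 1:].argmax() + 1
print(f"Most related to article 0: article {best_match}")

Note what such a system optimizes for: textual similarity to what the reader is already viewing – exactly the dynamic that can reinforce a scholarly bubble rather than burst it.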

An unqualified approach to information is untenable within the rigors of scholarly research and learned publishing, yet our community is struggling to champion lifelong information literacy habits and adapt our products to the norms of the open web. Is that even possible? What can be done when our readers may not be applying Bayesian inference, or even traditional methods of evaluating the quality and accuracy of the information they find? How do we fare in today’s information culture, and how do we establish trust in our brands? Are the root problems – and therefore, the solutions – purely technological? I think not!
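As an aside, here is a minimal sketch of what “applying Bayesian inference” to source evaluation could look like in practice – written in Python, with every probability invented purely for illustration:

# Minimal sketch of Bayesian updating on a source's reliability.
# All probabilities below are hypothetical, chosen only for illustration.

def update_belief(prior, p_evidence_if_reliable, p_evidence_if_unreliable):
    """Return P(source is reliable | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_reliable * prior
    denominator = numerator + p_evidence_if_unreliable * (1.0 - prior)
    return numerator / denominator

# Start neutral: 50/50 on whether an unfamiliar source is reliable.
belief = 0.5

# Each pair: (P(observation | reliable), P(observation | unreliable)),
# e.g., "the claim is corroborated by an independent peer-reviewed study."
observations = [(0.9, 0.3), (0.8, 0.4), (0.95, 0.2)]

for p_r, p_u in observations:
    belief = update_belief(belief, p_r, p_u)
    print(f"updated P(reliable) = {belief:.3f}")

Even this toy example makes the point: the numbers feeding the update are human judgment calls about evidence, which is one reason the root problems are not purely technological.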

The relevance, accuracy, and authority of a resource rank higher with many students and faculty, who are less inclined to follow a link simply because of its popularity. The algorithms required by academics therefore go beyond mainstream user expectations of search engines and social media. Does this point to a way to avoid the scholarly filter bubble? How do we satisfy the elevated information needs of academic users, who are undoubtedly influenced by mainstream information channels?

Facebook’s Mark Zuckerberg has been quoted as saying that “a squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” In an academic setting, however, we are called to look beyond our own individual preferences or cultural status quo; we are expected to push ourselves beyond our spheres of comfort and to challenge the bounds of what is understood to be true.

But are filters for relevance, accuracy, and authority entirely objective? “Truth” and “fact” can be seen on a sliding scale of subjectivity these days, which inevitably impacts the dynamics of authority within the academic publishing supply chain. I believe data scientists like Sebastien Dery when they say that “there is still no universal algorithm for parsing and distilling a thorough, non ambiguous, understanding of text.”

All this adds up to my own professional conclusion: academic information professionals are obligated to strike a healthy balance between what Eli Pariser calls “information vegetables” and “information dessert” – providing both the information we need to stay rooted in scholarly principles and push the boundaries of science, and the information we want within the confines of our organization or field of study.

As organizations invested in the value of knowledge and accountable for accuracy, we must uphold the standards of academic research and the proven, documented facts preserved in the scholarly record. We must champion a culture in which computational text analysis is a tool we apply in the course of our critical thinking and research analysis, not one we rely on as a sole source of truth. We must exercise our own critical reading and push ourselves beyond the bubble occasionally, to consider diverse perspectives and question our assumptions and biases.

This community has an obligation to ensure that human cognition complements computer code, rather than letting technology trump the kind of intellectual judgement needed to exercise information literacy.

I would agree with my Chef peers who suggest that, as we conclude an unpredictable year that challenged our faith in the news media and introduced “post-truth” politics, we must consider how we respond to a culture that has valued emotion above proven, objective facts. Publishers, librarians, and all information professionals in the business of disseminating academic resources and research literature are, ultimately, dedicated to the maintenance of an informed democratic society, empowered with a wealth of knowledge, rooted in the scholarly record, and wise enough to burst their own bubbles from time to time.

Lettie Y. Conrad

Lettie Y. Conrad, Ph.D., is an independent researcher and consultant, leveraging a variety of R&D methods to drive human-centric product strategy and evidence-based decisions. Lettie's specialties sit at the intersection of information experience and digital product design. She currently serves as Product Experience Architect for LibLynx, Senior Product Advisor for DeepDyve, and a part-time lecturer for San Jose State's School of Information. Lettie is also an active volunteer with the Society for Scholarly Publishing and the Association for Information Science and Technology, among others.

Discussion

10 Thoughts on "Embrace the Bubble…or Burst it?"

Members of the AAUP can offer readers of their books at least one guarantee, viz., that every book bearing their imprint has been subjected to a rigorous process of review with three components: 1) initial screening by a staff editor who has some degree of expertise in the discipline to which the book contributes; 2) external review by at least one expert, and usually two or more, in the subject of the book; and 3) final approval by an editorial board consisting of faculty members of the press’s parent university. No commercial academic publisher of books, nor any publisher of academic journals, can make that same claim. Commercial book publishers have the first two components, but not the last. Journals, wherever published, have editors with general expertise in the subject matter and use other experts as peer reviewers for individual articles, but have no faculty editorial board whose members see every article accepted by the journal editor, nor a staff member who does initial screening. This unique process of review guarantees at least that the most thorough review procedure in existence anywhere has been applied, even though, of course, it cannot guarantee that no errors of fact remain in the final product.

Sandy, as much as I agree with you about the value of this normal scholarly review of manuscripts before a publishing decision is made, there is a gap in this process that has concerned me ever since I began overseeing a book publishing program for a think tank. We find that scholars who agree with our stated POV know more about any particular aspect of a subject than those scholars whom we know would be critical of the stated assumptions and line of reasoning in any specific case. We still try, of course, to get peer reviewers from a broad spectrum, but with scholarly specialization and the preponderance of data and information on any arcane subject, it is almost impossible to obtain peer review from anyone outside a very narrow slice of academic knowledge. I imagine this dilemma is only going to get more pronounced in years to come, and frankly I don’t see any easy way to avoid it. Thanks for your post, Sandy, and thanks, Lettie, for your reflection on this subject.

I have been wondering about this for a while. Thank you, Lettie, for your acute analysis, and the challenge to authentication that accompanies it.

That filters create bubbles is certainly true, but perhaps a bit too metaphorical. Our users are highly educated and therefore, one hopes, flexible in their search habits, depending on what they are looking for. Disagreement seems to be the order of the day, and harder to ignore than before. Science is adjusting.

Disagreement has always been an important and central part of scientific discourse. Every field I have been active in had its different factions with different theories, bouncing off of one another.

The problem with filter bubbles is that they don’t allow one to experience dissenting arguments, as discovery systems are designed to show you only things that reinforce your existing ideas. If users have to develop complex strategies to see information that contradicts their previously held beliefs, then there is something fundamentally broken in the tools being offered.

What discovery systems are you referring to? I happen to be doing research on the controversy over the linear no-threshold model of low-dose radiation risk, which is the subject of a US House versus DOE tug of war. A Google Scholar search on “linear no-threshold” since 2012 brings up a nice mix of views.

Google Scholar does not use previous user behavior as a factor in its search results:
https://scholarlykitchen.sspnet.org/2015/10/12/guest-post-highwires-john-sack-on-online-indexing-of-scholarly-publications-part-2-what-happens-when-finding-everything-is-so-easy/

To me, this is a clear admission that doing so reduces the quality of the results returned. To answer your question, though: every journal I work with sees a significant amount of traffic coming from regular Google (which does factor in previous user behavior). We are also seeing increased use of social media by researchers, again reinforcing filter bubbles. Further, each publisher has its own recommendation systems and search engine, and the criteria vary. Obviously they favor their own material (are there any publishers that search the open web from within a journal’s search page?), and recommendations are often based on user behavior rather than semantic analysis.

I agree with your concerns. My only point is that one need not use complex strategies in order to cast a wider net, when that is the goal. Google Scholar’s “Related Articles” feature is especially useful because it does not depend on the search term used. But then scientific controversies are often expressed outside of the polite halls of the scholarly literature, so it can become complex if one is really serious.

True, my understanding of the conflicts within the fields where I worked was greatly enhanced by personal knowledge of the individuals involved – an important reason why attending meetings was so important. Talking to someone at their poster session or in the bar has yet to be replicated online.

“My only point is that one need not use complex strategies in order to cast a wider net, when that is the goal. Google Scholar’s ‘Related Articles’ feature is especially useful because it does not depend on the search term used.”

But if all researchers coalesce around one tool, then all further research will be subject to the biases of that tool and its underlying algorithms. We know that online businesses tend toward such consolidation (one Google, one Facebook, etc.). Such a concentration of discovery would be a bad thing for research.
