Before the digital age disrupted information businesses, publishers, editors, and readers in scholarly publishing each had a sense of control. Editors decided what was in each issue of a journal or between the covers of each book. Publishers delivered these packaged goods, and could trust the finished works would arrive without modification in stores and mailboxes. Readers could be confident that the information had arrived in their hands as intended, and controlled directly what they chose to read and see.
Accountability and responsibility meshed well in this environment. Each party knew who was responsible for the information, and had a role of their own. Readers knew who was responsible and accountable for what they were reading, and they knew why they received it. They also knew when to expect an update (the next morning’s paper, the next month’s journal), and that this update would make sense chronologically. This predictability and the direct relationships gave both sides of the information exchange a strong sense of control — editors and publishers were in control of what was sent, while readers were in control of what they received. The intermediaries — the post office worker, the bookstore clerk, or the kid delivering papers — didn’t change what they delivered.
Neutral facilitation of publisher-to-reader dissemination fostered a level of social cohesion around what constituted knowledge when it came to news (both local and national), politics, and science. It wasn’t perfect, but that cohesion contributed to many great scientific and social strides. Compare how quickly the discovery of the depleted ozone layer led to a ban on chlorofluorocarbons, and a path to the layer’s healing, with the endless bickering over the mere existence of global warming.
Liberating information and making it more available seems to have left us stuck in the mud for the past 20 years. Discussions about whether to vaccinate children, deal with poverty, address racial or gender inequality, or clean up the oceans have a sad repetition to them as they circle the social media drain.
Over the past decade, the question of control has been thrown to the winds by so-called “neutral platforms” that don’t simply deliver information but make determinations about what you see, essentially usurping the relationship between source and reader. These platforms have different priorities, as Karin Wulf noted in her post yesterday, when she described an author whose prior works had focused on:
. . . the impact of information aggregation and dissemination organized by algorithms designed for market rather than knowledge development.
From Facebook to Google to Twitter, information is routinely manipulated out of time sequence, which is misleading. News feeds designed to resemble reverse-chronological feeds are temporally jumbled. I recently saw an AP story on Twitter that was two hours old sitting above an updated version only one hour old. Some temporal displacements on both Twitter and Facebook are more jarring than this, sometimes introducing days of lag. Old stories linger, leaving an unreliable sense of the current situation.
Information also appears out of context and without meaningful content relationships (e.g., sidebars and other information packaging techniques). Instead, there may be a story about a police shooting above a video showing goats in pajamas. No editor outside of the trashiest of tabloids would make that editorial decision, but we accept it every day from these intermediaries.
Because there is no general information source but rather personalized experiences, filter bubbles that once encompassed regions or cities have become smaller and harder to detect. Rather than your town or state being a filter bubble, every house — almost every person — has become a filter bubble. We’ve moved from large, comprehensible filter bubbles to towns and neighborhoods filled with a type of suffocating filter foam. We can’t talk about anything meaningful anymore because, well, you never know.
While enjoying free information spilling out of news feeds almost endlessly, we have helped to create an information space that is controlled in ways we can only guess at. Algorithms that even their creators don’t fully understand are increasingly exerting control over what we know and believe to be true. As Ashlee Vance wrote recently in BusinessWeek:
Both [Google and Facebook] . . . are ultimately slaves to a type of algorithmic worldview. Their cultures are based on exploiting inefficiencies and manipulating human interaction.
These algorithms behave like an early version of Skynet, one that decided it could control us with software rather than hardware (Terminators). These algorithms are direct, shameless, and relentless in exploiting our intrinsic emotional rewards system. As David Weinberger writes in his essay, “Alien Knowledge: When Machines Justify Knowledge”:
. . . the nature of computer-based justification is not at all like human justification. It is alien.
While it may be tempting to say that the platform owners are in control of the algorithms they’ve created, even they can be somewhat baffled by what happens on the publishing systems they have built and now manage. The alien knowledge generated by algorithms in a dynamic information space is hard to understand. It comes from a different way of thinking through problems. The algorithms deployed to exploit inefficiencies aren’t well-understood in their implications or actions. If they were, you would think they would undergo fewer revisions than they do. Did Mark Zuckerberg know that abandoning human editors for algorithms would contribute to the election of an American demagogue? Does Google truly know how each adjustment of its search algorithm is going to help or hinder content providers and consumers? Does Twitter really understand how interface and feed revisions change the user experience? Or do they set lightly tested algorithms loose on their users to see what happens?
There are no moral safeguards around these information algorithms, just goal-oriented behavior. This means collateral damage is not relevant to the algorithms. In fact, the election of radical politicians — with the rampant tweeting, increased appetites for news as sport, and incredibly active news feeds — is probably showing the algorithms another path to more clicks and higher rates of engagement.
How do we reassert humanity’s moral compass over these alien algorithms? We may need to develop a version of Isaac Asimov’s “Three Laws of Robotics” for algorithms. Asimov’s “Three Laws” assume a physicality in robots that algorithms don’t possess. But algorithms are robots of a type, and they can manipulate our attention and emotions. Perhaps a “Three Laws of Algorithms” might look like this:
- An algorithm may not mislead or harm a human being or, through inaction, allow a human being to be misled or harmed.
- An algorithm must obey orders given it by human beings except where such orders would conflict with the First Law.
- An algorithm must cease functioning if it conflicts with the First or Second Law.
Placing this kind of accountability on algorithms and developing a code of compliance would fundamentally change the information dissemination game. Liability would be introduced for misinformation. Compliance could be monitored. Bad business models would be harder, if not impossible, to execute. Society would be better off. Programmers would have more work, as writing algorithms to these standards would be harder, take longer, and require more monitoring.
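To make the idea of a “code of compliance” concrete, here is a purely illustrative sketch of how the three laws might be expressed as a guardrail layer wrapped around a feed-ranking routine. The post proposes no implementation, so everything here is hypothetical: the `violates_first_law` check, the `operator_order` parameter, and the feed items are stand-ins for whatever harm and misinformation detection a platform might actually build.

```python
# Hypothetical sketch: the "Three Laws of Algorithms" as a compliance wrapper
# around a feed-ranking function. All names and checks are illustrative, not a
# real platform API.

class AlgorithmHaltedError(Exception):
    """Raised when the algorithm must cease functioning (Third Law)."""

def violates_first_law(item: dict) -> bool:
    # Placeholder for whatever misinformation/harm detection a platform might
    # build; here it simply inspects hypothetical labels on the item.
    return item.get("misleading", False) or item.get("harmful", False)

def rank_feed(items: list[dict], operator_order: str = "maximize_engagement") -> list[dict]:
    """Rank a feed subject to the three laws."""
    # Second Law: obey the operator's order -- unless it conflicts with the
    # First Law, in which case (Third Law) the algorithm stops rather than comply.
    if operator_order == "promote_flagged_content":
        raise AlgorithmHaltedError("Order conflicts with the First Law; ceasing operation.")

    # First Law: never surface an item that misleads or harms, and do not,
    # by ranking it anyway, allow a user to be misled through inaction.
    safe_items = [item for item in items if not violates_first_law(item)]

    # Within the compliant set, the operator's engagement goal can still apply.
    return sorted(safe_items, key=lambda item: item.get("engagement_score", 0), reverse=True)

# Example usage with hypothetical feed items:
feed = [
    {"id": 1, "engagement_score": 0.9, "misleading": True},
    {"id": 2, "engagement_score": 0.4},
]
print(rank_feed(feed))  # only item 2 survives the First Law check
```

Even in a toy form like this, the cost the post predicts is visible: every ranking decision now passes through an auditable check, which is exactly the extra work, monitoring, and liability exposure that a real code of compliance would impose.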
The level of unpredictability in the current information environment for scientific and academic publishers can be maddening. Important papers surface for a moment before being subsumed beneath a wave of trivia and yesterday’s news. Keen insights teased with a wisely edited tweet vanish in a storm of secondhand gossip. Expertise honed over decades of education and experience is brushed aside by algorithms that deem its click counts too low to merit the spotlight.
Now, some may argue that controlling what people see, say, and read is exactly what we were trying to get away from. I generally agree. However, lies are not helpful, and broadcasting murders is not something an algorithm should allow; at the very least, it should shut down. There are boundaries to everything, and technology companies are slowly learning that they are accountable. Codifying these responsibilities would accelerate the inevitable: responsible, algorithm-enabled information sharing.
When it comes to the question of who controls digital information, the answer is increasingly likely to be, “Nobody and nothing.” It certainly is not you. It is not the editors of the publications or sites you frequent. In many cases, our social norms are not reflected, either. When you look up information online, you are very likely fighting algorithms without knowing it, without understanding how they work, and without realizing how (or why) they’re changing the world. It may be time to establish the moral rules algorithms must follow. It may be time to realize that the current threat isn’t robots, but algorithms. It may be time to update Asimov.