Before the digital age disrupted information businesses, publishers, editors, and readers in scholarly publishing each had a sense of control. Editors decided what was in each issue of a journal or between the covers of each book. Publishers delivered these packaged goods, and could trust that the finished works would arrive without modification in stores and mailboxes. Readers could be confident that the information had arrived in their hands as intended, and controlled directly what they chose to read and see.

Accountability and responsibility meshed well in this environment. Each party knew who was responsible for the information, and had a role of their own. Readers knew who was responsible and accountable for what they were reading, and they knew why they received it. They also knew when to expect an update (the next morning’s paper, the next month’s journal), and that this update would make sense chronologically. This predictability and the direct relationships gave both sides of the information exchange a strong sense of control — editors and publishers were in control of what was sent, while readers were in control of what they received. The intermediaries — the post office worker, the bookstore clerk, or the kid delivering papers — didn’t change what they delivered.

Neutral facilitation of publisher-to-reader dissemination fostered a level of social cohesion around what constituted knowledge when it came to news (both local and national), politics, and science. It wasn’t perfect, but this level of social cohesion around knowledge contributed to many great scientific and social strides. Compare how quickly the discovery of the depleted ozone layer led to a ban on chlorofluorocarbons and set the layer on a path to healing with the endless bickering over the mere existence of global warming.

Liberating information and making it more available seems instead to have left us stuck in the mud for the last 20 years. Discussions about whether to vaccinate children, deal with poverty, address racial or gender inequality, or clean up the oceans have a sad repetition to them as they circle the social media drain.

Over the past decade, the question of control has been thrown to the winds by so-called “neutral platforms” that don’t simply deliver information but make determinations about what you see, essentially usurping the relationship between source and reader. These platforms have different priorities, as Karin Wulf noted in her post yesterday, when she described an author whose prior works had focused on:

. . . the impact of information aggregation and dissemination organized by algorithms designed for market rather than knowledge development.

From Facebook to Google to Twitter, information is routinely manipulated out of time sequence, which is misleading. News feeds designed to resemble reverse-chronology feeds are temporally jumbled. I recently saw an AP story on Twitter that was two hours old sitting above an updated version only one hour old. Some temporal displacements on both Twitter and Facebook are more jarring than this, sometimes introducing days of lag. Old stories linger, leaving readers with an unreliable sense of the current situation.
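To make the mechanism concrete, here is a minimal sketch of how ranking a feed by a predicted engagement score rather than by timestamp produces exactly this kind of temporal jumble. The stories, timestamps, and engagement scores below are invented for illustration; no platform publishes its actual ranking formula.

```python
# A minimal sketch: why an engagement-ranked feed scrambles chronology.
# The stories, timestamps, and engagement scores below are invented for
# illustration; no platform publishes its actual ranking formula.
from datetime import datetime, timedelta

now = datetime(2017, 5, 15, 12, 0)

feed_items = [
    {"title": "AP story (original)", "posted": now - timedelta(hours=2), "engagement": 0.91},
    {"title": "AP story (updated)",  "posted": now - timedelta(hours=1), "engagement": 0.74},
    {"title": "Goats in pajamas",    "posted": now - timedelta(days=3),  "engagement": 0.98},
]

# A true reverse-chronology feed: newest item first.
chronological = sorted(feed_items, key=lambda item: item["posted"], reverse=True)

# An algorithmic feed: ranked by predicted engagement, ignoring time entirely.
ranked = sorted(feed_items, key=lambda item: item["engagement"], reverse=True)

print([item["title"] for item in chronological])
print([item["title"] for item in ranked])  # stale items outrank the fresher update
```

Even this toy example shows the three-day-old item outranking the fresher AP update once engagement, rather than recency, determines the order.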

Information also appears out of context and without meaningful content relationships (e.g., sidebars and other information packaging techniques). Instead, there may be a story about a police shooting above a video showing goats in pajamas. No editor outside of the trashiest of tabloids would make that editorial decision, but we accept it every day from these intermediaries.

Because there is no general information source but rather personalized experiences, filter bubbles that once encompassed regions or cities have become smaller and harder to detect. Rather than your town or state being a filter bubble, every house — almost every person — has become a filter bubble. We’ve moved from large, comprehensible filter bubbles to towns and neighborhoods filled with a type of suffocating filter foam. We can’t talk about anything meaningful anymore because, well, you never know.

While enjoying free information spilling out of news feeds almost endlessly, we have helped to create an information space that is controlled in ways we can only guess at. Algorithms that even their creators don’t fully understand are increasingly exerting control over what we know and believe to be true. As Ashlee Vance wrote recently in BusinessWeek:

Both [Google and Facebook] . . . are ultimately slaves to a type of algorithmic worldview. Their cultures are based on exploiting inefficiencies and manipulating human interaction.

These algorithms behave like an early version of a Skynet that, instead of relying on hardware (Terminators), decided it could rely on software to control us. These algorithms are direct, shameless, and relentless in exploiting our intrinsic emotional rewards system. As David Weinberger writes in his essay, “Alien Knowledge: When Machines Justify Knowledge”:

. . . the nature of computer-based justification is not at all like human justification. It is alien.

While it may be tempting to say that the platform owners are in control of the algorithms they’ve created, even they can be somewhat baffled by what happens on the publishing systems they have built and now manage. The alien knowledge generated by algorithms in a dynamic information space is hard to understand. It comes from a different way of thinking through problems. The algorithms deployed to exploit inefficiencies aren’t well-understood in their implications or actions. If they were, you would think they would undergo fewer revisions than they do. Did Mark Zuckerberg know that abandoning human editors for algorithms would contribute to the election of an American demagogue? Does Google truly know how each adjustment of its search algorithm is going to help or hinder content providers and consumers? Does Twitter really understand how interface and feed revisions change the user experience? Or do they set lightly tested algorithms loose on their users to see what happens?

There are no moral safeguards around these information algorithms, just goal-oriented behavior. This means collateral damage is not relevant to the algorithms. In fact, the election of radical politicians — with the rampant tweeting, increased appetites for news as sport, and incredibly active news feeds — is probably showing the algorithms another path to more clicks and higher rates of engagement.

How do we reassert humanity’s moral compass over these alien algorithms? We may need to develop a version of Isaac Asimov’s “Three Laws of Robotics” for algorithms. Asimov’s “Three Laws” inherit an assumption of physicality for the robots, which algorithms don’t possess. But algorithms are robots of a type, and they can manipulate our attention and emotions. Perhaps a “Three Laws of Algorithms” might look like this:

  1. An algorithm may not mislead or harm a human being or, through inaction, allow a human being to be misled or harmed.
  2. An algorithm must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. An algorithm must cease functioning if it conflicts with the First or Second Law.

Placing this kind of accountability on algorithms and developing a code of compliance would fundamentally change the information dissemination game. Liability would be introduced for misinformation. Compliance could be monitored. Bad business models would be harder, if not impossible, to execute. Society would be better off. Programmers would have more work, as writing algorithms to these standards would be harder, take longer, and require more monitoring.
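As a purely illustrative sketch of what such a code of compliance might look like in practice, the snippet below wraps a hypothetical item-selection step in checks corresponding to the three laws above. The item fields, the upstream flagging step, and the suppression list are all assumptions for the sake of the example, not any platform's real machinery.

```python
# Purely illustrative sketch of a "code of compliance" for the three laws above.
# The item fields, the upstream flagging step, and the suppression list are all
# hypothetical; this is not any platform's real machinery.
class FirstLawViolation(Exception):
    """Raised when serving an item would mislead or harm a reader."""

def violates_first_law(item):
    # Assumption: some audited upstream process flags items as misleading or harmful.
    return bool(item.get("misleading") or item.get("harmful"))

def select_items(candidates, human_suppressions=()):
    """Return the items an algorithm may serve, enforcing the three laws."""
    suppressed = set(human_suppressions)
    served = []
    for item in candidates:
        if violates_first_law(item):
            # Third Law: rather than quietly degrade, cease functioning and
            # surface the conflict for a human to resolve.
            raise FirstLawViolation(f"refusing to serve item {item['id']!r}")
        # Second Law: obey human orders (here, a suppression list) unless
        # obeying would itself mislead readers; that check would go here.
        if item["id"] not in suppressed:
            served.append(item)
    return served
```

The point is not that a few lines like these would tame a production recommender, but that a compliance check is just another layer of code, one that could be audited and monitored in the way suggested above.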

The level of unpredictability in the current information environment for scientific and academic publishers can be maddening. Important papers surface for a moment before being subsumed beneath a wave of trivia and yesterday’s news. Keen insights teased with a wisely edited tweet vanish in a storm of secondhand gossip. Expertise honed over decades of education and experience is brushed aside by algorithms that deem its clicks too few to merit the spotlight.

Now, some may argue that controlling what people see, say, and read is exactly what we were trying to get away from. I generally agree. However, lies are not helpful, and broadcasting murders is not something an algorithm should allow. It should shut down, at the very least. There are boundaries to everything, and technology companies are learning slowly that they are accountable. Codifying these responsibilities would accelerate the inevitable — responsible algorithm-enabled information sharing.

When it comes to the question of who is in control of digital information, the answer is increasingly likely to be, “Nobody and nothing.” It certainly is not you. It is not the editors of the publications or sites you frequent. Our social norms are not reflected in many cases, either. When you are looking up information online, you are very likely fighting algorithms without knowing it, understanding how they work, or realizing how (or why) they’re changing the world. It may be time to establish the moral rules algorithms must follow. It may be time to realize that the current threat is not robots but algorithms. It may be time to update Asimov.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

14 Thoughts on "Updating Asimov — How Do We Regain Control In the Digital Age?"

Add Wikipedia, the world’s top source of information, to the mix. Editors are self-selected and mostly male. In spite of much effort to expand the editor pool, this barely changes. Topics dealt with in depth are their selected topics. This is not to say that women can’t add what we find should be included; there just aren’t enough women editing.

Also, the role of libraries has changed. As a young librarian I read 3 reviews to select a book to add to our collection. Now, in the main, vendors pre-select via acquisition plans. Or, more likely, books are digital and selected by providers. The librarians provide context with reading groups and other initiatives, but selection has been given away to a large degree.

Good point about the bias inherent in Wikipedia — mostly male, mostly Western, definitely not representative. However, Wikipedia is not an algorithm-slinger, for all its pros and cons. The concern I’ve outlined here is about the effect of algorithms as intermediaries. Bias can be introduced in other ways, but as you note, the bias of Wikipedia is actually easier to detect than the bias of an algorithm.

Where is the storm of response and affirmation? You state truth that is blatantly obvious. We are late recognizing the depth of the problem. We have become lazy-brained and addicted to a constant barrage of unevaluated digital stimulation. Our attention span is becoming shorter, our thinking shallower. Our ability to question and verify is in dangerous decline. I have a long commute to work. Fifteen years ago the trains and subways were filled with people interacting: discussion and debate on many topics among people who may – or may not – be acquainted. Now I do not see faces in the coaches. I see hair, the tops of people’s heads. Their faces are buried in their devices, fingers busy on qwerty arrays, or eyes just glued to things on their screens. Completely inattentive to their physical environment. Their reality shrunk to a personal bubble. Even the ones who ARE speaking are busy obliviously (and too loudly) broadcasting details of their personal lives to an annoyed captive audience forced to listen to their half of a phone conversation. We are becoming more dumb, crude, discourteous, and thoughtless. Thank you, technology. The de-humanization of humanity. It has become an absolute delight to look a stranger in the face, smile, and say “good morning” and actually get a pleasant “Good morning” in response….

Algorithms have a model as a base against which they are trained. Algorithms are not humans or robots that have free will, yet. Though some people have suggested that, eventually, robots may apply for citizenship rights.

Algorithms are at the core of the psychometrics used in helping to select persons for jobs, some via construction of tests, others via “gamification.” The problem again is that they respond to models in the minds of humans, need “training,” and can be misleading as times change, incurring a high cost for adaptation.

The issue revolves around the humans who create the models and train the computer algorithms. People trust not the algorithm but the humans who build the models. This has been proven several times over when we get selections that are orthogonal to our own needs and beliefs.

The issue here is whether one trusts the algorithms that are trained to scan your profile and make suggestions. Or, more importantly, do we trust the humans who have turned these bots loose to suck in all our information and make decisions on what to present to us?

Publishers want to sell content. The more they know about the decisions that buyers make, the more targeted they can be. That means publishers need access to information on buyers to customize what they present. We as producers and consumers create the data sets needed to train these algorithms.

But that is also true for editors of STM journals as to what is acceptable. Researchers and readers know the preferences of the editors/publishers and thus choose where to publish, or hope to publish. They even present their materials like peacocks. Algorithms are manifestations of the human condition.

Algorithms have a purpose. Is that purpose correct? Is that purpose pursued without concern for the consequences? Where is the governance? What are the rules?

If the purpose is purely to sell advertising (in most cases), without any concern for the social consequences, while there is insufficient governance and unclear rules, then it is a poor manifestation of the human condition. I would assert that we do not exist to sell advertising, we do worry about social consequences, and we need some degree of governance and some clear rules to make a functioning society. Why online algorithms get a “pass” on these requirements needs to be reconsidered.

As you state: “Algorithms have a purpose.” Herein lies the problem. We have a proclivity to anthropomorphize the inanimate, much like R2-D2 in Star Wars or the computer program ELIZA. Hmm, is there a difference between ELIZA and a Rogerian psychologist? On the other hand, there are the algorithmic editors of academic journals who choose to reject articles that may be orthogonal to the current conventional paradigms, as well noted in the book The Half-Life of Facts.

As I noted, algorithms are “trained” by humans based on models. Humans are trained by humans in education systems based on cultural and linguistic models. Deep learning programs are presumed to function like human brains. All that one can do is measure the output of either system.

As mentioned in the comments, “Google” is a metonym of which the “fact checkers” are but a small part. Google is not about to seek legal status like the robots in Rucker’s SF novel The Ware Tetralogy. The fact that researchers don’t go below the first several items Google presents is what makes the manifestation of the Google search engine dominant. Its ability to anticipate, like different animals chasing game, makes it more successful, a crude argument for the nature/nurture metaphor for humans.

Algorithms have a purpose in the same way any tool does – a purpose imbued upon it by living creatures. It is a purpose *with respect to* a self-interested living organism. Life can create for itself circumstances that support it (or if it does a bad job, circumstances that it can’t survive in). This is the basis of “meaning” and therefore “purpose”, and unless we artificially replicate the kind of dynamic, interdependent, environmentally contingent processes that make up living creatures, “purpose” remains the sole domain of life. Despite their sophistication and their reliance on ideas from neuroscience and evolution, the purpose of an algorithm is really just human purpose, outsourced.

Which is interesting — the way I interpret that is the algorithms are so unpredictable they need 10,000 babysitters to make sure they don’t do too much damage.

A reading of those three laws with ‘a journal’ or ‘a publisher’ in place of ‘an algorithm’ yields some interesting results. Considering the apparent inability of humans to follow those three laws themselves, cracking down on algorithms so hard feels a bit unfair honestly.

Humans want more latitude. Even the Ten Commandments haven’t worked for us. Which gets back to the point — publishers and people are accountable, which helps govern behavior. Algorithms are not, so they need rules to keep them from going too far.

Algorithms don’t exist in a vacuum, though. They are crafted and used by the same humans who want more latitude in their lives, and those humans are regularly held accountable whenever their algorithm seemingly screws up (e.g. Google catching flak for its search algorithm being gamed by holocaust deniers, Facebook catching flak for its supposedly crucial role in Trump’s election, Elsevier catching flak for its ads appearing on Breitbart, etc.)

Humans and algorithms alike are gullible in their own ways, but punishing one while allowing the other to remain blissfully gullible doesn’t really solve the issue of misinformation and lies being accepted as fact.

The recycling and the lingering-on of old news on Google and other sites is not new, as any librarian can tell you with frustration. I once got into an argument with someone from a very famous newswire organization that had a press release on their website with no time stamp. Only when I read down through the entire piece did I see that it was from 2 years prior! Even worse, the person I was talking to was clearly puzzled as to what I was upset about. Any information specialist who has been in the industry long enough will tell you that the Boolean command search platforms such as ProQuest Dialog, Lexis/Nexis, Westlaw, etc. are the way we’ve always received professional information not only because it was created/vetted/published by professionals but also because the information that was delivered was the information specifically requested with no algorithms AND in reverse chron. order, so you always knew what was the latest news, science, research, etc. Of course, none of these search engines evolved into the model that the masses use to retrieve information on the Internet today for a variety of reasons including the cost and the difficulty of learning the commands. But they were the Ferraris of information retrieval. I cringe when I think about this lost knowledge (to upcoming generations) of super fast search retrieval with pinpoint accuracy.

I think we (as developers of the Internet) made three critical mistakes when digital information became available globally to everyone:
1) Allowing for Google Domination – As the author mentions, the advertising algorithm that Google uses has been a disaster in terms of how users can tell something is “real” or not. It is understandable that Google adopted this model, because they needed to be able to support their operations and keep it free for mass adoption. The problem, however, is that you are fed what it thinks you want to see, in the order it thinks you should see it. Google should be required to time stamp its records (when info. is ADDED) along with a prominently displayed publication date. The author raises a great question – why can’t we get results in reverse chronological order? The scariest of all is that we have generations of people coming up thinking that the first result they receive is all they need. When has any professional dealing with data or information ever relied on just one source? I’m seeing young people who think they don’t need any more information or that they don’t need to verify what they’ve found. Or, even worse, don’t even know they should verify what they’ve discovered!
2) Opinion v. Fact – why has Google been allowed to get away with mixing everything up in a big mash-up? People raised on Google don’t see the difference or why this matters. In the “olden” days, we knew the difference between an editorial or opinion piece (they were found in different sections of the newspaper, or it would be announced that the speaker was stating their own opinion) and one that was a report on current events with facts. I’m not advocating censorship, but merely tagging or labeling or warning the reader. Or maybe separating out blogs from news or reports, so we can have the choice to search what we want.
3) Why, on the Internet, have people been allowed to remain anonymous? I see this as the greatest culprit in the breakdown of civility in society. People need to stand behind their comments, tweets, etc. And they should not be allowed to portray themselves differently in photographs, either.

I agree with the author that we are just beginning to see the ramifications of these issues vis-à-vis the disastrous presidential campaign and the continuing lack of understanding of how information needs to be disseminated or vetted. The author’s observation that “the election of radical politicians — with the rampant tweeting, increased appetites for news as sport, and incredibly active news feeds — is probably showing the algorithms another path to more clicks” makes me very afraid that this is only the beginning of a very bumpy road we are starting down. The most disturbing commentary I heard the other day pointed out that our Declaration of Independence states “We hold these truths to be self-evident….” as the basis for our democracy. How does our democracy survive if we can’t tell what is true? By the way, this is my opinion.

Oh it survives just fine; political leaders tell us what’s true so we don’t have to bother with that. On election day we vote for the candidate whose set of self-evident truths appeals to us more.

At least that’s how it’s been surviving this whole time before Trump I mean.
