In the immediate aftermath of the September 11, 2001, terrorist attacks on the US, one remark stuck with me, though I can’t remember who said it. The line I remember is, “We are no longer outside of history.”
The summer of 2001 did feel ahistorical. The Cold War had been over for a decade. The Internet economy, while wrestling with a cyclical crash, had just delivered years of wage and job growth. The real estate market was booming. It was as if the United States and even the world had slipped some historical limitations as the 21st century dawned, with the Internet leading us out of the analog cave and into a brighter sunlit digital world. Broadband was becoming commonplace. Cell phones were everywhere. GPS was going commercial. Then, in an instant, familiar cave walls encircled us again in an ash cloud of reality that has yet to fully clear. Terrorist attacks of various kinds have only become more commonplace in cities around the world.
Distressingly, the Internet itself — that beacon that seemed to be leading us out of the analog darkness of the 20th century — has also fallen prey to the forces of historical dispute, becoming a venue to inspire attacks and recruit attackers. It is now even a source of attack, as I wrote about earlier this month. History has caught the Internet, too.
History has also caught up with science, which seemed to be ahead of history in many ways, especially as societies aligned themselves with the contours of scientific findings and initiatives. The history of science became history itself — moon landings, satellites, vaccines, increased food production, and so forth. Neil deGrasse Tyson states the case eloquently for the United States in particular in a recent video.
This consonance between science and history stretches back farther, as Yuval Noah Harari wrote in Sapiens: A Brief History of Humankind:
During the last five centuries, humans increasingly came to believe that they could increase their capabilities by investing in scientific research. This wasn’t just blind faith – it was repeatedly proven empirically. The more proofs there were, the more resources wealthy people and governments were willing to put into science.
But that tie is under strain as well. In the US, science is being actively suppressed by government officials, policymakers, business interests that donate to politicians, and politicians themselves. Science funding is now routinely threatened or marginalized. History has become more tribal; we no longer have our eyes on the horizon. History is no longer walking in stride with science.
The Internet was supposed to usher in a new era of enlightenment, and many hoped that these benefits would accrue strongly around science and scientific literacy. But the opposite seems to have transpired. In fact, many of the promises implicit in the Internet Age seem to have succumbed to historical forces that have proven stronger than any technology.
Yet it has been hard to discern exactly why the Internet has become such a source of problems for science and society in so short a time. This ostensibly neutral platform seems to have become a breeding ground for polarization, ideologues, and extremism. Is there something in the mix that’s fouling everything up?
Maciej Ceglowski, a computer scientist and painter from San Francisco, recently spoke and wrote about this, and the resulting essay is equally thrilling and demoralizing. Entitled “Build a Better Monster,” the presentation was given on April 18th at the Emerging Technologies for the Enterprise conference in Philadelphia. The essay appears on Ceglowski’s blog, Idle Words.
Ceglowski’s technology observations are riveting and ring true.
Algorithms represent an alien intelligence and perform in ways we can’t anticipate. Algorithms make mistakes we don’t expect (mistaking a school bus overlaid with visual static for an ostrich, for example), or they make merciless determinations we would ourselves shy away from. Because of this, when algorithms are deployed to optimize clicks and views, they naturally drive people toward click-bait and toward the extremes, where clicks and views are more probable.
One problem is that any system trying to maximize engagement will try to push users toward the fringes. . . . the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.
This has turned the social graph of the Internet into an information source that natively drives users toward extremist views. For the public’s understanding of science, this means that algorithms naturally drive citizens toward more outlandish views — about nutrition, vaccines, climate change, and so forth.
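To make the mechanism concrete, here is a minimal sketch, in Python, of an engagement-maximizing loop: an epsilon-greedy bandit that counts only clicks. The item names and click probabilities are invented for illustration and model no real platform; the point is what the loop learns.

```python
import random

# Toy engagement-maximizing recommender (epsilon-greedy bandit).
# Item names and click probabilities are invented for illustration.
ITEMS = {
    "measured analysis": 0.02,  # simulated chance a user clicks
    "partisan take": 0.05,
    "outrage bait": 0.12,
}

shows = {item: 0 for item in ITEMS}
clicks = {item: 0 for item in ITEMS}

def recommend(epsilon=0.1):
    """Usually serve the item with the best observed click rate."""
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(list(ITEMS))  # occasional exploration
    return max(ITEMS, key=lambda i: clicks[i] / max(shows[i], 1))

for _ in range(10_000):
    item = recommend()
    shows[item] += 1
    if random.random() < ITEMS[item]:  # simulated user response
        clicks[item] += 1

for item in ITEMS:
    print(f"{item}: shown {shows[item]:,} times")
```

Run long enough, the loop serves “outrage bait” almost exclusively. No line of the code names provocation as a goal; the preference emerges from the click data alone, which is exactly Ceglowski’s point.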
Money drives surveillance and social control online. Quoting Upton Sinclair (“It is difficult to get a man to understand something when his salary depends on his not understanding it”), Ceglowski sees money as the structural element driving surveillance and social control online, without much restraint. Money manifests mainly as advertising at this point, with Google holding the biggest share, Facebook trying to match it, and third parties competing around the edges. When hackers probed for ways to make money with “fake news” during the 2016 US Presidential election, they tried stories designed to draw clicks from both liberals and conservatives, but struck gold with conservatives, who were more willing to follow the clickstream into extreme positions and consistently deliver eyeballs to advertising-driven pop-up sites. As one hacker said:
[Trump supporters are far more receptive] because they are angry. For me, this is all about income, nothing more.
Decades ago, television shifted the information baseline from the logical to the emotional, from the satisfying to the entertaining, making emotional appeals all the more effective now that they are expected. With algorithms finding and exploiting a well-trained emotional rewards system, it’s no surprise that a foreign power and social media combined to enable the election of a “hard power,” television-obsessed, television-made populist. As Ceglowski writes:
In a setting where attention is convertible into money, social media will always reward drama, dissent, conflict, iconoclasm, and strife. . . . admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face.
Technology companies have been irresponsible. When it comes to paying workers, paying taxes, or respecting flawed but effective institutions, Ceglowski notes that technology companies have fallen short: they have routinely stashed profits overseas, destroyed local journalism and mid-tier publications of all sorts, and achieved a level of concentration that leaves Google, Facebook, and Amazon as essentially the companies controlling the working Internet.
A lot of what we call “disruption” in the tech industry has been just killing flawed but established institutions, and mining them for parts. . . . This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things,” some of what we knocked down were the load-bearing walls of our democracy.
With history exerting influence again through a few companies, with alien intelligence embodied in algorithms, with commercial incentives to move people from banal truths to questionable and extremist views, and with the realization that knocking things down is quick and easy while building things up is slow and hard, the publishing strategist faces challenges of a different order.
The strategist can’t merely address the business issues that are typically the focus of strategy discussions:
- Will our journal survive Sci-Hub?
- Will the journal form persist at all?
- How will funder activism affect our businesses?
Strategists must now address superordinate questions:
- How do we know what is true?
- How do readers grapple with potential misinformation?
- How do we deal with predatory publishers, propaganda, and political polarization, all of which foul the information waters?
- Who is in control?
These are the vexing questions that history’s intrusion raises and that social media, technological disruption, and media fragmentation exacerbate. We can no longer assume that we’re just dealing with tactical molehills that may trip us up, batting around business options amid generally trustworthy information from relatively trustworthy players in an information environment we control and manage. We have to look at the mountains, thinking more broadly and more deeply at the same time. We have to realize that big changes have occurred in the last decade, changes that have completely redefined the game.
It’s not a simple matter. In addition to a less reliable and more fragmented information landscape, science itself is beset by extremism and disruption. It’s unclear how it will be funded, how it will be used to make social policy, and how its practitioners will be treated. Already, there is less funding, a greater willingness to brush aside evidence in the name of greed or anachronistic beliefs, and, in too many places, an aggressive disdain for scientists and facts.
Can a fragmented industry tackle these issues? It’s unclear. Parochial concerns drive local strategies, no matter how “big picture” the attempts are. It’s the blind men and the elephant problem — everyone is affected, but each effect and interpretation is too specific to be useful at the scale of the large animal in question. The elephant in the room is accuracy and respect for facts and research integrity.
What is the common perspective that can help? I think it’s ethics. It’s time for a revival of ethics in many areas of life — politics, business, science, authorship, academia, technology, and scientific publishing. This is also a conclusion Ceglowski arrived at:
. . . technology and ethics aren’t so easy to separate. . . like the tail that follows the dog, ethical concerns about how technology affects who we are as human beings, and how we live together in society, follow us into this golden future. No matter how fast we run, we can’t shake them.
The power of ethics is something demagogues fear. Principles aren’t compatible with manipulation. It’s worth remembering that the US Congress that took power with the latest US election tried to eviscerate Congressional ethics watchdogs as its very first move. President Trump has been skirting or flouting ethics rules, from nepotism to conflicts of interest, leading to lawsuits that are making their way through the courts. Ethics are a pain point for the powerful.
The recent controversy over a book entitled Communism for Kids from MIT Press shows how ethical issues arise in publishing all the time, but now play out differently because of social media. Amy Brand, a friend to many in the industry, was quoted in Publishers Weekly hitting on a familiar point of publishing ethics:
For Brand, who joined MIT Press in 2015, the book’s publication has proven “a revealing experience.” It has highlighted “the polarizing power of ideas and words,” reminded her “about the swarm mentality fueled by social media,” and driven home “the serious responsibility of being in a profession dedicated to protecting fundamental freedoms of expression.”
Facing this “swarm mentality” is perhaps the most vexing issue currently. But if trolling were combated with ethical censure that stuck, we might see fewer swarms, less extremism for the algorithms to poach, and a better information space.
Ceglowski speaks to the tech industry about ethics, but I feel there are lessons here for us as well:
We need a code of ethics for our industry, to guide our use of machine learning, and its acceptable use on human beings. Other professions all have a code of ethics. Librarians are taught to hold patron privacy sacrosanct, doctors pledge to “first, do no harm.” Lawyers, for all the bad jokes about them, are officers of the court and hold themselves to high ethical standards. . . . Young people coming into our industry should have a shared culture of what is and is not an acceptable use of computational tools. In particular, they should be taught that power can’t be divorced from accountability.
Where is the publishing industry in its ethical development? A bit lagging, I’m afraid. The prevalence and durability of predatory publishers represent a major ethical problem we can’t seem to address. The ethical lapses that do occur are handled locally, often in an idiosyncratic and uncoordinated manner. In addition, it’s my impression that during the past decade, the watchdogs of publication ethics have dissipated, become process-oriented, or become confused. The Committee on Publication Ethics (COPE) is mostly used as a source of guidelines for handling occasional problems, such as retractions, corrections, and expressions of concern. COPE’s Codes of Conduct are over a decade old. The International Committee of Medical Journal Editors (ICMJE) has faded as a voice of strong ethical guidance. The Directory of Open Access Journals (DOAJ) has itself been called out on numerous occasions for sloppy practices.
When editors do take ethical stands, the social graph predictably responds by attacking them. This occurred with the NEJM data-sharing editorial last year, which espoused views that actually aligned with how most researchers feel about sharing their data — that is, they want to control how they share it and with whom, and they resent being told they have to share what they busted their tails to develop and record. But open data advocates dismiss trialist preferences as “paternalistic.” Yet there are significant unanswered ethical questions about how institutional review boards (IRBs), editors, reviewers, and readers can trust studies built on shared data.
Add this desire to rapidly change data-sharing practices in science to the desire to publish faster, and it’s clear we may have absorbed the “move fast and break things” mentality ourselves.
In these contexts, we have to remember that history is now revisiting science, which means science has a new task — it has to prove itself and its value again. Last weekend’s Marches for Science represented science, scientists, and science advocates reinserting themselves into a political system they have historically been able to take for granted as allied with their motives, and for good reason — science has traditionally delivered economic, social, and health improvements for everyone. But in a technology sphere that drives mainstream views toward extremism and fact toward fiction, we also need to reinvent the ethical boundaries and enforcement mechanisms that put a brake on these things.
One thing worth noting about the Marches for Science — to the extent they were organized via Facebook, there are now lists of people who attended, lists Facebook owns and keeps. Do you trust Facebook and its individual employees not to share those with the government in the future?
For publishers, improving the situation may mean reconsidering the trend of putting more information out without vetting and managing it carefully and responsibly. The information environment now favors fringe science, meaning there are clear downsides to putting it all out there. Even A/B testing on social media may need to be re-evaluated in light of how algorithms behave and potentially mislead, making us their unwitting accomplices in driving extreme behaviors.
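To see how an innocent test can mislead, consider a minimal, hypothetical sketch of a headline A/B comparison in which the platform’s recommendation algorithm, not the publisher, decided who saw which variant. Every number here is invented.

```python
# Toy illustration of a confounded A/B test: the platform's algorithm,
# not the publisher, chose who saw which headline. Numbers are invented.
variants = {
    "measured headline":    {"impressions": 9_000, "clicks": 270},
    "provocative headline": {"impressions": 1_000, "clicks": 80},
}

for name, stats in variants.items():
    ctr = stats["clicks"] / stats["impressions"]
    print(f"{name}: {ctr:.1%} CTR on {stats['impressions']:,} impressions")
```

The provocative headline “wins” on click-through rate, but the audiences were never comparable: the algorithm steered that variant toward users it already knew respond to provocation. Treating the result as a clean readout of reader preference is how we become the algorithm’s unwitting accomplices.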
We are at an important historical moment. As Ceglowski writes:
Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them. We have to do something.
Discussion
Unfortunately, I fear it will get much worse before it gets better. State education boards are dictating the teaching of facts instead of the teaching of discernment and creative curiosity. Then they work to manipulate the facts (e.g., slavery has been relabeled). Students naturally see no need for education under this system, because they can find all the facts on their smartphones. Hence the tie-in to the article.
I wonder if it’s going to lead to a resurgence of print magazines, simply as a matter of trust? You can believe more of what’s written in print because it’s expensive to get it there, while on the Internet, nobody knows you’re a dog, or a Twitter bot?
That’s exactly how traditionally print-based pubs hope to spin the whole fake news/alt-facts boogeyman.
Alas no, print being more expensive has no bearing on the legitimacy and truthfulness of news (nor of research findings, for that matter). If anything, print may even be more susceptible to fake-whatever if the individual or organization pushing its agenda has deep enough pockets, precisely because of that romantic notion of print media as inherently more trustworthy.
I think print does impose some barriers to entry. They are not insurmountable, but they are higher than the barriers online. For instance, it’s harder to cross borders, making print propaganda both harder to insert into another country and more traceable. Note that Russia used television (RT) and the Internet, both of which make borders less of an issue.
Good article, Kent. George Washington University holds an annual one-day conference on “Ethics in Publishing” each summer, which isn’t a publishing industry initiative but does provide a venue to discuss and debate many of these issues. I was thinking about them recently during the controversy over Milo Yiannopoulos and his book deal for “Dangerous,” which of course Simon & Schuster cancelled only after his remarks about pedophilia. Would I publish his book? No. Should it be published at all? That’s a call each publisher needs to make under its own ethical framework (or, for many, a P&L/ROI framework), but I wouldn’t want the government making that call. Would not publishing “Mein Kampf” have somehow slowed the rise of Hitler? Doubtful. Should the book have been banned after the Holocaust? No; while it might inspire white supremacists, it offers the rest of us a learning opportunity and a warning. “The Anarchist Cookbook”? What about a new method of using CRISPR technology that allows a terrorist cell to develop a virus that wipes out humanity (coming soon!)? While the ability of anyone to self-publish or simply post something on the web makes publishers’ ethical decisions carry a bit less weight, perhaps, it’s still important to have these conversations and develop strong ethical standards at our own presses, if not industry-wide.
In seeking salvation in ethics, you and Maciej both seem to fundamentally misunderstand the matter. Ethics emerge over time as a distillation of aggregate individual morality, historically resulting in very little — if any — of ethics being even remotely universal.
Forcing a particular ethical code — whether in publishing or any other endeavor — on a global level would be unethical in itself. Individual morality can’t be policed, so ethics can’t be enforced either; this is why COPE, ICMJE, etc., can only issue guidelines and recommendations, engage in advocacy, and hope for the… well, whatever comes of it, because who knows what ‘best’ is when it comes to ethics.
What you appear to be advocating under the banner of ethics, though, is essentially laws. Laws poorly thought through, and in the service of the ‘lawmakers’ themselves.
We aren’t talking about individual morality, but professional ethics. It’s not at all unusual for professionals in a field to be held to a shared set of ethical standards. These are not laws, but professional standards that can lead to dismissal from the profession if breached. Publishing could do with some of these.
Professional ethics isn’t ethics either. It’s a misnomer for an arbitrary code-of-conduct kind of thing geared toward mitigating various legal liabilities. Put simply, professional ethics are derived from the laws governing a profession; ethics are derived from morality and morality alone. So the former can’t possibly have anything to do with the latter, yet for some reason we refer to both as ethics.
“Professional standards” sounds about right, but these are still neither agreed upon nor consistently enforceable across the entire publishing industry. But I am curious as to who you’d have set the standards, and especially who’d be handling dismissals?
You seem to think this is all mysterious and abstract. It’s actually common and concrete. Lawyers are a good example. They are officers of the court, and have ethical constraints on them as soon as they claim the job title. The same for physicians, teachers, morticians, and business owners. It’s not as uncommon or difficult to implement as you seem to want to believe.
Lawyers, huh? No ethics whatsoever involved. Successfully defending a guilty person is unethical. Successfully prosecuting a wrongfully accused person even more so. Yet it’s part and parcel of the job as far as I can tell, and that’s just ethics 101, mind you.
You can call it professional ethics or ethical constraints, but the fact remains ethics seldom have anything to do with it. It’s not mysterious nor abstract; it’s simply not related to ethics except in name.
As if you expecting it makes it wrong…
I get that arguing semantics can be nitpicky and annoying, but the distinction is real nonetheless.
Anyway, to make up for it here’s another interesting lengthy essay re. the information landscape and its trust issues: https://failureofthefourthestate.wordpress.com
Thank you for the good and thought-provoking article.
These fundamental issues also pose many difficult questions for those advocating open access, open science, open data, open peer review, and open whatnot. Often, I feel that we’re heading into a dystopia of “Silicon Valley-style science.”
We already have a Fox News-style Retraction Watch and those Facebook-style like buttons (read: altmetrics), but soon enough research articles will probably be ranked by Amazon-style star ratings.