A sad hallmark of the Internet Age has been the emergence of what have become known as “trolls” — individuals or bots that aim to derail or dominate conversations with shocking, inflammatory, ad hominem, profane, and/or hateful attacks. The personalities are not new — we’ve all known people who held opinions that were questionable, if not outright dangerous. But in the old analog world, bounded by space and time, the effects of these people could be measured in feet and minutes — you might have to listen to them a few feet away for a few minutes, but their impact diminished as you moved away and moved on, and their influence was circumscribed by physical boundaries.
With the emergence of the Internet, trolls received an unexpected gift — a borderless, highly engineered megaphone with a “record” button, which allowed them to reach farther and repeatedly, without any real negative repercussions. Snide asides became memes. Potshots were echoed and passed around. Ironically, instead of cultivating more discourse, the Internet has potentially restricted and redefined it, placing the trolls at the top of the pyramid with their shameless embrace of the power granted to them by digital means. And because the substrate they use is engineered, trolls also have access to new tools that make them even more pernicious.
Their power may be as much based on economics as anything else — that is, many Internet companies have no real reason to address the troll problem. From the moment Section 230 of the Communications Decency Act absolved platforms of liability for content posted using their tools, the trolls have had an open field on which to work. Recently, in an interview on Kara Swisher’s Recode Decode podcast, Scott Galloway, an NYU business school professor and founder of L2, and Swisher had an exchange about this “platform” distinction (Galloway speaks first, and the two alternate):
Galloway: I think Facebook and Google both face the same issue, and that is they want to sell advertising against content and then say, “But we don’t have the responsibilities of a traditional media company.”
Swisher: That’s right, they’ve abrogated the responsibility.
Galloway: It’s total BS.
Swisher: Thank you for saying that.
Galloway: What if I were McDonald’s and 80 percent of the beef I was serving before the election day was fake beef, and people ended up getting encephalitis and making bad decisions?
Swisher: Right. You’d get sued out of existence.
Galloway: I’d say, “Wait, wait, wait. Hold on, I’m not a fast food restaurant. I’m a fast food platform, so I can’t be responsible for the beef I serve.”
Swisher: I adore you right now for saying that. They abrogate their responsibility like the 12-year-old they pretend to be. Then suck up all the money.
Galloway: “We’re a platform, not a media company.” No you’re not. You run content, you run advertising against it. Boom, congratulations, you’re a media company.
Swisher: Right, and they have the responsibility.
Galloway: You have some onus of the wonderful things that come along with being a media company, including 90 percent gross margins, influence of unbelievable magnitude, but there is a level of responsibility and wow have they let us down.
Swisher: They really have. I agree. Thank you. We are in the same club. What’s interesting is they pretend they can’t fix it. “We don’t understand. It’s so hard.” They suddenly become stupid.
Galloway: They can’t fix it in a cheap automated way. That’s what they’re saying.
Swisher: If the Washington Post can fix it — what, Facebook doesn’t have the resources of the Washington Post?
Galloway: Unfortunately it involves humans, which aren’t as scalable as bots.
This exchange lays bare the raw economics of the media platform businesses Google, Facebook, Twitter, and others have created. The absence of human editors and journalists at Facebook has been widely condemned for allowing bots and trolls to quickly undermine the News Feed and infiltrate it with misinformation. In the commenting systems I’ve seen work well in scholarly publishing, there is always a human element on hand, whether assisted by computers or not. Yet, despite the failings of his media company, Mark Zuckerberg was able to fall back on the “Facebook is a technology company, not a media company” defense. In our industry, platforms like ResearchGate and Academia.edu siphon off valid content but assume none of the liability or responsibility for it. All of this is somewhat ironic, as most media companies are trying to become technology companies not to evade social or journalistic responsibility but to make themselves more relevant. Perhaps to truly succeed they need to embrace the same ethics these technology companies are promulgating.
As I’ve written before, the platforms that amplify and extend trolls gain significantly from the clicks and shares trolls inspire — exploiting everything from our tendency to believe that people we disagree with are merely misguided and can be educated, to the emotional reactions that lead to shame-sharing and other non-productive responses. Well, non-productive for us — for the platform/media companies peddling trollery, every click is money in the bank.
These are the economics that have gutted traditional media, from newspapers to books to journals, eroding important bulwarks against untruths and propaganda. After all, Facebook, Google, and Twitter are free, right? But hidden costs are still costs, and not all costs are monetary. In addition to a weakened fourth estate and less reporting of local news like city council meetings and police misconduct, national news coherence has been splintered by trolls propped up by billionaires in order to drive profitable worldviews. It’s no wonder that Americans in recent surveys say that civility has decreased, while their trust in institutions is also down. After all, trolls and hackers have become politically powerful and, in some cases, operate at the institutional level.
Trolls also entertain us, via shock, anger, righteous indignation, or depraved amusement, adding another layer to a trend we’ve seen for decades: viewing information as entertainment. As Neil Postman wrote in his mid-1980s book, Amusing Ourselves to Death, media now:
. . . offers viewers a variety of subject matter, requires minimal skills to comprehend it, and is largely aimed at emotional gratification. . . . Entertainment is the supra-ideology of all discourse on television. No matter what is depicted or from what point of view, the overarching presumption is that it is there for our amusement and pleasure.
Trolls thrive in a space where media platforms aren’t subject to the liabilities of media, where privacy policies expand within social constraints so powerful they may enable abuses of rights, and where information tends to be treated as entertainment, even when pain and shame are involved. This combination of factors has given trolls a broader reach than ever before — more immediate, and harder to stop. As Dave Winer notes in a recent essay about trolling:
The Internet is to trolling what airplanes are to global travel. Sure you could do it before, but now you can do it so much better. And the tools for trolling keep getting better. Mail lists were the ultimate sporting venue for trolls, because they gave everyone an equal voice. At any time a troll could halt the discussion and make everyone pay attention to him. Without moderation all mail lists become dominated by trolls, eventually. This is a fundamental rule of Internet discourse.
Ironically, these media giants are also completely vulnerable in one common way — they need users. If everyone were simply to walk away from Google search, Facebook, and Twitter for a year, these companies would dry up and wither into husks of their former selves. To some extent they live by the trolls, but they absolutely die by user dissatisfaction. User masochism, inertia, or indifference may account for some of these platforms’ appeal. We certainly behave as if the way the world is cannot be changed, even as it changes all around us. We are in a trance cast by technology and, more importantly, by the economics of free information, where we don’t actually count for much.
Is user dissatisfaction even relevant anymore? These media companies, ahem, platforms are now so powerful that they can constrain competition by, at their most gentle, extracting rents from potential competitors or, at their most fierce, stifling competition outright. Scholarly and academic publishers pay related rents to Google (SEO services) and Facebook (Instant Articles), both directly and indirectly. To watchdogs in Europe, these media platforms have become so pervasive and encompassing that German legal groups are claiming changes to Facebook’s privacy policies amount to extortion, with one lawyer from the group stating:
Whoever doesn’t agree to the data use gets locked out of the social network community. The fear of social isolation is exploited to get access to the complete surfing activities of users.
Users may simply face switching costs — even switching-off costs — that they perceive as too high. Personally, I gave up Facebook last year, and I have lived to tell about it. With Facebook and similar social applications being more about performance (entertaining yourself and others) than about connection, their value again underscores our penchant for entertainment venues.
This has led to a pervasive aversion to commenting areas on sites — to establishing them, participating in them, or running them. It even led to a post here recently asking if we should just stop with commenting, period. Safeguards in the first generation of commenting tools, where they existed at all, were mainly focused on preventing swearing and offensive language. If comments required moderation, some popular journals could actually be overwhelmed (or feel overwhelmed) by the volume, especially if a study involved things that were controversial (global warming, vaccines), cute (dogs, cats, babies), or gross (feces, tapeworms). Even then, the anxiety created by comments was real, because the gating systems were weak and ineffective. Many younger users are switching to Instagram and relying on “dark social” (FaceTime, texting) rather than using systems where murders have been broadcast and trolls have bullied children into suicide.
As noted above, human moderation is not impossible. This blog’s comments are moderated by humans. But at scale, and without the community boundaries of a professional blog in a niche area, the moderation demands can mount. Even so, the expense is high only relative to algorithms, and human moderation can be supplemented by requirements that algorithms can check. There are ways to check the trolls at the door — even to have a bouncer. My company has recently introduced commenting with multiple layers of moderation, some configurable, and soft “fails” for commenters who don’t pass muster, so as not to stir their ire. So far, the experience has been very good. This is not the only example, but it shows that new approaches addressing the problems identified in v1.0 of commenting are coming to market.
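To make the layering concrete, here is a minimal sketch of such a pipeline — cheap automated screens first, a human decision second, and a soft “pending review” status instead of a hard rejection. The word list, thresholds, and function names are illustrative assumptions, not any vendor’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

# Hypothetical screening parameters -- illustrative only.
BLOCKLIST = {"spamword"}
MAX_LINKS = 2

def algorithmic_check(comment: Comment) -> str:
    """Cheap automated screens that run before any human sees the comment."""
    words = set(comment.text.lower().split())
    if words & BLOCKLIST:
        return "held"                  # soft fail: commenter sees "pending," not "rejected"
    if comment.text.count("http") > MAX_LINKS:
        return "held"                  # link-heavy comments are a common spam signal
    return "queued"                    # passes on to the human moderation queue

def moderate(comment: Comment, human_approves) -> str:
    """Layered pipeline: the algorithm filters first; a human decides the rest."""
    if algorithmic_check(comment) == "held":
        return "pending review"
    return "published" if human_approves(comment) else "pending review"
```

The design point is that the algorithm never publishes anything on its own — it only narrows what humans must read, which is what keeps the human layer affordable at scale.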
We are left with the core issues — technology and economics. We have lowered the price of trolling to zero while creating vast incentives for media platforms to enrich themselves by allowing it. We are susceptible to it, and it has become a source of miserable entertainment. We have been made nervous about the whole concept of online interactions and commenting, even as new methods come onto the market and the value of human moderation has been confirmed. This means there are economic choices to be made. The status quo is not effective if we want civil discourse at a distance using modern technology. We want information to be free (in cost and in barriers), yet somehow we end up paying potentially higher societal, economic, and civil prices than any subscription fee or moderator’s salary would justify. Are we willing to pay for high-quality human moderators to ensure post-publication discourse is civil, contributory, and useful? Are we willing to try new methods that address the deficiencies of v1.0 of social commenting?
Or do we want to turn back, allowing the bridge to the future to be dominated by trolls and the economics that allow them to thrive?
Update: A story in the New York Times shows that the newspaper industry is beginning to respond to the overt power Google and Facebook wield. Newspapers are asking for a limited exemption from antitrust laws in order to bargain collectively with these media platforms. Given that these sources deal better with trolls and are key to identifying what is true outside the echo chambers of social media, I certainly wish them well. (HT to JH for the link.)
11 Thoughts on "The Trance of Dysfunction — Why Trolls Have Come to Dominate Discourse"
Maybe a switch in emphasis is what’s needed. It was reasonable for publishers and bloggers to initially view themselves as content creators, with the comments section not really their business; but I think those who have come to view themselves as moderators of a community are better positioned to survive the trolls. And the payoffs are real. I’m thinking of Rod Dreher’s blog at the American Conservative, which is notable for its commenting community. Rod puts in the effort to moderate; as a result, people from all sides of issues are drawn to his blog, and those people are exposed to his perspective and to what it means to engage with it seriously.
There are also people on Facebook who manage to create well-moderated and productive communities. I belong to several, both groups and less formal communities based around a particular person’s posts. But in those cases too, human moderation is essential.
I was one of those people who thought the internet would magically reform human nature. That people would pay attention to the content of each other’s arguments rather than their demographics; that shy people would be able to engage in conversations as readily as extroverts; that somehow mean people would become nice and aloof people engaged and all that. But it’s just recreating the same old, isn’t it? The people willing to moderate and engage with commenters get the nice comments sections. The people willing to put time into curating their social media feeds get the pleasant, reliable timelines. Why this sort of thing surprises me every time, I don’t know.
I think the lesson here is that good information requires work of some kind — research, selection, rejection, synthesis, attention, and care. Business and media models that cast these aside in the naive hope that information (words, data, images) is somehow itself pure and reliable are flawed. Business and media models that bake these and more into them are more reliable and take us farther.
In short, I agree. Thanks for stating it so nicely.
I wonder if Socrates would be considered a troll in modern social media? His method consisted, basically, of showing that people who espoused one belief also held other beliefs inconsistent with it, thus exposing them to public embarrassment.
Excellent post. It used to be that pornographers were the first to find ways to leverage new technologies and media; they had a distinct market advantage in that they didn’t care who they offended and expected an extremely low return on ads and emails. As disgusting and disreputable as pornographers are, they were completely honest about what they were selling. Trolls, particularly political trolls, have found a larger and more lucrative market. Where a relatively small number of Internet users were willing to pay for sexual content, the market of people looking to have their political views confirmed or to be outraged by some other view is massive. The difference is that trolls are selling us, in the guise of providing information, with no concern for the facts or consequences of their content. Of course they’re like mushrooms and will pop up in another part of the yard. Of course dealing with them will be difficult and require ongoing attention and resources. But “it’s hard” is not an adequate response to the issue of trolls and fake news; it wasn’t adequate in Lenz v. Universal, and it’s not adequate now.
Trolls simply do not pose a threat in moderated forums. The solution is staring the media right in the face.
The media generally uses moderated forums. The larger issue is that platforms get a pass thanks to Section 230 of the Communications Decency Act, which allows them to represent themselves as something other than media companies and therefore pursue business models that encourage trolling. As noted in the Recode Decode exchange in the post, you’re correct — the answer is staring them in the face. The issue is that they have no incentive to change, because for these platforms, trolling is not a problem.
I agree that without moderation it can be difficult to cultivate a meaningful and respectful exchange of ideas. But even with moderation it’s possible to build an echo chamber that is unreceptive to ideas that are outside the norm or maybe not phrased with the proper amount of patience and restraint. For most commenting areas, another approach is to simply require that actual names be displayed and not usernames. To the troll, after all, the real allure of the Internet is the ability to stay hidden.
A very good point. It can even go beyond name, especially in the scholarly space. For our product, we validate that the user is not only named properly, but has the academic, publication, or membership credentials necessary to make a comment. While not foolproof, this also makes it less likely that someone without any potential downside for misbehavior is commenting.
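As a toy sketch of what such a credential gate might look like — not our actual system, and with a made-up registry and the standard example ORCID iD standing in for real credentials:

```python
# Hypothetical credential gate: a commenter must give a real name and match
# at least one credential already on file. The registry contents below are
# invented for illustration (the ORCID iD is the documentation example).
CREDENTIALS_ON_FILE = {"0000-0002-1825-0097", "member-1234"}

def may_comment(real_name: str, orcid: str = None, member_id: str = None) -> bool:
    """True only for a named commenter holding a recognized credential."""
    if not real_name.strip():
        return False                   # no hiding behind an empty handle
    return orcid in CREDENTIALS_ON_FILE or member_id in CREDENTIALS_ON_FILE
```

The gate raises the cost of misbehavior: a rejected credential check means the would-be troll has no standing to comment at all, before moderation even begins.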
What makes trolling work — both for the platforms and the individuals doing it — is that there is no price to pay. When there is something at stake (and using your real name is part of raising the stakes), people tend to behave better.
Not that I’ve tried it, but…apparently it is pretty easy to create or steal an identity. This has been done not only in commenting space, but for purposes of fraudulent peer review…thus, a known risk in our community. There are efforts underway to recognize social media posts and comments as part of an author’s production. I know, I know: ORCID. Though by now there is probably a way to hack that, too. And what about the effort of monitoring all those comments…presumably by the same (overworked) gatekeepers who are currently serving as reviewers and editors for highly vetted content. Is anyone teaching internet “manners” — any more than they seem to be teaching adherence to copyright rules?
Props to anyone who is willing to do the work of defining, building, and curating an online community.
…but there are also many valid reasons why people want to be anonymous online. E.g. if expressing certain views will expose you to bullying (or worse) from your family or community. There’s a big literature on this.
Wow, lots of good thoughts here. It is so difficult to get people to understand that …”the lesson here is that good information requires work of some kind — research, selection, rejection, synthesis, attention, and care.” The foundation of good journalism perhaps? From a B2B perspective, the value of online discourse and the sharing of information lies in the fact that the primary goal in content curation (which is what I do) is to share interesting information with my clients that may be of value to them. Like you said, that requires hard work. Thanks for this article!