Last week, authors from the MIT Media Lab published an impressive study entitled, “The spread of true and false news online” in the journal Science. Sure to become an Altmetric darling because it deals with Twitter, the engine of Altmetric, the study has been generally viewed as a major step toward a better understanding of how social media can infect our society by exploiting human nature. Unfortunately, the authors did not gather the data needed to analyze a potentially major contributory factor, perpetuating a blind spot that underscores the “unsafe at any speed” dangers of social media as it currently manifests in our culture.

[Image: oblivious man on railroad tracks]

Examining approximately 126,000 news stories from 2006 to 2017, the study’s authors analyzed about 4.5 million tweets by about 3 million people.

The authors laudably take down the term “fake news” as “irredeemably politicized . . . the term has lost all connection to the actual veracity of the information presented.” Instead, the authors use the clearer terms “true news” and “false news.”

The authors refer to news stories as “rumors” on occasion, a nice echo of Yuval Noah Harari’s description of human communication superiority being built on “gossip.” The authors point out the “conceptualization of truth or accuracy as central to the functioning of nearly every human endeavor,” which means truth and reliability are vital to gossip uniting people in a trust fabric. By analyzing the emotional language of replies to these stories, they found that “true stories inspired anticipation, sadness, joy, and trust” while false stories “inspired fear, disgust, and surprise.”

With gossip being a major part of what makes humans cooperative animals capable of dominating their environments, there is a lot at stake when the trust network of news, gossip, or rumors is violated systematically.

The framing concept of a “cascade” is key to the study’s approach:

A rumor cascade begins on Twitter when a user makes an assertion about a topic in a tweet . . . Others then propagate the rumor by retweeting it. A rumor’s diffusion process can be characterized as having one or more cascades.

Cascades were quantified along a number of axes (a toy illustration follows this list):

  • Depth — the number of retweet hops from the origin tweet over time, where a hop is a retweet by a new unique user
  • Size — the number of users involved in the cascade over time
  • Breadth — the maximum number of users involved in the cascade at any depth
  • Structural virality — an interpolation between a single, large broadcast and spread through multiple generations of cascade
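
To make these measures concrete, here is a minimal sketch, not drawn from the paper, of how the four quantities could be computed for a single cascade. The tiny six-user cascade and its user names are invented for illustration, and structural virality is calculated as the average shortest-path distance between every pair of users in the cascade tree, the measure introduced by Goel and colleagues that the study builds on.

```python
from collections import defaultdict, deque
from itertools import combinations

# Hypothetical cascade: user "a" posts the original assertion, and every
# other user retweets the user listed as their parent (child -> parent).
parents = {"b": "a", "c": "a", "d": "b", "e": "b", "f": "d"}
users = set(parents) | set(parents.values())

def depth_of(user):
    """Number of retweet hops from the origin tweet to this user."""
    hops = 0
    while user in parents:
        user = parents[user]
        hops += 1
    return hops

depths = {u: depth_of(u) for u in users}
depth = max(depths.values())               # deepest retweet chain
size = len(users)                          # users involved in the cascade
per_level = defaultdict(int)
for d in depths.values():
    per_level[d] += 1
breadth = max(per_level.values())          # most users at any single depth

# Structural virality: the average shortest-path distance between every
# pair of users in the (undirected) cascade tree.
neighbors = defaultdict(set)
for child, parent in parents.items():
    neighbors[child].add(parent)
    neighbors[parent].add(child)

def distance(source, target):
    """Breadth-first search distance between two users in the tree."""
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        node, hops = queue.popleft()
        if node == target:
            return hops
        for nxt in neighbors[node] - seen:
            seen.add(nxt)
            queue.append((nxt, hops + 1))

pairs = list(combinations(users, 2))
structural_virality = sum(distance(u, v) for u, v in pairs) / len(pairs)

print(depth, size, breadth, round(structural_virality, 2))  # 3 6 2 2.07
```

In the study itself, these quantities are tracked as each cascade unfolds over time; the toy numbers above describe only the final shape of one small, invented cascade.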

Using the cascade framework, the authors found that false rumors experienced fewer cascades than true rumors, but that the cascades of false rumors were deeper, broader, bigger, and faster. In essence, this part of the study confirms Jonathan Swift’s words from more than 300 years ago:

Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect.

In this study of Twitter, it took true news about six times as long to reach 1,500 people as it took false news. If nothing else, we now have a clearer indication of the speed differential between flying and limping.

It was interesting to note that the truth generated more cascades, while the cascades of lies, though fewer, were more impactful. The authors attribute this largely to the novelty-seeking behavior of users, and their analysis supporting this conclusion is cleverly constructed and reasonable. However, the counterpoint still seems strange — if truth has more cascades, indicating human activity at its root, what would be giving the cascades of more incendiary information a boost?

Another strange twist in the data suggested there might be more afoot here: users who spread false news had significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were verified significantly less often, and had been on Twitter for significantly less time. Reading the study, you can almost feel a hidden and unnamed influence at work, a presence hovering just over the shoulders of the researchers.

The authors sought to eliminate bots from the study in order to isolate human behavior. To check their findings, the researchers analyzed the data with bots included and excluded, and the results were the same. However, this might not be as reassuring as they think. There is a shared variable the researchers weren’t able to explore — the Twitter algorithm.

To me, the inexplicable highs and lows suggest that Twitter’s algorithm itself must have some hand in the spread of click-bait information, which could also be reasonably described as information that is more likely to provoke “fear, disgust, and surprise.” It has been widely recognized that social media drives users to extreme information because this leads to more clicks and activity, both of which support the underlying advertising-driven business model. We recently learned that Facebook’s algorithm intervened in favor of Donald Trump’s presidential campaign: because his ads were more provocative, the algorithm judged they would deliver more clicks (i.e., more ad revenue), so they sold for less than Hillary Clinton’s.

It seems entirely feasible to me that Twitter — with its scrambled information presentation, weighting of tweets to drive clicks, and advertising-based business model — behaves in largely the same way. This would negate elements of human trust — number of followers, number you’re following, activity level, verified status, and time on the platform — in favor of novelty of the tweets (“fear, disgust, and surprise”), all to drive clicks and ad dollars. It would also give bots a substrate to distort, so that eliminating the cause (bots) would leave the effect (a radicalized algorithm).

I asked the researchers about this via email, and they responded that they did not evaluate the effect of Twitter’s algorithm but do feel this is “an interesting and important question.”

I believe it’s interesting and important not just for the direct interventions the algorithm might have had, but also because bots, while apparently eliminated from the study, may have affected Twitter’s algorithm, influencing everyone’s experience and the data used in this study in unseen ways. The editorialists commenting on the research note that algorithms and bots interact, writing:

Bots are also deployed to manipulate algorithms used to predict potential engagement with content by a wider population. Indeed, a Facebook white paper reports widespread efforts to carry out this sort of manipulation during the 2016 U.S. election.

Lacking direct evidence via this study of the effect of Twitter’s algorithm on the acceleration, spread, and penetration of false news, what circumstantial evidence can we use to at least get a sense of the potential effect of algorithmic interference?

Twitter started with a simple reverse-chronological list of tweets when it launched in 2006, and this continued until 2014, when Twitter began to recommend tweets and accounts you might find interesting. In 2015, Twitter introduced “While You Were Away,” a summary of some tweets that had appeared since you last visited that the algorithm thought might be worth surfacing to you. Then, in February 2016, Twitter started reordering timelines using the “While You Were Away” algorithm more broadly. Twitter saw growth after this change, as well as increases in tweets and retweets, indicating the algorithm is driving a lot of the Twitter experience. Twitter also showed its first-ever profit after these changes.

The exact components of the algorithm remain secret, with Twitter describing it as “based on accounts you interact with most, tweets you engage with, and much more.” That “much more” is what gives pause.

In March 2017, Deepak Rao, who oversees the Twitter timeline, spoke with Slate reporter Will Oremus, who wrote:

Twitter knows more about its users than it ever did before, such as how much they value recency or how they react to seeing multiple tweets in a row from the same person. The company has tried out new features that group tweets about a given topic or hashtag within your feed. It has even experimented with showing you occasional tweets from people you don’t follow, if Twitter’s ranking system shows that you’re likely to want to see them. Twitter can now evaluate the efficacy of such new features by comparing their effects on user behavior to the effects of the ranked timeline and “In case you missed it,” another newish feature. “Our algorithm changes on an almost daily to weekly basis,” Rao said.

Due to the trajectory of user growth, the majority of the activity analyzed in the study occurred during the “algorithm era” of Twitter — twice as many users were active on the platform cumulatively from 2014-2017 as from 2010-2013. On top of this, Twitter has reported that the algorithm’s interventions made users post more tweets, retweet more often, and favorite more tweets, underscoring how deeply the Algorithm Age of Twitter must have shaped this study’s data. All of this suggests to me that any effect seen in the current study would be heavily influenced by algorithm behavior and its interactions with humans.

Despite the researchers’ hard work to eliminate bots from the tweets analyzed, other researchers have found that bots leave traces on algorithms, shaping how those algorithms optimize toward their programmed goals — clicks and advertising. A 2017 report from the Oxford Internet Institute’s Computational Propaganda Research Project found that:

. . . techniques used include automated accounts to like, share and post on the social networks . . . can serve to game algorithms to push content on to curated social feeds. They can drown out real, reasoned debate between humans in favour of a social network populated by argument and soundbites and they can simply make online measures of support, such as the number of likes, look larger – crucial in creating the illusion of popularity.

Because of black-box algorithms and how they’re baked into social media sites and their business models, there may be no way for anyone studying a social network these days to know where an algorithm’s effects begin and end. In the advertising-driven business model of social media, algorithms put an unknown amount of weight on the scale. This may explain how newer users with fewer followers and fewer tweets, starting fewer cascades, came to dominate this study.

If you need a fresh reminder of the dangers of social media algorithms, Zeynep Tufekci’s latest column in the New York Times would be a good source. Analyzing YouTube’s algorithm and how it feeds radicalization efforts, Tufekci writes:

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

Twitter is profitable now. Its algorithm played a big role in that. How big a role does it play in the spread of false news? Despite a large study of how people use Twitter, the real actor — the algorithm driving the spread, velocity, and penetration of false news — may have eluded analysis yet again.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

3 Thoughts on "Blindspot — Was a Key Factor Missed in the Study of Viral Lies?"

Excellent points — well made. Since the details of the algorithm(s) of any advertising search business have to remain ‘black-boxed’ it will likely be impossible for regulators to validate or confirm the veracity of any algorithm-driven publicity/news service. Another factor that justifies long-term confidence in subscription models where user privacy can be more reliably enforced.
