
Guns, Incels, and Algorithms: Where We Are on Managing Terrorist and Violent Extremist Content Online


Executive Summary

Ten years ago, U.S. national security agencies grew concerned about a relatively new and powerful weapon used by terrorists: the World Wide Web. What began as an effort to connect users across the world, share information, and serve as a force for human liberation was instead being turned into a tool for the destruction of life. Terrorists were exploiting technology companies’ lax content moderation policies to recruit new members, spread violent extremist ideology, and plan terrorist attacks. In 2012, Twitter’s General Manager declared the firm “the free speech wing of the Free Speech Party,” and large U.S. technology companies were broadly reluctant to tighten their content moderation policies in those early days.

By 2015, a gargantuan effort to eliminate ISIS had commenced, driven largely by the U.S. government and culminating in U.S. Cyber Command’s Operation GLOWING SYMPHONY, led by General Paul Nakasone, which reportedly disrupted the majority of ISIS’ online presence and networks in 2016. Technology companies became much stricter about terrorist content online, but the problem of identifying and removing such content persisted.

Today, the online terrorism landscape looks very different than it did a decade ago. White supremacist and “incel” (involuntary celibate) violent extremist content litters the Web. Terrorist attacks are frequently committed by hate-fueled lone-wolf “internet warriors” inspired by non-Islamic terrorist and violent extremist content and radicalizing material online. Yet technology companies and governments have not kept pace with this dynamic threat.

This is not to say that they haven’t tried. In 2019, a white supremacist “online warrior” attacked two mosques in Christchurch, New Zealand, live-streaming the massacre, which went viral. The attack galvanized technology companies and governments to combat terrorist content beyond Islamic terrorism alone, culminating in an ambitious multilateral initiative: the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online, an unprecedented diplomatic achievement and a step forward in managing the problem.

Technology companies and governments have spent the past decade trying to better address the evolving threat of terrorist and violent extremist content (TVEC) online. However, few studies examine just how effective these efforts have been, where we stand today in managing the problem, and where gaps for improvement remain.

This paper argues that companies’ efforts to deal with TVEC have been hampered from the outset by a tendency to define TVEC extremely narrowly. As a result, only a tiny proportion of content that could reasonably be categorized as TVEC is captured by most definitions. An outsized focus on pre-identified Islamic extremists and terrorist groups means that other types of violent extremists and terrorists (e.g., white supremacists, incels) and actors unaffiliated with any group (e.g., lone wolves) are overlooked. This paper also explores ethical obligations and norms as an alternative to a legally required definition.

On the technical side, this paper finds that even if there were consensus on the legal and ethical questions surrounding TVEC, the technical tools currently available are no panacea. Trade-offs among efficiency, scalability, accuracy, and resilience are persistent. Current technical tools tend to disadvantage minority groups and non-English languages. They are also less robustly implemented by small and non-U.S./European firms, generally either because those firms are left out of inter-firm initiatives or because they lack resources and capability. This paper does not claim to cover every issue relevant to TVEC; however, it highlights several important gaps that policymakers and tech companies could address and identifies avenues for future research.
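To make the efficiency–accuracy–resilience trade-off concrete, the toy sketch below contrasts exact hash matching, which is fast and precise but defeated by trivial edits, with a cruder fuzzy comparison that catches altered re-uploads at the cost of possible false positives. This is a minimal, hypothetical illustration: the hash list, sample text, and similarity threshold are invented and do not reflect any company’s actual system.

```python
import hashlib

# Hypothetical database of hashes of previously identified TVEC items.
# Real systems operate at far larger scale and often use perceptual rather
# than cryptographic hashes.
KNOWN_TVEC_HASHES = {
    hashlib.sha256(b"example extremist manifesto text").hexdigest(),
}

def exact_hash_match(content: bytes) -> bool:
    """Fast and precise, but any single-byte change defeats the match."""
    return hashlib.sha256(content).hexdigest() in KNOWN_TVEC_HASHES

def jaccard_similarity(a: str, b: str) -> float:
    """Crude token-overlap score standing in for fuzzier, ML-style matching."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

original = "example extremist manifesto text"
edited = "example extremist manifesto text !!"  # trivially altered re-upload

print(exact_hash_match(edited.encode()))           # False: exact matching is brittle
print(jaccard_similarity(original, edited) > 0.7)  # True: fuzzier, but can misfire
```

Scaling either approach to billions of posts, and to languages and cultural contexts the tools were not trained on, is where the trade-offs described above become acute.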

It concludes the following:

  1. Policymakers should formulate, and technology companies implement, a uniform and broader definition of TVEC that encompasses specified actions or activities and unaffiliated actors, beyond just designated Islamic terrorist entities.
  2. Not all technical tools are created equal. Multilateral and cross-company initiatives to combat TVEC should be inclusive of smaller firms and non-U.S. and non-European firms.
  3. Development of TVEC identification and management tools that are well-trained across different languages and cultural contexts is needed to ensure equitable standards in managing TVEC.
  4. A standard of success needs to be established for machine learning (ML) tools to guide progress towards an ideal ‘North Star’. As it stands, ML tools are not yet good enough to “algorithm our way out of the problem”; a combination of tools is required, as no single tool yet deals with the full extent of TVEC in all its forms (a layered combination of this kind is sketched after this list).
  5. Legal regimes of corporate social responsibility that emphasize saving lives in the real world by managing TVEC online would free technology companies from shareholder pressure to maximize engagement and profit at all costs.
  6. Policymakers should be wary of the unintended consequences of well-intentioned policies, such as extremists relocating to smaller, less-regulated platforms after being de-platformed from major ones.
  7. Public-private cooperation is critical in managing the threat of TVEC from a national security perspective.
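The layered combination referenced in conclusion 4 can be illustrated with a minimal, hypothetical sketch: no single stage is trusted on its own, and uncertain cases are routed to human reviewers. The stages, thresholds, and stub classifier below are invented for illustration and are not drawn from any deployed system.

```python
def hash_match(content: str) -> bool:
    """Stage 1: match against previously identified TVEC (stubbed as a set)."""
    return content in {"known extremist item"}

def classifier_score(content: str) -> float:
    """Stage 2: stand-in for an ML model estimating P(content is TVEC)."""
    violent_terms = {"attack", "manifesto", "kill"}
    words = content.lower().split()
    return sum(w in violent_terms for w in words) / max(len(words), 1)

def moderate(content: str) -> str:
    if hash_match(content):
        return "remove"        # high-confidence, previously identified item
    score = classifier_score(content)
    if score >= 0.8:
        return "remove"        # model confident enough to act automatically
    if score >= 0.3:
        return "human review"  # uncertain cases go to people, not algorithms
    return "allow"

print(moderate("known extremist item"))       # remove
print(moderate("attack manifesto posted"))    # human review (score ~0.67)
```

Defining what counts as success for such a pipeline, for instance, target precision and recall across languages and content types, is precisely the ‘North Star’ standard that conclusion 4 argues is still missing.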
Recommended citation

Armstrong-Scott, Gabrielle and James Waldo. “Guns, Incels, and Algorithms: Where We Are on Managing Terrorist and Violent Extremist Content Online.” Belfer Center for Science and International Affairs, Harvard Kennedy School, June 12, 2023.