from Belfer Center for Science and International Affairs, Harvard Kennedy School

Disinformation Threat Watch

In this image made on Friday, April 27, 2012, pages of rival Taiwan newspapers Apple Daily, top half, and The China Times, bottom, are seen depicting each other’s owners in a fight for ownership of a major chunk of Taiwan’s media outlets. (AP)

The Disinformation Landscape in East Asia and Implications for US Policy


Executive Summary

 

Purpose of Our Research

We chose to study disinformation in East Asia in order to better understand the global landscape of disinformation and gather lessons learned for U.S. policymakers. While the 2016 presidential election highlighted the impact of disinformation on American politics, disinformation is a global challenge and has a long legacy in Taiwanese and South Korean politics. Most academic research, however, focuses primarily on American or European experiences with disinformation in the past decade. By expanding our regional focus to Asia, we aimed to identify how disinformation has been used as a political tool and how other democratic countries have responded to this threat.

 

Understanding Future Trends in the U.S.

The disinformation threat landscape in Taiwan and South Korea foreshadows trends that may impact U.S. politics. Taiwan’s experience suggests that malicious actors will continue to leverage disinformation in increasingly creative ways. As distribution channels and tactics become more widely available, state and non-state actors enjoy reduced barriers to disseminating false information. Foreign interference in Taiwan’s mid-term and mayoral elections indicates that disinformation may feature in off-cycle years and U.S. state and local elections. The U.S. should anticipate that foreign actors will identify and manipulate cultural divisions with calculated precision, and that multiple countries will deploy disinformation as a foreign influence tool.

The prominence of misinformation and disinformation in South Korea’s domestic politics highlights the danger of creating a political culture that fosters rumors, speculation, and false stories. Unchecked, disinformation may become a regular feature of the campaign cycle with domestic politicians and interest groups engaging in a “race to the bottom” in order to compete. Without a bipartisan effort to educate the American public and condemn disinformation, all parties may unintentionally create conditions in which it flourishes.

Finally, cross-sector efforts to combat disinformation in Asia demonstrate how a solution will require coordination across different industries, government entities, and groups in civil society. Individual parties face unique weaknesses and cannot curtail the problem alone. For example, the technology companies that host communication platforms have little incentive to change their terms of service to eliminate disinformation. Government cannot monitor, regulate, and prosecute all potential cases of disinformation without infringing on civil liberties. Civil society groups may develop innovative solutions, but often lack the platform or access to scale. An effective response will engage the strengths of many actors and recognize the limitations that each encounters.

 

Recommendations for U.S. Policymakers

Taiwan and South Korea offer useful lessons for Western democracies facing similar threats. This paper concludes with a series of proposals for U.S. policymakers to combat disinformation.

 

Awareness

  • Alert the American people to a broader influence campaign, citing clear, accessible evidence to explain the threat and its implications for American values. 
  • Adapt government communication strategies to media trends, identifying non-traditional distribution channels to broaden awareness and message reach. 
  • Declassify intelligence to increase public trust in the government’s assessment. 

 

Research Support

  • Increase government transparency and facilitate public access to open-source intelligence that can be used for quantitative and qualitative disinformation research.
  • Increase funding for artificial intelligence research, including programs that harness AI to identify and block sources of disinformation.
  • Encourage technology companies to collaborate with civil society groups that have developed technical solutions to reduce disinformation on social media platforms, including by providing these groups with data, API access, or reduced operating fees. 
  • Enhance international coordination and support a global initiative to identify emerging threats and potential solutions.

 

Policy Action

  • Mandate transparency around online and print advertising.
  • Establish a strategy to combat foreign influence at large and clarify ownership of the problem at the federal and state levels. 
  • Develop an interagency response plan to address disinformation as early and forcefully as possible. 
  • Build more formal agreements with social media companies to enhance collaboration, including special reporting channels for election-related disinformation, designated points of contact, and threat sharing. 
  • Empower journalists and citizens to independently validate information by encouraging greater transparency at all levels of government and avoiding rhetoric that broadly delegitimizes traditional sources of information. 
  • Support federal grants and technology training programs to build civil society’s capacity to combat disinformation. 
  • Lead a bipartisan effort to call out disinformation that threatens the integrity of American elections.

 

Recommended citation

Crowley, Bo Julie, Casey Corcoran, and Raina Davis. “Disinformation Threat Watch.” Belfer Center for Science and International Affairs, Harvard Kennedy School, May 2019.