Policy Brief - Belfer Center for Science and International Affairs, Harvard Kennedy School
Technology Factsheet: Deepfakes
Executive Summary
Deepfakes can be defined as synthetic audio or visual media, developed using deep learning (a subfield of machine learning, or ML), that appear authentic and are often created with the intent of deceiving audiences. Synthetically generated media varies widely in technical sophistication and application, ranging from low-quality “cheap fakes” to higher-quality “deepfakes,” and has the ability to challenge and influence perceptions of reality. The development of synthetic audio-visual content is not novel: Hollywood filmmakers have employed computer-generated imagery (CGI) since the 1970s to suspend audiences’ disbelief for the duration of a film. Advances in ML have made sophisticated synthetic media cheaper and easier to produce, particularly thanks to a proliferation of free and open-source software for generating deepfakes. Even technologically unsophisticated actors are now able to create and distribute deepfakes.
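The factsheet itself does not go into technical detail, but the kind of deep learning component behind many deepfake tools can be illustrated with a minimal sketch. The example below is an assumption on our part, not drawn from the brief: a toy GAN-style generator in PyTorch that maps random noise to a small synthetic image. Real deepfake systems pair a generator like this with a discriminator (or use autoencoder-based face-swap architectures) and train on large datasets of faces or voices.

```python
# Illustrative sketch only: a toy generator of the kind used in
# GAN-based synthetic media. It maps random noise to a 64x64 RGB image.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3),  # flattened 64x64 RGB image
            nn.Tanh(),                    # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, 64, 64)


# Untrained example usage: sample noise and produce synthetic image tensors.
generator = ToyGenerator()
fake_images = generator(torch.randn(4, 100))
print(fake_images.shape)  # torch.Size([4, 3, 64, 64])
```

Because code like this is freely available and runs on consumer hardware once trained weights are shared, the barrier to producing convincing synthetic media is low, which is the dynamic the brief highlights.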
Deepfakes have been used to spread disinformation and misinformation about public officials and political issues, to non-consensually alter pornographic content, and to create amateur entertainment on social media applications. While experts disagree over the severity of the challenges that deepfakes pose for society, there is growing concern over how deepfakes contribute to the evolving internet disinformation age. Some experts have even sounded the alarm, declaring deepfakes a “looming crisis in national security, democracy, and privacy.”
The first United States federal legislation on deepfakes was signed into law in December 2019 as part of the National Defense Authorization Act (NDAA), requiring a comprehensive report on the foreign weaponization of deepfakes, among other mandates. Additionally, several proposed bills regarding deepfakes are pending in the House of Representatives and the Senate. Virginia, Texas, California, and New York have also targeted deepfakes with recent legislation. Given the existing and anticipated concerns regarding deepfakes, there is a clear need for U.S. legislators and policymakers to continue to deepen their engagement with the privacy, safety, and security risks that this technology poses.
For Academic Citation:
Davis, Raina. “Technology Factsheet: Deepfakes.” Edited by Chris Wiggins, Joan Donovan, and Amritha Jayanti. Policy Brief, Belfer Center for Science and International Affairs, Harvard Kennedy School, Spring 2020.