The Digital Platform Governance: Proposals Index (DPGP Index) is a database of primarily U.S.-centric proposals aimed at mitigating the harms and risks created by social media platforms. The goal of the DPGP Index is to provide a tool for researchers across the public, academic, and private sectors to query and search a variety of proposed solutions from experts all over the world, all in one location.
Navigating the Index
Click through the tabs to explore the Index, which currently includes a total of 155 proposals. The Index sorts proposals into three purpose-oriented categories; full descriptions of these categories are provided below.
Category Overview: Mitigating Misinformation and Disinformation at Scale
The spread of mis- and disinformation is not a new problem. However, the internet exacerbates the issue, accelerating the creation of false content and expanding its reach and impact.
Researchers have generally pointed to four technological drivers that fuel the spread and development of mis- and disinformation.
- First, information now travels at an unprecedented speed and scale. Platforms provide users with the ability to generate and share content instantly. This process can sometimes lack up-front friction, allowing information to go viral without first being validated.
- Second, the increasing volume, specificity, and granularity of consumer data enables microtargeting. As consumers use digital platforms and the internet, greater amounts of their data are being collected by an increasingly concentrated set of companies. Access to this data enables bad actors to target specific segments of the population with individualized mis- and disinformation campaigns.
- Third, algorithmic curation of news creates echo-chambers and boosts false content. Platform algorithms personalize news feeds for every user, based on data that helps predict a user’s opinions and world views. This leads to the creation of echo-chambers, where users are increasingly walled off from news that runs counter to their political perspectives, while simultaneously being shown content that reaffirms their beliefs.
- Fourth, deep-fakes, frictionless account creation, and automation may increase the credibility of false content and amplify its reach. Technologies used to create deep-fakes are becoming available to the everyday user, making it easier to spread high-quality false content. These deep-fakes can now be created with only a few hundred pictures, making most people with public social media accounts susceptible targets. Frictionless account creation and open developer interfaces make it possible to create inauthentic and automated accounts, which can be used to amplify false information and manipulate public opinion.
Additionally, researchers point to three human and societal drivers that contribute to the spread of mis- and disinformation, and are fed by the design of digital platforms.
- First, the national information sphere is becoming increasingly fragmented and partisan. In 1995, three major network television stations (ABC, NBC, and CBS) served as gatekeepers to information, with 62% of Americans receiving their news from network programs every day. These networks distributed largely nonpartisan, general news content designed for broad appeal – providing Americans with a shared understanding of daily events. By 2019, that figure had fallen to 27%, and 52% of Americans said they preferred to receive their news from digital platforms.
- Second, a “demand” for self-validating content drives consumption of mis- and disinformation. Psychological vulnerabilities, a lack of digital literacy, and the decline of local journalism and public trust in institutions have led users to demand content that supports their personal biases. This demand opens the door for sensationalistic content and mis- and disinformation.
- Third, a lack of liability for the creation and amplification of mis- and disinformation prevents the problem from being solved. The creators of false content are often not held accountable for the impact their content has on society, despite profiting from the practice. Further, Section 230 provisions shield digital platforms from accountability for the amplification of mis- and disinformation.
Given the identified drivers of mis- and disinformation, recommended solutions generally fall into one of seven categories. These are not necessarily proposals that the Democracy and Internet Governance Initiative is championing, but rather illustrate avenues that have been explored by some experts.
- Reduce the speed at which false information is spread.
- Limit the availability, concentration, and abuse of user data.
- Alter the advertisement-based curation of content.
- Limit the effect that deep-fakes, bots, and anonymity have on the spread and impact of false content.
- Increase transparency and awareness around the fragmented media environment.
- Reduce “demand” for self-validating content by addressing psychological vulnerabilities, and increasing digital literacy and trust in institutions.
- Increase liability for the creation and amplification of mis- and disinformation.
You can see the full list of proposals for this category by navigating to the Mitigating Misinformation and Disinformation tab.
Category Overview: Countering Extremism and Radicalization Online
Society has been dealing with radicalization and incitement to violence for a long time. The internet has made the problem worse, providing speed and scale to the radicalization process.
Researchers have generally pointed to two platform-driven factors that can contribute to online radicalization:
- First, platform design. The process that leads an individual from mainstream views to extremist, radical views is gradual, and several platform features can accelerate it:
- Content Algorithms dictate what users see. Through these algorithms, at-risk users can receive increasingly radical information that feeds their growing extremist views; self-radicalization becomes easier.
- “Group” Features allow like-minded users to organize communities online. At-risk users are recommended to join groups of other users who share their extremist views. These groups can validate and provide social standing for extreme, violent thoughts.
- Encrypted Communication Tools provide private messaging to users. Increasingly radical users can use these tools to coordinate and plan violent actions without being surveilled by law enforcement.
- Second, platform business incentives. Platforms are primarily funded through advertising revenue, and the rates that advertisers pay are based on monthly average user engagement. Users may be more likely to stay engaged when online discourse is contentious, so platforms can be incentivized to adjust their algorithms to highlight eye-catching content. This can lead to the platform design issues highlighted above. Additionally, platform usage is artificially boosted by bots – disincentivizing platforms from making it harder to create new accounts.
Two additional, non-platform driven, factors further exacerbate the issue of radicalization and incitement to violence online:
- First, lack of digital literacy. Users' inability to easily tell fact from fiction makes them more susceptible to radical content. Additionally, bots and fake accounts contribute to this issue by making extreme views appear more popular or mainstream.
- Second, hurdles to law enforcement interoperability. Logistical issues prevent local, federal, and international authorities, as well as platform researchers, from coordinating. When authorities cannot communicate and coordinate across jurisdictions, content that incites violence can go unnoticed, disrupting law enforcement's ability to surveil bad actors and reducing proactive intervention.
Given the identified drivers of radicalization and incitement to violence online, recommended solutions generally fall into one of four categories. These are not necessarily proposals that the Democracy and Internet Governance Initiative is championing, but rather illustrate avenues that have been explored by some experts.
- Increase platform accountability.
- Increase digital literacy.
- Increase information visibility.
- Increase cooperation to counter extremism.
You can see the full list of proposals for this category by navigating to the Countering Extremism and Radicalization tab.
Category Overview: Addressing Networked Harassment, Diminishing Press Freedoms, and Chilled Speech
Researchers have generally pointed to two variables that contribute to diminishing press freedom and “chilled speech” in the United States:
- First, new business and market dynamics. Advertising revenue served as a vital source of profit for newspapers and news outlets. With the internet, digital platforms absorb that revenue and news outlets are forced to pivot to new forms of profitability. This impacts freedom of the press in two ways:
- Smaller news outlets are dying out. Local and small-shop journalism is going extinct for lack of revenue.
- News outlets are beholden to the information preferences of digital platforms. This results in an increase in sensational news, clickbait titles and content, and quick-consumption formats – anything preferred by digital platform recommendation algorithms (e.g., attention-grabbing content). Claims of political bias from the right also fall under this category.
- Second, online harassment of journalists and news outlets. Although harassment of journalists is not a new phenomenon, it is strongly exacerbated by online dynamics. With the rise of social media, journalists have found themselves in the spotlight, while the platform users who target them benefit from a shield of anonymity and limited traceability. As a result, it has become easier to target journalists with threats and harassment.
Given the identified drivers of networked harassment, diminishing press freedoms, and chilled speech online, recommended solutions generally fall into one of four categories. These are not necessarily proposals that the Democracy and Internet Governance Initiative is championing, but rather illustrate avenues that have been explored by some experts.
- Increase the market power of, and resources available to, media outlets.
- Increase newsroom responsibility.
- Increase platform duties and liability.
- Increase cooperation to counter harassment.
You can see the full list of proposals for this category by navigating to the Addressing Harassment, Diminishing Press Freedoms, and Chilled Speech tab.
The Index is currently a dynamic, non-exhaustive list of proposals. We encourage suggested additions and/or edits to the existing entries via the form provided in the Submit a Proposal, Edit, or Comment tab.
For ease of use, the complete DPGP Index can also be accessed here as a Google Spreadsheet.
The DPGP Index is a product of the Democracy and Internet Governance Initiative (DIGI), a joint initiative between Harvard University's Belfer Center and Shorenstein Center. The initiative is focused on collaborating with a range of senior stakeholders across government, business, and civil society to address the growing public concerns about digital platforms.
Mitigating Misinformation and Disinformation at Scale
The DPGP Index currently includes 79 proposed solutions in the Mitigating Misinformation and Disinformation at Scale category, which are listed below under seven primary subcategories:
1. Reduce the speed at which false information is spread
The lack of friction for the sharing and posting of content allows false information to spread rapidly, without first being verified for individual users. To counteract this problem, researchers have proposed solutions that generally focus on slowing the spread of information and improving users' ability to recognize false content.
Legislation
Social Media NUDGE Act of 2022 (S.3608)
- Date Proposed or Announced: 02/09/2022
- Summary: This bill directs the National Science Foundation to research content-neutral interventions that could keep users from sharing and believing disinformation. Such interventions could include, for example, asking users to read articles before sharing them.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Countering Online Harms Act (H.R.6937)
- Date Proposed or Announced: 05/19/2020
- Summary: This bill requires the FTC to conduct a study on how artificial intelligence (AI) can be used to identify and slow the spread of disinformation, manipulated media, and other types of harmful or criminal content. Based on the study, the FTC is then required to make recommendations on AI policies and legislation that could be adopted to address these harms.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Protecting Against Public Safety Disinformation Act (H.R.7282)
- Date Proposed or Announced: 06/18/2020
- Summary: This bill directs the DHS Office of Intelligence and Analysis to assess the impact of disinformation by foreign actors on the United States' ability to respond to terrorism and other security threats. The bill also requires the DHS Under Secretary for Science and Technology to research disinformation operations and develop recommendations to counter and slow the spread of such disinformation.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Modernize FARA to Cover Dissemination of Information on Social Media, as proposed by Ellen Goodman, and expanded upon by Erin Simpson and Adam Conner (link)
- Date Proposed or Announced: 02/26/2020
- Summary: Experts have called on Congress to update the Foreign Agents Registration Act. The update would require news organizations run by foreign entities to disclose their ownership and role in advancing state interests on digital platforms -- a requirement they currently face when advancing interests in other aspects of American culture. If implemented, this transparency could increase user awareness, making users less susceptible to disinformation and thus decreasing the spread of false content. Such an update could potentially be implemented through legislation or through new Justice Department guidance that clarifies the meaning of key terms and the Department’s interpretation of FARA. One proposal for the modernization of FARA was written by Joshua Fattal, now an attorney-advisor in the Office of the General Counsel at DHS. A link to this proposal is provided.
- Status: Third Party Legislative Proposal
Self-Governance
Label, Fact-check, and Delete False Content, as implemented by many Platforms, including Facebook (link)
- Date Proposed or Announced: 12/15/2016
- Summary: Platforms have experimented with placing labels and warnings on potentially false content. These steps have shown early promise at reducing the spread of mis- and disinformation. As an example, Facebook is one of many Platforms that have enacted this style of solution. A link to one of their company statements announcing this proposal is provided.
- Status: Partial Implementation
Labeling the Accounts of State-controlled News Organizations, as implemented by many Platforms, including Facebook (link)
- Date Proposed or Announced: 06/05/2020
- Summary: Experts have encouraged Platforms to label accounts associated with state-controlled news organizations. If implemented at scale, experts argue that users will better understand where content is originating, and might be less susceptible to disinformation from foreign outlets, thus slowing its spread. For example, such a policy could slow the spread of disinformation from Russian media about Moscow's invasion of Ukraine. Facebook is one of many Platforms that have enacted this style of solution. A link to one of their company statements announcing this proposal is provided.
- Status: Partial Implementation
Prompting Users to Consider Accuracy, as proposed by MIT and Google researchers (link)
- Date Proposed or Announced: 05/18/2021
- Summary: In a recent research paper, MIT and Google researchers suggest that shifting users’ attention to the accuracy of content could increase the quality of news that they subsequently share online. This could be done through simple prompts, such as a popup that asks users to consider the accuracy of the information presented in an article once it has been opened.
- Status: Partial Implementation
Read Before Share, as implemented by many Platforms, including Twitter (link)
- Date Proposed or Announced: 09/24/2020
- Summary: Many Platforms now prompt users to read articles before sharing them with their followers. Twitter tested such a proposal in the form of a simple popup prompt. According to the tests, the popup successfully led to users opening articles 40 percent more often. Twitter even saw the overall proportion of people opening articles before sharing them increase by 33 percent. Twitter has since implemented this prompt more broadly, as have other Platforms, including Facebook and Instagram. A link to an official Twitter statement regarding its initial test results is provided. An illustrative sketch of this mechanism follows this entry.
- Status: Partial Implementation
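To make the mechanism concrete, below is a minimal Python sketch of a read-before-share prompt. The event model, function names, and the per-user store of opened URLs are illustrative assumptions, not Twitter's or any Platform's actual implementation.

```python
# Hypothetical sketch of a "read before share" prompt. The data model
# (a per-user set of opened article URLs) and function names are
# illustrative assumptions, not any platform's real implementation.

opened_articles: dict[str, set[str]] = {}  # user_id -> URLs the user has opened

def record_open(user_id: str, url: str) -> None:
    """Called when a user opens an article link."""
    opened_articles.setdefault(user_id, set()).add(url)

def on_share_attempt(user_id: str, url: str) -> str:
    """Decide whether to interpose a prompt before completing a share."""
    if url in opened_articles.get(user_id, set()):
        return "share"  # user has at least opened the article
    # Interpose friction: ask the user to read first, but let them proceed.
    return "prompt: Want to read the article before sharing?"

# Example: a share of an opened article goes through without a prompt.
record_open("alice", "https://example.com/story")
print(on_share_attempt("alice", "https://example.com/story"))  # share
print(on_share_attempt("bob", "https://example.com/story"))    # prompt: ...
```

Note the design choice reported in Twitter's test: the prompt adds friction but does not block the share, which is what keeps the intervention content-neutral.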
Sharing and Forwarding Limits, as proposed by Frances Haugen (link)
- Date Proposed or Announced: 10/18/2021
- Summary: Experts have suggested that companies like Facebook and Twitter modify their “share” or “retweet” functions so that content can only be shared a limited number of times before being slowed down through intentional friction. As an example, after a post has been shared twice, a third user could be required to copy and paste the link into a new post before being allowed to re-share it. This proposal was tested within Facebook in 2019 and was highlighted by whistleblower Frances Haugen in 2021, but was not implemented more broadly due to internal concerns about reducing user engagement. A campaign called #OneClickSafer has since been launched by the Center for Humane Technology, which advocates for sharing limits as a way to reduce the harms of frictionless sharing. A link to an interview in which Frances Haugen publicly discussed this proposal is provided.
- Status: Partial Implementation
Platform Promotion of Reputable Sources via Crowdsourced Judgments of News Source Quality, as proposed by Gordon Pennycook and David G. Rand (link)
- Date Proposed or Announced: 01/28/2019
- Summary: During instances where disinformation is spreading widely (such as in emergencies or elections), Platforms can promote reputable information sources, including local police accounts and emergency responder accounts, and restrict the visibility of trending posts that promote disinformation. They could also lift up content from higher quality sources. Gordon Pennycook and David G. Rand propose that Platforms can determine which sources are "high quality" by crowdsourcing user judgments. An illustrative sketch of this approach follows this entry.
- Status: Partial Implementation
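Below is a minimal Python sketch of how crowdsourced quality judgments might feed into ranking, in the spirit of the Pennycook and Rand proposal. The 0-1 rating scale, the minimum-rater cutoff, and the weighting formula are assumptions for illustration only.

```python
# Illustrative sketch of crowdsourced source-quality weighting, in the
# spirit of Pennycook and Rand's proposal. The 0-1 rating scale, the
# minimum-rater cutoff, and the weighting formula are assumptions.
from statistics import mean

def source_quality(ratings: list[float], min_raters: int = 50) -> float | None:
    """Average lay trust ratings (0-1) for a source; None if too few raters."""
    if len(ratings) < min_raters:
        return None  # not enough judgments to use responsibly
    return mean(ratings)

def rank_weight(base_score: float, quality: float | None) -> float:
    """Scale an item's ranking score by its source's crowdsourced quality."""
    if quality is None:
        return base_score  # unknown sources are neither boosted nor demoted
    return base_score * (0.5 + quality)  # >0.5 quality boosts, <0.5 demotes

ratings = [0.9, 0.8, 1.0, 0.7] * 15                # 60 simulated lay ratings
print(rank_weight(10.0, source_quality(ratings)))  # ~13.5: a boost
```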
Platform Promotion of Reputable Sources via Proxy Measures for Reliability, as proposed by Bhadani et al. (link)
- Date Proposed or Announced: 02/03/2022
- Summary: During instances where disinformation is spreading widely (such as in emergencies or elections), Platforms can promote reputable information sources, including local police accounts and emergency responder accounts, and restrict the visibility of trending posts that promote disinformation. They could also lift up content from higher quality sources. Bhadani et al. propose that Platforms could use proxy measures to determine a website's news reliability, such as the political diversity of the site's audience.
- Status: Untested Proposal
Platform Promotion of Reputable Sources via Third-Party Organizations, such as NewsGuard (link)
- Date Proposed or Announced: 03/05/2021
- Summary: During instances where disinformation is spreading widely (such as in emergencies or elections), Platforms can promote reputable information sources, including local police accounts and emergency responder accounts, and restrict the visibility of trending posts that promote disinformation. They could also lift up content from higher quality sources. Platforms can determine the credibility of certain sources by cooperating with third party organizations, which have built extensive databases of website credibility ratings. One example of such a third party organization is NewsGuard, which launched in March of 2018. A link to their first press release, announcing their company, is provided.
- Status: Partial Implementation
Platform Promotion of Reputable Sources via Internal Scoring or Decision Mechanisms, as implemented by Facebook through their news ecosystem quality score (link)
- Date Proposed or Announced: 01/07/2021
- Summary: During instances where disinformation is spreading widely (such as in emergencies or elections), Platforms can promote reputable information sources, including local police accounts and emergency responder accounts, and restrict the visibility of trending posts that promote disinformation. They could also lift up content from higher quality sources. In the past, Platforms have scored sources internally and used those scores in their curation decisions during times of extreme duress. An example of this is Facebook's use of its "news ecosystem quality score" in news feed decisions following the 2020 U.S. Presidential Election. Facebook's use of this scoring system was reported by the New York Times, and a link to that article (first published November 24, 2020, and updated on January 7, 2021) is provided.
- Status: Partial Implementation
Platform Nudges to Consider Content From Across the Political Spectrum, as outlined by Thornhill, et al, using BalancedView (link)
- Date Proposed or Announced: 12/15/2016
- Summary: Platforms could nudge users to share or believe less disinformation. One way Platforms can do this is by introducing users who engage with a news story to other articles from across the political spectrum. This could nudge users to consider a conflicting perspective, and to engage more deeply with the debate. Thornhill, et al. show that a tool called BalancedView, which presents users with a selection of articles from across the political spectrum, may encourage users to engage more fully with news stories. This could reduce the likelihood that users are deceived by, and further spread, disinformation. A link to the research by Thornhill, et al. is provided.
- Status: Partial Implementation
Fake News Detection Algorithms, as recommended by Darrell West (link)
- Date Proposed or Announced: 12/18/2017
- Summary: Platforms can develop algorithms that detect and flag misinformation. The spread of flagged content can then be slowed. One scholar to make the case for using fake news detection algorithms was Darrell West, Vice President and Director of Governance Studies at the Brookings Institution, in a 2017 report on combating disinformation. The report noted that in two separate studies, researchers determined that using meta-data in combination with text analysis allowed them to identify posts, stories, and sites engaging in misinformation. One study found an accuracy rate of over 99% when the model was tested on data consisting of 15,500 Facebook posts and over 909,000 users. A link to the report by Darrell West is provided. An illustrative sketch of this approach follows this entry.
- Status: Partial Implementation
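Below is a toy Python sketch of a detection model that combines text features with post metadata, echoing the meta-data-plus-text-analysis approach the report describes. The features, training data, and labels are invented for illustration and do not reproduce the cited studies' models.

```python
# Toy sketch of a misinformation detector combining text features with
# post metadata. Features, data, and labels are invented for illustration.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["SHOCKING cure doctors hide!!!", "City council passes budget",
         "You won't BELIEVE this secret", "Study finds modest effect"]
# Hypothetical metadata per post: [share velocity, account age in years]
metadata = np.array([[0.9, 2.0], [0.1, 9.0], [0.8, 1.0], [0.2, 7.0]])
labels = [1, 0, 1, 0]  # 1 = flagged as likely misinformation

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(posts), csr_matrix(metadata)])  # text + metadata
clf = LogisticRegression().fit(X, labels)

new = hstack([vec.transform(["SHOCKING secret they hide"]),
              csr_matrix(np.array([[0.85, 1.0]]))])
print(clf.predict_proba(new)[0, 1])  # estimated probability of misinformation
```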
Circuit Breakers, as proposed by Ellen Goodman, and expanded upon by Erin Simpson and Adam Conner (link)
- Date Proposed or Announced: 02/26/2020
- Summary: Driven by the example of the New York Stock Exchange, which limits the trading of a stock after it drops by more than 7%, some researchers have proposed implementing slight limits to the spread of viral content until it can be fact checked. These "circuit breakers" would introduce friction for users looking to interact with content that is going viral. Some proposed means for introducing this friction include placing pop-up warnings on the content in question and under-weighting its algorithmic spread. A link to a paper by Erin Simpson and Adam Conner, which expands on the idea of "circuit breakers" originally outlined by Ellen Goodman, is provided. An illustrative sketch of this mechanism follows this entry.
- Status: Partial Implementation
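Below is a minimal Python sketch of a virality circuit breaker. The share threshold, time window, and down-weighting factor are illustrative assumptions, not values from the cited proposals.

```python
# Minimal sketch of a virality "circuit breaker": when a post's share
# velocity crosses a threshold, it is down-weighted until fact-checked.
# The threshold, window, and down-weighting factor are assumptions.
import time
from collections import deque

SHARE_THRESHOLD = 1000  # shares within the window that trip the breaker
WINDOW_SECONDS = 600    # 10-minute sliding window

class CircuitBreaker:
    def __init__(self) -> None:
        self.share_times: deque[float] = deque()
        self.tripped = False

    def record_share(self, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.share_times.append(now)
        while self.share_times and self.share_times[0] < now - WINDOW_SECONDS:
            self.share_times.popleft()  # drop shares outside the window
        if len(self.share_times) >= SHARE_THRESHOLD:
            self.tripped = True  # stays tripped until a fact-check clears it

    def ranking_multiplier(self) -> float:
        return 0.2 if self.tripped else 1.0  # under-weight unchecked viral posts

    def clear_after_fact_check(self) -> None:
        self.tripped = False

breaker = CircuitBreaker()
for _ in range(1000):
    breaker.record_share(now=500.0)      # 1,000 shares inside one window
print(breaker.ranking_multiplier())      # 0.2 -- the breaker has tripped
```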
Forwarding Limits, as implemented by WhatsApp (link)
- Date Proposed or Announced: 07/19/2018
- Summary: WhatsApp has placed limits on the number of times a message can be forwarded to large groups. Since 2020, WhatsApp messages with five or more previous forwards can be resent to just one new conversation at a time. WhatsApp claims this change led to a 70% global reduction in the number of frequently forwarded messages. A link to a WhatsApp communication outlining its policy on forwarding limits is provided. An illustrative sketch of this rule follows this entry.
- Status: Partial Implementation
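Below is a minimal Python sketch of WhatsApp-style forwarding limits. The message representation and the exact enforcement rule are simplified assumptions based on the policy described above, not WhatsApp's actual code.

```python
# Sketch of WhatsApp-style forwarding limits: a message that has already
# been forwarded five or more times can only be sent on to one chat at a
# time. The message representation is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many times this message has been forwarded

def forward(msg: Message, target_chats: list[str]) -> list[str]:
    """Return the chats the forward is actually delivered to."""
    if msg.forward_count >= 5 and len(target_chats) > 1:
        target_chats = target_chats[:1]  # "frequently forwarded": one chat only
    msg.forward_count += 1
    return target_chats

msg = Message("breaking news!", forward_count=5)
print(forward(msg, ["family", "work", "friends"]))  # ['family']
```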
Self-Governing Groups, as proposed by Jaron Lanier (link)
- Date Proposed or Announced: 05/26/2022
- Summary: Jaron Lanier has proposed that social media should be restructured. If implemented, his proposed restructuring would have users join small "groups" (no larger than 100-200 members), which would be self-governing in terms of content moderation. The smaller size of the groups would keep the discussion intimate and, according to Lanier, help reduce the spreading of false content to the wider public. A link to a paper by Lanier, where he outlines this proposal, is provided.
- Status: Untested Proposal
Content Moderation AI, as described by Tarleton Gillespie (Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media by Tarleton Gillespie)
- Date Proposed or Announced: 08/24/2021
- Summary: Given the scale of content shared on social media, most platforms have implemented algorithms to identify and flag potential mis- and disinformation. Platforms can develop such algorithms in-house or outsource the work to companies in this space. Some offerings provided by companies in the space include products that identify problematic images, audio, language, or more. Others offer non-AI solutions, providing human-based content moderation. Tarleton Gillespie describes various methods that Platforms use to moderate content in his book "Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media."
- Status: Full Implementation
Crowdsourced Fact-Checking, as studied by Allen, et al. (link)
- Date Proposed or Announced: 09/01/2021
- Summary: Professional fact-checking efforts are difficult to scale, and some Platforms have tested crowdsourced fact-checking options, such as Twitter’s Birdwatch program. Some research has indicated positive results from this approach. One study by Jennifer Allen, Antonio Arechar, Gordon Pennycook, and David Rand found that the judgments of small crowds of politically balanced laypeople correlated, on average, with the judgments of professional fact-checkers, showing promise behind the idea of crowdsourcing. A link to this study is provided. An illustrative sketch of this idea follows this entry.
- Status: Partial Implementation
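Below is a minimal Python sketch of a politically balanced crowd judgment, in the spirit of the Allen et al. study. The rating scale and the equal-weighting rule are illustrative assumptions, not the study's methodology.

```python
# Sketch of politically balanced crowd judgment: average accuracy
# ratings so that each partisan group contributes equal weight.
# The 1-7 rating scale and balancing rule are assumptions.
from statistics import mean

def balanced_crowd_score(ratings: list[tuple[str, float]]) -> float:
    """ratings: (party, accuracy rating 1-7). Equal-weight each party's mean."""
    by_party: dict[str, list[float]] = {}
    for party, score in ratings:
        by_party.setdefault(party, []).append(score)
    # Equal-weight the per-party means so neither side dominates the crowd.
    return mean(mean(scores) for scores in by_party.values())

ratings = [("D", 6.0), ("D", 5.5), ("D", 6.5), ("R", 2.0), ("R", 3.0)]
print(balanced_crowd_score(ratings))  # means 6.0 and 2.5, averaged -> 4.25
```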
2. Limit the availability, concentration, and abuse of consumer data
The increasing amount and concentration of user data provides bad actors with the ability to micro-target mis- and disinformation campaigns toward individuals who are most vulnerable to the campaign messages. Solutions have generally focused on reducing the collection, concentration, and abuse of user data.
Legislation
Protecting Democracy From Disinformation Act (H.R.7012)
- Date Proposed or Announced: 05/26/2020
- Summary: This bill amends the Federal Election Campaign Act of 1971. Specifically, the bill prohibits third parties from using behavioral data to target political advertisements. It also requires Platforms to maintain records of political advertisements. This is, in part, an intervention designed to reduce the ability of bad actors to grow their base through targeted messaging.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Filter Bubble Transparency Act (S.2024)
- Date Proposed or Announced: 06/10/2021
- Summary: This bill requires Platforms to notify users if they employ an algorithm to curate content based on individual data. It also requires Platforms to provide users with an “input-transparent” curation option, or a curation option that only uses information that users provide directly to the platform (e.g., through likes, follows, searches, etc.). This could limit bad actors from being able to abuse a Platform to amplify disinformation and harmful content.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Consumer Online Privacy Rights Act (S.3195)
- Date Proposed or Announced: 11/04/2021
- Summary: This bill would: (1) prohibit certain deceptive, harmful, and intrusive data practices; (2) provide users the right to access, delete, and correct their collected data; (3) give users the right to data portability; and (4) grant users the right to prevent their collected data from being shared with third parties. If enacted, the bill would also require the FTC to create an internal bureau to enforce these new laws.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Online Privacy Act of 2021 (H.R.6027)
- Date Proposed or Announced: 11/18/2021
- Summary: This bill would establish various rights for users, including: (1) the ability to choose how long their collected data is stored, and (2) the right to be informed of data collection by third parties. A Digital Privacy Agency would also be created to enforce these rights, and states and individuals would be given the ability to bring civil lawsuits to hold Platforms accountable. In a 2021 update to the bill, several provisions and additional privacy protections were added, including creating an Office of Civil Rights in the proposed Digital Privacy Agency, and authorizing state privacy regulators to enforce the legislation alongside state attorneys general.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Data and Algorithm Transparency Agreement (DATA) Act (S.1477)
- Date Proposed or Announced: 04/29/2021
- Summary: This bill prohibits Platforms from collecting, sharing, or selling sensitive data about users without their express consent. The bill also establishes a private right of action that allows individuals to sue Platforms for violating this rule. Platforms rely on information they collect on users to target advertisements and match individuals with content they are likely to engage with. Restricting their ability to collect that data limits the effectiveness of ranking and recommendation algorithms, and could make it harder for targeted disinformation narratives to find their audiences.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
SAFE DATA Act (S.2499)
- Date Proposed or Announced: 07/28/2021
- Summary: The bill would provide Americans with more choice and control over their data, and would direct businesses to be more transparent and accountable in their data practices. The bill would also enhance the Federal Trade Commission’s (FTC) authority and provide additional resources to enforce the Act. The bill contains substantive language addressing consumer control, business privacy assessments, and the role of the FTC.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
American Data Privacy and Protection Act (H.R.8152)
- Date Proposed or Announced: 06/21/2022
- Summary: This bill would: (1) establish a strong national framework to protect consumer data privacy and security; (2) grant broad protections for Americans against the discriminatory use of their data; (3) require Platforms to minimize their collection of user data to what is reasonably necessary and proportionate for specific products and services; (4) require Platforms to comply with loyalty duties with respect to specific practices, while ensuring consumers don’t have to pay for privacy; (5) require Platforms to allow consumers to turn off targeted advertisements; (6) provide enhanced data protections for children and minors, including what they might agree to with or without parental approval; (7) establish regulatory parity across the internet ecosystem; and (8) promote innovation and preserve the opportunity for start-ups and small businesses to grow and compete.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Data Broker List Act of 2021 (S.2290)
- Date Proposed or Announced: 06/24/2021
- Summary: This bill establishes new requirements for data brokers, which make a business of acquiring personal data and selling it to other companies. Data brokers are prohibited from (1) acquiring such information by fraud, (2) using such information for a specified prohibited purpose such as fraud or identity theft, or (3) selling such information to a third party that the broker should reasonably know intends to use the information for a specified prohibited purpose. All data brokers must also register with the Federal Trade Commission (FTC) on an annual basis.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Self-Governance
Data Collection Restrictions by Service Providers, as implemented by many providers, including Apple (link)
- Date Proposed or Announced: 01/27/2021
- Summary: Companies that facilitate access to social media Platforms, such as Apple’s iOS, Google’s Android, and other web browser companies, can prevent the collection of their users’ data. Mozilla’s Firefox has limited data collection on their browser, and Apple has required users to opt-in for data-tracking and data sharing with brokers. These moves have hampered Platforms’ abilities to collect data from their users. Other search engines like DuckDuckGo facilitate searches while blocking trackers, ensuring that user data is more private and less accessible to companies employing trackers and cookies. A link to an Apple statement announcing their policy, implemented through its iOS 14.5 update, is provided.
- Status: Full Implementation
Data Minimization, as incorporated in EU data protection standards (link)
- Date Proposed or Announced: 01/25/2012
- Summary: Industry best practices could be developed around how to limit the collection of user data to the scope of specific products and services. Data minimization is a core principle of data protection and consumer privacy, and has been enshrined in the laws of many jurisdictions, perhaps most notably in the European Union and more recently in the California Consumer Privacy Act. It is also a key component of privacy bills currently under discussion in Congress.
- Status: Partial Implementation
Data Transfer Project, as implemented by Facebook, Google, Microsoft, and Twitter (link)
- Date Proposed or Announced: 07/20/2018
- Summary: This industry-led system would allow users to move their data between services, limiting the concentration of data held by any single provider. This process is often referred to as "data interoperability." A framework for the Project was laid out in a 2018 whitepaper, and its members include Facebook, Google, Microsoft, and Twitter. A paper providing an overview of the Project is linked.
- Status: Partial Implementation
3. Alter the advertisement-based curation of content
Platform algorithms personalize news feeds for every user, based on data that helps predict user opinions and world views. Researchers claim this can lead to the creation of echo-chambers, where users are increasingly walled off from news that goes against their political perspectives. These algorithms are also optimized to maximize revenue, which provides false content with a reach advantage over true content. Proposals to counteract these problems focus on increasing consumer choice.
Legislation
American Innovation and Choice Online Act (S.2992)
- Date Proposed or Announced: 10/18/2021
- Summary: This bill prohibits large online digital companies from engaging in certain acts through their Platforms. Specifically, they cannot: (1) give preference to their own products, (2) unfairly limit the availability of competing products from another business, or (3) discriminate in the application or enforcement of their terms of service.
- Status: Passed Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Open App Markets Act (S.2710)
- Date Proposed or Announced: 08/11/2021
- Summary: The bill prohibits online Platforms from: (1) requiring developers to use an in-app payment system owned or controlled by the company as a condition of distribution or accessibility; (2) requiring that pricing or conditions of sale be equal to or more favorable on its app store than on another app store; or (3) taking punitive action against a developer for using or offering different pricing terms or conditions of sale through another in-app payment system or on another app store.
- Status: Passed Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
ACCESS Act of 2022 (S.4309)
- Date Proposed or Announced: 05/25/2022
- Summary: This bill would require platforms with over 100 million users to create application programming interfaces (APIs) “that make user data portable and services interoperable.” Users and competitor platforms would be given a method to access and transfer data, with reasonable security requirements. Providing users with a way to take their data to other Platforms could erode the monopolistic positions that Big Tech companies hold in the global social media market. This could force major market participants to compete and innovate. This could potentially result in new business models and changes to the advertisement-based curation of content.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Federal and Local Agency Programs
Antitrust Enforcement Against Big Tech Companies, as proposed by Lina Khan (link)
- Date Proposed or Announced: 01/2017
- Summary: Experts have supported proposals to break up large technology companies, arguing that doing so would increase competition in the short run and increase consumer choice. In 2017, Lina Khan, then a Yale law student and now Chair of the FTC, wrote a Yale Law Journal article articulating such a policy.
- Status: Partial Implementation
Self-Governance
Reverting to a Chronological Newsfeed, as argued for by Frances Haugen (link)
- Date Proposed or Announced: 10/05/2021
- Summary: Some Platforms have offered their users a chronological newsfeed option, which curates content by recency rather than engagement metrics. If a user selects this option, the most recent posts are shown first. Prior to the introduction of algorithmic content curation, this was the only way to view information on most social networks, and many have argued for a return to the chronological feed. One of the most high-profile advocates of this move is Facebook whistleblower Frances Haugen, who included it in her testimony before the US Senate.
- Status: Partial Implementation
Channel Switching, or Consumer Choice for Content Curation, as suggested in testimony by Stephen Wolfram (link)
- Date Proposed or Announced: 06/25/2019
- Summary: Data scientists have proposed that platforms could create multiple curation options and provide users with the opportunity to choose which algorithm sorts their feed. Allowing users to opt out of the default recommendation system and into one organized chronologically, by diversity, or optimized for some other characteristic (such as partisan leanings) would erode the centrality of engagement-based ranking to platform business models, and increase awareness and transparency for users. One prominent advocate of this proposal is Stephen Wolfram, founder of Wolfram Research, who outlined the idea in his 2019 testimony before the US Senate. An illustrative sketch of this approach follows this entry.
- Status: Untested Proposal
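Below is a minimal Python sketch of channel switching, in which the user selects which ranking function sorts their feed. The post fields and the set of available channels are illustrative assumptions.

```python
# Sketch of "channel switching": users pick which ranking function sorts
# their feed. Post fields and the available channels are assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: float   # posting time (higher = newer)
    engagement: float  # predicted engagement score

CHANNELS = {
    "chronological": lambda posts: sorted(posts, key=lambda p: -p.timestamp),
    "engagement":    lambda posts: sorted(posts, key=lambda p: -p.engagement),
}

def render_feed(posts: list[Post], channel: str = "chronological") -> list[Post]:
    """Sort the feed with whichever curation algorithm the user selected."""
    return CHANNELS[channel](posts)

posts = [Post("a", 1.0, 9.0), Post("b", 2.0, 1.0)]
print([p.text for p in render_feed(posts, "engagement")])     # ['a', 'b']
print([p.text for p in render_feed(posts, "chronological")])  # ['b', 'a']
```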
Data Portability & Interoperability, as enshrined in the EU's GDPR (link)
- Date Proposed or Announced: 04/14/2016
- Summary: Providing users with a way to take their data to other Platforms would erode the monopolistic positions that Big Tech companies hold in the global social media market. This could force major market participants to compete and innovate. This could potentially result in new business models and changes to the advertisement-based curation of content. Some examples of ongoing tests related to data portability include the Green Button Initiative, the Open Banking Implementation Entity (OBIE), and the Financial Data Exchange (FDX). Perhaps the most sweeping instance of its application is the right to data portability enshrined in the EU's GDPR.
- Status: Partial Implementation
Permit Middleware Companies to Operate on Platforms, as proposed by researchers at Stanford's Cyber Policy Center (link)
- Date Proposed or Announced: 01/2020
- Summary: Researchers in the Stanford Cyber Policy Center's Working Group on Platform Scale have proposed middleware — “software that rides on top of an existing platform and can modify the presentation of underlying data” — as a structural intervention to mediate the relationship between users and platforms. Middleware programs could apply filters or rerank content provided by platforms, creating competition in the market for content curation and giving users the power to choose how they experience a social network. This working group included Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed, and Marietje Schaake. A link to a report by this working group is provided. An illustrative sketch of the middleware concept follows this entry.
- Status: Partial Implementation
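Below is a minimal Python sketch of middleware as a user-chosen pipeline of third-party functions that filter or rerank a platform's feed. The item fields and the example middleware are illustrative assumptions, not the working group's specification.

```python
# Sketch of middleware as a user-chosen pipeline of third-party
# functions that filter or rerank the feed a platform returns. The
# middleware signature and example filters are assumptions.
from typing import Callable

Feed = list[dict]                    # platform items, e.g. {"text", "timestamp"}
Middleware = Callable[[Feed], Feed]  # transforms a feed, returns a new one

def hide_low_credibility(feed: Feed) -> Feed:
    return [item for item in feed if item.get("credibility", 1.0) >= 0.5]

def rerank_by_recency(feed: Feed) -> Feed:
    return sorted(feed, key=lambda item: -item.get("timestamp", 0))

def apply_middleware(feed: Feed, stack: list[Middleware]) -> Feed:
    for mw in stack:  # each middleware sees the previous one's output
        feed = mw(feed)
    return feed

feed = [{"text": "x", "credibility": 0.2, "timestamp": 5},
        {"text": "y", "credibility": 0.9, "timestamp": 1}]
print(apply_middleware(feed, [hide_low_credibility, rerank_by_recency]))
```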
4. Limit the effect that deep-fakes, automation, and inauthentic accounts have on the spread and impact of false content
The rise of new technologies, like “deep-fakes,” has increased the believability of false content. Researchers have also pointed to the openness of social media platforms, which allows web programmers to automate accounts and thereby create bots. Inauthentic accounts amplify and spread mis- and disinformation. As such, some proposals have focused on increasing regulation around the development of deep-fakes, and a continued crackdown on inauthentic accounts. However, it is important to note that proposals to enact real name policies have been criticized by human rights experts.
Legislation
DEEP FAKES Accountability Act of 2019 (H.R.3230)
- Date Proposed or Announced: 06/12/2019
- Summary: This bill aims to combat disinformation by setting regulatory requirements around the creation of deep-fakes. Specifically, it requires producers of deep-fakes to comply with digital watermark and disclosure requirements. It establishes new criminal offenses related to: (1) the production of deep-fakes which do not comply with related watermark or disclosure requirements, and (2) the alteration of deep-fakes to remove or meaningfully obscure such required disclosures. It also establishes civil penalties for breaking these requirements, and permits individuals to bring civil actions for damages.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Deepfake Task Force Act (S.2559)
- Date Proposed or Announced: 07/29/2021
- Summary: This bill requires the Department of Homeland Security to coordinate with the White House Office of Science and Technology Policy to establish a Deepfake Task Force, in order to study and better understand threats from digital content forgeries. The Task Force would also be called on to establish a plan for how to reduce the proliferation of such forgeries.
- Status: Passed Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Deep Fake Detection Prize Competition Act of 2019 (H.R.5532)
- Date Proposed or Announced: 10/18/2021
- Summary: This bill directs the National Science Foundation to carry out prize competitions to incentivize research into technology that advances the detection of deep-fake audio, images, and video. While this bill has not been passed, Congress ultimately authorized $5 million for IARPA to hold a similar competition.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Bot Disclosure and Accountability Act of 2019 (S.2125)
- Date Proposed or Announced: 07/16/2019
- Summary: This bill would require social media Platforms to enact policies that force the creators of bot accounts to disclose these accounts. The bill then requires Platforms to label accounts that have been identified as bots, and implement processes for identifying and removing undisclosed bot accounts. The bill also bans the use of bots in political campaigns.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Social Media Accountability and Account Verification Act (H.R.4653)
- Date Proposed or Announced: 07/22/2021
- Summary: This bill requires social media companies to remove deceptive and fraudulent accounts from their Platforms. It also requires these companies to establish a process for users to report suspicious accounts. When such reports occur, companies must act expeditiously to remove or disable the account, or notify the user who filed the report that the account in question was deemed to not be deceptive.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Self-Governance
Increase Awareness of Tools for Verifying Content Legitimacy, such as those identified by the RAND Corporation (link)
- Date Proposed or Announced: 02/19/2019
- Summary: Various third party tools exist that can help users judge the legitimacy of content, such as images or videos, in an effort to identify deepfakes. However, these third party tools are not widely used, and public awareness that they exist could be increased through the help of platforms. The RAND Corporation keeps a database of online tools available for countering the spread of disinformation, including tools with verification and credibility scoring functions.
- Status: Partial Implementation
Real Name Policies, as implemented by Facebook (link)
- Date Proposed or Announced: 2004
- Summary: Some platforms have adopted policies that require users to adopt the names they use in real life when creating an online profile. The intention of this proposal is to remove anonymity online. These policies are often criticized for making it harder for members of minority and marginalized groups to retain their privacy. The most notable implementation of this proposed solution is Facebook's authentic name policy, which has been in effect since its founding in 2004.
- Status: Partial Implementation
Know Your Customer Requirements for Platforms, as proposed by Jonathan Haidt (link)
- Date Proposed or Announced: 04/11/2022
- Summary: Jonathan Haidt argues that platforms should be required to verify some information about their users before sharing their content, similar to how banks are required to "know their customers." He said: "Banks and other industries have 'know your customer' rules so that they can’t do business with anonymous clients laundering money from criminal enterprises. Large social-media platforms should be required to do the same. That does not mean users would have to post under their real names; they could still use a pseudonym. It just means that before a platform spreads your words to millions of people, it has an obligation to verify (perhaps through a third party or nonprofit) that you are a real human being, in a particular country, and are old enough to be using the platform."
- Status: Partial Implementation
User Challenges & Honeypots, as employed by cybersecurity firms like CrowdStrike (link)
- Date Proposed or Announced: 1999
- Summary: Many sites ask users to verify that they are not a bot via a technical challenge, like a reCAPTCHA image challenge. However, these challenges introduce user friction and have been shown to be only mildly effective at identifying bots. A "honeypot" is a generic term for a trap set to catch bots. The term was first used by Lance Spitzner in his 1999 article "To Build a Honeypot." A link to a brief primer on honeypots by cybersecurity firm CrowdStrike is provided. An illustrative sketch of a honeypot form field follows this entry.
- Status: Partial Implementation
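Below is a minimal Python sketch of one common honeypot variant: a form field hidden from humans with CSS that naive bots tend to auto-fill. The field name and form handling are illustrative assumptions; real honeypots take many other forms.

```python
# Sketch of a honeypot form field for bot detection: a field hidden from
# humans by CSS that naive bots auto-fill. Field names are illustrative.
HONEYPOT_FIELD = "website_url"  # rendered invisible to humans (display: none)

def looks_like_bot(form_data: dict[str, str]) -> bool:
    """A human never sees the honeypot field, so any value in it flags a bot."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(looks_like_bot({"username": "alice", HONEYPOT_FIELD: ""}))          # False
print(looks_like_bot({"username": "bot42", HONEYPOT_FIELD: "spam.com"}))  # True
```

Unlike a CAPTCHA, this check imposes no friction on legitimate users, which is part of its appeal; its weakness is that sophisticated bots learn to skip hidden fields.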
Private Access Tokens, as developed in collaboration by Apple, Google, Cloudflare, and Fastly (link)
- Date Proposed or Announced: 03/07/2022
- Summary: Described as a way to end CAPTCHAs, private access tokens are a new technology that can help platforms verify that a user is human. Private Access Tokens also provide websites with minimal additional information about their users -- websites can quickly learn that their users are real people without a significant intrusion on user privacy. This technology is being developed in collaboration between Fastly, Cloudflare, Apple, and Google. An IETF working group is also working to develop and standardize the technology. A link to a Fastly communications post outlining information about Private Access Tokens is provided.
- Status: Partial Implementation
Greater Contextual Information About Accounts, as identified by First Draft (link)
- Date Proposed or Announced: 11/28/2019
- Summary: To help users identify potential bot accounts, Platforms could make account information more transparent. For example, Platforms could make public the age of an account and whether it has previously used different names, helping users assess an account's authenticity. First Draft has produced a guide on behaviors and characteristics that help indicate whether an account is a bot. A link to this guide is provided.
- Status: Partial Implementation
5. Increase transparency and awareness around the fragmented information ecosystem
The national information ecosystem was once dominated by a few large television outlets. These outlets acted as gatekeepers of information, sharing news that had a broad appeal. The internet has led to the emergence of a more crowded and niche news ecosystem – one that targets specific segments of the population, is more partisan, and more vulnerable to the spread of false information. Solutions have focused on increasing Platform transparency in order to allow researchers to better understand the effects of this change in the information ecosystem, and raising user awareness of how Platforms operate.
Legislation
Platform Accountability and Consumer Transparency Act of 2021 (S.797)
- Date Proposed or Announced: 03/17/2021
- Summary: This bill would require Platforms to provide researchers with access to data on their services. To access the data, researchers would outline their research proposal and data requests in an application filed with the National Science Foundation. The NSF would approve select studies, and Platforms would then be required to share the requested data with the chosen researchers. The bill would also prohibit social media platforms from blocking independent researchers from indexing or scraping public data from their sites. The law would be enforced by a new office in the FTC.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Algorithmic Justice and Online Platform Transparency Act of 2021 (S.1896)
- Date Proposed or Announced: 05/27/2021
- Summary: This bill would require platforms to: (1) publish public reports on their content moderation practices, (2) maintain detailed records describing their algorithmic processes for review by the FTC, and (3) provide the public with plain language descriptions of how their algorithms operate.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Social Media Disclosure and Transparency of Advertisements (DATA) Act of 2021 (H.R.3451)
- Date Proposed or Announced: 05/20/2021
- Summary: This bill would require Platforms to maintain a library of all advertisements published on their sites. The library must include information on: (1) the methods that were used to target the attention of users, (2) the advertisement's targeted audience, (3) the number of views it generated, (4) the advertisement's conversion rate, and (5) the advertisement's cost. Platforms are required to make these libraries available to academic researchers. The bill also directs the FTC to convene a multi-stakeholder working group to help define storage and access requirements for these libraries.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Algorithmic Accountability Act of 2022 (H.R.6580)
- Date Proposed or Announced: 02/03/2022
- Summary: This bill would require Platforms to perform impact assessments for any "automated decision systems" that are used on their sites, such as content review algorithms. Platforms would be required to engage with relevant internal and external stakeholders when carrying out these assessments. They would also be required to take action to mitigate any negative impacts that their systems might have on consumers. The FTC would be placed in charge of enforcement.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Amend the Computer Fraud and Abuse Act (CFAA), as proposed by researchers at the Knight First Amendment Institute and incorporated into the Platform Accountability and Transparency Act (link)
- Date Proposed or Announced: 01/19/2022
- Summary: It is not clear if scraping public data from social media platforms is legal, and the process may expose researchers to risk under the Computer Fraud and Abuse Act by violating Platforms’ terms of service. As a result, the Knight First Amendment Institute, the Berkman Klein Center, and researchers with the Netgain Partnership have called for amendments to the CFAA in order to provide a specific safe harbor for the independent scraping or indexing of social media platforms by academic researchers. This amendment was incorporated into the Platform Accountability and Transparency Act.
- Status: Third Party Legislative Proposal
Self-Governance
Increased Researcher Access, as argued for by researchers in the Harvard Misinformation Review (link)
- Date Proposed or Announced: 12/09/2020
- Summary: Platforms can voluntarily increase the quantity of data they make available to researchers through services such as Twitter’s research API or Facebook’s FORT Analytics API. It is generally thought that these tools do not provide enough data for researchers, and scholars and advocates have been pushing for more access. In 2020, a group of 43 researchers published an essay in the Harvard Misinformation Review laying out their case for why data access is important, what information they want, and what they could do with it. A link to this Harvard Misinformation Review article is provided.
- Status: Partial Implementation
More Thorough Reporting, as discussed by Keller and Leerssen (link)
- Date Proposed or Announced: 01/09/2020
- Summary: Platforms currently provide some reports on their content moderation practices. These reports often provide information on content flags and takedowns. However, they have drawn criticism for omitting key information, such as the overall amount of harmful content discovered on Platforms. Additionally, critics argue that Platforms fail to provide access to any content once it has been removed. This information could be valuable to researchers hoping to better understand the digital ecosystem. Researchers have urged Platforms to expand their voluntary reporting to include some of this information. Two critics of current Platform reporting practices are Daphne Keller and Paddy Leerssen, who co-authored a paper surveying the field of platform research and pointing out the shortcomings of transparency reports. A link to this paper is provided.
- Status: Partial Implementation
Data Donation for Researchers, as discussed by Hansen Shapiro, et al. (link)
- Date Proposed or Announced: 02/19/2019
- Summary: Platforms can provide users with the ability to opt in to data-sharing arrangements in which users give consent to having their browsing data collected and provided to researchers. An example of this is Mozilla Rally. Data donation is one of the models included in a report published by the Netgain Partnership on "New Approaches to Platform Data Research" by Elizabeth Hansen Shapiro, Michael Sugarman, Fernando Bermejo, and Ethan Zuckerman. A link to this report is provided. A schematic sketch of a consent-gated donation flow follows this entry.
- Status: Partial Implementation
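The sketch below illustrates the core property of a donation scheme like this: data leaves the pipeline only for users who have affirmatively opted in, and identifiers are pseudonymized before researchers see them. All names and the hashing choice are illustrative assumptions, not Mozilla Rally's actual design.

```python
import hashlib
from dataclasses import dataclass

STUDY_SALT = "random-salt-generated-per-study"  # illustrative

@dataclass
class BrowsingEvent:
    user_id: str
    url: str
    timestamp: float

def pseudonymize(user_id: str) -> str:
    # Stable within a study, but not reversible to the raw identifier.
    return hashlib.sha256((STUDY_SALT + user_id).encode()).hexdigest()

def donate(event: BrowsingEvent, opted_in: set[str]) -> dict | None:
    """Return a research-ready record, or None if the user never consented."""
    if event.user_id not in opted_in:
        return None  # no consent, no donation
    return {
        "user": pseudonymize(event.user_id),
        "url": event.url,
        "timestamp": event.timestamp,
    }
```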
6. Reduce “demand” for self-validating content - addressing psychological vulnerabilities and increasing digital literacy and trust in institutions
Mis- and disinformation campaigns are often effective because they exploit psychological vulnerabilities, everyday lack of digital literacy, and the declining public trust in institutions. Responding to these issues will require broad, societal level changes.
-
Legislation
Digital Citizenship and Media Literacy Act (H.R.8216)
- Date Proposed or Announced: 06/23/2022
- Summary: This bill creates a Department of Commerce program for awarding grants to state and local educational agencies, public libraries, and nonprofit organizations in order to promote media literacy and digital citizenship. The bill authorizes $20M in grants for each of fiscal years 2023, 2025, 2027, and 2029.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Educating Against Misinformation and Disinformation Act (H.R. 6971)
- Date Proposed or Announced: 03/08/2022
- Summary: This bill establishes a commission to: (1) research how misinformation spreads, and (2) create a national strategy for how the government can better promote information and media literacy. The Department of Education would ultimately assess the effectiveness of the commission's study and decide whether to implement its recommendations.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Veterans’ Cyber Risk Awareness Act (H.R.2326)
- Date Proposed or Announced: 04/01/2021
- Summary: This bill directs the Department of Veterans Affairs (VA) to conduct a communications and outreach campaign to educate veterans about cyber risks. These risks can include disinformation, identity theft, scams, and online fraud.
- Status: Passed Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
To Establish the Digital Literacy and Equity Commission (H.R.6373)
- Date Proposed or Announced: 01/12/2022
- Summary: This bill establishes a commission to study digital literacy and equity, including the state of digital literacy and information literacy in the United States.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Trust in Public Service Act (H.R. 3609)
- Date Proposed or Announced: 05/28/2021
- Summary: This bill aims to improve public trust in the Federal Government by establishing "customer experience" as a central measure of performance for agencies and the Federal Government. The bill also seeks to improve the digital accessibility of federal agencies and their work by improving user accessibility and user experience.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Transparency in Government Act of 2021 (H.R. 2055)
- Date Proposed or Announced: 03/18/2021
- Summary: This bill establishes and modifies several reporting, disclosure, and other requirements across the government. The goal of these modifications is to expand public access to information and increase public trust in government.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Protecting Our Democracy Act (H.R. 5314)
- Date Proposed or Announced: 09/21/2021
- Summary: This bill seeks to (1) prevent abuses of presidential power, (2) restore checks and balances, accountability, and transparency in government, and (3) defend elections against foreign interference. Its sponsors present restored accountability and transparency in government as a means of increasing public trust in institutions.
- Status: Passed House in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Investment in News by Platforms, as implemented in Google's News Initiative and Meta's Journalism Project (link)
- Date Proposed or Announced: 04/28/2015
- Summary: Researchers argue that Platforms could provide assistance to news organizations, helping them develop more sustainable business models, which could in turn increase digital literacy and trust in institutions. Google already operates a "News Initiative," which offers resources and training for publishers to help them reach new audiences and generate revenue. The News Initiative is a more expansive follow-on to the Google Digital News Initiative, which was started in 2015 in collaboration with 8 major European publishers to support journalism in Europe. Facebook launched a similar effort in 2017, called the Facebook Journalism Project. A link to a Google site related to its News Initiative is provided.
- Status: Partial Implementation
News Organization Reforms to Increase Transparency, as outlined by The Trust Project (link)
- Date Proposed or Announced: 11/16/2017
- Summary: It has been proposed that news organizations could increase public trust by being more transparent with the public. A prominent advocate for newsroom transparency is the Trust Project, which created a list of 8 indicators that signal a newsroom produces reliable and ethical journalism. By enacting reforms that correspond with these indicators, media organizations could potentially begin to increase public trust in their content. A link to the Trust Project's list of indicators is provided.
- Status: Partial Implementation
Increase Political Diversity at News Organizations, as recommended by UT's Center for Media Engagement (link)
- Date Proposed or Announced: 08/16/2021
- Summary: News organizations have been urged to employ journalists with a range of political ideologies. Researchers believe that having a greater diversity of political opinions and backgrounds on journalistic teams increases public trust that a diversity of perspectives will be represented in a news organization's content. This is also believed to increase trust that media organizations will report accurately on social and political issues that concern people of diverse backgrounds. In a study on how the media can better connect with conservative and right-leaning audiences, the University of Texas's Center for Media Engagement found that newsrooms should consider increasing the political diversity of their team. A link to the University of Texas study is provided.
- Status: Partial Implementation
Investments in Community Engagement and the Revival of Local Journalism, as funded by the Knight Foundation (link)
- Date Proposed or Announced: 02/2019
- Summary: Researchers argue that resurrecting local media outlets across the United States would increase a sense of investment, transparency, and trust within communities. One major private funder of local journalism is the Knight Foundation. A link to a Knight Foundation statement showing this style of investment in practice is provided.
- Status: Partial Implementation
Creating a System of Data Co-governance, as studied by researchers at Monash University (link)
- Date Proposed or Announced: 02/02/2022
- Summary: One proposal for building trust in government is to improve community engagement and transparency in data governance. This style of proposal could entail soliciting public feedback on, and contributions to, data that is used for public purposes, or even opening the design process for data governance and accountability mechanisms to public comment. In one study of this approach, academics at Monash University in Australia developed a participatory design process for co-creating data governance processes. A link to their study is provided.
- Status: Untested Proposal
Participatory Budgeting, as studied by the World Bank (link)
- Date Proposed or Announced: 08/1989
- Summary: This proposal seeks to build public confidence and restore trust in government spending and institutions by giving communities control over a portion of the public budget. Since its first implementation in Brazil in 1989, it has been implemented in many countries across the world and recognized as good practice by the World Bank (a link to this report is provided), as well as the UN and OECD. This proposal is primarily suggested for local communities, as cities and counties will have differing needs for public dollars. Over time, the proposal could be scaled to the state and federal levels.
- Status: Partial Implementation
Nudges, as popularized by Thaler and Sunstein (Nudge by Richard Thaler and Cass Sunstein)
- Date Proposed or Announced: 04/08/2008
- Summary: Social media nudges can be used to combat some of the psychological vulnerabilities that make users more susceptible to misinformation. Nudge theory, or libertarian paternalism, was popularized by Cass Sunstein and Richard Thaler in their 2008 book, Nudge, and forms the basis for more recent suggestions to combat the spread of disinformation. Nudges could include prompting users to read articles before sharing them on social media, heuristic nudges around sharing behavior, or nudges that prompt critical thinking. A toy sketch of a read-before-sharing nudge follows this entry.
- Status: Partial Implementation
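The logic of a read-before-sharing nudge is simple enough to sketch. The toy function below (hypothetical names throughout) adds friction without blocking the share outright, which is the distinguishing feature of a nudge.

```python
# Toy sketch: nudge, don't block. A user who never opened the article is
# prompted to read it first, but can still proceed.
def on_share_attempt(user_id: str, article_id: str,
                     opened: dict[str, set[str]]) -> str:
    if article_id not in opened.get(user_id, set()):
        return "prompt: You haven't opened this article. Read it before sharing?"
    return "share_allowed"

opened = {"alice": {"article-42"}}
print(on_share_attempt("alice", "article-42", opened))  # share_allowed
print(on_share_attempt("bob", "article-42", opened))    # prompt shown
```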
Friction Prompts, as argued by Jigsaw product manager Justin Kosslyn (link)
- Date Proposed or Announced: 11/16/2018
- Summary: Researchers argue that increasing friction to various Platform activities, such as the sharing of content, can decrease user psychological susceptibilities to misinformation. One prominent technologist to advocate for more friction on the internet is Justin Kosslyn, product manager for Google's Jigsaw. A link to an article by Kosslyn is provided.
- Status: Partial Implementation
Social Media Inoculation Tactics, as studied by Lewandowsky & van der Linden (link)
- Date Proposed or Announced: 02/22/2021
- Summary: Users can be made less susceptible to misinformation by presenting them with arguments or facts that counter a campaign's narratives. These types of interventions are grouped broadly under the category of "inoculation theory" or "prebunking." Researchers have proposed that social media companies could help present users with counter-messaging by tweaking their algorithms to push news and headlines from reliable sources that counter or discredit specific misinformation campaigns. Among many other researchers, Stephan Lewandowsky and Sander van der Linden found inoculation to be a promising tool for combating disinformation. A link to research by Lewandowsky and van der Linden is provided.
- Status: Partial Implementation
7. Increase liability for the creation and amplification of mis- and disinformation
The lack of accountability for the creators and amplifiers of mis- and disinformation allows the problems that they cause to persist. As such, proposals have broadly focused on increasing the liability of creators and amplifiers of mis- and disinformation.
-
Legislation
Justice Against Malicious Algorithms Act of 2021 (H.R.5596)
- Date Proposed or Announced: 10/18/2021
- Summary: This bill amends Section 230. It removes liability protection for Platforms that make personalized recommendations using an algorithm based on user-specific information.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Health Misinformation Act of 2021 (S.2448)
- Date Proposed or Announced: 07/22/2021
- Summary: This bill reforms Section 230 of the Communications Decency Act. Specifically, the bill would create civil liability for Platforms that allow their algorithms to promote health misinformation.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Protecting Americans from Dangerous Algorithms Act (S.3029)
- Date Proposed or Announced: 10/20/2021
- Summary: This bill amends Section 230, removing liability protection for claims relating to violations of individuals’ civil rights, neglect to prevent violations of civil rights, or acts of international terrorism. The bill provides exceptions for Platforms that use algorithms based on obvious or easily understandable criteria, such as curating content chronologically.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Expanding 'Learned Profession' Notions of Fiduciary Duty to Tech Professions, as suggested by Jack Balkin (link)
- Date Proposed or Announced: 10/16/2018
- Summary: Researchers have called on the private sector to establish ethical codes for data scientists and software engineers. Among the first to propose a version of this idea was Jack Balkin, a professor at Yale Law School, who wrote that technology platforms should be considered "information fiduciaries" toward their users, with similar duties of care, loyalty, and confidentiality that doctors and lawyers owe to their clients. Such a proposal could require the creation of a professional licensure for enforcement purposes or be self-imposed by industry actors. A link to Balkin's research is provided.
- Status: Untested Proposal
Sanctioning Bad Platform Actors by Cloud Computing Providers, as performed by Amazon Web Services (link)
- Date Proposed or Announced: 01/10/2021
- Summary: Platforms rely on cloud services to provide computing infrastructure and data storage. Companies that deliver these services, like Amazon Web Services (which controls nearly half of the global cloud infrastructure), could leverage their power to change the behavior of platforms that do not follow the provider’s terms of service. This power was on display in January 2021, when Amazon dropped Parler over its role in the January 6th insurrection – forcing Parler offline for several days. A link to a New York Times article describing this action is provided.
- Status: Partial Implementation
Countering Extremism and Radicalization Online
The DPGP Index currently includes 52 proposed solutions related to the Countering Extremism and Radicalization Online category, which are listed below under four primary subcategories:
1. Increase platform accountability
Currently, business incentives lead platforms to prioritize addictive product features and relatively contentious content. Proposals focus on incentivizing Platforms to recalibrate their decision making to err on the side of public safety.
-
Legislation
Digital Platform Commission Act of 2022 (S.4201)
- Date Proposed or Announced: 05/12/2022
- Summary: This bill establishes a Federal Digital Platform Commission (FDPC). The FDPC would regulate digital platforms in order to promote: (1) access to digital platforms for civic engagement and career opportunities, (2) access to government services and public safety, (3) competition and consumer welfare, (4) a robust and competitive marketplace of ideas, (5) protection for consumers, and (6) assurance that the algorithmic processes of digital platforms are fair, transparent, and safe. This regulation would ensure that Platforms are held accountable by a government body, including for issues related to safety.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
See Something, Say Something Online Act of 2021 (S.27)
- Date Proposed or Announced: 01/22/2021
- Summary: This bill amends Section 230, requiring Platforms that detect suspicious activities to share that information with the relevant authorities.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Social Media Accountability and Account Verification Act (H.R.4653)
- Date Proposed or Announced: 07/22/2021
- Summary: This bill requires social media companies to remove deceptive and fraudulent accounts from their platforms. It also requires social media companies to establish a process for users to report suspicious accounts. When such reports occur, Platforms must expeditiously act to remove or disable the account, or notify the user who filed the report that the account in question was deemed to not be deceptive.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Accountability for Online Firearms Marketplaces Act of 2021 (S.2725)
- Date Proposed or Announced: 09/13/2021
- Summary: This bill removes a Platform's Section 230 protections for any content related to the operation of an online firearms marketplace. The bill defines an "online firearms marketplace" as a digital Platform that (1) facilitates firearm-related transactions, (2) advertises or otherwise makes available proposals for transferring firearms, or (3) provides digital instructions for how to program a 3D printer to make a firearm.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Platform Accountability and Consumer Transparency Act (S.797)
- Date Proposed or Announced: 03/17/2021
- Summary: This bill would require Platforms to provide researchers with access to their data. To access the data, researchers would outline their research proposal and data requests in an application filed with the National Science Foundation. The NSF would approve select studies, and Platforms would then be required to share the requested data with the chosen researchers. The bill would also prohibit social media platforms from blocking independent researchers from indexing or scraping public data from their sites. The law would be enforced by a new office in the FTC. Researchers believe that increasing Platform transparency could ultimately result in increased accountability, either directly, through changes to business decisions that stem from an increase in public awareness, or indirectly, through more targeted government interventions that could result from better knowledge of Platform-related issues.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Data and Algorithm Transparency Agreement (DATA) Act (S.1477)
- Date Proposed or Announced: 04/29/2021
- Summary: This bill prohibits Platforms from collecting, sharing, or selling sensitive data about users without their express consent. The bill also establishes a private right of action that allows individuals to sue Platforms for violating this rule.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Justice Against Malicious Algorithms Act of 2021 (H.R.5596)
- Date Proposed or Announced: 10/18/2021
- Summary: This bill amends Section 230, removing liability protection for Platforms that make personalized recommendations using an algorithm based on user-specific information. This could allow Platforms to be held accountable for the spread of inciting or violent content.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Protecting Americans from Dangerous Algorithms Act (S.3029)
- Date Proposed or Announced: 10/20/2021
- Summary: This bill amends Section 230, removing Platform protections against civil lawsuits for claims relating to violations of individuals’ civil rights, neglect to prevent violations of civil rights, or acts of international terrorism. The bill provides exceptions for Platforms that use algorithms based on obvious or easily understandable criteria, such as curating content chronologically. This could allow Platforms to be held accountable for the spread of inciting or violent content.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Safe Social Media Act (S.1630)
- Date Proposed or Announced: 05/13/2021
- Summary: This bill requires the FTC, in coordination with the CDC, to issue a report on the use of social media platforms among individuals under the age of 18. The report must examine the frequency of usage among this population and the mental health effects linked to such usage. The report must also include policy recommendations designed to increase the safety of social media use for minors.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Federal Big Tech Tort Act (S.2917)
- Date Proposed or Announced: 09/30/2021
- Summary: This bill restricts Section 230, removing some Platform protections against lawsuits. Specifically, the bill allows lawsuits against a Platform when a user under the age of 16 suffers harm from a mental health issue that is attributed to the use of that Platform's services.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
Stop Shielding Culpable Platforms Act (H.R.2000)
- Date Proposed or Announced: 03/18/2021
- Summary: Under current law, digital Platforms are not considered the publishers or speakers of content created by 3rd parties and shared on their sites. However, this bill defines Platforms as the "distributor" of such information, exposing them to the liability rules that currently govern the broader media industry. This could result in Platforms being held accountable for more content shared on their sites.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
DISCOURSE Act (S.2228)
- Date Proposed or Announced: 06/24/2021
- Summary: This bill amends Section 230 of the Communications Decency Act. Specifically, the bill allows Platforms to be held accountable for third-party content that they moderate or censor.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
-
Self-Governance
Change Algorithms to Promote Less Divisive Content, as proposed by Aviv Ovadya (link)
- Date Proposed or Announced: 05/17/2022
- Summary: Most social media platforms currently use engagement-based ranking for content, which rewards and promotes content that leads users to spend more time and attention on the site. However, this has been shown to incentivize the production of more sensationalist and divisive content. Some scholars propose that rather than switching to chronological or other non-curated feeds, platforms instead optimize their recommendation algorithms for certain socially positive outcomes. One such proposal was made by Aviv Ovadya, a fellow at the Harvard Belfer Center's Technology and Public Purpose Project, in a report advocating for "bridging-based ranking." This system would prioritize content that leads to positive interactions between members of different groups, thus bridging rather than deepening division and polarization. A toy illustration of this ranking idea follows this entry.
- Status: Untested Proposal
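A toy rendering of the intuition, not Ovadya's formal proposal: score content by its minimum per-group approval, so a post cheered by one group and loathed by another ranks low, while a post both groups receive reasonably well ranks high.

```python
def bridging_score(reactions: dict[str, tuple[int, int]]) -> float:
    """reactions maps group -> (positive reactions, total reactions)."""
    rates = [pos / total for pos, total in reactions.values() if total > 0]
    return min(rates) if rates else 0.0  # reward cross-group approval

posts = {
    "divisive": {"group_a": (95, 100), "group_b": (5, 100)},
    "bridging": {"group_a": (70, 100), "group_b": (65, 100)},
}
ranked = sorted(posts, key=lambda p: bridging_score(posts[p]), reverse=True)
print(ranked)  # ['bridging', 'divisive']
```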
Develop Independent Review Mechanisms, as Facebook attempted with its Oversight Board (link)
- Date Proposed or Announced: 11/15/2018
- Summary: Independent content review bodies, such as Facebook’s Oversight Board, could provide a method for holding Platforms accountable for moderation decisions regarding the flagging and removal of material.
- Status: Partial Implementation
2. Increase digital literacy
Together, the government and the private sector can increase digital literacy, making users less susceptible to radicalization and incitement to violence online.
-
Legislation
Educating Against Misinformation and Disinformation Act (H.R. 6971)
- Date Proposed or Announced: 03/08/2022
- Summary: This bill establishes a commission, which would be called upon to (1) research how misinformation spreads and (2) create a national strategy for how the government can better promote information and media literacy. Specific duties of the commission include (1) implementing a national strategy to promote information and media literacy and (2) identifying programs and resources on information and media literacy for use in elementary, secondary, higher education, and adult education programs. The Department of Education would ultimately assess the effectiveness of the commission's study and decide whether to implement its recommendations.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Digital Citizenship and Media Literacy Act (H.R.8216)
- Date Proposed or Announced: 06/23/2022
- Summary: This bill directs the Assistant Secretary of Commerce to award grants to state and local educational agencies, public libraries, and nonprofit organizations in order to promote media literacy and digital citizenship. The bill authorizes $20M in grants for each of fiscal years 2023, 2025, 2027, and 2029.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Strengthening Research in Adult Education Act (S.1126)
- Date Proposed or Announced: 04/14/2021
- Summary: This bill requires the Institute of Education Sciences and the National Center for Education Research to carry out research on: (1) state and local adult education programs that have successfully increased literacy and educational attainment, (2) the characteristics and academic achievements of adult learners, and (3) access to adult education, including digital literacy development.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Veterans’ Cyber Risk Awareness Act (H.R.2326)
- Date Proposed or Announced: 04/01/2021
- Summary: This bill directs the Department of Veterans Affairs (VA) to conduct a communications and outreach campaign to educate veterans about cyber risks. These risks can include disinformation, identity theft, scams, and fraud spread via the internet or social media.
- Status: Passed Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
To Establish the Digital Literacy and Equity Commission (H.R.6373)
- Date Proposed or Announced: 01/12/2022
- Summary: This bill establishes a commission to conduct a study on digital literacy and equity, including the state of digital literacy and information literacy in the United States.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Reserve Telecommunications Capacity for Public Use, as proposed in the National Public Telecommunications Infrastructure Act of 1994 (S.2195)
- Date Proposed or Announced: 06/22/1994
- Summary: In 1994, Sen. Daniel Inouye proposed a Senate bill that would carve out 20% of telecom capacity for government and nonprofit content. This capacity could have been used by the government to create content that increases digital literacy. Ben Tarnoff has argued that this carve-out (had it been implemented) could have helped create a public media that would have served as a counterbalance to private content online.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Nudges, as popularized by Thaler and Sunstein (Nudge by Richard Thaler and Cass Sunstein)
- Date Proposed or Announced: 04/08/2008
- Summary: Nudge theory, or libertarian paternalism, was popularized by Cass Sunstein and Richard Thaler in their 2008 book, Nudge, and forms the basis for more recent proposals for how to combat the spread of disinformation. Social media nudges and content flags help draw user attention to potentially false information. These nudges and flags could help increase user digital literacy simply by drawing greater attention to the content being shared. Examples include prompting users to read articles before sharing them on social media, heuristic nudges around sharing behavior, or nudges that prompt critical thinking. The concept is increasingly being studied by social media researchers as a tool to stem the flow of disinformation and has garnered some support from U.S. policymakers.
- Status: Partial Implementation
Platform Efforts to Increase Digital Literacy, as implemented on Facebook (link)
- Date Proposed or Announced: 08/02/2018
- Summary: A variety of programs have focused on increasing digital literacy for Facebook users. Third-party programs include “Get Digital!,” “MediaWise for Seniors,” “MediaWise First Time Voter Project,” “News Co/Lab Video Series and Free Online Course,” and PEN America's "Knowing The News" Program. Facebook has also begun producing its own educational materials, including its “Digital Literacy Library.”
- Status: Partial Implementation
Media and Information Literacy Guide, as implemented by Twitter in partnership with UNESCO (link)
- Date Proposed or Announced: 10/24/2019
- Summary: Twitter released a handbook for educators that is easy to read, informative, and designed to help them teach digital media literacy skills to their students. The resource contains best practice guidelines on media literacy from UNESCO, as well as a reading list curated by UNESCO’s program specialists, which is intended to guide educators through current teaching literature on this topic.
- Status: Full Implementation
3. Increase information visibility
In order to better counter the problems of online radicalization and incitement to violence, policymakers need to fully understand how digital platforms and their algorithms work. For this reason, mandates for data sharing with policymakers or researchers have been proposed.
-
Legislation
Platform Accountability and Transparency Act (link)
- Date Proposed or Announced: 12/09/2021
- Summary: Under this act, independent researchers are given a process to submit research proposals to the National Science Foundation. If the requests are approved, social media companies would then be required to provide any data necessary for the research, subject to certain privacy protections. This act would also establish a Platform Accountability and Transparency Office in the FTC, which would process data requests from qualified researchers, develop privacy and cybersecurity protocols, and send the requests to platforms.
- Status: Not Yet Formally Introduced
- Partisan Status: Bipartisan Group of Sponsors
Algorithmic Justice and Online Platform Transparency Act of 2021 (S.1896)
- Date Proposed or Announced: 05/27/2021
- Summary: This bill would require platforms to: (1) publish public reports on their content moderation practices, (2) maintain detailed records describing their algorithmic processes for review by the FTC, and (3) provide the public with plain language descriptions of their algorithms.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Algorithmic Accountability Act of 2022 (H.R.6580)
- Date Proposed or Announced: 02/03/2022
- Summary: This bill would require Platforms to perform impact assessments for any "automated decision systems" that are used on their sites, such as content review algorithms. Platforms would be required to engage with relevant internal and external stakeholders when carrying out these assessments. They would also be required to take action to mitigate any negative impacts that their systems might have on consumers. The FTC would be in charge of enforcement.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Social Media Disclosure and Transparency of Advertisements (DATA) Act of 2021 (H.R.3451)
- Date Proposed or Announced: 05/20/2021
- Summary: This bill would require Platforms to maintain a library of all advertisements published on their sites. The library must include information on: (1) the methods that were used to target the attention of users, (2) the advertisement's targeted audience, (3) the number of views it generated, (4) the advertisement's conversion rate, and (5) the advertisement's cost. The bill requires that these libraries be made available to academic researchers. The bill directs the FTC to convene a multi-stakeholder working group to help define storage and access requirements for these libraries.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
Digital Services Oversight and Safety Act of 2022 (H.R.6796)
- Date Proposed or Announced: 02/18/2022
- Summary: This bill requires Platforms to publish transparency reports about their content moderation practices. The bill also establishes a new "Bureau of Digital Services Oversight and Safety" within the FTC. This Bureau would provide oversight on content moderation practices and conduct studies on the harm caused by "intentional manipulation" on Platforms.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Channel Switching, or Consumer Choice for Content Curation, as suggested in testimony by Stephen Wolfram (link)
- Date Proposed or Announced: 06/25/2019
- Summary: Data scientists have proposed that platforms should create multiple curation options and provide users with the opportunity to choose which algorithm they’d like to use to sort their feed. Allowing users to opt out of the default recommendation system and into one organized chronologically, by diversity, or optimized for some other characteristic would erode the centrality of engagement-based ranking to platform business models, and increase awareness and transparency for users regarding how their content is displayed. One prominent advocate of this proposal is Stephen Wolfram, founder of Wolfram Research, who outlined the idea in his 2019 testimony before the US Senate. A sketch of this pattern follows this entry.
- Status: Untested Proposal
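Architecturally, channel switching amounts to exposing several interchangeable ranking functions and letting the user pick one. The sketch below shows the pattern; the post fields and ranker names are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: float    # when the post was created
    engagement: float   # the platform's default engagement score
    diversity: float    # hypothetical source-diversity score

# Each "channel" is just a different sort key over the same posts.
RANKERS = {
    "chronological": lambda p: p.timestamp,
    "engagement":    lambda p: p.engagement,
    "diverse":       lambda p: p.diversity,
}

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    key = RANKERS.get(choice, RANKERS["chronological"])  # safe default
    return sorted(posts, key=key, reverse=True)
```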
Increased Researcher Data Access, as argued by Nathaniel Persily and Joshua Tucker (link)
- Date Proposed or Announced: 12/01/2021
- Summary: Platforms can voluntarily increase the amount and type of data they make available to researchers through services such as Twitter’s research API or Facebook’s FORT Analytics API. While academics and researchers have been asking for and using platform data for over a decade, one compelling case for the necessity of research and data-sharing was made by Nathaniel Persily, a law professor and Co-Director of the Project on Democracy and the Internet at Stanford Law School, and Joshua Tucker, a professor and Co-Director of the Center for Social Media and Politics at NYU.
- Status: Partial Implementation
Data Donation for Researchers, as discussed by Hansen Shapiro, et al. (link)
- Date Proposed or Announced: 02/01/2021
- Summary: Platforms can provide users with the ability to opt in to data-sharing arrangements in which users give consent to having their browsing data collected and provided to researchers. An example of this is Mozilla Rally. Data donation is one of the models included in a report published by the Netgain Partnership on "New Approaches to Platform Data Research" by Elizabeth Hansen Shapiro, Michael Sugarman, Fernando Bermejo, and Ethan Zuckerman. A link to this report is provided.
- Status: Partial Implementation
Standardize Transparency Reporting, as studied by New America OTI (link)
- Date Proposed or Announced: 12/09/2021
- Summary: Platforms often voluntarily release Transparency Reports, which provide some insight into their business and moderation processes. However, each Platform uses its own format and reports different sets of statistics that are often calculated in different ways. Without common standards, Platforms are free to create measures as they see fit, and the resulting disclosures may be misleading or fail to give regulators and researchers what they need to draw meaningful insights. Standardizing these reports would make it easier to compare Platforms, perform empirical research based on these disclosures, and hold Platforms accountable to a shared standard. Researchers at the New America Foundation's Open Technology Institute have compiled a list of current transparency reporting metrics and policies across 6 major social media platforms, which provides useful insight into the differences in reporting and the gaps that currently exist. A sketch of what a shared reporting schema could look like follows this entry.
- Status: Untested Proposal
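One way a shared standard could be expressed is as a fixed schema that every platform fills in identically, making figures directly comparable. The metric names below are illustrative assumptions, not OTI's taxonomy.

```python
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    platform: str
    period: str                      # e.g. "2022-Q1"
    items_actioned: int              # posts removed, labeled, or demoted
    items_appealed: int
    items_restored: int
    proactive_detection_rate: float  # share found before any user report

    def appeal_overturn_rate(self) -> float:
        # Comparable across platforms only because the inputs are defined
        # the same way for everyone.
        if self.items_appealed == 0:
            return 0.0
        return self.items_restored / self.items_appealed
```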
Third-party Audits of Platform Data Disclosures, as implemented by Meta (link)
- Date Proposed or Announced: 05/17/2022
- Summary: When Platforms release data and information on their content moderation activities, their disclosures are often met with skepticism by experts. To increase faith in the disclosures, it has been proposed that Platforms should undergo audits by a third-party. These audits would shed light on existing practices and processes, as well as the operation of algorithms. The first platform to submit to any kind of audit for its transparency disclosures was Meta in May 2022.
- Status: Partial Implementation
4. Increase cooperation to counter extremism
If local law enforcement agencies want to understand the spread of harmful online content among residents of their communities, they must proactively reach out to Platform companies, and they are often given confusing, hard-to-interpret data in response. Local law enforcement also often lacks adequate training, equipment, software, and data reporting standards to effectively respond to online harms. Many departments have no teams devoted to cyber issues and lack robust triage mechanisms to prioritize the high volume of issues as they arise.
-
Legislation
No Social Media Accounts for Terrorists or State Sponsors of Terrorism Act of 2021 (H.R.1543)
- Date Proposed or Announced: 03/03/2021
- Summary: This bill directs the executive branch to prohibit social media platforms from allowing designated terrorists to create user profiles. The determination of designated terrorists would be based on the Treasury Department's list of specially designated nationals and blocked persons.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
No Publicity for Terrorists Act of 2022 (H.R.6918)
- Date Proposed or Announced: 03/03/2022
- Summary: This bill amends 18 U.S.C. 2339B, which prohibits organizations from providing support to designated foreign terrorist organizations. The amendment would require Platforms to remove any accounts associated with individuals on the Treasury Department's list of designated terrorists within 24 hours of discovery. A sketch of the screening check this would imply follows this entry.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Republican-Only Sponsors
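The compliance check the bill implies can be sketched as screening accounts against the sanctions list and attaching a 24-hour removal deadline from the moment of discovery. The exact-name matching below is a deliberate simplification; real sanctions screening relies on fuzzy matching and human review.

```python
from datetime import datetime, timedelta

# Illustrative stand-ins for entries from the Treasury sanctions list.
SDN_NAMES = {"designated person a", "designated organization b"}

def screen_account(display_name: str, discovered_at: datetime) -> dict | None:
    """Flag a matching account with the bill's 24-hour removal deadline."""
    if display_name.strip().lower() in SDN_NAMES:
        return {
            "account": display_name,
            "remove_by": discovered_at + timedelta(hours=24),
        }
    return None  # no match; no action required
```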
Require Platforms to Report Data to Law Enforcement, as implemented in Germany through its NetzDG law (link)
- Date Proposed or Announced: 06/18/2020
- Summary: In Germany, an updated version of its NetzDG law on content moderation requires platforms to provide data on content that violates criminal law after they take it down. Experts have argued that a similar provision could be implemented in the U.S., providing authorities with a method to track and counter potential criminals.
- Status: Third Party Legislative Proposal
Institute Data Retention Requirements, as proposed by Jonathan Zittrain, Elaine Sedenberg, and John Bowers (link)
- Date Proposed or Announced: 04/13/2021
- Summary: Extremist content hosted by platforms often contains evidence important to law enforcement investigations and legal proceedings. While removal of such content by platforms is often positive, it can have the unintended effect of weakening the ability to hold extremists accountable if the data is not provided to the authorities. Researchers have thus called for governments to implement data retention requirements on Platforms, which would provide authorities with access to removed content for investigations and trials. One version of this proposal was drafted by Jonathan Zittrain, Elaine Sedenberg, and John Bowers and published by the Knight 1st Amendment Institute, arguing for the creation of "digital poison cabinets" of removed content that could be provided to academic researchers. A sketch of the archive-on-takedown idea follows this entry.
- Status: Third Party Legislative Proposal
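The retention idea reduces to one behavioral change: takedown archives rather than deletes, and access to the archive is gated by role and time. The sketch below is a minimal illustration under assumed role names and an assumed retention window, not the authors' specification.

```python
from datetime import datetime, timedelta

ARCHIVE: dict[str, dict] = {}                  # removed-content store
AUTHORIZED_ROLES = {"law_enforcement", "vetted_researcher"}
RETENTION = timedelta(days=730)                # retention window is a policy choice

def take_down(content_id: str, payload: bytes, reason: str) -> None:
    """Remove content from public view but retain it for lawful access."""
    ARCHIVE[content_id] = {
        "payload": payload,
        "reason": reason,
        "removed_at": datetime.utcnow(),
    }

def request_access(content_id: str, role: str) -> bytes | None:
    entry = ARCHIVE.get(content_id)
    if entry is None or role not in AUTHORIZED_ROLES:
        return None
    if datetime.utcnow() - entry["removed_at"] > RETENTION:
        return None  # outside the retention window
    return entry["payload"]
```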
-
Self-Governance
Platform Request Forms for Law Enforcement, as implemented across the tech industry (link)
- Date Proposed or Announced: 07/10/2017
- Summary: Digital platforms, including Facebook, have deployed forms and request processes for law enforcement looking to gain access to user information or data. EFF publishes a series of reports titled "Who Has Your Back?" that catalogue the policies of major platforms across the tech industry with regard to their handling of requests from law enforcement.
- Status: Full Implementation
Platform Reporting Schemes, as implemented in the NCMEC's CyberTipline (link)
- Date Proposed or Announced: 2008
- Summary: Since 2008, U.S. law has required electronic service providers, including major social media platforms, to proactively report apparent instances of child exploitation appearing on their sites to the National Center for Missing and Exploited Children's (NCMEC) CyberTipline. In the context of online extremism, the GIFCT facilitates voluntary communication across platforms to share news of incidents that might result in incitement online. Platforms could deal with the rise in terrorist and extremist content by expanding and formalizing the communication procedures already in place through cooperative mechanisms like the GIFCT.
- Status: Full Implementation
Global Internet Forum to Counter Terrorism (link)
- Date Proposed or Announced: 06/26/2017
- Summary: In 2017, Facebook, Microsoft, Twitter, and YouTube came together to form the Global Internet Forum to Counter Terrorism (GIFCT), which supports member companies and civil society organizations in preventing, responding to, and learning about terrorist attacks. It also provides a Knowledge Sharing Platform that offers tools for smaller platforms with fewer resources to track and combat extremist content. It additionally funds an academic research outlet, the Global Network on Extremism and Technology (GNET), which is led by the International Centre for the Study of Radicalization (ICSR) at King’s College London.
- Status: Partial Implementation
Work with Companies that Monitor and Track Extremists (link)
- Date Proposed or Announced: 2002
- Summary: Some companies exist to help Platforms monitor and track extremist activity on the internet. One such organization, Search for International Terrorist Entities (SITE), currently works with governments, NGOs, and private sector/tech companies to better manage threats of extremism. Platforms could expand their cooperation with organizations such as SITE, increasing their access to resources and data.
- Status: Partial Implementation
-
Public/Private Partnerships
Engage Multi-Stakeholder Coalitions to Address Root Causes of Domestic Terrorism, as advocated by the White House under President Biden (link)
- Date Proposed or Announced: 6/15/2021
- Summary: The fourth pillar of the White House’s National Strategy for Countering Domestic Terrorism focuses on dealing with the long-term factors that contribute to the spread of domestic terrorism. This includes confronting racism in America, providing care and intervention to those who pose a danger to themselves or others, and investing in policies to foster civic engagement and promote tolerance. The government cannot do this alone, and the White House has proposed a collaborative effort between the public and private sectors, led by the Domestic Policy Council, to tackle this issue.
- Status: Partial Implementation
Invest in Training for Governments and Community Organizations, as implemented by the U.S. Institute of Peace (link)
- Date Proposed or Announced: 2012
- Summary: Ensuring that counter-extremism units are competent in the use of modern technologies, such as digital platform data, is vital to their long-term success. Community organizations and governments often lack the education and understanding needed to use and interpret analytic tools, and proposals have called for increased government training efforts. One such training effort is run by the U.S. Institute of Peace, which provides Peace Tech Exchanges to build digital literacy in organizations across the world.
- Status: Partial Implementation
Prioritize Interoperability of Information Sharing, as suggested by DHS (link)
- Date Proposed or Announced: 09/20/2019
- Summary: Major challenges interfere with the sharing of information between agencies and Platforms. One such challenge is a lack of standardized measurements and information products, even between departments within the same organizations, which makes industry-wide interoperability challenging. More standardization would make it easier for government and private actors to cooperate and make it possible to empirically evaluate whether counter-extremist campaigns are having their intended effects. Such standardization was argued for in DHS's 2019 Strategic Framework to Counter Terrorism and Targeted Violence.
- Status: Partial Implementation
Develop and Disseminate Counter-messaging, as suggested by DHS (link)
- Date Proposed or Announced: 09/20/2019
- Summary: In some countries, Facebook and Google have implemented features that redirect users searching for extremist content to counter-narratives. The hope is this could deradicalize such users. Experts argue that providing this service at scale could help reduce widespread radicalization online. These messages could be developed in partnership with community organizations, who may be more aware of local frustrations. DHS advocated for a wider adoption of these practices in its 2019 Strategic Framework to Counter Terrorism and Targeted Violence.
- Status: Partial Implementation
Partner with Advertisers to Deliver Counter-Messaging, as proposed in the 2017 Digital Forum on Terrorism Prevention (link)
- Date Proposed or Announced: 01/03/2018
- Summary: Advertisers and SEO firms have expertise in targeting and scaling campaigns. Platforms, NGOs, and the government could partner with these businesses to more effectively deliver counter-extremist messaging. One version of this policy solution was advocated by participants in the 2017 Digital Forum on Terrorism Prevention, hosted by the U.S. Countering Violent Extremism Task Force, the DHS Office for Terrorism Prevention Partnerships, Tech Against Terrorism, the George Washington Program on Extremism, and Fifth Tribe.
- Status: Untested Proposal
-
Federal and Local Agency Programs
DHS’s Targeted Violence and Terrorism Prevention Grant Program (link)
- Date Proposed or Announced: 04/2019
- Summary: Managed by DHS’s Center for Prevention Programs and Partnerships (CP3), the Targeted Violence and Terrorism Prevention Grant Program is the only federal grant program dedicated to enhancing the capabilities of local communities in order to prevent targeted violence and terrorism. The FY21 grants totaled $20M across 37 grants and prioritized the prevention of domestic violent extremism, including through efforts to counter online radicalization and mobilization to violence.
- Status: Full Implementation
Federal Grant Programs, as administered by FEMA/DHS (link)
- Date Proposed or Announced: 10/26/2001
- Summary: For the first time in the history of FEMA's grant programs, Secretary Mayorkas has designated combating domestic violent extremism as one of six National Priority Areas. Of the $415 million allocated to the State Homeland Security Program and the $615 million allocated to the Urban Area Security Initiative, 30% must be spent across these priority areas. These grants are part of DHS's Homeland Security Grant Program, which was created in 2001 by the Patriot Act.
- Status: Full Implementation
Fusion Centers, as implemented by state and local governments in collaboration with DHS (link)
- Date Proposed or Announced: 07/22/2004
- Summary: Fusion Centers are state-owned and operated centers that serve as focal points in states and major urban areas for the receipt, analysis, gathering, and sharing of threat-related information. This information is shared between State, Local, Tribal, Territorial (SLTT), federal, and private sector partners. The idea for joint action across multiple levels of government grew out of a recommendation made in the 2004 report of the 9/11 Commission.
- Status: Full Implementation
Joint Terrorism Task Forces (JTTFs), operated by the FBI (link)
- Date Proposed or Announced: 1979
- Summary: Led by the FBI and the DOJ, JTTFs are locally-based multi-agency partnerships between various federal, state, and local law enforcement agencies tasked with investigating terrorism and terrorism-related crimes. JTTFs have existed since long before 9/11, growing out of a 1979 collaboration between the NYPD and the FBI on bank robberies that later expanded to other areas of criminal activity, ultimately including terrorism.
- Status: Full Implementation
Streamline Bureaucratic Processes, as suggested by DHS (link)
- Date Proposed or Announced: 09/2019
- Summary: Among the priority actions outlined in the Department of Homeland Security’s Strategic Framework for Countering Terrorism and Targeted Violence, this policy would seek to remove barriers within government agencies in order to speed the adoption and creation of technology through public-private partnerships.
- Status: Partial Implementation
Work With and Recruit Technologists, as proposed in the 2017 Digital Forum on Terrorism Prevention (link)
- Date Proposed or Announced: 01/03/2018
- Summary: Technologists outside of the national security establishment can and do engage in independent efforts to counter extremism online. An example is Fifth Tribe’s open source dataset on extremist Twitter content. Experts argue that government officials could consider working with independent technologists and host hackathons and other events to bring these individuals together and generate new ideas. This was another proposal described in the outcome document of the 2017 Digital Forum on Terrorism Prevention.
- Status: Partial Implementation
Addressing Networked Harassment, Diminishing Press Freedoms, and Chilled Speech
The DPGP Index currently includes 24 proposed solutions related to the Addressing Networked Harassment, Diminishing Press Freedoms, and Chilled Speech category, which are listed below under four primary subcategories.
1. Increase market power and resources for media outlets
Shifting market dynamics have resulted in two problems: (1) the loss of local and small news outlets, and (2) the rise of more sensationalized media content, as driven by increased platform distributor power. To counteract these problems, solutions have focused on increasing the market power, financial resources, and content visibility of media outlets.
-
Legislation
Journalism Competition and Preservation Act of 2021 (S.673)
- Date Proposed or Announced: 03/10/2021
- Summary: This act provides a temporary safe harbor for publishers of online content to collectively negotiate with online Platforms regarding the terms on which their content may be distributed. A similar measure was enacted into law in Australia and has been found to increase the bargaining power of media organizations there.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Local Journalism Sustainability Act (H.R.3940)
- Date Proposed or Announced: 06/16/2021
- Summary: This bill provides individuals and businesses with tax credits to support local newspapers and media. Individual taxpayers could claim an income tax credit of up to $250 for a local newspaper subscription. The bill also provides local newspaper employers a payroll tax credit for journalist wages.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Future of Local News Act (H.R.3169)
- Date Proposed or Announced: 05/12/2021
- Summary: This bill establishes the Future of Local News Committee to examine, report on, and make recommendations related to the state of local news and the ability of local news to meet the information needs of the people of the United States. The committee must recommend mechanisms that the federal government can create and implement to support the production of local news, such as the possible creation of a new national endowment for local journalism, or the reform and expansion of the Corporation for Public Broadcasting.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
Tax on Targeted Advertising Revenues, as proposed by Free Press (link)
- Date Proposed or Announced: 02/26/2019
- Summary: Free Press has proposed a plan to tax digital platforms for their advertising revenues. The plan would set aside the tax income to help fund a “Public Interest Media Endowment” that supports independent, community-based, and investigative journalism. A link to Free Press' proposal is provided.
- Status: Third Party Legislative Proposal
Increase Per Capita Spending on Media, as proposed by Geoffrey Cowan (link)
- Date Proposed or Announced: 01/2010
- Summary: There have been calls for the U.S. government to increase its per capita spending on local news. Currently, the U.S. government spends $1.34 per capita to fund public media, whereas the UK spends close to $80 and Denmark spends more than $100. One report that points out the comparative lack of public spending on news media in the U.S. was written by Geoffrey Cowan at the University of Southern California. A link to this report is provided.
- Status: Third Party Legislative Proposal
-
Self-Governance
Digital Platform Media Support Programs, as implemented by Google (link)
- Date Proposed or Announced: 04/28/2015
- Summary: The Facebook Journalism Project and Google News Initiative represent philanthropic efforts by digital platforms to (1) invest in local newsrooms, (2) train journalists on how to leverage platforms to spread their content, and (3) partner with news publishers. These programs have directed hundreds of millions of dollars to media organizations, but critics say they are not enough to make up for the billions in revenues that newsrooms have lost to the companies. A link to Google's News Initiative is provided.
- Status: Partial Implementation
Private Philanthropy for Local Media, as funded by the Knight Foundation (link)
- Date Proposed or Announced: 02/19/2019
- Summary: Private philanthropies like the Knight Foundation have unilaterally funded programs to help local media outlets. These programs focus on (1) safeguarding the first amendment, (2) investing in research to produce new local media revenue streams, (3) training talent and leadership, and (4) researching positive ways to utilize technology. Some nonprofit newsrooms receive roughly 40% of their revenue from foundations like the Knight Foundation.
- Status: Partial Implementation
Facilitate Subscriptions and Donations to News Outlets, as funded by Google's News Initiative (link)
- Date Proposed or Announced: 04/28/2015
- Summary: Some experts argue that small donors and subscribers will be an essential part of any successful media financing model. During the COVID-19 pandemic, many local news outlets launched successful fundraising campaigns from the communities they served. Given this success, experts have urged officials and Platforms to help news outlets grow their reader base – growth that could then translate into higher donations. This is one of the specific areas in which Google's News Initiative assists its partners.
- Status: Partial Implementation
2. Increase newsroom responsibility
Newsrooms play an important role in journalist safety, both online and offline. Effectively responding to journalist harassment online will require an increase in the duties and responsibilities of newsrooms.
-
Legislation
There are no concrete legislative proposals that would increase newsroom responsibilities with regard to journalist safety. Some advocates have called for new or amended legislation that would allow journalists or newsrooms to take legal action against online harassers, similar to Ireland’s Harassment, Harmful Communications and Related Offences Bill.
-
Self-Governance
Calls for Newsrooms to Provide More Employee Training, as advocated and facilitated by the Coalition Against Online Violence (link)
- Date Proposed or Announced: 2021
- Summary: Many loose “proposals” or calls-to-action are directed toward newsrooms, urging them to better equip their journalists with training and legal protection as they engage online. One organization that both advocates and provides resources for this type of training is the Coalition Against Online Violence.
- Status: Partial Implementation
Increase Mandated Digital Security Measures, as identified by the Committee to Protect Journalists (link)
- Date Proposed or Announced: 06/30/2019
- Summary: The Committee to Protect Journalists advises journalists to take digital security measures such as using a password manager, enabling two-factor authentication, using end-to-end encrypted messaging applications, securely storing reporting documents, and regularly updating devices. In addition to these recommendations, the Technology and Social Change Project at Harvard's Shorenstein Center recommends requiring an annual digital security checkup for all reporters. Newsrooms may provide reporters with greater training on information technology in these areas to protect their privacy online. Newsrooms can also increase wide adoption of these measures by mandating them for their journalists.
- Status: Partial Implementation
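For concreteness, below is a minimal sketch of how one of these measures, app-based two-factor authentication, works under the hood, using the open-source pyotp library. The newsroom name and reporter address are placeholders; this illustrates the mechanism rather than prescribing a deployment.

```python
# Minimal sketch of app-based two-factor authentication (TOTP),
# one of the digital security measures recommended above.
# Requires the pyotp library (pip install pyotp); illustrative only.
import pyotp

# Enrollment: the newsroom's identity system generates a per-reporter
# secret, which the reporter loads into an authenticator app via this URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="reporter@newsroom.example",
                            issuer_name="Example Newsroom"))

# Login: the reporter submits the six-digit code currently shown in the
# app, and the server verifies it against the shared secret.
submitted = totp.now()  # stand-in for the code the reporter types in
print("code accepted:", totp.verify(submitted))
```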
Provide Greater Internal Support Systems for Harassed Journalists, as suggested by UNESCO (link)
- Date Proposed or Announced: 05/2022
- Summary: Newsrooms may create new resources to aid reporters in responding to targeted attacks or other forms of harassment by providing online reputation management and SEO assistance, monitoring journalists’ social media profiles and reporting threats to platform staff, building better email filters for hate mail (a toy filter sketch follows this entry), and providing relocation support in instances when a reporter’s address is publicly shared. This solution was recommended by UNESCO in a 2022 report on harassment of journalists.
- Status: Partial Implementation
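As an illustration of the "better email filters" idea, here is a toy keyword-based filter. A real newsroom deployment would rely on mail-server rules and trained classifiers; the Message type, term list, and threshold below are all invented for the sketch.

```python
# Toy sketch of a hate-mail filter of the kind newsrooms might build for
# reporters' inboxes; the keyword list and threshold are illustrative.
from dataclasses import dataclass

ABUSIVE_TERMS = {"threat", "kill yourself", "doxx"}  # placeholder terms

@dataclass
class Message:
    sender: str
    subject: str
    body: str

def abuse_score(msg: Message) -> int:
    """Count abusive terms appearing in the subject or body."""
    text = f"{msg.subject} {msg.body}".lower()
    return sum(term in text for term in ABUSIVE_TERMS)

def route(msg: Message, threshold: int = 1) -> str:
    """Divert likely hate mail to a quarantine folder reviewed by support
    staff, so it never reaches the targeted reporter's inbox."""
    return "quarantine" if abuse_score(msg) >= threshold else "inbox"

print(route(Message("anon@example.com", "a threat", "...")))  # quarantine
```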
Develop a Culture of Support, as argued by April Glaser (link)
- Date Proposed or Announced: 03/04/2022
- Summary: A supportive newsroom culture could help journalists dealing with harassment. Experts have proposed that newsrooms communicate to reporters that they take hateful attacks and harassment against their staff seriously. These experts argue that newsrooms should also express public support for reporters, put in place clear protocols for reporting incidents, and connect reporters with therapists. One advocate of this policy solution is April Glaser, a researcher at the Harvard Shorenstein Center's Technology and Social Change Project.
- Status: Partial Implementation
News Organizations Training Journalists in Trauma Risk Management, as implemented by the BBC (link)
- Date Proposed or Announced: 2002
- Summary: Experts have called on news organizations to train networks of their journalists in Trauma Risk Management (TRiM); these journalists could then support colleagues undergoing harassment. TRiM was developed within the UK Armed Forces to mitigate the impact of exposure to traumatic events on service members, but it has also proven effective in other hierarchical organizations whose members may be exposed to trauma. The BBC has offered this program to its journalists since 2002.
- Status: Partial Implementation
3. Increase platform duty and liability
The harassment journalists and public figures face online occurs primarily on digital platforms, and as a result of content spread through them. As such, effectively responding to journalist harassment online will require increasing the duties and liability of digital platforms.
-
Legislation
SAFE Tech Act of 2021 (S.299)
- Date Proposed or Announced: 02/08/2021
- Summary: This bill reforms Section 230 of the Communications Decency Act. Specifically, the bill allows users to sue social media companies for enabling cyber-stalking, targeted harassment, and discrimination on their platforms.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Expand Privacy and Security Functionalities for Journalists, as proposed by Caroline Sinders and Vandinika Shukla (link)
- Date Proposed or Announced: 05/05/2021
- Summary: Researchers at Harvard's Business and Kennedy Schools have suggested that platforms create a new user category for journalists. Users categorized as journalists would then have access to increased privacy and security functionality, such as more nuanced privacy settings, the ability to delete content they are tagged in (not just block or mute it), and more personalized ways of filtering content (a hypothetical sketch follows this entry). The Harvard researchers, Caroline Sinders and Vandinika Shukla, published some of their insights in a 2021 Slate article. A link to this article is provided.
- Status: Partial Implementation
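To make the proposal concrete, here is a hypothetical sketch of a journalist user category gating one of the suggested capabilities, deleting tagged content. Every name below is invented for illustration; no real platform API is implied.

```python
# Hypothetical sketch of a "journalist" user category that unlocks extra
# privacy functionality; all types and names are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum

class UserCategory(Enum):
    STANDARD = "standard"
    JOURNALIST = "journalist"  # hypothetical new category

@dataclass
class Account:
    handle: str
    category: UserCategory = UserCategory.STANDARD
    tagged_content: list[str] = field(default_factory=list)

def delete_tagged_content(account: Account, content_id: str) -> bool:
    """Journalists may delete content they are tagged in outright; standard
    accounts fall back to the usual block/mute flows."""
    if account.category is not UserCategory.JOURNALIST:
        return False
    if content_id in account.tagged_content:
        account.tagged_content.remove(content_id)
        return True
    return False
```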
Modify Default Settings for Journalists, as suggested by Caroline Sinders, Vandinika Shukla, and Elyse Voegeli (link)
- Date Proposed or Announced: 01/05/2021
- Summary: Many of the basic steps journalists can take to protect themselves involve changing default settings on their public profile, such as restricting who can view their content or ensuring their account is not searchable via email. Platforms could be proactive by modifying the default settings for journalists, ensuring the most protective settings are automatically selected (a minimal sketch follows this entry). Caroline Sinders, Vandinika Shukla, and Elyse Voegeli outline this proposal in the article linked.
- Status: Partial Implementation
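Below is a minimal sketch of what protective-by-default settings could look like, assuming a platform can flag an account as belonging to a journalist; the setting names are invented for illustration.

```python
# Sketch of protective defaults selected automatically for journalist
# accounts; the fields are hypothetical stand-ins for real settings.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    searchable_by_email: bool = True       # typical platform default
    dms_from_anyone: bool = True
    tagging_requires_approval: bool = False

def defaults_for(is_journalist: bool) -> PrivacySettings:
    """Auto-select the most protective settings for journalist accounts,
    rather than leaving reporters to hunt through settings menus."""
    if is_journalist:
        return PrivacySettings(searchable_by_email=False,
                               dms_from_anyone=False,
                               tagging_requires_approval=True)
    return PrivacySettings()

print(defaults_for(is_journalist=True))
```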
Maintain Online Anonymity for Journalists, as endorsed by UN Special Rapporteur David Kaye (link)
- Date Proposed or Announced: 05/22/2015
- Summary: Experts have advised journalists to keep separate professional and personal social media accounts to protect their private lives. To keep these accounts separate, journalists have been encouraged to: (1) use different names for their private accounts, (2) not publish identifying photographs on these private accounts, and (3) use different phone numbers for their account setup. Platform efforts to roll back user anonymity, through real-name or ID-verification policies, could threaten journalists' safety. Experts therefore propose that platforms make special accommodations for journalists when enacting practices designed to reduce user anonymity. The importance of online anonymity was noted by then-UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, in a 2015 report.
- Status: Partial Implementation
Enable Bulk Reporting of Content, as recommended by an International Press Institute expert panel (link)
- Date Proposed or Announced: 07/13/2016
- Summary: Journalists can be overwhelmed by thousands of harassers at once, but most social media platforms only let users delete direct messages and comments one at a time, making it difficult to keep their accounts clear of large quantities of harassing messages. Platforms could offer batch delete and block functions to assist journalists and other users experiencing harassment (a hypothetical sketch follows this entry). One of the groups to make this recommendation was an expert panel convened by the International Press Institute.
- Status: Partial Implementation
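Here is a hypothetical sketch of such a batch function. The block_user and report_user callables stand in for whatever single-item endpoints a platform actually exposes; no real API is implied.

```python
# Hypothetical sketch of the batch block-and-report function recommended
# above; the callables are placeholders for real platform endpoints.
import time
from typing import Callable, Iterable

def batch_moderate(user_ids: Iterable[str],
                   block_user: Callable[[str], None],
                   report_user: Callable[[str], None],
                   pause_s: float = 0.1) -> int:
    """Block and report every listed account in one action, instead of
    forcing the target of a pile-on to handle accounts one at a time."""
    handled = 0
    for uid in user_ids:
        block_user(uid)
        report_user(uid)
        handled += 1
        time.sleep(pause_s)  # crude rate limiting for the sketch
    return handled
```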
Increase Transparency and Recourse in Content Reporting, as suggested by Caroline Sinders, Vandinika Shukla, and Elyse Voegeli (link)
- Date Proposed or Announced: 01/05/2021
- Summary: Researchers, as well as Amnesty International and other advocacy organizations, recommend that platforms (1) make their processes for reporting harassing content more transparent, (2) clarify what constitutes violence and abuse on their sites, and (3) provide recourse for individuals who disagree with platform moderation decisions. Among the researchers proposing this solution are Caroline Sinders, Vandinika Shukla, and Elyse Voegeli.
- Status: Partial Implementation
Modify UI to Make Protective Features Easier to Access, as suggested by Caroline Sinders, Vandinika Shukla, and Elyse Voegeli (link)
- Date Proposed or Announced: 01/05/2021
- Summary: Some protective settings are hard for the average user to find. Design choices matter to user uptake, and one way to make privacy and security settings more accessible is to make them more visible rather than burying them in multiple drop-down menus. Sinders, Shukla, and Voegeli recommended that platforms make these processes more accessible and easier to understand, as Twitter did in March 2020 by making it easier to mute or block certain words and hashtags (a toy sketch of such a mute filter follows this entry).
- Status: Partial Implementation
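As a toy illustration of the mute-words mechanism referenced above, the sketch below hides posts containing muted terms. The matching is deliberately naive and does not reflect any platform's actual implementation.

```python
# Toy sketch of a mute-words/hashtags filter; word-level matching only.
def visible_posts(posts: list[str], muted_terms: set[str]) -> list[str]:
    """Hide any post containing a muted word or hashtag."""
    muted = {t.lower() for t in muted_terms}
    return [p for p in posts if not (set(p.lower().split()) & muted)]

feed = ["good morning", "you are trash #hate", "new story is up"]
print(visible_posts(feed, {"#hate", "trash"}))
# -> ['good morning', 'new story is up']
```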
4. Increase cooperation to counter harassment online
Effectively responding to harassment will require increased cooperation between newsrooms, digital platforms, and law enforcement.
-
Legislation
Global Press Freedom Act of 2021 (S.204)
- Date Proposed or Announced: 02/03/2021
- Summary: This bill establishes an Office of Press Freedom to advance the protection and well-being of members of the press abroad and to engage with foreign governments and global press freedom organizations concerning freedom of the press and of expression.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Bipartisan Group of Sponsors
International Press Freedom Act of 2021 (S.1495)
- Date Proposed or Announced: 04/29/2021
- Summary: This bill would promote global press freedom by creating a "Coordinator for International Press Freedom" at the State Department. The legislation would also authorize new funding for programs that help keep foreign journalists safe; use existing funding to prevent, investigate, and prosecute crimes against journalists overseas; and create a new visa category to provide threatened journalists with refuge in the United States.
- Status: Referred to Committee in Previous Congress
- Partisan Status: Democrat-Only Sponsors
-
Self-Governance
Platform Policies to Counter Online Harassment and Bullying, as Meta has implemented to protect public figures (link)
- Date Proposed or Announced: 10/13/2021
- Summary: Twitter and Facebook have a variety of tools and recommendations for users facing harassment. These tools include reporting schemes to notify Platforms of harassing content, as well as methods to block or unfollow unwelcome users. Facebook also uses AI systems to proactively find and remove violating content (a simplified sketch follows this entry). In October 2021, Meta announced an expansion of its anti-bullying and harassment policies to better protect public figures facing mass harassment. Platforms could continue refining these policies to combat the specific threats facing journalists and other vulnerable individuals.
- Status: Partial Implementation
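To illustrate the general shape of such proactive screening, here is a sketch in which a trivial keyword heuristic stands in for a trained classifier; the signals and thresholds are invented, and production systems are far more sophisticated.

```python
# Simplified sketch of proactive content screening; a keyword heuristic
# stands in for an ML classifier, and all thresholds are invented.
def harassment_score(text: str) -> float:
    """Stand-in for a model's estimated probability that text is harassing."""
    signals = ("worthless", "die", "get out")
    hits = sum(s in text.lower() for s in signals)
    return min(1.0, hits / 2)

def screen(post: str, remove_at: float = 0.9, review_at: float = 0.5) -> str:
    """Route clear violations to automatic removal and borderline
    content to human moderators."""
    score = harassment_score(post)
    if score >= remove_at:
        return "remove"
    if score >= review_at:
        return "human_review"
    return "allow"

print(screen("you are worthless, just die"))  # -> remove
```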
Third Party Offerings to Remove Journalists’ Personal Information from Data Broker Databases, as offered by companies like DeleteMe (link)
- Date Proposed or Announced: 2011
- Summary: News organizations can work with third-party data removal services to have their journalists' personal information removed from data broker datasets. This could improve physical and psychological safety for journalists, who often become targets of harassment campaigns. One business that provides these services is DeleteMe.
- Status: Partial Implementation
Submit a Proposal, Edit, or Comment
The Index is currently a dynamic, non-exhaustive list of proposals. If we missed a proposal, or you would like to provide us with a comment or edit to the index, please fill out the form below.
If you'd like to connect further on the Index or our larger Democracy and Internet Governance Initiative, please reach out to our research team at tapp_project@hks.harvard.edu.