Article from the Belfer Center for Science and International Affairs, Harvard Kennedy School

The Drivers of Platform Harm

Platform Governance

The following analysis is part of Harvard Kennedy School’s Democracy and Internet Governance Initiative, whose initial research has focused primarily on improving the quality of our information ecosystem, countering online extremism and radicalization, addressing harassment, and confronting diminishing press freedom online.

Digital platforms have reshaped our society. Social media companies like Facebook and TikTok connect billions of people around the world, entertain millions on a daily basis, and create economic opportunity across industries and professions. However, their rise has been accompanied by a burgeoning mental health crisis, countless cases of online harassment, and growing political polarization (to name just a few harms).

In 2021, 22% of high schoolers seriously considered attempting suicide.[1] A 2021 Pew survey found that 41% of American adults have been harassed online.[2] Pew Research also reports that the U.S. is more polarized than at any point in the last two decades.[3] While societal problems like polarization and the mental health crisis are complex issues with numerous causes, experts have demonstrated strong correlations between digital platforms and modern forms of societal harm.

This publication identifies the causal drivers of these harms, based on research across academia, civil society, and industry. By identifying and aggregating these drivers, industry and government officials can focus their efforts on targeted interventions that counter the specific variables at issue, including:

  1. Consumer health and security vulnerabilities
  2. Privacy infringement
  3. Incitement to violence and radicalization
  4. Networked harassment and diminishing press freedoms
  5. Polarization

Consumer Health and Security Vulnerabilities

Mental Health

Feelings of depression, anxiety, and stress are becoming commonplace in American society. In 2019–2020, 20.78% of American adults experienced a mental illness.[4] A CDC study of mental health among high schoolers found alarming results. In 2021, 42% of high school students experienced persistent feelings of sadness or hopelessness, 22% seriously considered attempting suicide, 18% made a suicide plan, and 10% actually attempted suicide. All these indicators of poor mental health increased between 2011 and 2021, and all were even more concerning among female and LGBTQ+ students.[5]

Numerous researchers have studied the link between social media use and anxiety, stress, and depression. One meta-study put it bluntly, “(s)ocial media are responsible for aggravating mental health problems.”[6]

Researchers have generally pointed to five psychological drivers that, when coupled with the rise of digital platforms, fuel the mental health crisis.

  • First, social comparison reduces self contentment. Social media and online platforms encourage users to compare their experiences to others[7] and to seek external validation[8] for their accomplishments. However, online interactions are not a sufficient replacement for meaningful connection.[9] Studies show this can lead to structural changes in the brain, such as a reduction in the reward center, making users’ mental health more susceptible to harmful comments.[10]
  • Second, information overload overwhelms users and causes feelings of anxiety and stress. The internet has vastly expanded human access to information, and has made it accessible within seconds. Researchers suggest that accessing large amounts of information in short periods of time may cause feelings of anxiety, stress, powerlessness, and mental exhaustion. As two scholars of information overload put it, “the general consensus is that people with a high level of information overload will experience lowered well-being, and the more information stress someone feels the less happy they are in general.”[11] Further, the increased information leads to multitasking, which increases the production of the stress hormone cortisol.[12]
  • Third, addiction to digital platforms affects dopamine levels in users. Digital platforms are designed in a way that makes them addictive.[13] In 2019, teens spent an average of 1 hour and 27 minutes on social media every day.[14] Moreover, according to a researcher at Stanford, “as of 2021, there are over 3.78 billion social media users worldwide, with each person averaging 145 minutes of social media use per day.”[15] This addiction can be damaging to user mental health. Research shows that becoming addicted to anything affects the brain’s ability to process dopamine, and this process drives depression and anxiety.[16] 
  • Fourth, algorithmic curation of content can highlight counterproductive material for individuals predisposed to mental health problems. Negative thinking is contagious. Researchers have found that exposure to suicidal content is correlated with an increase in suicidal ideation and suicide attempts,[17] especially within young teen populations.[18] Unfortunately, platform algorithms can inadvertently highlight or spread negative material. In a 2022 study by the Center for Countering Digital Hate, researchers posed as 13-year-olds with an interest in body image and mental health on TikTok. Within 2.6 minutes, TikTok’s algorithm recommended suicidal content to their accounts. Within 8 minutes, TikTok served content related to eating disorders.[19]
  • Fifth, misinformation and disinformation can negatively affect user health behaviors. Digital platforms are rife with mis- and disinformation related to health. According to a report by the WHO, “the proportion of health misinformation on social media … reached up to 51% in posts associated with vaccines, up to 28.8% in posts associated with COVID-19, and up to 60% in posts related to pandemics. Among YouTube videos about emerging infectious diseases, 20–30% were found to contain inaccurate or misleading information.”[20] This same paper found that the consequences of such mis- and disinformation include a reduction in patients’ willingness to vaccinate and increased social fear, panic, stress, and mental disorders. A separate study found that fake news in public health “can cause psychological disorders and panic, fear, depression, and fatigue.”[21]

Digital platforms fray communal relations in three ways, giving rise to increased feelings of isolation and loneliness.

  • First, cyberbullying alienates users and increases the risk of self-harm. Cyberbullying is on the rise. In 2018, Pew Research reported that 59% of teens have experienced some form of cyberbullying.[22] Cyberbullying is also linked to mental health challenges. Teens who have been cyberbullied face a 50% higher risk of suicidal thoughts than their peers,[23] and are twice as likely to harm themselves.[24] Cyberbullying has also been linked with increased feelings of post-traumatic stress.[25] Finally, because cyberbullying is highly visible and permanent, victims may be further ostracized and excluded by classmates, who fear becoming victims themselves.
  • Second, reliance on digital platforms for social connection is correlated with user loneliness and isolation. While evidence for a causal connection appears lacking, a clear correlation exists between social media use and feelings of loneliness. One study found that undergraduate students experienced reduced feelings of loneliness and depression when they reduced their social media use.[26] This correlation also appears to be generational, with older internet users reporting fewer feelings of loneliness than younger users.[27] This generational divide could point to a separate underlying factor: what matters is the way in which one uses the platform. If an older user utilizes social media to rediscover and connect with others, whom they then meet offline, platforms can actually reduce loneliness. As one study put it, “older people’s engagement on social media may be a resource to reduce loneliness during the COVID-19 pandemic.”[28] The question, then, is how to encourage positive social media use.
  • Third, disinhibition disrupts the ability to form online communities and make meaningful connections online. Disinhibition – the tendency of users to act outside the norms of society by being, for example, more aggressive online or less likely to exercise self-restraint – is common on social media.[29] A 2022 study on disinhibition and trolling found that anonymous, disinhibited accounts were more likely to troll other users than named accounts.[30] A further study in 2014 found that disinhibition was a significant predictor of cyberbullying.[31] Researchers theorize that disinhibition could be the result of online anonymity or distance from others. There is also a positive association between witnessing online cruelty and participating in it, thus propagating a toxic online environment.[32] Additionally, feelings of invisibility, anonymity, and asynchronicity, out of which online disinhibition grows, have been linked to online deception such as catfishing.[33]

Scams

Increasing numbers of Americans are falling victim to online scams. In 2022 alone, the FBI received 800,944 complaints of internet scams, worth $10.3 billion in damages. That is up from 351,937 complaints worth $2.7 billion in 2018.[34] Social media has become a vector for these scams. Americans aged 18–56, the core demographic of social media users,[35] were 34% more likely in 2021 to report losing money to scams and fraud, most often cryptocurrency and online shopping scams.[36] Forty percent of the scams millennials fell prey to in 2021 originated on social media.[37]

Researchers have pointed to two drivers that increase susceptibility to scams.

  • First, reduced social connection opens users up to becoming victims of scams. Lonely individuals are more likely to fall victim to online scams. A 2022 survey from the UK indicated that 25% of people who stated they experience loneliness on a weekly basis have been scammed.[38] These users appear to know they are more susceptible. Among adults who have felt lonely, 29% said they felt more vulnerable to an online scam, particularly an online romance scam. In 2022 alone, 70,000 victims of online romance scams were reported in the U.S., suffering a collective financial loss of $1.3 billion.[39] Individual victims of romance scams are said to lose $6,003 on average.[40] After cryptocurrency scams, romance scams are reported to be the second most profitable type of scam on social media.[41]
  • Second, lack of understanding around online safety increases susceptibility to scams. Research has linked increased analytical reasoning abilities with resilience against digital scams,[42] whereas over-disclosure and trustfulness online[43] – including a lack of awareness of the potential threats online communication can hold[44] – and a lack of financial literacy[45] have been correlated with susceptibility to scams.

Infringement of Privacy

Americans are growing more concerned with how their data is being collected and used. According to Pew Research, more than 60% of U.S. adults do not believe it is possible to go through daily life without their data being collected by the government or companies. Further, Americans do not believe they receive sufficient benefits to make this data collection worthwhile, with 81% of adults reporting that the risks posed by corporate data collection outweigh the potential benefits.[46]

But even as Americans say they are concerned about data collection, they also admit to rarely reading companies’ privacy policies. The same Pew poll found that just 9% of adults say they always read a company’s privacy policy before using its product, and only 13% say they often do. A total of 36% of Americans say they never read a company’s privacy policies.[47]

Researchers have pointed to three drivers that fuel the lack of privacy in today’s digital world.

  • First, users expect and prefer personalized experiences. Internet users tend to appreciate and expect personalized, relevant, and helpful online experiences.[48] Creating such an experience often requires a platform to understand a user’s preferences, which happens through the collection of user data. This tradeoff drives the data economy and forces users to choose between participating in today’s online world and keeping their data unmonitored.
  • Second, the data economy incentivizes limitless collection of data. Data is a key commodity for modern businesses. Access to data is crucial to feed AI algorithms which are critical for many products and services, including self-driving cars, advertising, digital applications, and pharmaceutical development. The tremendous value of data incentivizes data collectors to collect, aggregate, structure, and store data to be sold to data consumers.[49] This model encourages maximized data collection and does not require compensation of data subjects.
  • Third, a lack of understanding, transparency, and accessibility reduces calls for more privacy among the general population. Most Americans claim that they lack understanding of how companies use their data (59%) and of how the government uses their data (78%). Despite feeling concerned about data collection, only 9% of Americans always read companies’ privacy policies.[50] Privacy policies are often lengthy and difficult for most individuals to understand.[51]

Incitement to Violence and Radicalization

Radicalization and incitement to violence are not new problems. The internet has made the problem worse,[52] providing speed and scale to the radicalization process. In particular, social media has proven to be an important tool for bad actors aiming to spread extremist ideologies and incite violence.[53]

Researchers have generally pointed to five drivers of incitement to violence and radicalization.

  • First, the internet increases bad actors’ communication power. Web 2.0 has provided largely free and unrestricted network access to the majority of people.[54] The internet facilitates near-instantaneous communication and sharing of multimedia content regardless of geographic distance or boundaries, allowing bad actors to spread propaganda and mis/disinformation. It also gives bad actors access to individuals, groups, and communities online, along with the ability to narrowcast to them and communicate with them directly.
  • Second, anonymous and unverified accounts have been linked to higher rates of dangerous activity online. Anonymity online has been linked to higher exhibition of negative behaviors online, including aggression, antisociality, and violence.[55]
  • Third, engagement-based algorithms feed users increasingly extreme content and amplify the reach of this content, exacerbating dangerous polarization,[56] echo chambers, and extremism.[57]
  • Fourth, the internet is a rights-based, not permissions-based, ecosystem, resulting in a lack of governance in the space.[58] During the internet’s early development, self-regulation, open access, and the free flow of information were prioritized. These values continue to shape public opinion and regulatory approaches regarding the internet.
  • Fifth, misinformation and disinformation feed narratives that can be used to radicalize and incite. While research linking mis- and disinformation to political violence is lacking, one study found that countries with increased disinformation about their governments had higher levels of violence and domestic terrorism.[59] A further study found that bad actors can increase radicalization by exposing users to false and derogatory rhetoric about minority groups.[60] This rhetoric, often spread through digital platforms, increases contempt for minority groups and results in increased political radicalization. The idea that misinformation increases political radicalization is widely accepted by Americans. A 2022 poll found that 73% of adults believed misinformation increased extreme political views, and 77% of adults believed it increased hate crimes, including violence motivated by race, gender, or religion.[61]

Networked Harassment and Diminished Press Freedoms

Online harassment is a fact of life in America. A 2021 Pew survey found that 41% of U.S. adults have experienced online harassment. Of these adults, 34% say their most recent incident involved physical threats, sexual harassment, stalking, or sustained harassment. Fully 75% of these Americans say their most recent experience occurred on a social media platform.[62]

While this is a problem facing all Americans, journalists and public figures – particularly women – face higher rates of harassment. In a UNESCO global study, 73% of female journalists reported having experienced online abuse, harassment, threats, and attacks. Further, 20% of female journalists say they were abused and attacked offline in incidents they believe were connected to their online harassment.[63] Online harassment is often accompanied by threats against female journalists’ sources, families, and audiences. This can give rise to self-censorship by journalists, often called chilled speech. In a study by the International Federation of Journalists, 38% of harassed female journalists admitted to self-censorship as a result of their online attacks.[64] This is having a serious impact on the journalist pipeline. A 2021 study by Reporters Without Borders indicated that digital harassment caused about a quarter of female reporters to leave professional networks, resign, decline to renew contracts, or abandon specialties.[65]

Researchers have generally pointed to three drivers of networked harassment, diminishing press freedom, and “chilled speech” in the United States.

  • First, anonymity can incentivize reckless behavior and increase online aggression. Anonymity has long been linked with increased abusive behavior. A Stanford study in 1969 first documented that anonymous participants were more likely to comply with orders to administer electric shocks to other participants.[66] Later studies appeared to further confirm that higher degrees of anonymity increase aggressive behavior.[67] This behavior can occur in online interactions, which often involve the use of anonymous accounts. A 2016 study found that participants reported a higher temptation to be aggressive in online blog postings when they were anonymous than when they were not.[68]
  • Second, frictionless account creation and the ubiquity of smaller Platforms allow wrongdoers to continue their behavior even after being banned from large Platforms. Removing negative users from Platforms has been a common tool for punishing bad actors. However, research shows that these users often shift to smaller Platforms with more lax rules, where they become even more toxic and aggressive.[69] Additionally, users banned from Platforms can attempt to create new accounts under different names – a process called ban evasion.[70] While Platforms have invested in detecting and stopping such activity, bad actors who go undiscovered can resume their behavior on large Platforms under pseudonyms.
  • Third, the sheer volume of content makes it difficult for Platforms to effectively and consistently moderate online harassment. Platforms are flooded with new information every day. By 2011, over 200 million tweets were sent per day.[71] The sheer volume of data exchanged on these Platforms means that, even with 99% accurate moderation algorithms, large quantities of negative content still reach other users.
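The scale problem behind this last driver can be made concrete with a back-of-the-envelope sketch. The 200-million daily volume comes from the text; the harmful-content share and moderation accuracy below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope estimate of how much harmful content slips past
# moderation at platform scale. Only DAILY_POSTS comes from the text;
# the other two figures are illustrative assumptions.

DAILY_POSTS = 200_000_000   # tweets per day (2011 figure cited above)
HARMFUL_SHARE = 0.05        # assumed fraction of posts violating policy
ACCURACY = 0.99             # assumed moderation accuracy on harmful posts

harmful = DAILY_POSTS * HARMFUL_SHARE
missed = harmful * (1 - ACCURACY)   # harmful posts the filter lets through

print(f"Harmful posts per day: {harmful:,.0f}")
print(f"Missed despite 99% accuracy: {missed:,.0f}")
```

Under these assumptions, a filter that is right 99% of the time still lets roughly 100,000 harmful posts through every day, which is why "high accuracy" alone does not solve moderation at scale.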

Polarization

America is becoming more polarized. As of 2014, Pew Research found that 79% of Democrats had an unfavorable view of Republicans – up from 57% in 1994 – and 82% of Republicans had an unfavorable view of Democrats – up from 68% in 1994.[72] The number of moderates in Congress is also shrinking. In 1972 there were more than 160 moderate members of Congress. By 2022, there were about 24.[73] Political violence is also becoming more commonplace. An NPR/Ipsos poll found that 32% of Trump voters and 22% of Biden voters believed it was “OK to engage in violence to protect American democracy.”[74]

As digital platforms become increasingly influential in shaping public opinion and discourse, experts have considered their effect on political polarization.

Researchers generally point to five platform-based drivers that contribute to polarization in society.

  • First, filter bubbles and echo chambers. Researchers claim that platforms may contribute to polarization through the creation of “filter bubbles,” where individuals are predominantly exposed to content that reinforces their existing beliefs and opinions.[75] This can lead to the formation of “echo chambers,” where like-minded individuals become increasingly extreme in their beliefs. This effect may be exacerbated by algorithms that are designed to show users content that is more likely to keep them engaged and interested.[76]
  • Second, algorithmic bias. The algorithms that determine what content is shown to users can contribute to polarization by promoting extreme or sensationalist content that generates more engagement, while suppressing more moderate views.[77] This can lead to a feedback loop where extreme content is amplified, while more moderate content is sidelined.
  • Third, amplification of emotions. Research suggests that emotionally charged content is more likely to be shared, liked, and commented on in social media, making emotional content more prevalent than non-emotional content.[78] Emotional content is more likely to elicit strong reactions, resulting in more polarized conversations.
  • Fourth, low cost for scaled social media influence. Paying to spread information through social media is cheap. An experiment in Kenya by the Institute for Strategic Dialogue showed that for just $10,000 in social media ads, researchers were able to reach 4.4 million Facebook users – or two-thirds of all Facebook users in Kenya.[79] This low-cost process allows bad actors to easily inflict harm on society.
  • Fifth, mis- and disinformation increase polarization. The news that users view shapes their voting behavior. One study found that watching Fox News increased Republican vote share by 0.3% among viewers induced to watch 2.5 additional minutes per week.[80] If the news users view is predominantly seeded with mis- and disinformation, its effect on polarization only grows. One study modeled how disinformation was used to drive polarization in Hong Kong during the Umbrella Revolution.[81]
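The "low cost" claim in the fourth driver is easy to quantify from the figures in the text ($10,000 in ad spend reaching 4.4 million users); the calculation below simply restates that division as a cost per user and per thousand users:

```python
# Cost-per-reach of the Kenya ad-spend experiment cited in the text.
# Both input figures come from the Institute for Strategic Dialogue
# experiment described above.

SPEND_USD = 10_000
USERS_REACHED = 4_400_000

cost_per_user = SPEND_USD / USERS_REACHED
cost_per_thousand = cost_per_user * 1000  # roughly this campaign's CPM

print(f"Cost per user reached: ${cost_per_user:.4f}")
print(f"Cost per 1,000 users reached: ${cost_per_thousand:.2f}")
```

At well under a cent per user reached, influence campaigns at national scale are within reach of even modestly funded bad actors.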

Additionally, researchers point to two human and societal drivers that contribute to polarization.

  • First, selective exposure. Individuals tend to consume media that aligns with their preexisting attitudes, values, and beliefs, which may lead to limited exposure to opposing viewpoints.[82] This can strengthen polarization by reinforcing existing beliefs and discouraging engagement with alternative viewpoints.
  • Second, bad actors can take advantage of existing fractures in society, identities, and fears. Bad actors often exploit existing grievances and identities in society as a means to drive polarization. For example, multiple studies analyzed Russia’s strategy for meddling in the 2016 US Election, and found that it focused on tapping into existing identities and grievances of American society. They often used sophisticated social media messaging techniques and algorithms to build trust among members of a particular group and reinforce group identity, before turning them against opposing groups.[83]

[1] National Center for HIV, Viral Hepatitis, STD, and TB Prevention (2021). Youth Risk Behavior Survey: Data Summary & Trends Report. Centers for Disease Control and Prevention. Retrieved from https://www.cdc.gov/healthyyouth/data/yrbs/pdf/YRBS_Data-Summary-Trends_Report2023_508.pdf

[2] Vogels, A. (2021, January 8). Characterizing people’s most recent online harassment experience. Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2021/01/08/characterizing-peoples-most-recent-online-harassment-experience/ 

[3] Pew Research Center (2014, June 12). Political Polarization in the American Public. Retrieved from https://www.pewresearch.org/politics/2014/06/12/political-polarization-in-the-american-public/

[4] Reinert, M., Fritze, D., & Nguyen, T. (2022, October). The State of Mental Health in America 2023. Mental Health America. Retrieved from https://mhanational.org/sites/default/files/2023-State-of-Mental-Health-in-America-Report.pdf

[5] Youth Risk Behavior Survey: Data Summary & Trends Report.

[6] Karim, F., Oyewande, A. A., Abdalla, L. F., Chaudhry Ehsanullah, R., & Khan, S. (2020, June). Social Media Use and Its Connection to Mental Health: A Systematic Review. Cureus. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7364393/ 

[7] Nobel, J. (2018, December 21). Does social media make you lonely? Harvard Medical School. Retrieved from https://www.health.harvard.edu/blog/is-a-steady-diet-of-social-media-unhealthy-2018122115600

[8] McLean Hospital. (2023, January 13). The Social Dilemma: Social Media and Your Mental Health. Mass General Brigham. Retrieved from https://www.mcleanhospital.org/essential/it-or-not-social-medias-affecting-your-mental-health

[9] Ibid.

[10] Ricci, J. (2018, June 28). The Growing Case for Social Media Addiction. California State University. Retrieved from https://www.calstate.edu/csu-system/news/Pages/Social-Media-Addiction.aspx

[11] Bawden, D. & Robinson, L. (2020). Information Overload: An Overview. Oxford Encyclopedia of Political Decision Making. Oxford: Oxford University Press. Retrieved from https://openaccess.city.ac.uk/id/eprint/23544/

[12] Levitin, D. (2015, January 18). Why the modern world is bad for your brain. The Guardian. Retrieved from https://www.theguardian.com/science/2015/jan/18/modern-world-bad-for-brain-daniel-j-levitin-organized-mind-information-overload

[13] Montag, C., Lachmann, B., Herrlich, M., & Zweig, K. (2019, July 23). Addictive Features of Social Media/Messenger Platforms and Freemium Games against the Background of Psychological and Economic Theories. International journal of environmental research and public health. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6679162/

[14] Rideout, V., Peebles, A., Mann, S., & Robb, M. B. (2022). Common Sense census: Media use by tweens and teens, 2021. Common Sense. Retrieved from https://www.commonsensemedia.org/sites/default/files/research/report/8-18-census-integrated-report-final-web_0.pdf

[15] Qiu, T. (2021, September 14). A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health. Stanford University Human-Centered Artificial Intelligence. Retrieved from https://hai.stanford.edu/news/psychiatrists-perspective-social-media-algorithms-and-mental-health 

[16] Ranes, B. (2015, September). Drug Abuse, Dopamine, and the Brain’s Reward System. Butler Center for Research. Retrieved from https://www.hazeldenbettyford.org/content/dam/hbff/images/sitecore/files/bcrupdates/bcr_ru16_drugabuseanddopamine.pdf

[17] Gould, M., & Lake, A. (2013, February 6). The Contagion of Suicidal Behavior. Forum on Global Violence Prevention; Board on Global Health; Institute of Medicine; National Research Council. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK207262/

[18] Swanson, S. A., & Colman, I. (2013, July 9). Association between exposure to suicide and suicidality outcomes in youth. Canadian Medical Association Journal. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3707992/ 

[19] Center for Countering Digital Hate. (2022, December 15). Deadly By Design. Retrieved from https://counterhate.com/wp-content/uploads/2022/12/CCDH-Deadly-by-Design_120922.pdf

[20] Borges do Nascimento, I. J., Pizarro, A. B., Almeida, J. M., Azzopardi-Muscat, N., Gonçalves, M. A., Björklund, M., & Novillo-Ortiz, D. (2022, June 30). Infodemics and health misinformation: a systematic review of reviews. Bulletin of the World Health Organization. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9421549/

[21] Rocha, Y. M., de Moura, G. A., Desidério, G. A., de Oliveira, C. H., Lourenço, F. D., & de Figueiredo Nicolete, L. D. (2021, October 9). The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. Zeitschrift fur Gesundheitswissenschaften = Journal of public health. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/

[22] Anderson, A. (2018, September 27). A Majority of Teens Have Experienced Some Form of Cyberbullying. Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying/ 

[23] Sumner, S. A., Ferguson, B., Bason, B., et al. (2021, September 20). Association of Online Risk Factors With Subsequent Youth Suicide-Related Behaviors in the US. JAMA Network Open. Retrieved from https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2784337?utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_term=092021

[24] John, A., Glendenning, A. C., Marchant, A., Montgomery, P., Stewart, A., Wood, S., Lloyd, K., & Hawton, K. (2018, April 19). Self-Harm, Suicidal Behaviours, and Cyberbullying in Children and Young People: Systematic Review. Journal of Medical Internet Research. Retrieved from https://www.jmir.org/2018/4/e129

[25] Mateu, A., Pascual-Sánchez, A., Martinez-Herves, M., Hickey, N., Nicholls, D., & Kramer, T. (2019, December 18). Cyberbullying and post-traumatic stress symptoms in UK adolescents. Archives of Disease in Childhood. Retrieved from https://adc.bmj.com/content/105/10/951.info

[26] Hunt, M. G., Marx, R., Lipson, C., & Young, J. (2018, December). No More FOMO: Limiting Social Media Decreases Loneliness and Depression. Journal of Social and Clinical Psychology. Retrieved from https://guilfordjournals.com/doi/abs/10.1521/jscp.2018.37.10.751

[27] Bonsaksen, T., Ruffolo, M., Geirdal, A., et al. (2021, July 21). Loneliness and Its Association With Social Media Use During the COVID-19 Outbreak. Social Media + Society. Retrieved from https://journals.sagepub.com/doi/full/10.1177/20563051211033821

[28] Ibid.

[29] Suler, J. (2004, July 28). The Online Disinhibition Effect. CyberPsychology & Behavior. Retrieved from https://www.liebertpub.com/doi/reader/10.1089/1094931041291295

[30] Nitschinsk, L., Tobin, S., & Vanman, E. (2022, May 20). The Disinhibiting Effects of Anonymity Increase Online Trolling. CyberPsychology, Behavior, and Social Networking. Retrieved from https://pubmed.ncbi.nlm.nih.gov/35594292/

[31] Udris, R. (2014, October 20). Cyberbullying among high school students in Japan: Development and validation of the Online Disinhibition Scale. Computers in Human Behavior. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S0747563214004944

[32] Wachs, S. & Wright, M. F. (2018, September 17). Associations between Bystanders and Perpetrators of Online Hate: The Moderating Role of Toxic Online Disinhibition. International Journal of Environmental Research and Public Health. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6163978/

[33] D'Costa, K. (2014, April 25). Catfishing: The Truth About Deception Online. Scientific American. Retrieved from https://blogs.scientificamerican.com/anthropology-in-practice/catfishing-the-truth-about-deception-online/

[34] Internet Crime Complaint Center. (2022). Internet Crime Report 2022. Federal Bureau of Investigation. Retrieved from https://www.ic3.gov/Media/PDF/AnnualReport/2022_IC3Report.pdf

[35] Pew Research Center. (2021, April 7). Social Media Fact Sheet. Retrieved from https://www.pewresearch.org/internet/fact-sheet/social-media/

[36] Federal Trade Commission. (2022, December 8). Who experiences scams? A story for all ages. Retrieved from https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2022/12/who-experiences-scams-story-all-ages

[37] Ibid.

[38] Nationwide Building Society. (2022, February 14). Love is Blind: Feelings of loneliness and isolation go hand in hand with romance scams. Retrieved from https://www.nationwidemediacentre.co.uk/news/love-is-blind-feelings-of-loneliness-and-isolation-go-hand-in-hand-with-romance-scams

[39] Fletcher, E. (2023, February 9). Romance scammers’ favorite lies exposed. Federal Trade Commission. Retrieved from https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2023/02/romance-scammers-favorite-lies-exposed

[40] Leonhardt, M. (2019, April 18). ‘Nigerian prince’ email scams still rake in over $700,000 a year—here’s how to protect yourself. CNBC. Retrieved from https://www.cnbc.com/2019/04/18/nigerian-prince-scams-still-rake-in-over-700000-dollars-a-year.html

[41] Fletcher, E. (2022, January 25). Social media a gold mine for scammers in 2021. Federal Trade Commission. Retrieved from https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2022/01/social-media-gold-mine-scammers-2021

[42] Kelley, J. N., Hurley-Wallace, A. L., Warner, K. L., & Hanoch, Y. (2023, May). Analytical reasoning reduces internet fraud susceptibility. Computers in Human Behavior. Retrieved from https://www.sciencedirect.com/science/article/pii/S074756322200468X

[43] O’Brien, S. (2021, August 10). Tech-savvy teens falling prey to online scams faster than their grandparents. CNBC. Retrieved from https://www.cnbc.com/2021/08/10/tech-savvy-teens-falling-prey-to-online-scams-faster-than-their-grandparents.html

[44] Williams, E. J., Beardmore, A., & Joinson, A. N. (2017, July). Individual differences in susceptibility to online influence: A theoretical review. Computers in Human Behavior. Retrieved from https://www.sciencedirect.com/science/article/pii/S0747563217301504

[45] DeLiema, M., Fletcher, E., Kieffer, C. N., Mottola, G. R., Pessanha, R., & Trumpower, M. (2019, September 30). Exposed to Scams: What Separates Victims from Non-Victims? FINRA, BBB, and Stanford Center on Longevity. Retrieved from https://www.finrafoundation.org/files/exposed-scams-what-separates-victims-non-victims

[46] Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019, November 15). Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information. Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/

[47] Ibid.

[48] John, L., Kim, T., & Barasz, K. (2018, January-February). Ads That Don't Overstep: How to Make Sure You Don't Take Personalization Too Far. Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Pages/item.aspx?num=53707

[49] Carrière-Swallow, Y., & Haksar, V. (2019, September 23). The Economics and Implications of Data. International Monetary Fund. Retrieved from https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2019/09/20/The-Economics-and-Implications-of-Data-An-Integrated-Perspective-48596

[50] Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information.

[51] Fowler, G. A. (2022, May 31). I tried to read all my app privacy policies. It was 1 million words. Washington Post. Retrieved from https://www.washingtonpost.com/technology/2022/05/31/abolish-privacy-policies/

[52] Jensen, M., LaFree, G., James, P. A., et al. (2016, December). Final Report: Empirical Assessment of Domestic Radicalization (EADR). National Consortium for the Study of Terrorism and Responses to Terrorism. Retrieved from https://www.start.umd.edu/pubs/START_NIJ_EmpiricalAssessmentofDomesticRadicalizationFinalReport_Dec2016_0.pdf

[53] Qureshi, A. J. (2022, October). The Role of Mass and Social Media in Radicalization to Extremism. U.S. Department of Justice. Retrieved from https://www.ojp.gov/ncjrs/virtual-library/abstracts/role-mass-and-social-media-radicalization-extremism

[54] Petrosyan, A. (2023, April 3). Number of internet and social media users worldwide as of January 2023. Statista. Retrieved from https://www.statista.com/statistics/617136/digital-population-worldwide/#:~:text=Worldwide%20digital%20population%202023&text=As%20of%20January%202023%2C%20there,population%2C%20were%20social%20media%20users.

[55] Kasakowskij, R., Friedrich, N., Fietkiewicz, K. J., & Stock, W. G. (2018). Anonymous and Non-anonymous User Behavior on Social Media: A Case Study of Jodel and Instagram. Journal of Information Science Theory and Practice. Retrieved from https://koreascience.kr/article/JAKO201831960580576.pdf

[56] Barrett, P., Hendrix, J., & Sims, G. (2021, September 27). How tech platforms fuel U.S. political polarization and what government can do about it. The Brookings Institution. Retrieved from https://www.brookings.edu/blog/techtank/2021/09/27/how-tech-platforms-fuel-u-s-political-polarization-and-what-government-can-do-about-it/

[57] Sahani, S. (2018). Examining the Association Between Social Media and Violent Extremism: A Social Learning Approach. U.S. Department of Justice. Retrieved from https://www.ojp.gov/ncjrs/virtual-library/abstracts/examining-association-between-social-media-and-violent-extremism

[58] Klosowski, T. (2021, September 6). The State of Consumer Data Privacy Laws in the US (And Why It Matters). The New York Times. Retrieved from https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/

[59] Piazza, J. (2020, September 27). Fake news: the effects of social media disinformation on domestic terrorism. Dynamics of Asymmetric Conflict. Retrieved from https://www.tandfonline.com/doi/full/10.1080/17467586.2021.1895263

[60] Bilewicz, M., & Soral, W. (2020, June 19). Hate Speech Epidemic. The Dynamic Effects of Derogatory Language on Intergroup Relations and Political Radicalization. Advances in Political Psychology. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1111/pops.12670

[61] The Pearson Institute, Associated Press, & NORC at the University of Chicago. (2022). Many Believe Misinformation is Increasing Extreme Political Views and Behaviors. Retrieved from https://apnorc.org/wp-content/uploads/2022/10/Pearson-Institute-AP-NORC-Poll-Report-on-Misinformation.pdf

[62] Characterizing people’s most recent online harassment experience.

[63] Posetti, J., Shabbir, N., Maynard, D., Bontcheva, K., & Aboulez, N. (2021, April). The Chilling: global trends in online violence against women journalists; research discussion paper. UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000377223

[64] International Federation of Journalists. (2018, November 23). IFJ global survey shows massive impact of online abuse on women journalists. Retrieved from https://www.ifj.org/media-centre/news/detail/article/ifj-global-survey-shows-massive-impact-of-online-abuse-on-women-journalists

[65] Reporters Without Borders. (2021, March 5). Sexism’s Toll on Journalism. Retrieved from https://rsf.org/sites/default/files/sexisms_toll_on_journalism.pdf

[66] Zimbardo, P. G. (1969). The Human Choice: Individuation, Reason, and Order versus Deindividuation, Impulse, and Chaos. Stanford University. Retrieved from https://stacks.stanford.edu/file/gk002bt7757/gk002bt7757.pdf

[67] Rehm, J., Steinleitner, M., & Lilli, W. (1987). Wearing uniforms and aggression–A field experiment. European Journal of Social Psychology. Retrieved from https://onlinelibrary.wiley.com/doi/pdf/10.1002/ejsp.2420170310

[68] Zimmerman, G. A. (2012). Online Aggression : The Influences of Anonymity and Social Modeling. University of North Florida. Retrieved from https://digitalcommons.unf.edu/cgi/viewcontent.cgi?article=1472&context=etd

[69] Ali, S., Saeed, M. H., Aldreabi, E., Blackburn, J., et al. (2021, June). Understanding the Effect of Deplatforming on Social Networks. WebSci '21: Proceedings of the 13th ACM Web Science Conference 2021. Retrieved from https://dl.acm.org/doi/10.1145/3447535.3462637

[70] Niverthi, M., Verma, G., & Kumar, S. (2022, April). Characterizing, Detecting, and Predicting Online Ban Evasion. WWW '22: Proceedings of the ACM Web Conference 2022. Retrieved from https://dl.acm.org/doi/10.1145/3485447.3512133

[71] Twitter Engineering. (2011, June 30). 200 million Tweets per day. Twitter Official Blog. Retrieved from https://blog.twitter.com/official/en_us/a/2011/200-million-tweets-per-day.html

[72] Political Polarization in the American Public.

[73] Desilver, D. (2022, March 10). The polarization in today’s Congress has roots that go back decades. Pew Research Center. Retrieved from https://www.pewresearch.org/short-reads/2022/03/10/the-polarization-in-todays-congress-has-roots-that-go-back-decades/

[74] NPR/Ipsos. (2022, January 3). Seven in ten Americans say the country is in crisis, at risk of failing. Retrieved from https://www.ipsos.com/en-us/seven-ten-americans-say-country-crisis-risk-failing

[75] Arguedas, A. R., Robertson, C. T., Fletcher, R., & Nielsen, R. K. (2022, January 19). Echo Chambers, Filter Bubbles, and Polarisation: a Literature Review. Reuters Institute, Oxford University. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-01/Echo_Chambers_Filter_Bubbles_and_Polarisation_A_Literature_Review.pdf

[76] Bakshy, E., Messing, S., & Adamic, L. A. (2015, June 14). Exposure to ideologically diverse news and opinion on Facebook. Science. Retrieved from https://www.science.org/doi/10.1126/science.aaa1160

[77] Hao, K. (2021, March 11). How Facebook got addicted to spreading misinformation. MIT Technology Review. Retrieved from https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

[78] Berger, J., & Milkman, K. L. (2012, April 1). What Makes Online Content Viral? American Marketing Association. Retrieved from https://journals.sagepub.com/doi/10.1509/jmr.10.0353

[79] Your Undivided Attention. (2020, October 6). Your Nation’s Attention for the Price of a Used Car. Guest: Zahed Amanullah. Center for Humane Technology. Retrieved from https://www.humanetech.com/podcast/25-your-nations-attention-for-the-price-of-a-used-car

[80] Martin, G. J., & Yurukoglu, A. (2014, December). Bias in Cable News: Persuasion and Polarization. National Bureau of Economic Research. Retrieved from https://www.nber.org/system/files/working_papers/w20798/w20798.pdf

[81] Au, C. H., Ho, K. K. W., & Chiu, D. K. W. (2021, April 19). The Role of Online Misinformation and Fake News in Ideological Polarization: Barriers, Catalysts, and Implications. Information Systems Frontiers. Retrieved from https://link.springer.com/article/10.1007/s10796-021-10133-9

[82] Garrett, R. K. (2009, January 1). Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication. Retrieved from https://academic.oup.com/jcmc/article/14/2/265/4582957

[83] DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., Matney, R., et al. (2019, October). The Tactics & Tropes of the Internet Research Agency. U.S. Congress. Retrieved from https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1003&context=senatedocs

Recommended citation

Schultz, John, Robert Laxer, Nishank Motwani, Dilnoza Satarova, Jake Steckler, Sandhya Jetty, Rohan Chandra, Elizabeth Parant, Nadyah Hilmi, Ruchika Joshi and Barath Harithas. “The Drivers of Platform Harm.” Belfer Center for Science and International Affairs, Harvard Kennedy School, June 5, 2023.