I rarely talk or write about the fact that I work under a pseudonym. My image is not connected to this name. To this day, there are many colleagues and others who only know me or my work via my black and white cartoon avatar. The reasons for my use of a pseudonym are not things I touch on often, unless I need to establish my own privacy and security needs in a new work space, environment, or panel. I am talking about it now because this rudimentary shield for safety is currently under threat.
In this context, using a pseudonym (i.e., not my legal name but an assigned work name) and avoiding any images, videos, or direct links to my “real” identification would be seen as being anonymous online. For example, I have social media accounts and an online presence that cannot be directly linked to my official identity (which is often required in verification processes); instead, I use my alt-identity.
If I am unable to use a pseudonym, information about my whereabouts, networks, family and many other sensitive pieces of information could be compromised, potentially leading to real danger.
The nature of my work, the countries I do field work in, my own identity, and the safety of my loved ones mean that there are no viable options for me other than to work in a way often seen as “extreme” or unconventional to most. There are a lot of procedures that need to be set in place to allow for the right security. And I'm not the only one. For this reason, I and many others have been working on ways to help those who need safety online (and offline) to have more security options at their disposal.
Authoritarian governments, oppressive laws and regulations, or different forms of discrimination are some of the reasons why many need to work under a pseudonym. Many researchers, activists, writers and marginalized persons (who want to just exist in online and offline communities) need methods of anonymity, pseudonyms, or online personas to protect their work and themselves. Whole communities have developed norms to keep themselves safe, norms that evolved to protect their members against the powerful, including governments, law enforcement, and corporations working closely with states. Yet, the trajectory of some current tech and ethics discussions could imperil survival for these groups and their members.
Threats to Online Anonymity
Anonymity, and more broadly, the right to use a pseudonym not tied to one’s legal name, has been contested for centuries. With the rise of social media platforms as vehicles for communication, this debate has only become more central. Recently, politicians in the United Kingdom have (again) amped up their calls to regulate anonymity online. The tragic killing of Sir David Amess, a Conservative Party MP (Member of Parliament), has reignited calls to speed up the introduction of the questionable Online Safety Bill and for the Bill to extend to policing online spaces by including a measure to stop people being able to post on social media sites anonymously.
The bill would provide more powers to Ofcom (the UK's communications regulator) to regulate social media sites and force companies to have a duty of care towards their users. Tech companies would be liable for removing harmful content, including under vague measures targeting (“lawful and harmful”) material, and for protecting users from anonymous accounts.
Tackling the use of anonymous accounts online has been one of the more controversial rallying calls in this period. In a recent evidence session on the bill, Nadine Dorries, the UK Culture Secretary leading the charge, stated that “This Bill will end anonymous abuse.” How curtailing anonymity would possibly be implemented is far from clear. The bill seems to rely on requests for stringent ID verification, algorithms, and sanctions on companies for non-compliance, but the proposals fail to provide a detailed outline, placing the onus on the companies themselves.
It’s not clear how ending online anonymity actually relates to Amess’s murder - there’s no indication that anonymous online speech played a role in his killing. Rather, the UK government has been pushing for expansion of “online safety” for years. In the discussions for better platform regulation, the UK government published the Internet Safety Strategy Green Paper in 2017. The green paper boldly and abstractly claims that it will “ensure Britain is the safest place in the world to be online”. Of course, it’s not clear who it would be safest for.
Heightened calls for implementing the Bill also occurred after vicious racist online abuse of England's black Euro 2020 footballers. Yet, of course, instead of addressing the structurally, historically and politically ingrained nature of racism in the UK that often rears its ugly head so viciously at events such as national football games, the response was to end online anonymity. The Premier League, with a long history of institutionalized racism, has also, unsurprisingly, decided to campaign for the Bill and ask for further ID verification.
What’s striking is that even outside of Amess’s murder, there is no evidence to suggest that ending anonymity and increasing online ID verification has any effect on online abuse. Twitter’s own recent report on “Combatting online racist abuse: an update following the Euros” points out that “99% of the accounts suspended were not anonymous” and “that ID verification would have been unlikely to prevent the abuse from happening - as the accounts we suspended themselves were not anonymous.” Most research suggests that most abusive accounts are not anonymous or untraceable. Just think about all the high profile racist, transphobic, sexist, homophobic accounts that run on an open and public persona by design. Anonymity does not fuel abuse.
And it’s not just the UK. In October, Sen. Richard Blumenthal, a Democrat from Connecticut, went viral after demanding to know if Antigone Davis, the head of global safety for Facebook (Meta), Instagram’s parent company, would commit to ending “finsta”. Finsta is a slang term for a “fake Instagram,” a set of account practices predominately used by younger users and obviously not an actual product feature. The comic aftermath missed the point: a high-ranking US politician was calling on a social media company to prevent users, especially younger users, from communicating pseudonymously, and that line of inquiry sets a dangerous course.
It’s not just governments that seek to tie our virtual selves more tightly to our offline identities. There have been many attempts to curtail anonymity online, including by companies themselves. Take, for example, Facebook’s long-held policy of requiring “real names.” A quick look at the Wikipedia page on that policy shows the chaos and harm it has caused to so many communities. And it is fair to say that it has not reduced levels of online abuse on the platform.
Why Anonymity Matters
When policymakers or companies frame online anonymity as a cause of abuse, they aren’t just finding an easy way to bypass root problems and design issues; they are also creating significant dangers for those at the margins, who are most impacted by knee-jerk policy decisions. Removal of anonymity harms queer people, sex workers, activists, researchers, journalists, and persons holding combinations of these identities. I want to put an (illustrated) face to the debate as to why anonymous accounts matter. Not being able to work under a pseudonym, or being forced to do otherwise, means that I would need to risk my work, my loved ones, and my own safety to give some policymakers a false sense of security.
In my own work, I study the weaponization of technology by law enforcement to target, arrest, and prosecute queer people in the Middle East and North Africa (MENA). Through on-the-ground research at ARTICLE 19, our team on this project documented numerous ways police and police ‘consultants’ use fake profiles to catfish and entrap queer people. We use this documentation to hold technology companies accountable and push them to provide better protection to these users. I work directly with people who experience online risk as offline harm. And yet we do not push for more use of real names or ID verification. We know that verification does not keep folks safe.
Our work highlights the dilemma: any solution to fake accounts cannot require more personal, identifying information from these at-risk users. There are more creative and privacy-preserving methods to address the need for authentication and verification, but they come from deep community knowledge, rather than knee-jerk policy-making.
Those at risk know what they need and want and this is what should direct the research and recommendations to protect them. All people deserve to be able to make decisions about their safety online, in the same way I assess and make decisions about my own safety and anonymity online based on the needs of my family, personal life, research, networks, or extra governmental risk I may face.
The most at-risk users need the option to remain anonymous and minimize the data that their apps or profiles contain about them. Companies, technologists and politicians who do not understand this, risk further harming queer folks through exposure of details that might be leveraged for account identification or enumeration or alienating users who cannot risk using a platform that may expose them to identification or arrest.
We Need A Better Way Forward
More creative methods for verification can exist without handing further intrusive powers to companies or governments and without increasing risk. For example, key types of engagement and activity can be used to unlock additional features, functionality, and trust in the online community (i.e., gamification), such as the method used by “Ahwaa”, a queer platform in MENA. The challenge is convincing decision makers to take on these more creative methods, ones that directly center those most at risk and at the margins of society, with the most to lose, rather than going for easy fixes.
The calls for real name policies/banning online anonymity are not evidence-based, and even if they were, they disproportionately harm the people who should be centered and protected, those at the margins. My own experience as someone who uses a pseudonym allows me to see exactly what risks and detriments to my life and work would exist if online personas were tied to our “wallet names”. This lack of connection to the broader impacts on folks at the margins is what I’ll be focusing on at TAPP. Building on my previous work, I am developing a framework for how companies, regulators, and civil society can avoid these kinds of mistakes.
We’re at a vital juncture. Radical change is necessary. Harms through technology are increasing, and the political impetus often prioritizes fast band-aid options over structural thinking. We must design with the communities most impacted and left at the margins in mind. We must design for those most harmed by security and privacy issues, with full understanding of the contexts that affect those deemed vulnerable and/or hard-to-reach communities. Banning anonymous or pseudonymous speech online is a prime example of looking for an easy fix while harming those who are already the most marginalized.
I’d like to thank Kendra Albert, Mahsa Alimardani and Nina Ninoushkaio for their kind, considerate comments, edits, and review of this piece.
Rigot, Afsaneh. “Why Online Anonymity Matters.” November 9, 2021.