Blog Post
from Perspectives on Public Purpose

Identifying and Reducing Harms: A Look at Artificial Intelligence

Artificial Intelligence (AI) is likely the most transformational set of technologies of the 21st century. Whereas the industrial and information technology revolutions led to advances in automation for the creation of objects and the processing of information, AI holds the promise of replacing the role of humans with automation in all walks of life. However, like the technologies preceding it, from nuclear energy to bio-engineering, it can be a power both for good and for ill.

Unfortunately, our experience with emerging technologies and how humanity develops and implements them is not in AI's favor. History is littered with the detritus of technological innovations whose flawed roll-out caused terrible harm, from leaded gasoline to electroshock therapy. This history is reflected in public opinion about AI, which receives at best mixed marks in many democratic nations such as the US, France, and the United Kingdom. Here we articulate some of the potential harms of AI and suggest specific approaches that governments can implement to make AI a force for good for humanity while mitigating key harms. In particular, many of these suggestions can be implemented by the newly established National Artificial Intelligence Initiative Office (NAII).

Proposed framework: focus on the harms 

AI is new technology, but technology is not new. We can already regulate artificial intelligence's impact on society using the existing space of harms to individuals and to systems and networks, like capital markets and national defense. These harms include damage to health and dignity and the infringement of human rights -- "life, liberty, and the pursuit of happiness". By focusing on actual harms, we can leverage the vast array of existing mechanisms for, and research on, regulation and oversight to ensure more positive outcomes. We suggest three concurrent approaches toward enabling public purpose by evaluating harms in AI:

  • Research - because the long-term harmful effects are still playing out in society, continual research is needed to inform recommendations.

  • Development and Maintenance of Recommendations - coordinated recommendations developed via advisory panels and committees, in consultation with external stakeholders.

  • Execution and Enforcement - backed by research and recommendations, specific enforcement and regulatory options using existing authorities become more effective.

Private companies, citizens, and governments all have key roles to play. However, we write this piece now in part because the coordination of these three lines of effort can and should be developed and maintained through the new National AI Initiative Office, whose purpose is in part to ensure these positive outcomes.

We now examine how this framework fits in the context of three different technological directions: AI and social media; AI and its role in building wealth; and AI and its role in the national defense arena. 

Social Media and AI 

Fundamentally, AI is enabling new networks and means of communication between citizens, where both the editorial voice of a media company and the community-driven, in-real-life norms of interaction are supplanted by automated systems. While this has enabled the vast scaling of social networks, it also provides a window into our failure to predict the consequences of replacing the moral voice of the crowd -- and of key representatives selected through thought and time -- with machine-based systems. At the same time, the monetization of social media via advertising has meant that the most shareable content, which typically flouts the norms of 20th century discourse, is also the most lucrative, leading to a direct conflict between profit and societal benefit.

One striking example is how AI recommendation algorithms have enabled increasing polarization in society and the proliferation of conspiracy theories and alternative facts, and even played key roles in the recent insurrection at the U.S. Capitol. Although smaller sites such as Parler and 4chan are blamed for allowing conspiracies to run rampant, there is strong evidence that much of the radicalization is happening on major platforms such as Facebook, Twitter, and YouTube.

There are also other major issues with social media, such as its interaction with, and response to, teen depression. Teenage suicide rates have skyrocketed by 56% in a decade, partially due to social media. At the same time, social media is but one element of the picture, with social discontent from a widening gap between rich and poor driving related harms. Yet here too we find that the proliferation of AI into other human venues will exacerbate this problem, deepening the divide between technological haves and have-nots.

Furthermore, beyond the degradation of our social and community structures, there are economic and security downsides to the consequences of AI-based social media. A key example is the world's negative perception of US-based consumer technology companies. Primarily because of Facebook, new US technology companies -- especially social media companies -- will face intense scrutiny from foreign government regulators when attempting to enter their markets. Already this has led to a standoff in Australia, with the situation continuing to develop not in the US's favor.

How do we change this cycle? We believe that it is critical to identify harms, which in turn can help individuals and our elected and appointed officials take immediate action to break this unholy trinity of social media, AI algorithms, and perverse economic incentives. At the same time, some harms arise from reduced access to knowledge and information, such as in sexual education, a topic being investigated by TAPP Fellow Clare Baley. The answer cannot be 'less interaction' -- rather, it should be 'better interaction.'

For example, to address the social damage, including the depression and isolation induced by the fractionalization and marginalization created through AI-related algorithms, the US government can empower health agencies to set recommendations regarding the usage of social media, such as limits on usage and on unhealthy consumption of social media content. Much as tobacco companies were required to put graphic health warnings on their products to reinforce the dangers of smoking in the 1980s, these recommendations can be adopted by regulatory and enforcement bodies such as the FCC right now.

At the same time, research and long-term monitoring, preferably sponsored and/or provided by government science agencies such as the NIH and the NSF, need to be developed and deployed to build a continuing understanding of the negative mental health harms of social media. This information, reported broadly as a 'report card' for individual social network companies, will provide pressure in the marketplace and reverse a basic element of the perverse economic incentives behind harmful content. Even a simple rating system (analogous to those found in video games, movies, and TV) can help, applied both at the macro-scale -- to specific social media companies or products -- and at the micro-scale -- to links and individual pieces of content -- enabling informed consent before venturing into the unexpected territories that AI recommendation systems encourage in order to maximize ad revenue.
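As a concrete (and entirely hypothetical) illustration of how such a report card might be assembled, the Python sketch below collapses a handful of standardized, audited metrics into a single letter grade. The metric names, weights, and grade cutoffs are our own inventions for illustration, not proposals from any agency:

```python
# Hypothetical 'report card' for a social media product. This assumes
# regulators publish standardized, audited mental-health metrics; all
# metric names, weights, and grade cutoffs here are illustrative only.

METRIC_WEIGHTS = {
    "self_reported_wellbeing": 0.4,    # survey-based score, 0-100
    "moderation_response_score": 0.3,  # audited takedown quality, 0-100
    "transparency_score": 0.3,         # disclosure of algorithmic practices, 0-100
}

def report_card(metrics):
    """Collapse weighted 0-100 metrics into a single letter grade."""
    score = sum(METRIC_WEIGHTS[name] * metrics[name] for name in METRIC_WEIGHTS)
    for cutoff, grade in [(90, "A"), (75, "B"), (60, "C"), (40, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(report_card({
    "self_reported_wellbeing": 62,
    "moderation_response_score": 80,
    "transparency_score": 55,
}))  # prints "C" (weighted score 65.3)
```

The design point is less the arithmetic than the interface: a single, comparable grade that app stores and consumers can see, much like a movie rating.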

Technology companies are likely to push back on such policies, especially since much of the research on the negative effects is very new. Thus the government must act first, leaving existing technology companies and platforms to respond by modifying their algorithms to reduce harm. If they do not, the U.S. startup ecosystem will create new players that take the place of existing social networks. We note that this approach is quite different from some calls for a "Luce Commission" or the like, in which social media companies would be regulated based upon some concept of community standards, as publishers were in the U.S. in the 1950s. Community standards and self-regulation have a key role to play, but they should be aided by foundational research and sit in the context of the existing understanding of individual harms that we have proposed here.

There is a significant positive economic outcome that can come from enabling companies and individuals to more clearly see and respond to the potential and actual harms of their actions and choices. First, the technology and monitoring needed to avoid the harms of social media and AI are non-trivial and will require a significant amount of research, development, and sustained support. This in turn will give US companies a significant competitive edge over social media companies abroad. Second, by making social media "mental-health friendly" across the US technology ecosystem, the ability of US companies to enter and maintain leadership in international markets where these human costs are taken seriously will only improve. Third, this is one of the areas where the United States can actually cooperate with Chinese health experts. China recently instituted sweeping reforms in online gaming to improve mental health, increase worker productivity, and reduce internet addiction. In particular, it had to work with the big tech company Tencent to institute its policies while not, at the same time, destroying one of its greatest technology successes. Tencent is up almost 120% in market capitalization since instituting highly restrictive regulations on its games -- evidence that US technology companies can also thrive under similar guardrails.

Our suggested approaches are the following:

  • Research: we need to know more, and we need a community invested in knowing more on an ongoing basis. Research agencies like the National Institute of Mental Health and the National Science Foundation can help support the research needed.

  • Developing recommendations: panels and committees can be developed between research agencies and private industry, e.g., as subcommittees of the National AI Initiative's advisory committee.

  • Enforcement: the creation and enforcement of regulations around these recommendations should be handled by agencies with authorities over the relevant technology spaces, such as the FCC's role in regulating communications over networks. In addition, private sector adoption should be encouraged through agencies that work with industry to set standards, leading, e.g., to the adoption of 'report cards' on app stores.

AI-enabled Monopolies 

Monopolies directly harm society because they abuse their market power to set higher prices, underpay their employees, and stifle innovation. Most governments in capitalist countries use antitrust regulation as a last resort, in large part because of its potential negative effects on the economy. Only selected markets such as gas production, water distribution, and telephone systems -- markets that naturally have a "network effect" -- are candidates for antitrust regulation. These "natural monopolies" occur when the most efficient number of firms in an industry is one. However, with the increasing role of AI in automating and scaling heretofore unscalable systems, many markets that previously did not have network effects have the potential for a rapid transformation into natural monopolies.

We contend that the root cause of this is the interplay between artificial intelligence algorithms and the data they depend on. New or more effective algorithms increase the number of customers and users of a service, which in turn enables companies to collect and leverage more data to build better artificial intelligence. An example of a market that has transformed into a natural monopoly is retail. Prior to the internet, retail stores, even malls, did not have a natural monopoly. However, with the rise of Amazon, the pricing power of a large buyer is combined with the data it can collect from its network of retailers and customers in surprising and powerful ways, leading to a dominant position of roughly 50% of e-commerce sales in the US.
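Here is a toy Python simulation of that flywheel, assuming increasing returns when converting data into product quality; it is our own illustration with made-up parameters, not a calibrated model of any real market:

```python
# Toy simulation of the data flywheel: more users -> more data ->
# better algorithms -> more users. All parameters are illustrative.
# The crux is the assumption of increasing returns (exponent > 1)
# when turning accumulated data into product quality.

users = {"A": 1000.0, "B": 900.0}   # firm A starts with a small edge
data = {"A": 0.0, "B": 0.0}

for month in range(120):            # simulate ten years
    for firm in users:
        data[firm] += users[firm]   # each active user contributes data
    quality = {f: data[f] ** 1.3 for f in users}  # increasing returns
    total_quality = sum(quality.values())
    for firm in users:              # 500 new users/month split by quality
        users[firm] += 500 * quality[firm] / total_quality

share_a = users["A"] / (users["A"] + users["B"])
print(f"Firm A's share of users after 10 years: {share_a:.0%}")
```

With a small initial edge and compounding returns to data, firm A's share grows steadily toward dominance; with decreasing returns (an exponent below 1), the same loop converges back toward parity. The strength of data network effects is thus an empirical question, which is why the research step below matters.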

The second effect of AI-enabled monopolies is the inevitable loss of jobs, and the acceleration of that loss, through automation. Although humanity has gone through multiple iterations of automation causing job loss, the breadth and scale of this labor market transformation look to be significantly larger. Furthermore, most industries being disrupted by existing deployable machine learning systems, a key component of AI technologies, are consolidating into single AI-enabled monopolies (see the Amazon example above), which are incentivized by market forces to reduce their cost structures as much as possible by automating wherever possible.

While automation can dramatically improve our standards of living, it is rare that the resulting reduced cost structure is fully passed on to consumers. Without government intervention, these new monopolies can and will abuse their market power to set higher prices, create mass unemployment, and stifle innovation. In the US, however, lawmakers and regulators have taken a more nuanced approach toward antitrust for these new AI-enabled monopolies.

When the US broke up the phone monopoly of AT&T, there were mandatory requirements around network compatibility so that people could continue to communicate seamlessly across companies. In essence, the infrastructure that led to a 'natural monopoly' became government regulated, while the operation of the networks was allowed to remain private so long as the overall network was maintained. AI monopolies, which fundamentally depend on data, may need intricate data-sharing standardization practices that protect end-user privacy, e.g., through a common, government-backed framework for data storage, sharing, and usage. The history of commerce in the U.S. shows time and again the benefit of standardizing these network-wide elements of interaction while allowing multiple private companies to compete within the network.

Of course, technology does provide new opportunities to improve the situation. For example, differential privacy may be needed to implement such data-sharing practices in a privacy-preserving way; otherwise the result will be inferior products that make US corporations less competitive with the rest of the world. However, any thinking that technology will save us from technology seems wishful at best -- the challenge is a human one, and it must be driven by humans choosing to help others, rather than by assuming that we can trust a technology to take the same care.
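To give a flavor of the kind of primitive involved, the textbook Laplace mechanism below adds calibrated noise to an aggregate statistic so that no single individual's record can be inferred from the released value. This is a minimal sketch of one building block, not a complete data-sharing framework:

```python
import numpy as np

def private_count(values, threshold, epsilon):
    """Differentially private count of users above a threshold.

    Adding or removing one user changes the true count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: share "how many users watched more than 60 minutes" without
# exposing any individual's viewing time.
minutes_watched = [12, 95, 44, 130, 61, 8, 77]
print(private_count(minutes_watched, threshold=60, epsilon=0.5))
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the product-quality trade-off noted above.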

Finally, we note that in today's connected economy, antitrust regulation around AI-enabled monopolies needs an international approach, and again this is an area for collaboration with both European and Chinese governments. Another significant change that needs to be evaluated is how to deal with monopolies with zero marginal cost; Ben Thompson has a detailed explanation of how zero-marginal-cost monopolies hurt the economy. Lastly, the US government may simply have to accept that certain monopolies will result from the AI revolution, and look to alternative taxation policies on AI-driven corporations to pay for programs that benefit impacted citizens, or even all citizens, via a foundation, tax credit, or other mechanism. In addition, the U.S. can use existing alternatives to taxation, such as encouraging coordinated, positive outcomes through consortia (a topic one of us wrote about recently) or through the establishment of multilateral or multi-institutional mechanisms, ranging in style from Bell Labs to landmark international agreements.

Our suggested approaches toward regulating AI-enabled monopolies are the following:

  • Research: exactly how to regulate AI-enabled monopolies while maintaining the US's global leadership is an area of uncertainty, and it requires research prior to the implementation of drastic actions. The NSF, in conjunction with academics at business schools, should be consulted. In addition, legal understandings of antitrust in this new space will need input from both the Department of Justice and private think tanks.

  • Developing recommendations: private industry, the Department of Commerce, the Department of Justice, and NIST can help develop recommendations.

  • Enforcement: the DOJ and FTC have existing authorities for the regulation and enforcement of antitrust. At the same time, consortia such as the Partnership on AI can play a role in advocacy for, and education about, the recommendations ahead of DOJ and FTC enforcement.

AI Non-Proliferation 

AI technologies have vast implications for national security, from intelligence gathering to weapons systems. Just as unintended or unexpected consequences have had substantial negative impacts in the social and economic domains, AI is already leading to races in filtering, recognition, projection, and automated vehicles. We believe that some of these outcomes require a non-proliferation approach to mitigate. Successful non-proliferation efforts include reductions in nuclear, biological, and chemical weapons research, development, and deployment. Automated weapons systems already present a destabilizing threat, as the hypothetical "Doomsday device" from Dr. Strangelove intimated nearly 60 years ago. More specifically, the cost to build a drone that performs facial recognition and automatically deploys weapons against targets is dropping so quickly that the resources needed to carry out asymmetric attacks are below the $1,000 mark per system.

To find precedent for how non-proliferation changed an entire field, one can look at the history of biological non-proliferation. In 1972, Richard Nixon signed the Biological Weapons Convention, which essentially outlawed the use of bioengineering for the purposes of offensive weapons. Dr. Matthew Meselson's argument to Nixon was that bio-weapons were simply too cheap and too dangerous for the world. Unlike nuclear weapons, which cost billions of dollars to maintain as a working arsenal, bioweapons capable of devastating the human population could be created with only several thousand dollars and a few rogue scientists. Hence it was vital not only to halt the creation of such weapons, but also to create an international stigma around their development. The stigma in particular has been very effective: to this day, there has not been any mass bio-weapon attack. Although there are countries that occasionally "cheat" and continue bioweapons research, there is no standing army of bio-weapons specialists, nor are bio-weapons used to threaten other countries.

AI's use in national security is in a similar position right now. If the US, in collaboration with the international community, bans and stigmatizes the use of AI in offensive weapon systems, essentially requiring a "human-in-the-loop" for decisions that directly facilitate injury to people (e.g., today's drones), it dramatically reduces the probability of AI being used for the indiscriminate killing of people. The US can take such a ban a step further and call on all nations to ban, or to require a 'human-in-the-loop' for, the use of AI in surveillance or censorship. Critically, the inclusion of a 'human-in-the-loop' for all action decisions, regardless of whether they are for weapons release or for police action, can be a red line upon which advocates from many directions can agree. This will indirectly put pressure on governments such as Israel's and China's to reduce, or at least not export, such harmful technologies to other countries. At the same time, the United States can work with allies and partners to establish norms around this human-centric decision process, via international agreements, multilateral policy adoption, and arms sales limitations.
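In software terms, this red line is an architectural constraint: the automated system may recommend, but only a recorded human decision may execute. The minimal Python sketch below illustrates the pattern; all names are hypothetical, and a real system would use an authenticated, audited operator console rather than a terminal prompt:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str     # hypothetical identifier produced by the AI system
    confidence: float  # model confidence, 0.0-1.0
    rationale: str     # human-readable explanation for the operator

def request_human_approval(rec: Recommendation) -> bool:
    """Block until a human operator records an explicit yes/no decision.
    (Illustrative only: a real system would not use stdin.)"""
    answer = input(f"Approve action on {rec.target_id} "
                   f"(confidence {rec.confidence:.0%})? "
                   f"Rationale: {rec.rationale} [y/N] ")
    return answer.strip().lower() == "y"

def decision_pipeline(rec: Recommendation) -> None:
    # The red line: no action decision -- weapons release or police
    # action -- executes without recorded human authorization,
    # regardless of how confident the model is.
    if request_human_approval(rec):
        print(f"Action authorized against {rec.target_id}; decision logged.")
    else:
        print("Action declined by human operator; decision logged.")
```

The essential property is that there is no code path from model output to action that bypasses the human gate, and every decision leaves an audit record.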

At the same time, we need to think about the economic downsides of the proliferation of AI in both the defense and intelligence spaces. Because much of AI is dual-use technology, its proliferation in these domains will also mean that vast swaths of AI technologies become subject to ITAR and other export control mechanisms. This could significantly hamper the ability of US corporations to export their technologies beyond US borders. Stop-gap measures using multilateral controls such as the Wassenaar Arrangement are more likely to enable U.S. companies to maintain their competitiveness as these new norms and expectations promulgate throughout the worldwide market for automation.

Successful research, development of recommendations, and enforcement lie in multilateral diplomacy, similar to how the world implemented the Biological Weapons Convention, which entered into force in 1975. There are several countries -- both US allies and competitors -- that will hamper negotiation. America may ultimately need to repeat history and unilaterally disarm itself of offensive AI weapons, similar to how it unilaterally ended its offensive biological weapons program in 1969. Careful coordination will ultimately be needed between the Department of Defense (Joint AI Center), the Department of State, USAID, the Department of Commerce, and the key private sector providers of AI technologies.

Our suggested approaches towards AI Non-Proliferation are the following: 

  • Research: determining how human-in-the-loop approaches to AI can remain effective against the US's adversaries requires further research from the DoD, the private sector, and international relations scholars.

  • Developing recommendations: recommendations will ultimately require international discussion and adoption via diplomatic and other means, but they will leverage the efforts of research and of non-profit and private sector recommendation bodies, including the National Academies.

  • Enforcement: there are already mechanisms in place to ensure that weapon ban treaties are enforced, either unilaterally or multilaterally. Within the United States, we need to ensure that the procurement and deployment of AI technologies have a human-in-the-loop component.

AI Ethics and Education 

Although AI technologies will have a profound effect on society, few people are educated, or are being educated, in both the technologies and their consequences, positive and negative. As a result, citizens often lack the understanding necessary to effectively advocate for improved outcomes to their democratically elected officials, who could enact laws to reduce the downsides of AI. What is even more alarming is that the creators of AI technologies -- namely, the thousands of engineers and scientists coming out of undergraduate and graduate school -- have little to no education on the consequences of technology, or even on the ethical and moral frameworks that have been debated for thousands of years. Given this dearth of education, it is no wonder that technology companies appear flat-footed when their technologies are misused at scale.

This is an area where we are personally hopeful. Over the course of the COVID pandemic, many children (including those in our families) have been forced to attend school from home. As a result, schools have been teaching "Digital Citizenship" to reduce bad behavior online, such as improper sharing of content and cyberbullying. We need the country's AI technologists to be practicing "AI Citizenship", which can be institutionalized fairly quickly. Overall, the minimal goal is very simple: prime those who write the software and build the hardware to be aware of, and thinking about, moral and ethical questions. Preliminary research on such "ethical priming" shows that even one week of such thought added to an introductory computer science course leads to substantial gains in ethical thinking about whether to deploy a technology. This integration of civic and philosophical thinking into our STEM education is an essential, but missing, component of today's education. We can and should do better.

To implement AI ethics education, computer science education in America will require reform. This means coordinated reform between the Association for Computing Machinery (ACM), the National Science Foundation, universities, and the private sector. We have seen that this type of reform is possible, as demonstrated by the 'Learn to Code' movement of the past decade and, more recently, by the Q-12 education partnership. The National AI Initiative Office can be a catalyst for these actions going forward.

Final Thoughts 

In this article, we have suggested ways to identify and reduce harms caused by artificial intelligence technologies. There are many other areas of AI that need to be addressed, such as bias in AI systems, culpability and liability in AI decision making, AI's impact on the environment, and the network and systemic effects of AI in enabling, e.g., the creation of alternate sets of facts. These are all very hard problems that require the public, the government, and the private sector to work together to overcome -- problems that can be addressed through the National AI Initiative's efforts. Still, we believe that working from the perspective of harms, rather than risks, and developing pathways in which humans grapple with the challenges of technologies as they are deployed, has been and will be a path for enabling good from these new technologies. Our suggested approach, combining research, recommendation building, and enforcement using existing harms frameworks, provides a path that can help society navigate these challenges going forward.

When we find ourselves in the midst of a pandemic, and on the doorstep of a global climate crisis, one may question whether humanity can hope to survive and thrive in the age of Artificial Intelligence. However, we need only look back at the 20th century, when scientists avoided the worst downsides of nuclear science and bio-engineering and turned them into tools for peace and prosperity. Quoting Jonas Salk, the virologist who developed the polio vaccine: "Hope lies in dreams, in imagination, and in the courage of those who dare to make dreams into reality."

Alan Ho is the product management lead at Google's Quantum AI team. His responsibilities include the identification of applications of quantum computing that can benefit society. He can be reached on LinkedIn or via email at karlunho@gmail.com. Jake Taylor is a TAPP Fellow examining how to integrate public purpose into emerging technologies; he can be found on Twitter @quantum_jake and reached at jtaylor@hks.harvard.edu. The views and opinions expressed on the TAPP Blog are those of the authors and do not necessarily reflect the official policy or position of Google or the Belfer Center.

 

Recommended citation

Taylor, Jake and Alan Ho. "Identifying and Reducing Harms: A Look at Artificial Intelligence." February 26, 2021.