At a time when delivery robots block sidewalks and strand wheelchair users, navigation apps create neighborhood traffic nightmares, department stores secretly track customers' movements and sell the data, and angry residents pelt autonomous cars with rocks and hurl scooters into the sea, it's tempting for Washington to override state and local laws on AI. Indeed, the White House's draft memo, Guidance for Regulation of Artificial Intelligence Applications, seems designed to preempt state action. But the administration's statements about over-regulation suggest this is a call not for a federal answer to these problems but for the destruction of local solutions.
That would be a mistake.
In the absence of federal action, U.S. cities and states have been regulating facial recognition, autonomous vehicles, rideshare companies, and consumer data collection and sale, offering their residents additional protections and creating environments to test new ideas. As quickly changing technology creates new models and markets, local action offers a natural way to experiment with regulatory environments on a smaller scale before national policies are finalized.
However, even the biggest cities and states cannot do what the federal government can.
The federal government is in the best position to identify and address certain kinds of AI risk, including those that arise from cumulative or widespread use or that threaten, or operate through, national infrastructure and networks. It can implement solutions that require significant financial resources, political power, or collaboration across governments and with industry, or that call for rethinking common regulatory frameworks or creating national standards. It can take a broader view, offering protection across the U.S. and far beyond.
Congress and the White House should use local and international experience to help inform U.S. federal AI policy, provide protection for all residents by creating minimum national standards, and make limited use of federal preemption, at least for now. They should also work to build deep AI technical and policy capacity across all levels and sectors of government.
It is hard for governments to compete with AI industry salaries, and it is deeply inefficient for each city, state, or even country to separately build technical capacity and think through all the societal implications of artificial intelligence and its applications. Many don't have the internal resources to do it well, or at all, leaving industry, academia, and nonprofits to fill the void: convening city and state leaders, providing tech education for policymakers, and building and deploying AI and algorithms for city, state, and federal services. Those organizations should all continue to contribute their expertise, but governments should not have to rely on it.
The federal government should create a national AI research and policy office as a resource for local and federal agencies and policymakers. By recruiting top talent and encouraging collaboration across all levels of government and all sectors, such an office would offer a cost-efficient way to develop, access, and share unbiased AI technical and policy expertise. It should be created quickly, this year. It might be needed only for a limited period, perhaps less than twenty years, but the need is pressing now.
The substantive work could be undertaken in collaboration with existing agencies and national research labs, including the United States Digital Service, the Office of Science and Technology Policy, the Volpe National Transportation Systems Center, and the Department of Energy's new Artificial Intelligence and Technology Office. Each holds some of the resources that are needed, but none holds the mandate.
Governments across the world are trying to figure out what the benefits and risks of AI are, what safeguards should be in place, and how high-level principles about trustworthy, responsible, or ethical AI, or about international human rights, should translate into specific action or law. We don't know how AI's unprecedented ability to autonomously pull together data, find patterns, and make inferences, online and in the physical world, cheaply, quickly, and at massive scale, is affecting and will affect societies globally. We don't know how we want it to, or how to get there.
Federal and local regulations are part of the solution, but no regulatory approach on its own can fully answer these questions today. Finding answers will require significant investment in research and development; collaboration, negotiation, and experimentation; deep and careful thought across many fields; and leadership at all levels, across and between governments.
The U.S. federal government is well positioned to lead, but if it won't, it should at least stay out of the way.
Gretchen Greene is a Fellow with the Technology and Public Purpose Project at Harvard Kennedy School's Belfer Center and a Senior Advisor at The Hastings Center. She is an internationally recognized expert on face and emotion recognition and on AI and autonomous vehicle policy and ethics.
Greene, Gretchen. “Washington Should Take Action on AI or Stay Out of the Way.” Belfer Center for Science and International Affairs, Harvard Kennedy School, March 19, 2020.