Blog Post
from Perspectives on Public Purpose

What's so Dangerous About Smart Cities Anyway?

Screenshots from Nuclear Energy videos (2015) by Kurzgesagt via letstalkscience.ca

The Belfer Center, home of the Technology and Public Purpose (TAPP) project, was originally created to analyze dangers posed by nuclear technology. When people ask about my TAPP fellowship, I usually say that I am doing the same analysis, but for “smart cities.” Even for folks not particularly concerned with or interested in “smart cities,” this usually sounds spicy enough to pique their interest, or at least to succinctly explain what I am up to. In this blog post, I’ll provide an update on what “smart city” technology I am considering and some of the potential harms its misuse introduces.

What “Smart City” Technology?

The goal of my research is not to examine any one “smart city” technology (an expanding and nebulous list), but rather the tension between increased state data collection to improve civic services and the risks that collection creates. (In this way, “smart city” technology is representative of many public data governance issues to come.) The “smart city” themed projects that inspired this research (LinkNYC, Los Angeles’ Mobility Data Specification, San Diego’s Smart Streetlights, and Sidewalk Toronto) all involved different technologies and civic aims but aroused similar public concerns regarding unprecedented data collection and how that data would be used. While each project surely had nuanced trade-offs, what level of care did local governments (and their partner vendors) fail to take across these projects that provoked such public outcry?

The harms that “smart city” technology might contribute to cannot be examined in a vacuum. The issues raised by these projects are certainly related to the broader Big Tech and privacy regulation debate, but are distinguished from it in that one cannot individually consent to, or opt out of, one’s neighborhood. Further, “smart city” technology is increasingly implicated in growing state surveillance capabilities (in the U.S. and abroad), but related policy discussions can miss its role if they focus exclusively on technologies managed by law enforcement agencies or only consider current uses in the United States.

Rather than focusing on a specific technology, I ask: how do these technologies, in the context of everything else happening in the world, potentially contribute to harm? How are “smart city” projects considering the harms of further surveilling public spaces? Are relevant “smart city” projects included in current surveillance policy discussions? What constituencies are prioritized when deciding whether the collective utility of new “smart city” data collection is worth the risks? How can the U.S. Federal Government and U.S.-based vendors be responsible global actors given these variable environments?

Potential Harms of Smart City Technology

While some parties will dismiss potential harms of “smart city” technology as alarmist or premature, I see these repeated instances of public pushback as intrinsically legitimate. The widespread implementation of new technologies that collect personal data throughout public spaces is uncharted territory. A colleague recently reminded me that nuclear risk advocates were similarly considered extremists before Chernobyl, but were ultimately correct to fear a worst-case scenario. Thoughtful regulation and management of these technologies must be applied to protect against the potential harms outlined below, and we must plan ahead for the worst-case scenarios of misuse and mismanagement of “smart city” technology in order to safeguard our democracy.

In the spirit of these common public concerns, I have started to outline broad categories of harms that might arise from deploying “smart city” technology. The intent is for this framework to evolve into an assessment tool for current practices and a guide for future deployments of “smart city” technology.

Lack of Community Input

A first-order issue is whether the community where “smart city” technology will be deployed wants it at all. Answering that question requires ongoing engagement with the community and robust dialogue about the types of data collection proposed, how they might contribute to the collective good, and all of the trade-offs involved. Given the other possible harms involved (see below), projects should not be pursued unless the community is on board with an articulated outcome. Challenges for community input on “smart city” technology include ensuring that approval is informed (perhaps via trusted experts and intermediaries) and identifying the appropriate level of approval (e.g., neighborhood v. city, majority v. unanimous). Examples like the poor public reception, procedural as well as substantive, of Sidewalk Labs’ Master Innovation and Development Plan highlight the need for this dialogue to happen before the procurement process begins. Cities like Boston and Seattle have attempted to systematize community input on “smart city” tech with the Boston Smart City Playbook (which highlights the need for right-tech rather than high-tech approaches to civic problem solving) and Seattle’s Surveillance Impact Report process (which requires public comment, a working group, and council approval of new surveillance technologies).

Erosion of Privacy and 4th Amendment Protections 

While community input is a first-order issue for deploying “smart city” technology, the remaining harms are not listed in any sequential or ranked order. As technology development outpaces the law, technology keeps expanding the searches available to law enforcement, and those expansions are challenged in court as violations of our Fourth Amendment protection from unreasonable searches and seizures. While an individual’s actions or movements in public spaces have historically fallen outside the scope of Fourth Amendment protections, recent case law has inspired some legal scholars, such as Andrew Ferguson, to examine how digital surveillance may be considered differently. In “Structural Sensor Surveillance,” 106 Iowa L. Rev. 47 (2020), Ferguson considers how the automated, continuous, aggregated, long-term acquisition of personal data with “smart city” sensors may trigger Fourth Amendment scrutiny under current Supreme Court doctrine. Separate from Fourth Amendment protections, as a matter of public policy, one may consider other harms that flow from an erosion of privacy, including social detriment and a loss of liberty. How are “smart city” technology contracts construing their privacy policies? Lastly, as “smart city” technology collects more and more data that can be used to re-identify people, the cybersecurity of any information collected becomes an integral aspect of overall privacy protection: a data breach could re-identify someone, threatening their safety, wellbeing, or economic security.
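To make the aggregation and breach risks concrete, below is a minimal sketch in Python, with entirely hypothetical data and numbers, of why pseudonymized “smart city” location records resist true anonymization: intersecting a handful of externally known (place, hour) points against a database of traces typically singles out one individual, consistent with mobility-data research finding that roughly four spatio-temporal points uniquely identify most people.

import random
from collections import defaultdict

# Hypothetical breach data: anonymous_id -> set of (zone, hour_of_week)
# pings. 10,000 people, 50 coarse location pings each over one week.
random.seed(0)
traces = defaultdict(set)
for person in range(10_000):
    for _ in range(50):
        traces[person].add((random.randrange(200), random.randrange(168)))

def matching_traces(known_points):
    # Every trace whose pings contain all of the externally known points.
    return [pid for pid, pings in traces.items() if known_points <= pings]

# An adversary knows four of a target's public appearances (e.g. from
# photos or check-ins) and intersects them with the "anonymized" data.
target = 1234
known = set(random.sample(sorted(traces[target]), 4))
print(len(matching_traces(known)))  # almost always 1: the target alone

Even though each ping is coarse on its own, the combination of a few points is so rare that the pseudonym offers little protection once any outside knowledge is available.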

Chilling of 1st Amendment Rights 

In the U.S., the First Amendment protects five freedoms: speech, religion, press, assembly, and the right to petition (protest) the government. The surveillance imposed by “smart city” technology could have a chilling effect on community members’ willingness to participate in these protected activities, for fear of harassment or retaliation by the state. As more instances of filming protestors are documented (such as San Diego streetlight cameras, Miami University, and Hong Kong), one could reasonably expect to be filmed and identified in public space. If public space becomes a place where one fears punishment, how will that affect collective action and political movements?

Discrimination / Oppression 

Because “smart city” tech is applied to a given neighborhood, it shares the potential for discrimination rife in the history of urban planning and public safety, and it adds a new power: extending those inequities to the digital world, a practice many have coined “digital redlining.” Potential harms that flow from disproportionate use or disparate community impact include loss of opportunity, economic loss, and social detriment (dignitary harms, constraints of bias). Cities such as Baltimore and DC have more closed-circuit television (CCTV) installed in majority nonwhite areas, on average, than in majority white neighborhoods. Detroit has come under scrutiny by local activists for using facial recognition technology in public housing, spurring the introduction of federal legislation to prohibit “the use of biometric recognition technology in certain federally assisted dwelling units.” These biases compound as data collection from strategically placed “smart city” and other surveillance technology increasingly informs policy decisions such as predictive policing. Seattle’s surveillance law requires Equity Impact Assessment reporting as part of its surveillance technology review process, but to date the city has acknowledged a lack of expertise in measuring this impact beyond examining how it comes up in public comment.
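One way review processes like Seattle’s could move beyond public comment is to quantify disparate deployment directly. Below is a minimal sketch (all figures are invented for illustration) of the simplest such equity metric: per-capita camera density grouped by each neighborhood’s demographic majority, mirroring the kind of CCTV disparities reported in cities like Baltimore and DC.

from collections import defaultdict

# (neighborhood, majority_group, population, camera_count): hypothetical
# inputs a city could pull from its own asset and census records.
neighborhoods = [
    ("A", "nonwhite", 12_000, 48),
    ("B", "nonwhite", 9_500, 31),
    ("C", "white", 15_000, 18),
    ("D", "white", 11_000, 12),
]

totals = defaultdict(lambda: [0, 0])  # group -> [population, cameras]
for _, group, pop, cams in neighborhoods:
    totals[group][0] += pop
    totals[group][1] += cams

for group, (pop, cams) in totals.items():
    print(f"{group}: {10_000 * cams / pop:.1f} cameras per 10k residents")

A real Equity Impact Assessment would need far richer measures (coverage area, dwell time, downstream enforcement outcomes), but even this simple ratio makes a deployment disparity legible in a public report.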

Loss of Accountable Government 

Lastly, as governments continue to outsource technology services to private vendors, those vendors take on a quasi-governmental function without many of the accountability measures built into government, such as public records access, public auditors, or consequences for elected officials if services do not meet community members’ expectations. Moreover, if care is not taken with data governance, community members may be further vulnerable to corporate influence via “surveillance capitalism.” Just as “smart city” technology must be considered a potential extension of police surveillance and its biases, it must also be considered a potential extension of corporate surveillance. At what point does a single corporation have “vertical integration” (in terms of personal data) of a whole neighborhood? This corporate influence (via data, and the sheer size of these vendors) was central to criticism of Sidewalk Toronto, Amazon HQ2, and Port Covington.

On the data front, some cities have retained data rights in their contracts (e.g., GovEx’s Data Ownership and Usage Terms) or adopted “open standards” (such as the Mobility Data Specification) for access to data collected by the private sector, but this raises new questions of what data the vendor should be collecting and managing and what data governments should be collecting and managing. Namely, does this collection protect individuals, and is the collection fit for its purpose? Ultimately, data collected for the purposes of consumer payment is more granular than what is needed for collective city planning, and very different from data collected for the purposes of law enforcement. In addition to these fitness-for-purpose considerations, many alternative approaches to data governance have emerged for navigating data spaces that must weigh individual and collective purposes, as well as competing individual, corporate, and public interests. How is data access explicitly or implicitly included in “smart city” vendor business models or contracts? (i.e., Is part of the bargain that the vendor retains data as a good in exchange for the hardware it provides?) Where little or no money is exchanged, how is data access considered in public-private partnerships and other test-bed scenarios?
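The fitness-for-purpose point can be made concrete. The sketch below (a hypothetical schema, loosely inspired by debates over the Mobility Data Specification) shows a minimal data-minimization step: rider-level trip records, which a payment system genuinely needs, are reduced to hourly zone-to-zone counts for planning purposes, with small counts suppressed so that no published cell can single out an individual trip.

from collections import Counter

K = 5  # suppression threshold: publish a cell only if >= K trips support it

# raw_trips: (rider_id, origin_zone, dest_zone, hour). This is the granular
# record a payment/operations system holds; a planning use case does not
# need the rider_id or the exact trip at all.
raw_trips = [
    ("r1", "Z1", "Z2", 8), ("r2", "Z1", "Z2", 8), ("r3", "Z1", "Z2", 8),
    ("r4", "Z1", "Z2", 8), ("r5", "Z1", "Z2", 8), ("r6", "Z3", "Z4", 23),
]

counts = Counter((o, d, h) for _, o, d, h in raw_trips)
published = {cell: n for cell, n in counts.items() if n >= K}
print(published)  # {('Z1', 'Z2', 8): 5}; the lone 23:00 Z3->Z4 trip is suppressed

A city that receives only the aggregated, suppressed table never holds the re-identifiable raw records, which narrows the breach and law-enforcement-access risks discussed above while still supporting the planning purpose.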


Research Next Steps  

The final output goal of this Whose Streets? Our Streets! (Tech Edition) research project is to provide high-level recommendations for governments, the public, and vendors to prevent these harms. The immediate next steps include further refining this set of harms and collecting examples of current “smart city” tech use and policy. I also have questions for readers: please send feedback to rebeccawilliams@hks.harvard.edu by January 31, 2021.

Recommended citation

Williams, Rebecca. “What’s so Dangerous About Smart Cities Anyway?” Perspectives on Public Purpose, December 16, 2020.