DAVID EAVES: Good evening, everyone. Welcome to the forum on a Friday evening. I appreciate especially the students, who I know have (:11) at the moment and are (:13) to be here instead. I’m unbelievably excited to be here. This is a panel I’ve been wanting to do for a long time. As many of you have probably read in the news of late, there’s been increasing concern about the impact of technology on society and on public goods, so much so that a number of people have started to think about what we should be doing about that, and particularly what responsibility technologists have for the technology they create, and this has led some to start to talk about the idea of public-interest technologists. Who are the people who are thinking about how technology is going to impact society and are advising both governments and the rest of us about how we should be managing that?
To think more about public-interest technology, we’ve invited four unbelievably smart and articulate people to come join us, and I’m going to introduce them briefly, and then I’m going to dive right in, because we only have an hour, and I want to get as much out of them as we possibly can.
I have Ash Carter, who many of you know, he’s the director of the Belfer Center here. He, before this, ran a really small organization called the Pentagon, so kind of small, but some of you have heard of it. And then Vanita Gupta, who’s immediately to my left here, she’s the President and CEO of the Leadership Conference, but before that she was the head of the Civil Rights Division at the DOJ.
And then Reid Hoffman, also someone who’s run some small organizations that you may have heard of. He’s a partner right now at Greylock, but he’s a co-founder of LinkedIn and it kind of grew into what it is today; also was a co-founder at PayPal and (01:49) of that organization, one that actually most of you use every day, whether you’re aware of it or not. He’s also recently written a book called Blitzscaling, which I encourage you to go check out.
And then, finally, I’m unbelievably excited to have Latanya Sweeney. Latanya is a professor here at Harvard at the School of Government, and she’s the director of the Data Privacy Lab, where she does incredible work looking at how, most recently, local governments are securing electoral rolls and onboarding people onto electoral rolls and questioning whether they’re doing that effectively or not. But before that, she was the CTO at the FTC, so she’s been on the inside of government, but now she’s doing incredible work on the other side. So with that introduction, I want to dive right into some questions, and so maybe I want to talk initially to Latanya. You’ve been kind of doing public-interest technology for several years now. Can you tell me a little bit about your personal journey, about how you came to this work and why you think it matters?
LATANYA SWEENEY: Yeah. I would say I’ve been doing it for two decades. I, all of my life, wanted to build a thinking machine, and so I was a computer science student at MIT getting my PhD, and I just have this real passion for computers. So you can imagine my shock one day when, as I’m walking by, an ethicist says, “Computers are evil,” so I had to stop and try to correct her thinking right there on the spot, and she basically foretold—the year was 1996, and she foretold the future. She said that there’s so much data being collected that it’s breaking our social contract when people go to the doctor and so forth.
She was particularly concerned about a health dataset that came out of here in Massachusetts on state employees and their families and retirees, and she said, “Look, now this data is not just between a patient and a doctor; they’re putting it to all these other uses,” and I said, “Don’t worry about it. It doesn’t have anyone’s name or Social Security number. You shouldn’t really worry about it.” And she said, “Well, is it anonymous?” It had the full date of birth, gender, and zip code, so with 365 days in a year, people living up to 100 years, and two genders, that’s 73,000 combinations, but the typical five-digit zip code in the United States only has 25,000 people. That meant that that combination tended to be unique.
I said, “Well, let’s see if it’s really true.” William Weld, who was the Governor of Massachusetts at the time, had collapsed, and so his information was in that data, and I went down to City Hall here in Cambridge and got the voter list, which cost me $20 and came on two 5¼-inch floppies, by the way, which my younger students don’t know what I’m talking about, and, sure enough, six people had his date of birth, three of them were men, and he was the only one in his five-digit zip code. That combination of three values was unique for him, and it turned out to be unique for most of the people in the United States, 87% of the population.
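To make the arithmetic in that story concrete, here is a minimal sketch of the kind of linkage Sweeney describes: joining a "de-identified" health record to a public voter list on date of birth, sex, and zip code. The field names and records below are invented for illustration; the point is only that roughly 73,000 possible combinations against about 25,000 residents per zip code makes the triple a near-unique key.

```python
# Hypothetical illustration of the linkage described above.
# Field names and records are invented; only the join logic matters.

health_records = [  # "anonymized": no name or SSN, but quasi-identifiers remain
    {"dob": "1945-01-15", "sex": "M", "zip": "02138", "diagnosis": "collapse"},
    {"dob": "1972-03-02", "sex": "F", "zip": "02139", "diagnosis": "asthma"},
]

voter_list = [  # public record, purchased for $20 in the story
    {"name": "W. Weld",  "dob": "1945-01-15", "sex": "M", "zip": "02138"},
    {"name": "J. Doe",   "dob": "1945-01-15", "sex": "F", "zip": "02138"},
    {"name": "A. Smith", "dob": "1972-03-02", "sex": "F", "zip": "02139"},
]

def key(record):
    return (record["dob"], record["sex"], record["zip"])

# Index voters by the quasi-identifier triple.
voters_by_key = {}
for voter in voter_list:
    voters_by_key.setdefault(key(voter), []).append(voter)

# A health record is re-identified when exactly one voter shares its triple.
for record in health_records:
    matches = voters_by_key.get(key(record), [])
    if len(matches) == 1:
        print(f"{matches[0]['name']} -> {record['diagnosis']}")

# Why uniqueness is so common: ~365 birthdays x ~100 years of age x 2 sexes
# gives ~73,000 combinations, while a typical five-digit zip code holds only
# ~25,000 people, so most (dob, sex, zip) triples match at most one person.
```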
What was interesting about that is, one day I’m a graduate student, and the next day I’m testifying before Congress, because, at that moment in time, two things had happened. 1) It wasn’t just that dataset; that was the standard around the world, and 2) the United States Congress was debating what to do about what became known as the HIPAA Privacy Rule. So those simple experiments that I had done ended up having worldwide impact, and the work is actually cited in the preamble of HIPAA and helped change laws all around the world, and many of those laws have my name on them.
That was the first time I was able to see how a simple experiment could have a profound impact on how we live our lives, and this idea would be replicated over and over again through my life and my career, from showing unexpected discrimination in online ads and unfairness in algorithms to just tons of work that I was able to do at the FTC and much of the work that my students have done since I’ve been here at Harvard.
DAVID EAVES: Thank you. Ash, you also have a personal story that kind of got you to this space from the people who taught you. Can you tell us a little bit about how you got here?
ASH CARTER: I’ll be quick. I started out in theoretical physics, and that’s what I was going to do, but the people who were the generation that taught me, the mentors of that time, were the Manhattan Project generation, or immediately thereafter. They had something of which they were actually proud, a disruptive technology, to put it mildly.
They were proud of it, because, in their view, it had ended World War II and kept the peace for 50 years, but they also knew that it had a dark side, and they felt, and they instilled in me, that I, as a knowledgeable person, had responsibility, that with knowledge and the power that comes with it comes responsibility. So that’s why, when I was asked to come in 38 years ago for one year, just one year, for my very first Pentagon job, when I was in and out, they told me, “You have to do this. You have to do this. You’re being summoned to do it. You have to do it just for one year.”
I’m really glad I did, and then I was in and out and so forth in life and did business and a lot of academia and so forth, but it stuck with me, and it doesn’t come out of nowhere, and one of the reasons I love being here is I hope that a little of that spirit is something I can pass on and that it sticks with these fantastic people we have here.
DAVID EAVES: Before I ask Reid to say something, Ash, I know you’ve worked with Reid; just a few things you might want to say?
ASH CARTER: Thank you. I do. I want to say something, because I’m very much an admirer of Reid Hoffman and am in his debt, not only because of PayPal and LinkedIn, but there are a few things that are really dear to my heart. The first is that LinkedIn has been pioneering what I’ll call a blue-collar LinkedIn that is not just white-collar professional people, which is where it started, but everybody: say, a machinist in a town where the factory closed, who is a very worthy citizen. If our people don’t see a path ahead for themselves and their families, we’re not going to have a cohesive society, so it’s a huge thing.
The other thing, and this is close to home from my time as Secretary of Defense and before that many other jobs over time, is that he was one of the pioneers in veterans’ employment, and this isn’t something you do just to say thank you. It’s because they’re really good, but there was an odor hanging over from Vietnam that they were not and that there was something wrong with them, and that needed to be dispelled, and he helped do that.
And then the last thing, he served on the Defense Innovation Board, which is a group I put together of him and Bezos and Jim Polka, and these are titans who have all kinds of stuff to do in life, and they don’t have to do this. They don’t have to give their time to the government. They don’t have to give their time to the Department of Defense, and they did, and they didn’t always tell you what you wanted to hear, but they’d tell you what you had to hear, and I learned a lot. I got a whole lot out of it, and it is a public service, and he does stuff like this all the time, so he is a public-interest technologist. I didn’t want the event to go by without my saying that.
DAVID EAVES: So Reid, you’ve been at the heart of Silicon Valley. You were part of the PayPal Mafia, the group that brought PayPal to life, and then LinkedIn, but a lot of the people in Silicon Valley have a relationship with government that’s a little bit arm’s length. They’re not really sure what they think of it. Sometimes there’s a libertarian streak, and yet you’ve been pretty vocal about the responsibility of technologists and their thinking about how to engage government, so can you tell us a little bit about why you ended up in this place? What’s your journey to this, and what can that tell us about how we should be engaging Silicon Valley?
REID HOFFMAN: So many of the folks who end up as Silicon Valley technologists are a little bit of outliers. They discover their ability to influence the world just by building some technology, and so that tends to align with a view that it’s all about my own hands and this technology and building and so forth, and that then informs an ideology that is incomplete, because, by the way, how does the Internet get built? There’s this whole stack of things, these platforms in society of education and laws and infrastructure, and so you have to think about how a society works.
Part of the mission that I decided to get on when I started thinking about this at college, at Stanford, was how do we help humanity evolve at scale? I’m actually not reflexively a technologist. I don’t think, oh, I just love technology. It’s the change of how we are our better selves, both as individuals and in a society, and how does technology get us there? Part of the reason why most of my career has been focused on networks (LinkedIn, PayPal, Facebook, other things) is how do we try to design and build these systems that both help us navigate, but also then help us navigate together as well, and that is part of how I tend to look at this. I have a Master’s in Philosophy from Oxford, and I tend to think a lot about what human nature is, and we’re creatures of the polis. We’re naturally social animals, and so you have to think about both the individual and society together as you’re doing these things.
DAVID EAVES: Vanita, you’ve been doing civil rights work for a long time now, and the civil rights movement has been around for a long time. When you look at the tech sector, what lessons can we draw from the movement that you’ve been involved in? What works? What doesn’t work?
VANITA GUPTA: I’m neither a technologist nor a physicist, and so I clearly am the oddball here on the panel, but I’ve been a civil rights lawyer my whole adult career. When I was at Justice, and even before, I worked on things like criminal justice reform and police reform, voting rights enforcement, and trying to prevent discrimination in lending and housing and the like, and I come to this intersection with technology from that vantage point, which is that increasingly the ways in which consumers are interacting on all of these issues are influenced very deeply by the digital and technology sectors, and that the laws that were so hard-fought in the ‘60s and ‘70s, the Fair Housing Act and the Voting Rights Act and all these things, not only are getting eroded by the courts and are going to increasingly be eroded, but are actually these weird analogs that technology companies haven’t quite figured out how to deal with, and civil rights groups haven’t quite figured out how to make apply in their world.
At the Leadership Conference in 2014, kind of out of recognition of this (this was before I got there), we convened a table with digital rights folks and civil rights folks to come together and really start to think about what kind of civil rights principles need to guide technology and have more civil rights voices at the table, so that it wasn’t this afterthought where the technology would get developed, then it would get implemented, and then there would be a kind of revelation of discriminatory impact.
Whether it was in the AI context, in predictive policing, in the risk assessment instruments, or in concerns around election security and hacking, communities of color are often bearing the brunt of it. Increasingly now in our work, even as we, as an NGO, deal with all of the onslaught of peril that’s happening right now in civil rights from the administration, we are also working increasingly with private sector companies like Facebook and Airbnb, companies that are coming to us almost, unfortunately, in a crisis mode to try to figure out, okay, how are we going to address the problems?
What we want to be able to do is be a part of a conversation with technologists at a much earlier point of the innovation and recognizing that we don’t bring the kind of scientific or technological expertise necessarily, although there’s a huge sector now that’s growing that’s at that intersection, but it’s vital that we do this. Just using predictive policing as an example, there’s so little transparency about how some of these technologies work, and the inputs that are going in them are often deeply problematic and based on decades of systemic bias against communities of color, and African-American communities in particular.
When I was at Justice, and now in this role, police reform was such an important part, and has been an important part, of our work, really trying to understand how we stop replicating decades of bias and injustice and use technology, which can obviously be a force for good, to be that force for good while preventing and mitigating the harms. There’s a recognition about the need for greater diversity in Silicon Valley among who’s generating the stuff, but also about putting in these questions at a much earlier point so that we can have technology work for all of our communities.
DAVID EAVES: One of the things that I wrestle with when I think about public-interest technology is that sometimes people are like, oh, well, we’ll just give the technologists more ethics training and the problem will get better, and my suspicion is that that is not enough, but how do we even define this problem, and how do we begin to think about ways of tackling it? Anybody have thoughts? Yeah, Latanya.
LATANYA SWEENEY: I basically say that technology is the new policymaker. We don’t vote for the people in Silicon Valley. Most of the time we don’t even know their names, but the arbitrary decisions they make in the technology they design dictate how we live our lives. How I use this phone, my ability to use this phone: I don’t really have a choice. If I want to live a better life or do a better job, I have to take the technology as it is, and encoded in that technology are policy decisions that didn’t come from a government, that don’t necessarily respect our historical protections or historical laws.
One of the things we need most today are people who understand that, who will work on technology with a view towards the greater good. At the end of the day, a technology company has one big responsibility, and that is the fiduciary responsibility to its stockholders. It’s not to its consumers. It’s not to society. It’s not even to the rules of any particular country, and so there’s no one there other than those technologists, those few examples we have, the students who’ve come through these processes here and at a few other schools, who are doing the kind of work to shore up our helpers, to shore up the civil society organizations, to shore up the regulators, to shore up the journalists in order to make a difference, and that’s why public-interest technology, that is, those technologists who will work in the public interest, are important.
I would just also say that many of our students have also been critical to Silicon Valley companies themselves, because no one wants to have a company spend millions of dollars on a technology and have it disrupted because of an unforeseen consequence in the way it impacts society. So having someone on the inside who also helps them navigate away from those particular challenges is helpful for the companies as well.
ASH CARTER: Just to second that, bridges are everything. It takes all kinds. You can’t expect policymakers to learn science. When I was doing theoretical physics, I was all focused on that, but as people get older and so forth they diversify. To me, it’s all about action, and what I say to technologists is that it’s another call to invention, and I have a favorite thing I say to people. How many of you work here in Boston? I do a lot of this. I say, how many work on driverless cars? Hands go up, and I say, how many of you are working on the carless driver? Think about that, and don’t think of it as a headache or a welfare program. Think of it as an invention, because these are worthy people who are doing real work and are good citizens and so forth, but they’re not going to be able to be deployed in that way. How can you deploy it?
So invent, and the people who raised me didn’t just stop with the bomb. They invented arms control, nonproliferation, missile defense, civil defense, reactor safety. They didn’t just talk about it or be ethical about it. They invented, and that’s a positive challenge. Let’s make this better. People will rise to that, whereas “you have to cozy up to somebody who’s not like you” doesn’t sell as well.
REID HOFFMAN: I think one thing just to add, I think it’s super important to say, what are the kind of things we’re trying to build towards society? How do we articulate those in ways that you’re almost thinking about a dashboard, about what would be the way of representing the kinds of interests that we have in society that we’d want tech companies to build towards and actually have dialogue with a number of the different constituencies in order to figure that out?
Now there’s one thing I want to add in, because it’s not normally the case that I’m playing the defense-of-industry perspective; I’m usually talking to the industry trying to get them to move. But on the rhetorical point of saying you’re only responsible to your shareholders: well, you fail your shareholders if you fail your customers, so you actually have a lot of focus on what makes your customers happy, especially over time. You’re also populated by a bunch of employees who care a lot about what kind of mission they’re on and which kind of company they work for and whether or not they’re the good guys.
It’s not to say that there isn’t a real role in saying, well, wait, there’s issues that you may not have considered, diverse voices, history of problems, of kind of oppression or other kinds of issues that need to be factored in, and they may not be there, and that dialogue has to be there, and that’s really important, but it can be overstated about only shareholders and only money for the shareholders, and there’s a lot of other things that actually go into the kind of company decisions.
LATANYA SWEENEY: Let me just push on that a little bit, since you took a little bit of exception to my point. The Sleep Number bed is a bed that has this new thing called Sleep IQ. It’s basically a pattern of sensors across the top of the mattress, and as you move in the bed, it takes measurements. It sends those measurements out of the home, through the Internet, to a server somewhere, and in the morning you can wake up and get a measure of your sleep quality. The sensors are incredibly sensitive and pick up all kinds of activity and movement on the bed.
There’s no promise or guarantee that I can even get a copy of the data that comes off of that bed. I can only see what they allow me to see, and I have no idea where the server is located, what rules the server is operating under, and who else they may sell or share that data with. I don’t have it on me right now, but I also have an Apple Watch. An Apple Watch is also trying to help me live a better life and get in better shape, and I can even use it to monitor how well I sleep.
The difference, though, is that the data from the watch only goes to my cell phone. I can choose which apps I might want to share that data with or not. I have more control. Those are two different design decisions, and Sleep IQ is not changing its design decision, because customers have no choice. If you want that feature, you either buy the bed or you don’t, and most of the technologies we have are in that exact position: if you want a phone, you take it as it is, with its probes and so forth.
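As a rough illustration of the two designs contrasted here, the sketch below shows the difference purely as a design decision: where the readings go by default and who decides what gets shared onward. The class and field names are invented; no real product's software or API is represented.

```python
# Hypothetical sketch of two telemetry designs; no real product APIs are used.

class CloudFirstBed:
    """Readings leave the home by default; the vendor decides what you see back."""
    def __init__(self):
        self.vendor_server = []              # data the user cannot inspect directly

    def record(self, reading):
        self.vendor_server.append(reading)   # raw data goes straight to the vendor

    def user_view(self):
        # The user sees only whatever summary the vendor chooses to expose.
        return {"sleep_score": len(self.vendor_server) % 100}


class LocalFirstWatch:
    """Readings stay on the user's device; sharing is opt-in, per app."""
    def __init__(self):
        self.local_store = []
        self.approved_apps = set()

    def record(self, reading):
        self.local_store.append(reading)     # raw data stays local

    def share_with(self, app_name):
        self.approved_apps.add(app_name)     # the user chooses each recipient

    def export_for(self, app_name):
        return list(self.local_store) if app_name in self.approved_apps else []


# Usage sketch: the watch owner decides who gets the data; the bed owner cannot.
watch = LocalFirstWatch()
watch.record({"motion": 0.2})
watch.share_with("sleep_app")
print(watch.export_for("sleep_app"))   # shared by choice
print(watch.export_for("ad_network"))  # empty, because never approved
```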
I’m a computer scientist by training. Trying to do something new is really hard, and many of my students, just regular computer science students, have gone to Silicon Valley. They work really hard. They’re good people. It’s not like they’re doing this because they’re evil. It’s that there’s no one else looking out for the bigger issue, because getting this thing to go faster, giving me 5G instead of 3G, all of that is the focus. It’s not that the people who built Sleep IQ were trying to be evil. They were just trying to get the sensors to work, and once they got them to work, they were trying to get it to market. They’re trying to make money. They’re trying to sustain themselves. They’re trying to have a good product, but in those design decisions, a lot just happened, and that’s the space for public-interest technologists, someone who’s looking out for the bigger good. To think that it’s going to happen within the current way the companies work: it could be so if they, in fact, embraced public-interest technology, but it isn’t the case now.
VANITA GUPTA: Just to put a fine point on it, I hear what you’re saying, because I think those are forces, and they’re broader forces than we may recognize, but I do think that there are real power differentials between some of the customers and consumers of this stuff and the folks who are creating it, when there’s no transparency around the algorithms that get used, or around how Airbnb is making decisions about how its customers are going to be able to access or discriminate against certain populations, or Facebook and any number of things that are plaguing Facebook at the moment.
That’s where, to me, having public-interest technologists is actually really, really powerful, because so long as we are siloed into civil rights people and technologists, the bridges are important, but that’s not going to get us to where we need to be unless we’ve got embedded public-interest technologists who, day in and day out in these companies, are thinking about these issues.
I think we as a civil rights community can be very helpful in thinking about policies, but those policy decisions that are baked into how technology gets created, we’re not going to be able to see that unless there’s greater transparency, there’s greater kind of education in the community about how this stuff works. That’s why the intersection of kind of folks that are really in the companies and thinking about these issues day in and day out, and also understand the technology and understand the hundreds of policy decisions that get made and what the inputs are, I think is a really important development, because we can’t just do this through bridging.
ASH CARTER: I’m just trying to reconcile these two, because they’re not as different as they sound.
VANITA GUPTA: We’re just trying to create some conversation [ph].
ASH CARTER: I understand. Reid will have the last word. The idea that shareholder value is the only purpose of a corporation would be completely lost on corporate culture, not only of 30 years ago, but of 400 years ago when it started. It’s crap, and it comes from economists who do reductio ad absurdum, and it’s very clear. You can trace it, and it’s rubbish. You have, and Reid said it right, those three responsibilities.
If you’re ever sworn in as a Board member somewhere, you’re told there are three things. Now you’re right that there’s that myth out there that that’s all you need to do, but Reid’s right that it’s, in fact, not all you need to do, and the best don’t do only that. We all have a responsibility. I like what you said about how we’re all in this together.
REID HOFFMAN: And in particular, for example, I’ve helped stand up the AI Ethics and Governance Fund between the Media Lab and the Berkman Klein Center, because you do need to have these outside voices and context, and I’m a funder of Code for America, trying to do public-interest technology too, so it’s not a not public (27:41) technology.
It’s important, in reasoning this through, that if the characterization is a general demonization, okay, all corporations act in this entirely sociopathic way, then the nature of the interaction is totally different, and that’s the reason I was like, look, I’m taking a little bit of an unusual role, because normally I’m arguing for, look, we should be more transparent. We need to be more in dialogue. That’s my normal role in these kinds of conversations. In this conversation I’m going, well, actually, look, there are a bunch of ways that the industry has accountability over time that need to be factored in too. That was the only point I was getting at.
LATANYA SWEENEY: Can I just say, first of all, I ran a company for ten years, so I have experience, obviously not as good as you, because nobody knows my companies, and that’s all fine. I didn’t say that they have a narrow singular focus on making money. I said they have a fiduciary responsibility to their stockholders. A fiduciary responsibility means you want the company to work, so if you only are transactional in the idea that it’s only about money, it will, in fact, limit where the company can go.
So to the extent that it’s in their interest, their fiduciary responsibility, their fiduciary interest, to think about the greater good and the bigger picture, they do, but that does not go far enough to answer the question of how this is impacting societies. For example, Google can decide where it’s going to put its data to determine whether or not it’s going to adhere to a search warrant. They can decide, oh, we’ll put it here, and then we will answer the warrant, or we’ll put it somewhere else, and we won’t answer the warrant. That’s incredibly powerful. It’s not the FTC; Google negotiates its contracts with countries.
The European Union is banking on an idea that if they raise the bar, that the companies will provide the technology to hit that bar, and that they will, therefore, give us Americans better technology, because it’s easier to give everyone the same technology. That’s the world that we’re talking about. We’re not talking about the smaller decisions that are within the scope of do you keep your power better? Do you do all of these other things, which are all important, and I’m not taking away from it. I’m a LinkedIn user. I’m a PayPal user. I live in this society. I live in digital society, but I am not confused. It is, in fact, a technocracy. It is not a democracy.
DAVID EAVES: Let me ask this question. Latanya, I feel like you’ve identified there is this corporate power, and one of the things that has influenced public-interest technology was this idea of the public-interest lawyer. So in the ‘70s and ‘80s, Ford Foundation put money into this idea that not all lawyers should end up in corporate law, but actually there are other ways that lawyers could be exercising power that could help influence the public interest and have the law influence public interest, and so I think it was kind of an effort to try to build some counterpower or some alternative sources of power.
I think it’s a wonderful metaphor, but I don’t know how it maps onto the public-interest technology space, so do we need to train 100,000, a million technologists and give them ethics? Is it that we need something like the ACLU for technology? Reid, you’ve talked a lot about how maybe greater transparency can be a way into that trust. What’s the infrastructure we need to be building to bring transparency or the public interest to the technologists? What does that infrastructure look like? Vanita?
VANITA GUPTA: I think it’s all of those things. I don’t think that there’s one way to do it, and, frankly, on the NGO side, I think more and more of our organizations are increasingly engaged in this, because of how much it implicates the communities that we represent or the work that we try to do, and that’s kind of the outside folks that are weighing in, like the ACLU, which is a member of the Leadership Conference, and other groups that are really invested in us.
As I said, we also need people on the inside that are thinking about these issues and actually being very mindful on the kind of engineering and technology side about what inputs are we putting in? What is it based in? Are they already biased? How are we actually detecting where bias comes in or where there may be an opening for discrimination or for perpetuating implicit bias or explicit bias?
I also think that Silicon Valley itself has to change about who is working there and kind of diversity even in its midst, and that’s just a serious problem. We work with these companies. The numbers of folks of color, and it’s not just kind of racial diversity, it’s across a whole gamut, but it’s a real problem, and it’s to the detriment of these companies, which often see themselves as deeply progressive, and yet have these incredible blinders until there’s a real crisis that happens that has dire implications on vulnerable communities, and then it’s like, oh, my God, we’ve tripped up. It’s caused tens of thousands of people to be disenfranchised, misinformation, or dark web stuff that’s targeting black voters, or whatever, you name the context.
To me, it’s all of these things that are needed, and I think that, frankly, from the outside, we have been dealing with these issues in a crisis mode, and now, I think, we’re increasingly trying to do this in a more proactive mode of engaging with these companies earlier on, but, as I said, there’s a power differential. There’s an access differential. I think what’s interesting is that for some of the companies we’ve been working with over the past couple of years, it’s the first time that they’re reaching out and developing relationships with some of the organizations that are really pushing on the privacy issues, on some of the civil rights issues, because they thought of themselves as working in more of this bubble of innovation, and I get it.
My hope is that increasingly as some of these issues have become so high profile that companies are thinking much earlier on about how do we actually embed these conversations, not just about ethics, about civil rights, about privacy and the like kind of earlier on, and not to stymie innovation, because it’s incredibly important, but to actually have thinking around it much earlier on in innovation.
DAVID EAVES: When you’re negotiating with someone, the only thing worse than a really effective counterpart is an incompetent counterpart. It’s actually really hard to negotiate with people who don’t know what they want or are uninformed. Who do you want to have as an effective counterpart to work with around issues of technology? What would that look like?
REID HOFFMAN: Well, part of the thing, whether it’s the U.S. Digital Service, Numerica [ph], Code For America, or the AI Ethics and Governance Fund, all of which I’ve supported, is trying to get folks who say, look, I’m approaching this from what are the good outcomes in society, and then trying to reason to what is the framework that iteratively gets us there and identifies major risks. Okay, there we hit a hard wall, and the rest of this we discover and improve as we’re going.
They don’t have to be technologists, but they have to understand that this is a rapidly dynamic thing heading into the future, and that instincts to kind of roll back to the past just don’t work. That’s what you’re trying to get, and so you’re trying to get fellows. We’re trying to get folks who have worked in industry to come and serve in government and other kinds of things as ways to solve these problems, and that’s at least the work that I and a bunch of other Silicon Valley people have been contributing to in order to try to get there. It’s not to say that it’s complete. It’s a journey, but these are the initial steps.
ASH CARTER: First of all, David, you ought to answer this question, because you work on this all the time, which is, at the end of the day, we have a government. It’s not going anywhere. You can’t walk down the street and shop anywhere else, so like it or not, it is what it is, and you’ve given your life, you’re giving your teaching here, to improving the way government functions. You changed HIPAA, for which I’m personally grateful, by the way. It’s a huge accomplishment, but it’s written in law, so we have legislatures that could be doing better, but they’re flat on their tails in this thing, as the Zuckerberg hearing showed. It was not a good showing on either side. It was an embarrassment, but imagine an alternative in which what had come out of that was, here’s what we need to do.
We’ve all decided that, more or less (we have some disagreements, but we’re in the same sort of frame), there needs to be some regulation and some self-regulation, and we’re going to do both, and we’re talking about which, and that would have been a huge thing, but that isn’t what happened. You have to look at the law. You have to look at Congress and so forth and get in there and try to help them. They don’t always understand, but they need to know where to look for expertise on how to improve. You showed it. How many people knew what you were writing in the paper that is the footnote in all the HIPAA laws now? Not many, but it’s in the goddamn law, excuse my French, and that’s pretty cool.
LATANYA SWEENEY: Well, I think it’s kind of cool. Every once in a while, it impresses my students. Most of the time, it doesn’t.
ASH CARTER: Not old enough.
LATANYA SWEENEY: Well, I don’t want to be responsible for HIPAA. HIPAA’s got lots of problems, which have nothing to do with that footnote, but on this question about the role of government, absolutely, part of the problem is that there are three parts to this issue. One is that we actually have historical protections that technology, by its design, ignores. That’s one part of this conversation.
The second part of the conversation is what you said, that there’s a temporal mismatch. Policy moves slowly, as a function of years. Technology moves fast, as a function of months, and so if the policy isn’t there, waiting for the policy to catch up doesn’t particularly work. And then the third piece is the government itself learning to do its job through technology. The world is technological.
I’ll give you a very quick example. When I first came here as a faculty member, I was being interviewed by a reporter, and I wanted to show him a particular paper, so I typed my name, Latanya Sweeney, into the Google search bar, and the paper came up, but then some ads popped up, and one of them implied I had an arrest record. It said, “Latanya Sweeney arrested,” and I said to the reporter, “There is the paper we’re looking for,” and he said, “Forget that paper. Tell me about when you were arrested,” and I said, “Well, I’ve never been arrested,” and he said, “Then why does it say you were?”
Now I clicked on the ad and paid the money to the company, which showed that not only did they not have arrest information for me, they didn’t have an arrest record for anyone with the name Latanya Sweeney. So then he said, “So how come computers are racist?” and I explained to him that computers are neutral and that technology is not biased, and we started tracking names, and eventually I couldn’t dispel the myth in an anecdotal way. I ended up taking two months doing a full-fledged study, using VPNs around the country, checking day and night what ads popped up on all these American names, and found that if your first name was given more often to black babies than white babies, you got an ad implying you had an arrest record. If your first name was given more often to white babies, you could get a neutral ad.
This disparity happened with roughly an 80/20 split, and what was interesting about that is that we have a law. We have a law called the Civil Rights Act. Discrimination in general is not illegal in the United States. We here at Harvard regularly exercise it. We give student discounts. As I get older, I’m going to look forward to my old-age discounts, and so it’s not that discrimination is illegal. It’s that we have a law that says certain groups in certain situations are protected. One of those groups is blacks, and one of those situations is employment, because when I apply for a job, somebody’s going to type a name into a Google search bar and want to see what kinds of things come up.
If there are two candidates and one implies an arrest and the other one doesn’t, it puts a group at a disadvantage. Now this is totally in the venue of a law we have, and so this speaks strongly to the need of government to redefine its practices in the space of the technology as well.
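A minimal sketch of the shape of the measurement described above, with entirely made-up counts (the real study used VPNs across the country and many more names; nothing here reproduces its data): record which kind of ad appears for each queried name, group the names, and compare how often the arrest-implying ad shows up per group.

```python
# Hypothetical observations: (name_group, ad_implied_arrest) pairs.
# The counts are invented for illustration; they are not the study's data.
observations = (
    [("black-identifying", True)] * 80 + [("black-identifying", False)] * 20 +
    [("white-identifying", True)] * 20 + [("white-identifying", False)] * 80
)

# Tally arrest-implying ads per name group.
counts = {}
for group, implied_arrest in observations:
    arrests, total = counts.get(group, (0, 0))
    counts[group] = (arrests + int(implied_arrest), total + 1)

for group, (arrest_ads, total) in counts.items():
    print(f"{group}: {arrest_ads}/{total} queries returned an arrest-implying ad")

# A two-proportion significance test would then ask whether a gap this large
# could plausibly be chance; standard statistics libraries provide such tests.
```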
VANITA GUPTA: One of the things that I think is difficult is that a lot of courts, which move even more slowly than policymakers do, have just not kept up with applying these laws to new technologies, and so we don’t actually have a ton of jurisprudence around some of this stuff, and it’s happening now, because more public-interest litigators are suing companies, taking them to court, and, for better or for worse, trying to get courts to opine on the application of legacy civil rights laws to the modern day.
I think it’s going to happen, but it’s going to evolve much more slowly than what consumers probably need in order to actually be protected or have a way in on this stuff. We’re seeing other ways that this happens; for example, with Microsoft and facial recognition technology. There’s been a lot on facial recognition technology, and companies have approached it very differently. There are obviously really great benefits to facial recognition technology, and then great harms from it that are obvious, but I thought it was interesting that Microsoft is actually saying, yeah, this is totally an unregulated space; we’re going to ask for some form of regulation, but we’re going to try to shape it, and so now they’re developing their own principles. They’re going to put them out for comment, which we know a bunch of groups are going to weigh in on to help shape, but what’s the policy?
They’re talking about trying to get a bill written and putting that forward as part of the industry, whereas what you saw, or what I saw at least, when Zuckerberg testified was this: do not regulate us. Do everything you can to just get out of the oversight hearing and make sure that the feds don’t regulate us. I do think that there’s a different role for companies to play, to actually be proactive actors and acknowledge that some of this stuff is in totally unregulated spaces, or that it’s going to take ten years for the courts to actually get their hands on it, or, frankly, that even in the Senate, a lot of those folks asking Mark Zuckerberg questions didn’t even know how the stuff worked.
Companies have a huge role to play in actually being proactive actors and working with folks like you all in the room to help think about what the policy is. What are we trying to solve for? What are the problems that we know are going to come from this? How do we at least do some regulation that provides some policies while always allowing for innovation and all of that? I just think that that’s going to become increasingly important given how slowly the legal structure, the legal system, works.
DAVID EAVES: So you mentioned the people in the room. I’m going to turn over to questions in a second, but as we’re preparing for that, the last question I have for you guys is, who gets to participate in this conversation? Public-interest law was really about lawyers becoming public-interest advocates. With technology it’s not as much of a controlled group, but does it need to be people who know how to code or who are the kind of hard-skilled technologists, or is this open to policy people, and, if so, what’s the skillset that you need to have in order to participate in this conversation?
LATANYA SWEENEY: I’ll jump in, and I’m sure everyone will have a view on that. So when I got to the FTC, it was one of the best jobs I ever had in my life. It was just awesome. I strongly recommend if you have a chance to work in senior government, you do it. I’m just saying. But one of the reasons it was awesome is being chief technology officer, nobody really knows what that means, and so there were not a lot of responsibilities, and at the senior level the whole organization is flat, and so we just did a ton of stuff. We just got a lot done, so much so that when I left the FTC, I came back to Harvard, and I decided that, you know what? This isn’t actually rocket science.
In the examples that I gave you, the Weld example and the discrimination in online ads, nobody’s ever going to call that theoretical physics, and so the question is, can we get our undergraduates to do this? So I taught a class called Data Science to Save the World, and I told the students that at the end of the class you’ll have to do a project, and if you do a good job I’ll take you down to D.C. and give you an audience with regulators, and I thought I would take two or three students down. I ended up taking 26 students down.
We set it up like a poster fair, and the regulators came in, and a typical regulator is usually a white guy in his 50s, a lawyer, and the students were standing by their posters, talking about technology that that regulator regulated, and he had no idea that the technology even existed, and it was really electric. It was scheduled for two hours and went on for four. The students had huge impact, both in the things that the regulators were able to do, and also in that they felt like they had had an impact on the world.
When we came back to Harvard, we started a new journal called Technology Science, and now it’s a publication with scholars from around the world, but those first papers are from those 26 students, and what’s interesting about those is they made Facebook fix something that no one had been able to get them to do for years. Facebook Messenger was leaking your GPS location as you went around, so one of the students built a plug-in that anybody could download, and you could track friends and friends of friends as they moved around, and within a week Facebook fixed it, yet it had been talked about in the industry for almost two years, and they had never fixed it.
With Airbnb, the students pointed out the difference in prices that different hosts were getting: if you were an Asian host, you got about 20% less than a white host for a comparable property and so forth, and Airbnb stepped up and changed their pricing model, so we have all now benefited from that. They showed price discrimination on Princeton Review, because you had to give a zip code, and there were different prices depending on where you lived, and Asian families were almost twice as likely to pay the higher price, and the list goes on and on. They just had huge impact in such a short amount of time.
I use that as an example. If you ask me now, where did they come from? Were they lawyers? Were they computer scientists? Economists? Statisticians? They turned out to be all of them. We opened the class to anyone. The students came in with the disciplinary skills they had learned as undergraduates, because it was an undergraduate course, and this was the work they did, and so this has been a reminder to me that the idea that public-interest technology should be limited to one discipline, one pre-defined discipline, is a really bad idea, because they’ll bring forward an economic analysis, a statistical analysis. Some of the best work we had was done by students from the history of science.
ASH CARTER: When I was a young physicist, I was sure that I knew it all, and I had scorn: how can anybody have an opinion on the Cold War and the rights and wrongs of the Cold War and nuclear weapons and Star Wars? They don’t know how a free-electron laser or an excimer laser works. Then I began to work on these things and I realized the wisdom of people who had other backgrounds, and how much better it could be if we all worked together, and what a twerp I was. Don’t think that people who’ve done something different from what you’ve done don’t know anything. They do, and if they’re people of goodwill and experience, they know a lot. Get in the game with people. Let’s not decide who’s most important.
DAVID EAVES: Let’s maybe go to some questions. There’s four mics out, two at the top and two at the bottom, so if you have a question, please step on up to the mic and you, ma’am, look like you may have a question.
Q: I do. Thank you, guys, so much for coming in for this amazing debate. This question is directed for Reid. So a lot of critics of this debate would say that public-interest technology could potentially slow down free markets and innovation, and so I’m curious in the cohorts that you’re part of in the tech space how that plays out and if you’ve seen kind of any leaders in the space both balancing ethical accountability and economic accountability.
REID HOFFMAN: Well, I think you do. For example, Brad Smith and the Microsoft group are saying, look, let’s try to make sure that we can identify what the issues are and come and talk about them and provide that information, and that’s while Microsoft is still going at full speed, just asking, how do we build products? I think the general recommendation is to take a stand, so it isn’t like, oh, we’re only going to talk about regulating if you catch us doing something bad, which is stupid and bad for society, bad for customers, bad for shareholders.
I think it’s incumbent on all of us to do that, and the key thing around operating at speed doesn’t mean—part of this book that I just published on Tuesday, Blitzscaling, has a chapter on responsible blitzscaling, and as you get larger as an organization, you invest more in these questions. What you identify is that there are some major risks: huge damage to an individual, moderate damage to a set of people, or damage to a system, things where you should be paying attention proactively. Are you causing damage to vulnerable groups? If you are, you don’t have to slow down that much to be actively thinking about that, asking those questions, and then cross-checking.
Now we need to do some work on figuring this out, because modern machine learning works on datasets: can we figure out whether there are biases implicit in the datasets, and what do we do about them? When I was thinking about that, that was part of the reason I worked with (5238) and Joi Ito to say, let’s get this AI Ethics and Governance Fund going to start building the knowledge and the techniques for evaluating that. I think the mistake is when people are absolutist on either side, including saying any regulation slows us down. It’s just not true.
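One concrete form that "checking a dataset for implicit bias" can take, before any model is trained, is the kind of comparison sketched below: look at how often each group in the training data receives the favorable label. The field names, records, and the 0.8 rule of thumb are illustrative assumptions, not a complete audit.

```python
# Hypothetical training records; field names and values are invented.
training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def positive_rate(rows, group):
    """Share of records in `group` that carry the favorable label."""
    group_rows = [r for r in rows if r["group"] == group]
    return sum(r["label"] for r in group_rows) / len(group_rows)

rate_a = positive_rate(training_data, "A")
rate_b = positive_rate(training_data, "B")

# Disparate-impact ratio: the lower group's rate divided by the higher group's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group A rate={rate_a:.2f}, group B rate={rate_b:.2f}, ratio={ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for human review before training,
# though any single number is only a starting point for a real audit.
```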
Q: Thank you for this great panel. This question is for Reid and for Latanya. It’s interesting, going back to that premise that firms have a responsibility to their customers, with a corporation like Facebook where you have 2 billion monthly users, is there any reasonable constituency of customers that can actually coalesce around an issue, especially one, say, like the genocide of the Rohingya and the role that the firm played in that, and, if not, what does it imply about governing firms of this scale?
LATANYA SWEENEY: I’ll go first. Facebook is a monopoly. If they weren’t a monopoly, if there were another Facebook that was built under European Union principles, they would have lost 50%, 75% of their customers, but there’s no place to go. We have laws about monopolies. I’m just saying, we have laws about monopolies. That’s all I can say.
VANITA GUPTA: Just on the Facebook example, since we’ve been so engaged with them, you think about what happened in 2016 and then the Cambridge Analytica scandal. Certainly, for the civil rights groups and voting rights groups and election security groups, there was a lot of anger about the denial, about the degree to which and the amount of time that they actually knew this had happened, and about the fact that they were not preparing for what’s about to happen in 25 days.
It’s not to say that we were an army at all on par with the size of Facebook. Facebook, to its credit, but very late, recognized, okay, we’ve got some responsibility here to actually a) acknowledge what happened, and b) take proactive action to try to prevent it going forward. For a long time, they were like, well, we just don’t want to do anything that would curb free speech, and it was just this notion that they had the same kind of First Amendment responsibility as the government does, without recognizing they’re private actors and there are things that they can do, but in this context—so we pushed them.
They’re doing a civil rights audit, which is an extensive thing, and we’ll see how much difference that makes, but we’re very engaged in that process, and they’ve now set up a war room; it’s late, but it’s focused on election security, with folks who are watching for this stuff, and we’re training their folks, because it’s a room. I will tell you, the New York Times had an article about this war room, I think, two weeks ago, and it was a room of 12 white people and one Indian person, none of whom had any civil rights or voting rights expertise to know all of the different and myriad ways that local governments disenfranchise people: moving poll sites, changing voter information, all the things that happen.
We’re doing that training. There are steps that they are taking to acknowledge the role that they play in really designing and changing the ways in which voters are targeted for misinformation and the like, but it’s been a slow process given what happened almost two years ago, and I just think we have to continue to do that. I don’t know when and if they ever get broken up, and until then they are an incredibly powerful force in our society, so we’re bringing pressure from all parts to bear, and the goal is that we have now finally set up relationships to really push them on making some serious changes. Are we moving efficiently on this? Not nearly as much as we would like, and there’s a lot of deep frustration, but we are at least now in dialogue on a bunch of these different issues.
LATANYA SWEENEY: I love Facebook in many ways. Like I told you, I’m a computer scientist by training, and I totally respect the civil rights organizations, but let’s just think about your answer for a second. You talked at least four times about how slow they were to move. They’re a technology company. They’re supposed to be moving like this.
VANITA GUPTA: That’s right. That’s absolutely right.
LATANYA SWEENEY: And then let’s look at those things that they did. So they start this political ads thing, but you can’t really track the ads, because they limit your ability to do so. It’s not even publicly available. I have to be on Facebook even to view it, so they get to see which ads it is I’m investigating, and you still can’t get a macroscopic view.
VANITA GUPTA: The level of frustration is deep.
LATANYA SWEENEY: And the issue didn’t go away. It wasn’t like the 2016 election happened and all of the propaganda and manipulation stopped. It didn’t stop. It’s been going on, so exactly what the war room is supposed to be looking for is already there. I don’t know of anything that got stopped.
VANITA GUPTA: The issue is that they’re not doing it on their own. What are the answers then, at that point? Because, frankly, there’s a lot that is known about what happened in 2016 and the role that Facebook played, and the alternative can’t be that we are so angry that we are just not participating in any kind of way, because we are afraid of what’s already happening on Facebook in the lead-up to the midterms over the next few weeks. Folks are deeply frustrated and angry about the pace, about the minutiae of the changes that have happened. The alternative is to just sit back and do nothing and watch people get disenfranchised, watch voters get targeted again, and watch Facebook basically wipe their hands of it.
ASH CARTER: I think there’s something in between. It’s not do nothing or to go out in the parking lot and take our lives, and I know you’re not saying that.
LATANYA SWEENEY: I didn’t say that.
ASH CARTER: Neither of you are saying that, but I don’t think these companies are going to do it by themselves.
VANITA GUPTA: I don’t either.
ASH CARTER: That’s just not in the cards. Nothing’s telling me that, and to the very excellent question up there, my experience over many decades of managing technology projects is that mission spurs innovation. If people have a meaning, they go faster, and just saying you’re going to disrupt things is kind of vague, and saying you’re going to live a certain kind of lifestyle is narcissistic. People get behind a mission. Making a better world is kind of a slogan, and that’s motivating, but then you see what that has come to mean in some places, and it’s not really that. It’s just messing around where you want to, or where the suits upstairs are telling you the advertising income is—I’m not trying to deliberately exaggerate.
Mission is important, and the mission of truly making the world better and making things really just and making public-interest technology, Reid’s idea, real, that’s a challenge. To me, that’s intellectually exciting. Go back to the carless driver. That excites me, because I don’t want to live in a society where things like that are ignored, and I think that’s pretty cool, and you have to ask yourself. I worked in defense, and the people who work in defense, whatever you think of national defense and so forth, man, that’s not a game. People took it deadly seriously and felt a huge sense of responsibility and were, within their frame, extremely innovative. I think you asked an excellent question, and I turn it around and say, mission motivates. Let’s get a real mission out there, one that matters to people.
LATANYA SWEENEY: Let me just plus-one this. Many years ago, right before 9/11, I was among a team of people who were approached by the military, because they foresaw an attack on U.S. soil, and they couldn’t get the FBI and others to take them seriously, and so they wanted to build a surveillance system, and the reason I was in the room is because they wanted to build the surveillance system with guarantees of privacy. We built that system.
In their scenario, they had anthrax coming out of a plane; they didn’t have a plane going into buildings, and we ended up with anthrax coming by mail, but after 9/11 all the privacy constraints went out the door. That vision before 9/11, though, was one where privacy was quite intact, so this idea of a mission statement is certainly exactly how technology works. We build to a goal.
DAVID EAVES: I want to wrap us up. Anybody have any closing statements they want to make, any closing comments?
LATANYA SWEENEY: Well, I’m not going to say anything other than thank you, since I’ve said quite a bit.
VANITA GUPTA: Hopefully, you all will come up with the answers.
ASH CARTER: Great admiration for all these people and gratitude for the audience. How’s that?
DAVID EAVES: For me, there’s a lot going on in the world, but it’s an exciting time. We have huge challenges, and I think that right now we’re trying to articulate a mission around how we enable technology to serve us as a society and not just as individuals or as companies. I’ve got that as an exciting mission. I know it brings me to the table all the time. I know it brings you to the table, and Reid, I know it brings you to the table. It brings all of us. That’s why we’re here. For the people who are here, I hope that you will grab that mission and feel like it’s important to you, and the most important thing about being here, I’m not sure people always understand how much privilege there really is in it.
To get these four people here: two of them didn’t have to come very far. Ash Carter, his office is upstairs, but a few people actually had to come far, and they won’t travel for just anything, so you get to be part of a conversation that involves people who are at the height of their careers thinking about these types of issues, and I just want you to feel the incredible responsibility that you need to take what you learn here and go do something with it. I hope that the flag of public-interest technology is one that some of you will want to pick up, but if it’s something else, I don’t really care; go pick up a flag and find that mission, and I hope you feel motivated by that. Come talk to these people right now before they go. Get more energy out of them, but then go do something about it. So thank you, everyone, for coming.
[end]