Paper

Shaping Disruptive Technological Change for Public Good

August 2018

This essay is adapted from the Ernest May Lecture the author gave to the Aspen Strategy Group in August 2018.

 

I am very pleased to give the Ernest May lecture for two reasons. The first is that I urged Joe Nye, Condi Rice, and Nick Burns to make technology the theme of an Aspen Strategy Group gathering, even recognizing the equally momentous international and political currents of interest to this group today. 

The second reason is that Ernie May was a colleague and a friend—and an historian. As you’ll see, I come at this subject from the perspective of my own origins as a scientist. But it’s unrealistic to expect leaders in the real world to use their own knowledge of science very much in policymaking—or to use economics, political science, or even philosophy. Instead the dominant mental methodology of real policymakers is historical reasoning. Ernie emphasized as much in his seminal book with Dick Neustadt, Thinking in Time: The Uses of History for Decision Makers. He would, therefore, have approved of a lecture that begins with an effort to use history to illuminate the seemingly very ahistorical topic of disruptive technological change.

First let me say that I use “disruptive” in both its good and bad connotations. Disruptive scientific and technological progress is not to me inherently good or inherently evil. But its arc is for us to shape. Technology’s progress is furthermore in my judgment unstoppable. But it is quite incorrect that it unfolds inexorably according to its own internal logic and the laws of nature. My experience and observation is that this is true only directionally. Which specific technologies develop most quickly is heavily shaped by the mission that motivates and rewards the innovators: improving health, selling advertising or some other service, cheap energy, education, or national defense, for example. Making “disruption” more good than bad is the topic, as I understand it, of this year’s Aspen Strategy Group. 

A little personal history leads into what I want to say about technological history. I began my career in the field of subatomic physics, and the elders in that field were all of the Manhattan Project generation or the very immediate aftermath. Mentors of mine were Sidney Drell, Edward Teller, Hans Bethe, Richard Garwin, and others. It was their example, in fact, that made me interested in the consequences of science for public purpose. The culture that they inculcated in those of my generation stressed that along with the great ability to make change came great responsibility. It was that culture of science at the time of my upbringing that ultimately drew me into the service of national defense, and finally to Secretary of Defense. After 37 years of continuous service of one kind or another with the Department of Defense during the administrations of presidents of both parties, I was thinking last year about what to do next. I decided that aligning technology with public purpose and solving some of the dilemmas I’ll describe in this lecture was the most consequential issue of our time, second only to protecting ourselves and creating a better world for our children, and that that’s how I would spend my time.

That Greatest Generation was proud to have created a “disruptive” technology: nuclear weapons. It had ended World War II and deterred a third world war through almost 50 years of East-West standoff. But the flipside of that coin was an existential danger to humanity. Recognizing both bad and good, those same scientists—coming at it from various ideological and political directions—accordingly devoted themselves in the years following to developing arms control and nonproliferation as new fields of innovative endeavor; to missile defense and civil defense; to making strong contributions to the intelligence systems that were needed to monitor arms control agreements; and to reactor safety to make the accompanying revolution in nuclear power safer. This is the culture that I knew.

The generation of leaders that came shortly thereafter was very, very different. The tech culture, including what is most associated with Silicon Valley but is actually pervasive in digital tech (though not the rest of tech), grew out of the hippie and counterculture movements. This is a very different kind of social impulse. It is inherently distrustful of government and believes that public good and public purpose will somehow emerge through a popular and supposedly freer mechanism.

I won’t pretend to understand or share this ethos, but it is still the prevailing one among not only the founders, but many of the employees of the tech companies today. It shows in some of the troubling dilemmas that I’ll turn to later.  

Another characteristic of that long-gone era of my upbringing is that in those days most technology of consequence arose in America, and most of that from or within the walls of government. Neither is true anymore. Technology today is commercial and global. That creates an entirely different context for the pursuit of public purpose. (I prefer to use the term public purpose instead of public policy, because public policy suggests actions of government. In matters of technology today, as in the atomic age, solutions require unified effort of the tech community and government.)  

A consequence is that some of the moral guidance to steer us to a good technological future will need to come directly from entrepreneurs and companies. The right decisions will not be made without strong input from technologists themselves. That is what originally convinced me to work on defense problems. I realized that many of the key issues during the Cold War had a strong technological component, and they could not be addressed well without the input of people like me. Big issues and a chance to see your training make a difference are a powerful attractive combination to a young technologist.

This being the case, I’m happy to say that today’s generation is very different from the second generation I described. I see it every day at Harvard and MIT and did likewise during my time at Stanford. There’s a strong demand for instruction and guidance on how to contribute to public purpose. Many of these young people are not looking at going into government, but they are looking to do something more consequential than get people to click on ads.

I discovered that I was able to tap into this same reservoir as Secretary of Defense. I always said that as Secretary of Defense I was "the Secretary of Defense of today" and also "the Secretary of tomorrow." Secretary of today meant standing strong against Russia and China, deterring and defending ourselves, allies, and friends from North Korea and Iran, and destroying ISIS and other terrorists in Iraq, Syria, Afghanistan, and around the world. Secretary of tomorrow meant ensuring we had the people, strategies, and technologies to continue making ours the world's finest fighting force. I wasn't sure I could succeed when I embarked on my so-called outreach to the tech community, beginning by founding a Pentagon outpost in Silicon Valley, the Defense Innovation Unit Experimental (DIUx), which we subsequently replicated in both Boston and Austin. I would have established additional outposts in more tech hubs were I still Secretary of Defense, and I hope Jim Mattis does. Despite the Snowden hangover, I found that there was a hunger among most of the tech company employees to be part of something bigger than themselves and their firms. I found great uptake through DIUx, and also through the Defense Digital Service, which allowed technologists to come and go right in the halls of the Pentagon with their hoodies on and aviator glasses on their foreheads. I am particularly proud of the Defense Innovation Board I instituted, which included senior leaders like Eric Schmidt (to whom I'm grateful for serving as chair), Jeff Bezos, Reid Hoffman, Jen Pahlka, and others. All this reflected my principle that technologists and the tech industry were essential to achieving the important public purpose of national security.

This outreach to the wider technology community was an essential complement to the big funding impulse we gave to the DoD research and development budget in the so-called “third offset,” and the huge strategic reorientation we were making from 15 years of counterterrorism and counterinsurgency to the big-ticket, full-spectrum threats associated with Russia and China. At some $80 billion per year, DoD’s R&D effort is more than twice Google’s, Microsoft’s, and Apple’s R&D combined. 

The other defining experience for me was the wars. I was Under Secretary of Defense for Acquisition, Technology, and Logistics during the big Afghan surge of 2010, and I found that alongside the F-35 Joint Strike Fighter, the KC-46 aerial refueling tanker, and the other big traditional programs I had to manage, my daily preoccupation was making sure that the troops had everything they needed to win and protect themselves. That meant new kinds of Mine-Resistant Ambush Protected (MRAP) vehicles for Afghanistan, persistent surveillance like aerostats, all kinds of techniques to counter improvised explosive devices (IEDs), and things that you may not associate with the "weapons czar": buying dogs, ballistic underwear, and so on. Nothing was too small, nothing too inconsequential. Every day the wars were Job 1, and I make no apologies for that. Whatever you think of the wars, when the kids are out there, you have to be all in.

It wasn't enough during war to carry out the usual ten-year defense program. You had to do a ten-week program, even a ten-day program. I began thinking about how we needed to change, not only to serve the wars that are, but the wars that might be at any moment. We need to make sure that we don't have regrets if we get in a dustup with Iran, for example—we need to give them a bloody nose and make sure they don't give us a bloody nose in the first few days. We don't want to look back and say, "I wish I had done something, I wish we had done something that we could have done, but that we didn't do because we were on the old Cold War tempo."

 

* * *

 

I described the post-World War II technology cultures in the U.S. Going back even further in history, the great transition that everyone, especially economists, loves to study is the farm-to-factory migration. This is often described as a success story, and, in retrospect, it surely must be so regarded. Hundreds of millions of people fundamentally changed their way of life while the means of production moved from individual artisanship to collective mechanized effort, and, for the most part, their lives were much better in the end. At the same time, it looks better in the rearview mirror than it must have at the time. Don't forget that the farm-to-factory movement took decades to sort out. It is not clear that the pace of change today will give us that kind of time to make momentous technologically driven adjustments.

The farm-to-factory migration was also pretty rocky if you think about the rise of communism, the formation of urban ghettos, and other speed bumps that were far from minor, to say nothing of the fact that the transition failed miserably in some countries, notably Russia.

Above all, the success was not at all automatic—far from the work of the invisible hand. In the United States and Britain, there emerged from the bleakest period of the Industrial Revolution the Progressive and Chartist movements, which by introducing regulation of commerce, foods, and medicines made large-scale, widespread, anonymous non-artisanal production and distribution of goods acceptable, since it was no longer possible for a person to know who made something they consumed or where it was coming from. The list goes on: child labor laws, compulsory public education, boards of public health, the Sherman Antitrust Act of 1890, muckraking journalism, labor unions, and so on. In short, the farm-to-factory transition was paralleled and made a success in this country not by laws of technology or economics alone, but by a host of non-technical innovations that set the conditions for overall public good. 

One way to pose the week’s topics is therefore: How do we set the conditions for today’s disruptive changes to redound to the overall good of humankind? How might the tech communities contribute to solving some of the big dilemmas of today’s looming disruptive change in the three big categories: digital, biotech, and jobs and training? 

 

* * *

 

There are so many digital dilemmas: offensive and defensive cyber, big data, augmented reality, quantum computing, the Internet of Things, and others, but let me touch on two: social media and artificial intelligence.

Social media are wonderful enablers of commerce and community, but also of darkness, hatred, lies, and isolation; invasion of privacy; even attack. I, therefore, had much higher hopes of the Facebook hearings before Congress, featuring CEO Mark Zuckerberg. Hearings are a way of calling the public’s attention to the tech dilemmas and paving the road to a solution. In the case of the Facebook hearings, there was no need to call attention: 91% of Americans, according to a recent Pew survey, feel that they’ve lost control of how their personal data is collected and used, and two in three think current privacy laws are not sufficient.

In terms of leading to solutions, however, the hearings laid an egg. They missed entirely what was a historic opportunity to devise what everybody seemed to acknowledge is needed: a mix of self-regulation by tech companies and informed regulation by government. Zuckerberg, for his part, gave an account of his company’s ethical conduct that sufficed for one news cycle, but will not, I fear, suffice at all for the great arc of history. As for the quality of the congressional questioning, well, all I can say is that I wish members had been as poorly prepared to question me on war and peace in the scores of testimonies I gave as they were when asking Facebook about the public duties of tech companies! But make no mistake, we need to land this plane. 

Ernie May might have advised us to look back a little bit upon history's analogous dilemmas. How might the members have better prepared themselves for the Zuckerberg hearing? It's not that this issue, or any of the ones I'm discussing in this lecture, has become particularly partisan. Who would have thought there was another form of gridlock in Washington?

One of my early Washington jobs was for an organization called the Office of Technology Assessment (OTA). It was the fourth congressional support agency, alongside the Library of Congress's Congressional Research Service, the General Accounting Office, and the Congressional Budget Office. OTA did high-quality work for members on precisely these kinds of subjects. It would have prepared a report in consultation with Facebook, other media companies, tech experts, lawyers, lobbyists, and so on, and tried to put together options for that combination of self-regulation and regulation that was the underlying consensus solution in the hearings. OTA was eliminated during the Gingrich revolution as part of the effort to downsize government. The other three congressional support agencies were big and powerful and could defend themselves, but little OTA got the axe.

I also remember the Senate Arms Control Observer Group from when I worked for Paul Nitze, who was President Reagan's chief arms control advisor. Its members not only met with the administration regularly, but also had a panel of experts drawn from that first atomic generation of scientists, who advised them on technical matters such as verification of arms control agreements, nuclear effects and civil defense, antiballistic missile systems, and survivable basing modes. Then there has long been a committee of scientists who advise the select intelligence committees on the super-secret optical, radar, signal, infrared, and other satellite programs used for intelligence purposes.

So, in short, Ernie might say that there is in living memory the idea of bipartisan outreach to, and reliance on, tech expertise. 

As we think, in the manner of Ernie May, about precursors that may be models for new institutions to join tech and public purpose today, another one that comes to mind is the National Security Telecommunications Advisory Committee (NSTAC). In my very first job in the Pentagon in 1981, one of my office's duties was to lead Cap Weinberger's battle against the breakup of AT&T. In retrospect, we resembled the last Japanese soldier on Saipan in World War II, still charging about the jungle unaware that the emperor had surrendered. NSTAC was established, in part, at Weinberger's insistence once the AT&T breakup became inevitable to make sure that the deregulated system continued to serve the public interest.

The breakup of AT&T was, in fact, an episode in a long history of communication and information system regulation. This history begins with the U.S. Postal Service, a natural communications monopoly that the government managed. When the telegraph came along, the U.S. government decided against absorbing it into the postal service as most European governments did. There followed a period of vigorous competition, which ended in a Western Union monopoly—as it had to for a natural monopoly—and was regulated accordingly. 

Western Union remained a regulated monopoly, but its fear of more regulation probably was a factor in discouraging it from getting into telephone communications, leaving that field to AT&T, which also drifted toward natural monopoly. Some of the same concerns may have applied to AT&T's decision not to move into radio but instead to content itself with carrying radio programs over its long-haul lines. When NBC Radio became too big, it was forced to divest one of its networks, which became ABC. The Nixon administration, in turn, gradually relaxed strictures on cable to challenge the major broadcast networks.

So, there is an abundant history of antitrust or other government regulation applied to natural monopolies of information and communication. Ernie might remind us that this history could have inspired some kind of productive output from the Zuckerberg hearings, such as regulation based loosely on antitrust to handle Facebook’s monopolization of its form of social media. Some economists argue that since Facebook and Google are free, no economic harm can be shown to the consumer by the government, and, therefore, the government has no antitrust authority. This interpretation would be alien to both Senator Sherman, of the Sherman Antitrust Act, and Justices Brandeis and Douglas, who wrote the early opinions. They repeatedly stressed that the government’s interest was in the general public good and was not confined to price gouging.

Here is Justice Douglas: “The philosophy and the command of the Sherman Act is founded on a theory of hostility to the concentration in private hands of power so great that only a government of the people should have it.”

Here is Justice Brandeis: “…the maintenance of competition does not necessarily involve destructive and unrestricted competition, any more than the maintenance of liberty implies license or anarchy.” 

So with a little of Ernie May's history in mind, and returning to technology, join me in what Einstein called a thought experiment. What might different algorithmic approaches to social media curation and delivery look like, and how might they reflect the public good? You can imagine a number of them. One algorithm would organize digital platform content by maximizing advertising and platform revenue. This is essentially the prevailing model. A second would reflect individual choice, offering what you seem to want based on your past patterns. There is some of this in Facebook and other feeds in order to promote the ends of the first algorithm.

A third algorithm would stress the crowd: what everybody else seems to be watching, what is "trending." A fourth might be profit-based, but share profit with the owner of the data in another form of subscription-free service. A fifth channel you might dial would have content curated by professional journalists—the elusive Campbell Brown at Facebook. Another possibility is multiple demonopolized competing platforms that use whatever model they choose. My concern about a simple breakup of Facebook into smaller Baby Bell-type offspring is that they will only end up competing to represent the lowest common denominator, and we will have a worse outcome than we do now.
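For the technically minded, a minimal sketch may make the distinction concrete. The code below is purely illustrative and describes no actual platform: the Post fields and scoring rules are invented placeholders for the kinds of signals each curation model would privilege.

```python
# Purely illustrative: hypothetical scoring rules for four of the curation
# models sketched above. Field names and signals are invented for this example.
from dataclasses import dataclass

@dataclass
class Post:
    ad_revenue_per_view: float   # expected platform revenue if shown
    personal_affinity: float     # fit with this user's past behavior, 0 to 1
    global_popularity: float     # how widely the item is being viewed, 0 to 1
    editorial_quality: float     # score assigned by professional curators, 0 to 1

# Model 1: organize content to maximize advertising and platform revenue.
def rank_by_revenue(posts):
    return sorted(posts, key=lambda p: p.ad_revenue_per_view, reverse=True)

# Model 2: individual choice, offering what the user seems to want.
def rank_by_personal_affinity(posts):
    return sorted(posts, key=lambda p: p.personal_affinity, reverse=True)

# Model 3: the crowd, i.e., what is "trending."
def rank_by_trending(posts):
    return sorted(posts, key=lambda p: p.global_popularity, reverse=True)

# Model 5: content curated by professional journalists.
def rank_by_curation(posts):
    return sorted(posts, key=lambda p: p.editorial_quality, reverse=True)
```

The point of the sketch is simply that the ranking rule is a design choice, and a consumer-facing "channel" could disclose which rule it applies.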

The best world to me would be one where there are multiple channels representing these different algorithmic models, and the consumer could simply switch from channel to channel and shop, compare, and pay accordingly, with the content of all subject to some rules written by a public commission that went beyond simple strictures on terrorism, child pornography, and the like. When I watched I Love Lucy as a child and Lucy and Ricky prepared to go to sleep at night, they got into twin beds separated by a nightstand with a lamp on it. That was regarded as appropriate to protect decency and children. 

So thinking historically and conceptually, there are a number of possibilities and mixes that might have emerged from the Zuckerberg hearings. But nothing did. Ernie May would probably have regarded the Facebook hearings as one of those potentially seminal historical moments that was wasted. 

Turning to artificial intelligence: In my last year as Secretary of Defense, the question I would get most often in a wide-open press availability would be about “autonomous weapons.” I would remind people that way back in 2013 or 2014, I had promulgated a directive on that subject that governed the conduct of the Defense Department as it developed the technology of artificial intelligence. It stated that for every system capable of executing or assisting the use of lethal force, there must be a human being making the decision. That is, there would be no literal autonomy. So that is how things stand on the books.

I was motivated to do that by imagining myself standing in front of the press the morning after, let us say, an airstrike that had mistakenly taken the lives of women and children. Imagine further that I tried to assign responsibility by saying, “The machine made a mistake.” I would be crucified. So also will be the designer of a driverless vehicle that kills a little old man and cannot explain. Judges simply aren’t going to accept anything other than an accounting of human responsibility.

I believe that accountability and the transparency to promote it are the key issues for the designers of artificial intelligence systems today. Now there are some who will tell you that the AI system they have developed simply does not enable the tracing of the method of decision that underlies an algorithm’s recommendation. In almost four decades of working on technology projects, I’ve heard that many times from engineers about the difficulty of incorporating some desired feature or another that they haven’t bothered to include in their design. My retort to these scientists is: if you want your algorithm to be adopted, you had better make it transparently accountable. If this requires an adjustment in design, which I can well imagine it does, then make that adjustment.
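To illustrate what "transparently accountable" might mean in practice, here is a minimal sketch under my own assumptions; the names, fields, and scoring below are hypothetical and are not drawn from any DoD or commercial system. The idea is that every recommendation carries a record of the factors behind it, and the decision itself is logged against a named human.

```python
# Illustrative sketch only: a recommendation that carries its own audit trail,
# with a named human recorded as accountable for acting on it. All names and
# scoring here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Recommendation:
    action: str
    confidence: float
    contributing_factors: List[str] = field(default_factory=list)  # audit trail
    approved_by: str = ""  # the human accountable for the decision

def recommend(sensor_summary: dict) -> Recommendation:
    """Toy model: derive a recommendation and record why it was made."""
    factors = [f"{name}={value}" for name, value in sensor_summary.items()]
    confidence = min(1.0, sum(sensor_summary.values()) / len(sensor_summary))
    return Recommendation(action="flag_for_review",
                          confidence=confidence,
                          contributing_factors=factors)

# The decision itself is taken, and logged, by a named human being.
rec = recommend({"signal_strength": 0.8, "pattern_match": 0.6})
rec.approved_by = "duty_officer_on_watch"   # hypothetical role name
print(rec)
```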

Before I leave the subject of artificial intelligence, I need to say something about the Google employees who resisted working on artificial intelligence for the U.S. Department of Defense. I imagine what I would say to them in a Google town hall or if I were the Google leadership. I'd tell them they should think about and reconsider their decision. First of all, they should understand that the U.S. Defense Department is governed by the directive I have described. Our nation takes its values to the battlefield. But second, more fundamentally, and following everything I've said so far in this lecture, who better than they at Google, who are immersed in this technology, to steer the Pentagon in the right direction? Shouldn't they be like the atomic scientists and help find solutions rather than sitting on the sidelines? And last, I'd ask them whether they're comfortable working for the People's Liberation Army. Because Google works in and for China. China is a Communist dictatorship, and there is no boundary there. There is no getting around that working in China is working indirectly for the People's Liberation Army or that all of their work is available to the PLA.

I've talked a bit about social media curation and about artificial intelligence, but I do not have time in this lecture for the elephant in the room that is China. I would only say this: We have never been in a sustained economic relationship with a Communist-controlled economy. The Soviet Union was such an economy, but our approach to it was not to trade with it at all and to hermetically seal it off from the Western tech world. But we are in an intense trade relationship with China. Because it is a Communist dictatorship, China is able to bring to bear on U.S. companies and our trading partners a combination of political, military, and economic tools that a government such as ours cannot match. This puts us at an inherent competitive disadvantage. Though it is not a matter for a Secretary of Defense, I felt that international economists had failed utterly to provide the U.S. Government a playbook for dealing with this situation. The approach preferred over the past decades was rules-based free trade, destined to fail with Communist China and in any event abandoned by the U.S. itself when it walked away from the Trans-Pacific Partnership. What is left is a spotty trade "war" and some important but partial limits on Chinese investment in "sensitive" technologies. One additional thing I will say, as a former Secretary of Defense, is that it is important to play offense and not just defense. Major national investments in areas like artificial intelligence and public-private partnerships (like the manufacturing innovation institutes founded by the Pentagon during my time) are needed.

Let me now turn to the biological sciences. This is not an area in which I have any particular expertise. But I have attempted to learn about it, and my jobs in the Pentagon gave me plenty of opportunity to be acquainted with some parts of it. I’ve learned a lot also from Eric Lander, John Deutch, George Church, and others. 

 

* * *

 

It seems likely that a biosciences revolution is looming that will be at least as consequential in coming decades as has been the revolution in the information sciences of the past several decades. The resulting “disruptive” change will be enormous, for both good and bad. The first reason is the sheer number of avenues of innovative change that are being paved by quite recent breakthroughs in biological science. The second factor is a new investment climate that will follow. Let me begin with the remarkable variety of avenues of innovation. 

One avenue, of course, is Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) and the possibility of editing even the human genome. If this passes from laboratory to clinical stages and from animal models to human models—and above all from therapeutics for serious illnesses to physical and cognitive enhancements of human potential—then the choices in front of us about where to draw the line are very consequential indeed.

In addition to the obvious moral issues associated with binding one's children and their descendants to decisions a parent makes when the offspring cannot conceivably provide any sort of consent, and the moral issues involved in tampering with life itself, there is a serious distributive issue: people of means could purchase a new kind of unequal opportunity that makes any previous form of discrimination pale in comparison.

A different innovative avenue is the growing capacity to create new kinds of designer cells. This has gotten a lot of attention, including by the Defense Department, in the matter of novel pathogens with high lethality and flu-like ability to spread. But it extends to organisms and tissues custom-made for a wide range of purposes, which may be more or less benign. 

Yet another category consists of biosensors, and another of biomanufacturers. Biosensors may revolutionize the ability to turn environmental signals into processable and storable data, in the way we have become well accustomed to with the revolution in electro-optical and other electronic and electromechanical transducers. These sensors could potentially reliably detect even quite subtle and seemingly intangible factors like mood and behavior. Biomanufacturers are custom organisms that can synthesize novel proteins or biological materials at very large scale, thereby making compounds previously available only in trace amounts available in bulk.

There is, next, quite a literature on self-defending cells. These are animal or plant cells provided with new or enhanced ability to defend themselves. These self-defenses could, in turn, be part of the long-sought solutions to cancer, viral infection, or antibiotic resistance. 

Additionally, there is the avenue of bio-inspired engineering. Those of you familiar with robotics know that many robots are modeled on human or animal locomotion, using legs, tails, or cilia. The wheel is an interesting invention in that it has no clear biological precursor, but most of the locomotion chosen by robotics engineers is modeled on nature. So also are biologically inspired exoskeletons and other structural features, and the cognitive and behavioral models used in artificial intelligence.

Finally, with all this innovation of all these kinds, goes another encompassing avenue of disruptive potential: the union of the information revolution and the biological revolution. It is becoming quite possible, for example, to do a “big data” collection of a cell’s DNA, RNA, and protein inventory, not just on a sample basis from a single organism, but cell-by-cell within the organism. 
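To make that union concrete, here is a minimal, purely illustrative sketch of what cell-by-cell molecular data looks like computationally: a large matrix with one row per cell and one column per measured gene or protein. The sizes, values, and library use are my own assumptions, not a description of any particular instrument or study.

```python
# Illustrative sketch only: a toy cell-by-gene count matrix of the kind
# produced by single-cell profiling. All sizes and values are invented.
import numpy as np

rng = np.random.default_rng(0)

# A small demo matrix: one row per cell, one column per measured gene.
n_cells, n_genes = 1_000, 500
counts = rng.poisson(lam=0.1, size=(n_cells, n_genes))  # most entries are zero

print(counts.shape)                     # (1000, 500)
print(f"nonzero fraction: {np.count_nonzero(counts) / counts.size:.2%}")

# At realistic scale (say 100,000 cells by 20,000 genes), the same layout
# holds billions of entries, which is why this is a "big data" problem.
```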

The sheer number and profundity of these bioscience avenues of innovation is the first factor in the coming revolution. 

The second factor is who will be able to use all these avenues. 

The disruptive avenues of biotech I noted have until now been laboratory techniques requiring PhD-level talent and institutional-scale investment and instrumentation. They are becoming platforms on top of which scientifically minor, but still socially significant, innovation can build. It is already possible to send off a DNA sample and get an entire sequence returned overnight by email. This took Eric Lander and his colleagues a decade and billions of dollars to do just a few years ago. Someone who knows nothing about the underlying science can sit atop this same platform and think only about novel applications. Many digital unicorns were founded by entrepreneurs using the powerful computational platform on their laptops whose underlying digital technology they neither created, appreciably added to, nor even understood.

In Cambridge, Mass., where I work, which is probably the leading biosciences hub in the country, there are a number of bio incubators, including some in old warehouses, where kids with an innovative idea can set up a little shop and at no expense make use of laboratory equipment that cost millions of dollars to buy. This would have been completely out of the reach of even a pretty well-funded startup a few years ago.

What this second, non-scientific factor shaping the biosciences revolution means is that the scale and the cost of meaningful innovation will go way down, and the speed of socially (while perhaps not scientifically) consequential innovation will go way up. Sound like digital? 

For many purposes, the multibillion-dollar, decade-long investment cycle of traditional pharma will be supplemented by something much shorter that can be fueled by fast money—venture capital money. There will shortly be innovators and investors sitting atop the platform, exploiting those new bioscience avenues I described, who will not necessarily have the culture or the values of research scientists, or have been brought up with the norms and regulations that come with, for example, National Institutes of Health and Food and Drug Administration funding and approvals, with their rules concerning use of human subjects, protection of personal information, and so on.

What I'm describing here is a climate that looks very much like the early digital era. While a lot of good came out of this combination, much of it from people who were essentially amateurs at digital technology itself, we cannot say in hindsight that it came out at all the way we might have hoped.

 

* * *

 

The third tech-driven revolution of our time is in the future of work and training. I only have time to say this about what is a gargantuan challenge: unless our fellow citizens can see that in all this disruptive change there is a path for them and their children to the American dream or its equivalent, we will not have cohesive societies. 

There are a lot of smart kids at MIT and around Boston working on the driverless car. LIDAR (light detection and ranging), which along with passive imagery and radar provides inputs to the steering algorithms, was in fact invented for the military at MIT's Lincoln Laboratory. I always say to these smart kids, "Save a little bit of your innovative energy for the following challenge: How about the carless driver? What is to become of the tens of thousands of truck, taxi, and car drivers whose jobs are disrupted?"

For these drivers, this unstoppable transition will be like the farm-to-factory transition. We owe it to them to create a Progressive Era of supporting conditions so it all comes out well. 
