
Introduction
Many Americans’ first introduction to epidemiological modeling occurred via the media in the first months of the COVID-19 pandemic; and, like so much else associated with the pandemic response, the experience was not salutary. Although many were familiar with the idea of projecting future trends—for example, the extrapolation of a company’s quarterly earnings growth into future years—here was something quite different: At a time when coronavirus cases in many places were increasing exponentially, some media and politicians showed projections—models—that had the pandemic rapidly peaking and then rapidly diminishing, while others predicted a series of cycles that might continue for several years. Models varied as to the effectiveness of “flattening the curve” to keep the demand for ICU beds and respirators manageable. Many graphs in the published media showed, as a likely forecast, the disappearance of the coronavirus within a few months.1
It was difficult for the public to understand how, with almost all indicators trending upward, scientists could predict a near-future peak followed by a decline. Two explanations were frequently given: “herd immunity,” which might occur naturally,2 and the virus’s reproduction number R0 (its “basic reproduction number,” or, under interventions, its effective counterpart), which might be brought below the critical value of 1.0 by social distancing measures, slowing and ultimately halting virus propagation.3
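The mechanics behind both explanations can be seen in the classic SIR (susceptible–infected–recovered) compartmental model, the simplest ancestor of the models the public saw. The sketch below is purely illustrative and is not drawn from the report or from any real COVID-19 estimates: all parameters are hypothetical round numbers, and the function name is our own. It shows why an epidemic with R0 above 1 can peak and then decline on its own as susceptibles are depleted (herd immunity), while one held below 1 never takes off.

```python
# Illustrative SIR model sketch (hypothetical parameters, not COVID-19 estimates).
# In SIR: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
# with R0 = beta/gamma. Infections grow only while R0 * S > 1, which is why
# cases peak and decline even before everyone is infected.

def sir_peak_and_size(r0, gamma=0.1, days=1000, dt=0.1):
    """Integrate the SIR equations with simple Euler steps; return the
    peak infected fraction and the final epidemic size (fraction ever infected)."""
    beta = r0 * gamma              # transmission rate implied by R0
    s, i, r = 0.999, 0.001, 0.0    # fractions of the population
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r

# R0 = 2.5: the epidemic surges, peaks, and declines as susceptibles deplete.
# R0 = 0.9 (e.g., after distancing measures): the outbreak fizzles out.
peak_hi, size_hi = sir_peak_and_size(2.5)
peak_lo, size_lo = sir_peak_and_size(0.9)
print(f"R0=2.5: peak infected {peak_hi:.1%}, final size {size_hi:.1%}")
print(f"R0=0.9: peak infected {peak_lo:.1%}, final size {size_lo:.1%}")
```

Even this toy model exhibits the counterintuitive behavior described above: with R0 = 2.5, infections turn downward once the susceptible fraction falls below 1/R0 (the herd-immunity threshold of roughly 60%), well before the whole population is infected.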
These were difficult ideas to communicate. The dependence of the models on input data—which were scarce and often unreliable early on—was often glossed over. The models themselves were incomplete. Recognition of such real effects as asymptomatic transmission, aerosol transmission, and super-spreaders came only gradually. Scientific understanding was changing with time, communication channels to the public were cluttered with noise (not least from the Executive Branch), and the character and implications of continuing uncertainties were sometimes lost in translation.
The consequences of these shortcomings—especially the lack of clarity around genuine scientific uncertainty—are not small. The public's respect for science has been undermined in ways whose full damage has yet to manifest itself: people may, for example, be reluctant to believe that a properly developed and tested future vaccine is safe and effective, and may thus refuse vaccination. It is also disheartening to see respected public health authorities backing away from epidemiological modeling as a useful tool in a time of pandemic crisis (even if their statements are narrowly accurate),4 since, as we will explain, modeling is one of the very few tools available for guiding policy responses.
The purpose of this report is to give our views on (i) what went right and what went wrong; (ii) what weaknesses in the field of epidemiological modeling, and in its federal support, made its sub-optimal showing in a time of crisis almost preordained; and (iii) what needs to change so that, in future crises, modelers can be more effective in providing accurate models and estimates of model uncertainty, communicating more effectively with the public, and connecting more effectively to policy makers facing difficult real-time decisions. This is not a case of simply "send more money." We will show that epidemiological modeling is, in a sense, an "orphan" field,5 and we will make specific recommendations for changes inside federal agencies and for new kinds of initiatives to support academic research on modeling.
This is not about finger-pointing. Rather, the coronavirus pandemic has exposed this deficiency (among many others) in how public health in the United States is organized, prioritized, and funded. If the nation takes necessary steps now, it will be better off during future pandemics.
1 The Economist, "Briefing", February 29, 2020, at https://www.economist.com/briefing/2020/02/29/covid-19-is-now-in-50-countries-and-things-will-get-worse; The New York Times, "Flattening the Coronavirus Curve", March 27, 2020, at https://www.nytimes.com/article/flatten-curve-coronavirus.html.
2 The Guardian, "Herd immunity: will the UK's coronavirus strategy work?", March 13, 2020, at https://www.theguardian.com/world/2020/mar/13/herd-immunity-will-the-uks-coronavirus-strategy-work.
3 Jon Cohen, "Scientists are racing to model the next moves of a coronavirus that's still hard to predict," Science (February 7, 2020) at https://www.sciencemag.org/news/2020/02/scientists-are-racing-model-next-moves-coronavirus-thats-still-hard-predict.
4 Dr. Anthony Fauci on Fox News: "Models are only as good as the assumptions that you put into the model…. When real data comes in, then data in my mind always trumps any model." April 10, 2020, at https://www.foxnews.com/media/fauci-coronavirus-mitigation-programs-models-china-italy.
5 Caitlin Rivers et al., "Modernizing and Expanding Outbreak Science to Support Better Decision Making During Public Health Crises: Lessons for COVID-19 and Beyond," The Johns Hopkins Center for Health Security Report (March 24, 2020) at https://www.centerforhealthsecurity.org/our-work/publications/modernizing-and-expanding-outbreak-science-to-support-better-decision-making-during-public-health-crises. These authors propose measures that, while different from ours in detail, are much in the same spirit.
OPCAST Ad-Hoc Pandemic Response Group
The OPCAST Ad-Hoc Pandemic Response Group is a subgroup of former members of President Obama's Council of Advisors on Science and Technology ("OPCAST"). Members are: John P. Holdren, Christine Cassel, Christopher Chyba, Susan L. Graham, Eric S. Lander, Richard C. Levin, Ed Penhoet, William Press, Maxine Savitz, and Harold Varmus.
The members of the subgroup serve as individuals working on their own time, not as representatives of their institutions. The effort has no sponsors and no budget.
Ad Hoc Pandemic-Response Subgroup of Former Members of President Obama’s Council of Advisors on Science and Technology. Epidemiological Modeling Needs New, Coherent, Federal Support for the Post-COVID-19 Era. opcast.org, September 28, 2020.