When some epidemiological models’ predictions failed early in the COVID-19 pandemic, several economists took it upon themselves to use the tools of economics to better understand and improve the models, and to build connections between the two fields that can serve future outbreaks.
In early 2020, as COVID-19 was declared a pandemic and conflicting epidemiological models of the virus were circulating, Christopher Avery, a public policy professor at Harvard Kennedy School, reached out to some colleagues and friends with an idea.
Avery, who teaches analytic courses in microeconomics and statistics, was interested in better understanding those models. “Maybe we were thinking about doing research, but even just to try to understand the modeling and the nature of these predictions to try to get a sense for what we were really facing,” he said.
Eventually, though, Avery and his colleagues did publish research: first as a working paper with the National Bureau of Economic Research, then a follow-up in the fall edition of the Journal of Economic Perspectives.
“There’s a lot of connection between mathematical economics and mathematical biology,” Avery said. “Both kind of, broadly speaking, thinking about the world as a dynamic system and then trying to tell you the properties of the dynamic system if you put in place better or worse underlying conditions.”
The goal of the paper, Avery said, is to help people understand what the epidemiology models do and don’t do, as well as what’s known and unknown about COVID-19.
“One of the things that’s still really not known is the death rate for folks, or even the rate of serious impact for folks who contract COVID,” he said. “And we still don’t know that because we haven’t had rigorous enough testing.”
Although epidemiologists and researchers can count deaths, hospitalizations and intensive care unit admissions, which helps them gauge the severity of the virus, Avery said the total number of COVID-19 cases is still unknown, which is “a big limitation.”
Additionally, the public came to view early modeling of the COVID-19 pandemic as unreliable. The Institute for Health Metrics and Evaluation at the University of Washington, for example, predicted that the coronavirus would die out in the U.S. by early June, a consequence of erroneous assumptions in its curve-fitting approach. And the model developed by London’s Imperial College sparked headlines for its extreme mortality predictions, which were based on the assumption that governments wouldn’t mandate any mitigation efforts, even though much of that report discussed the potential impact of government policies.
Adam Clark, an assistant professor at the University of Graz and a co-author of the paper, said that one of the main takeaways for him is that epidemiological models often behave counterintuitively because of the way they calculate the number of susceptible, infected and recovered people.
Unlike linear state-space models, which project an observed pattern forward in time and predict how a particular state is likely to vary in response to different factors, epidemiological models are dynamic: they track how people move between susceptible, infected and recovered groups, derive the disease’s growth rate from the interaction of those groups, and calibrate that rate against the empirical growth observed in actual case counts.
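The compartmental structure Clark describes can be sketched in a few lines of code. The simulation below is a minimal discrete-time SIR model; the parameter values (a transmission rate of 0.3 and a recovery rate of 0.2 per day, starting from a tiny infected fraction) are illustrative assumptions, not figures from the paper.

```python
# Minimal discrete-time SIR sketch (illustrative parameters, not from the paper).
# S, I, R are fractions of the population; beta is the transmission rate,
# gamma the recovery rate, so R0 = beta / gamma.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the SIR system one time step of length dt (days)."""
    new_infections = beta * s * i * dt   # depends on BOTH s and i: the nonlinearity
    new_recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(days, beta=0.3, gamma=0.2, i0=1e-4):
    """Run the model for `days` steps and return the (S, I, R) path."""
    s, i, r = 1.0 - i0, i0, 0.0
    path = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        path.append((s, i, r))
    return path

path = simulate(200)
peak_day = max(range(len(path)), key=lambda t: path[t][1])
print(f"infections peak around day {peak_day}, "
      f"at {path[peak_day][1]:.1%} of the population")
```

The nonlinearity is the point: new infections depend on the product of susceptible and infected fractions, so the same transmission rate produces accelerating growth early on and decline once the susceptible pool is depleted, which is why linear intuition fails.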
“I’ve never felt like I could trust my intuition on how these models are going to work,” Clark said. “You really just have to sit down and do the math to figure out what the expected outcome is going to be, and often your sort of gut feeling for how the system should respond based on our own physical intuitions about the world will be wrong in very meaningful ways.”
Similarly, the exponential growth in epidemiological models can be counterintuitive and hard to fit, Clark said, and small errors in parameter estimates can have a large effect.
For example, early in the pandemic, experts estimated that a single infected person might cause between 1.5 and five new infections on average, assuming everyone else in the population is still susceptible, a quantity epidemiologists call the basic reproduction number, or R0. With that number set at 1.5, it takes approximately a week to double the number of infections; with it set at five, the doubling time is less than a day.
“So these tiny little wiggles in these parameter values can have a huge impact on what you’re predicting,” Clark said.
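Clark’s point can be checked with a few lines of arithmetic. In a continuous-time SIR model, the early growth rate is (R0 - 1) * gamma, where R0 is the average number of new infections per case (the 1.5-to-5 range above) and gamma is the recovery rate; the five-day infectious period below is an illustrative assumption, not a figure from the article.

```python
import math

def doubling_time(r0, infectious_period_days=5.0):
    """Early-epidemic doubling time in a continuous-time SIR model.

    Growth rate r = (R0 - 1) * gamma, with gamma = 1 / infectious period;
    doubling time = ln(2) / r.  The 5-day infectious period is an
    illustrative assumption, not a value from the article.
    """
    gamma = 1.0 / infectious_period_days
    growth_rate = (r0 - 1.0) * gamma
    return math.log(2) / growth_rate

print(f"R0 = 1.5: doubles every {doubling_time(1.5):.1f} days")  # roughly a week
print(f"R0 = 5.0: doubles every {doubling_time(5.0):.1f} days")  # under a day
```

Under that assumed infectious period, the two ends of the estimated range reproduce the contrast described above: roughly a week per doubling at 1.5, and under a day at five.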
Additionally, the number of infections can double many times without people really noticing at first, making it harder to react from a policy standpoint.
“If you start with one infected person, you go to two, four, eight, 16, 32,” Clark said. “And so you can double ten, fifteen times without really noticing much of anything. But then all of a sudden, as you’re hitting 1,000, 2,000, 4,000, 8,000, you start noticing these doubling events. And so even without any change in the underlying process … you go from what looks like very slow growth to suddenly exceedingly rapid growth.”
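Clark’s arithmetic can be made concrete: starting from a single case and doubling repeatedly, the counts stay inconspicuous for many doublings and then explode.

```python
# Repeated doubling from one case: small for a long time, then suddenly large.
cases = 1
for doubling in range(1, 16):
    cases *= 2
    if doubling in (5, 10, 15):
        print(f"after {doubling:2d} doublings: {cases:,} cases")
# after  5 doublings: 32 cases
# after 10 doublings: 1,024 cases
# after 15 doublings: 32,768 cases
```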
Clark said he hopes the research serves as a reference guide and introduction for economists who may not be familiar with epidemiological models, helping them avoid common pitfalls of dynamical-systems modeling, and that it persuades policymakers and the public to trust epidemiological models again.
“These early, relatively inaccurate modeling initiatives … sort of cut the legs out from modeling for the COVID epidemic,” he said. “Because these models have been wrong so frequently in the past, it’s very hard to come up with a new model and present it to the public.”
The study “An Economist’s Guide to Epidemiology Models of Infectious Disease,” published in the Fall 2020 edition of the Journal of Economic Perspectives, was authored by Christopher Avery, Harvard Kennedy School; William Bossert, Harvard John A. Paulson School of Engineering and Applied Sciences; Adam Clark, University of Graz; Glenn Ellison, Massachusetts Institute of Technology; and Sara Fisher Ellison, MIT.