During the COVID-19 outbreak we have often seen predictions of the epidemic created using models. These influence important decisions about disease control measures. But we hear that predictions made by models change, so how much can we trust them? In this talk, I discuss what models are, their limitations, and why their predictions have uncertainty. Having a better understanding of models will help us be better consumers of the information they provide.
With training in both mathematics and biology, Alun L. Lloyd, Ph.D. is a mathematical biologist who primarily works on modeling the spread of infectious diseases. He was born and raised in Wales, which is part of the United Kingdom. As an undergraduate, he studied mathematics at Trinity College, Cambridge University, before moving to the University of Oxford to do a doctorate in Zoology. He came to the US in 1999, first to the Institute for Advanced Study in Princeton, NJ, whose most famous faculty member was Albert Einstein. Since 2003 he has been at NC State, where he is now Drexel Professor of Mathematics and Director of the Biomathematics Graduate Program. His work takes him to many beautiful locations around the world, including Australia, South Africa, Peru, and many places in Europe and the US. When Dr. Lloyd is not thinking about math or biology, he loves to hike, particularly in the mountains and the desert of Utah and Arizona.
The Modeling Process
Modeling is usually seen as an iterative process. Observations of the real world lead to some idea of the processes that govern the system. These governing processes are translated into a mathematical model of the system. For example, Newton's Laws of Motion might be used to describe the motion of a body in a physics problem. The model is then used to make predictions. Those predictions are compared against data: observations of what actually happens in the real world. Discrepancies between observed and predicted behavior reflect errors or inaccuracies in the model. Sometimes this tells us that we didn't understand the system as well as we thought; for instance, we might learn that some particular process is more important than we initially believed. We update the model in light of our improved understanding. New predictions are generated from the updated model, and those are again tested against reality. This continues until the model is believed to be a sufficiently accurate representation of the system.
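This predict-compare-update loop can be sketched in a few lines of code. The sketch below is purely an illustration, not one of Dr. Lloyd's models: the exponential-growth model, the made-up case counts, and the crude grid-search update rule are all assumptions chosen for simplicity.

```python
# A minimal sketch of the iterative modeling loop described above, using a
# deliberately simple epidemic model: cases(t) = c0 * exp(r * t).
# All numbers here are hypothetical.
import math

observed = [10, 13, 18, 23, 31, 40, 53]   # hypothetical daily case counts

def predict(c0, r, days):
    """Model: exponential growth with rate r from initial count c0."""
    return [c0 * math.exp(r * t) for t in range(days)]

def mean_squared_error(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

# Iterate: predict, compare against data, update the model (here, just the
# growth rate r, via a crude grid search), and repeat until the discrepancy
# between model and data stops shrinking.
c0, r = observed[0], 0.10                  # initial guess for the model
for _ in range(20):
    error = mean_squared_error(predict(c0, r, len(observed)), observed)
    # Try small adjustments to r; keep whichever best matches the data.
    r = min([r - 0.01, r, r + 0.01],
            key=lambda rc: mean_squared_error(predict(c0, rc, len(observed)),
                                              observed))
    new_error = mean_squared_error(predict(c0, r, len(observed)), observed)
    if abs(error - new_error) < 1e-9:      # model no longer improving
        break

print(f"fitted growth rate r = {r:.2f}, "
      f"next-day prediction = {predict(c0, r, len(observed) + 1)[-1]:.0f} cases")
```

Real epidemic models are far richer than this, but the shape of the process is the same: the model's predictions are only trusted once repeated rounds of comparison and correction stop turning up meaningful discrepancies.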
Uncertainties in Forecasts: The Cone of Uncertainty for Hurricane Forecasts
Ensemble Forecasts: Do Different Models Agree or Disagree?
We see predictions from 17 different models, each over a 5-day period (time labeled in hours: 24, 48, 72, 96, 120). In this example, the different models all give fairly similar predictions over the first couple of days, but significant differences emerge over the longer time periods.
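This pattern of early agreement and later divergence can be shown with a toy ensemble. In this hypothetical sketch, the 17 "models" are just copies of a simple growth model x(t) = exp(r * t) with slightly different rates r; none of this reflects how real hurricane models differ, but it shows why small differences between models matter little at short lead times and a lot at long ones.

```python
# A toy ensemble: 17 hypothetical models that differ only in a growth rate r.
# The spread between members stays small at short lead times and widens
# dramatically at longer ones.
import math

ensemble_rates = [0.020 + 0.001 * i for i in range(17)]  # 17 hypothetical models
horizons = [24, 48, 72, 96, 120]                         # forecast lead times (hours)

for t in horizons:
    predictions = [math.exp(r * t) for r in ensemble_rates]
    spread = max(predictions) - min(predictions)
    print(f"{t:4d} h: min={min(predictions):7.2f} "
          f"max={max(predictions):7.2f} spread={spread:7.2f}")
```

Running this, the spread at 24 hours is under 1 unit, while at 120 hours it is more than 60: the ensemble members barely disagree early on, then fan out, much like the model tracks in the hurricane example.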
Improvements in Models
"10 Years Later, Did We Learn Anything from Hurricane Katrina?" talks about how our ability to predict hurricanes has improved over the years. The article makes an interesting comparison between the cone of uncertainty that was given at the time (2005) and one generated using the models available ten years later (2015). The smaller cone of uncertainty from the 2015 models is a testament to how much our understanding of the forces shaping hurricanes improved over that decade.