Clarity from Chaos: How Climate Models Could Be Better than We Think

Chaos theory encompasses large swathes of mathematics and physics, but it was Edward Lorenz who immortalized it in popular culture. His now-famous 1972 presentation, which summarized his decade-long work in the field, focused on a single provocative question: Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? Although he declined to answer the question definitively, his “butterfly effect” changed the way climatologists and meteorologists view causality in atmospheric science.

This early investigation argued that in the long-term, weather patterns are highly susceptible to the smallest of perturbations. Using early computer simulations, Lorenz found that the atmosphere can be highly unstable, which allows similar weather systems to evolve totally differently. In this sense, he argued, a butterfly’s wingbeat* could mean the difference between fair sailing and a hurricane.

That isn’t to say that we can eliminate natural disasters with a particularly potent pesticide. Although this point is often omitted from popular representations of the butterfly effect, Lorenz was quick to point out that if something as small as a butterfly can affect weather patterns halfway across the globe, it must be impossible to break down all of the individual components that combine to create the perfect storm. Besides, he argued, if a butterfly can cause a tornado, it is equally likely that it could prevent one. “I am proposing that over the years minuscule disturbances neither increase nor decrease the frequency of occurrence of various weather events,” his abstract states. “The most that they may do is to modify the sequence in which these events occur.”

Since Lorenz’s research took the field by storm, climate scientists have had to relinquish the traditional scientific love of causality. Weather and climate patterns are technically deterministic: if a perfect computer simulation had perfectly precise measurements of the millions of factors affecting the atmosphere, it could hypothetically predict future behavior exactly. In practice, however, that task is impossible, and even a small error in early measurements, or a slight misjudgment of, say, the number of butterflies passing through, can easily compound. It is for this reason that meteorologists often phrase weather predictions in terms of probabilities, acknowledging this uncertainty in the initial conditions.
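For the curious, this compounding of tiny errors is easy to reproduce. The sketch below integrates the Lorenz-63 convection equations, the three-variable system behind Lorenz’s original computer experiments; the step size, run length, and perturbation are illustrative choices, not values from his work:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(state, steps):
    """Record the whole path so early and late behavior can be compared."""
    path = [state]
    for _ in range(steps):
        state = lorenz_step(state)
        path.append(state)
    return path

# Two starting points differing by one part in a billion: the "butterfly".
a = trajectory((1.0, 1.0, 1.0), 3000)
b = trajectory((1.0, 1.0, 1.0 + 1e-9), 3000)

# Early on the two runs are indistinguishable; by the end of the run,
# they bear no systematic resemblance to each other.
print(abs(a[100][0] - b[100][0]))
print(max(abs(p[0] - q[0]) for p, q in zip(a[2000:], b[2000:])))
```

Euler integration is crude, but any standard ODE solver shows the same qualitative divergence.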

Spaghetti models are a familiar sight during hurricane season. Because no single model can correctly account for all of the factors in predicting a storm’s path, meteorologists often rely on a composite of many different models to determine the most likely outcomes. Although many of the models tend to agree, indicating a high degree of certainty in their predicted outcomes, it is not uncommon for individual projections to differ greatly.

In the short term—weather patterns of less than two weeks—these uncertainties are manageable. However, when climate scientists want to investigate seasonal or even longer climate projections, the resulting chaos makes it extremely difficult to generate high-quality predictions.

To counteract this problem, scientists generally run a series of simulations, each one using slightly different initial temperatures, wind speeds, and other parameters, then recombine the results into a map of probable outcomes. These “ensemble predictions” tend to lose their predictive ability on longer time-scales—think of a spaghetti model with strands heading in totally different directions—but they do capture the uncertain nature of the science.
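The recipe can be caricatured in a few lines. Here the chaotic logistic map stands in for a full atmospheric model (an illustrative assumption, as are the perturbation size and member count), but the procedure is the same: perturb the initial state, run every member forward, and read the forecast and its uncertainty off the ensemble:

```python
import random
import statistics

def model(x, steps=50, r=3.9):
    """Iterate the chaotic logistic map as a stand-in forecast model."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

random.seed(0)
analysis = 0.5  # best estimate of the current state of the "atmosphere"

# Each ensemble member starts from a slightly perturbed initial state.
members = [model(analysis + random.gauss(0.0, 1e-4)) for _ in range(200)]

# The ensemble mean is the forecast; the spread measures its uncertainty.
print("forecast:", statistics.mean(members))
print("spread:  ", statistics.stdev(members))
```

Even with perturbations of one part in ten thousand, the spread after fifty steps is enormous, which is exactly the loss of long-range predictability described above.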

Or do they? Last year, Adam Scaife and Doug Smith of the UK’s Met Office published a review article in npj Climate and Atmospheric Science highlighting what they consider a “paradox” in ensemble predictions. The crux of the matter is this: for many models, the ensemble prediction provides a poor measure of the likelihood of any single simulated outcome. On its own, that might simply suggest that chaos rules and there is little predictability, except for the surprising fact that the ensemble produces much more accurate predictions of the single real-world outcome. In other words, these models are better at predicting the real world than they are at predicting themselves!

Figure 1. This graph shows how ensemble predictions are better at predicting the real-world North Atlantic Oscillation (in black) than simulated ones (blue). The horizontal axis indicates the number of individual simulations contributing to each ensemble prediction, while the vertical axis measures the correlation between the ensemble-average prediction and the year-to-year variations in the North Atlantic Oscillation (real or simulated). Image credit: Scaife and Smith

This may seem like a non-issue; if the models are better than we think, then what’s the problem? But it means that scientists are prone to underestimate the value of their models in providing reliable forecasts. Even more importantly, Scaife and Smith showed that some phenomena previously considered unpredictable, including the fluctuations in atmospheric pressure known as the North Atlantic Oscillation, can actually be forecast relatively well given careful handling of the data.

Scientists aren’t sure what causes this paradox, but Scaife thinks that it may have a relatively simple interpretation. Individual climate model forecasts generally encapsulate the variability inherent in observed systems, he says, but most of that variability is due to noise in the data—regions where the results are unpredictable and unreliable. “This means that model forecasts each contain a smaller proportion of predictable variability than is found in the real world,” he says.

The large amount of noise in simulations means that models are, generally speaking, less predictable than the real world. Yet when an average is taken over many, many individual simulations—as is the case for an ensemble prediction—the noise effects tend to cancel themselves out, leaving only the predictable ‘signal’. As a single simulation contains lots of noise, it has a high probability of disagreeing with the ensemble average prediction—whereas the real-world outcome agrees better as it contains less noise. Hence the paradoxical result that the model predicts the real world better than itself.

There is no easy solution to this paradox, and some scientists aren’t even sure if it truly exists. However, this study does provide intriguing evidence that climate and weather patterns could be a lot more predictable than we thought.

Maybe our days of butterfly-blaming are finally at an end.

–Eleanor Hook

Eleanor Hook is a freelance science writer based in Chapel Hill, NC. She contributes regularly to Physics Buzz, where she writes about everything from dead fish to lasers in space.

*Lorenz actually adds the caveat that since the influence of a butterfly is confined to a very small volume, its effect is likely to spiral into a bigger one only in turbulent air.
