A Look Into Weather Models - And What's Ahead

Posted: May 8, 2018, 10:14 am by ccastellano

The science of weather is not an easy one to grasp, as the atmosphere is a complex fluid of rising, sinking, and circulating air. Of course, there are old-school methods of forecasting: analyzing surface and upper-air observations and extrapolating from that data, using textbook principles of physics, thermodynamics, and yes... calculus, to figure out approximately where certain air masses will travel. That gets time-consuming and complicated. Thankfully, in today’s need-to-know-now world, we have computer models.

These simulations produce global and regional forecasts from atmospheric data ingested from radiosondes (weather balloons) and countless automated and human observations taken at the surface. Those numbers are plugged into complex mathematical equations that serve as predictive tools for atmospheric motion. Models take a 4-D look at the problem, attempting to resolve how the atmosphere is behaving through time and how it will evolve into the future.
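To give a feel for what “plugging the numbers into equations and stepping them forward” looks like, here is a toy sketch in Python. It is nothing like the actual code inside the GFS or ECMWF (the grid size, wind speed, and time step are invented for illustration); it simply carries a blob of “weather” downwind with a basic finite-difference scheme, a miniature version of what real models do in three dimensions plus time.

```python
# Toy 1-D "forecast": advect an initial blob downwind with an upwind
# finite-difference scheme. Illustrative only -- not operational model code.

N = 50          # number of grid points (made up for this sketch)
dx = 10.0       # grid spacing in km (made up)
dt = 0.25       # time step in hours (made up)
wind = 20.0     # constant wind speed in km/h (made up)

# Initial condition: a bump of "weather" near the left edge of the domain.
state = [1.0 if 5 <= i <= 10 else 0.0 for i in range(N)]

for step in range(48):  # 48 steps x 0.25 h = a 12-hour "forecast"
    new_state = state[:]
    for i in range(1, N):
        # Each point is nudged toward whatever sits upstream of it.
        new_state[i] = state[i] - wind * dt / dx * (state[i] - state[i - 1])
    state = new_state

peak = max(range(N), key=lambda i: state[i])
print(f"After 12 hours, the bump's peak sits near grid point {peak}")
```

Real models run loops like this for wind, temperature, moisture, and pressure across millions of grid cells at once, which is where the supercomputers come in.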

Since this is such a complicated process that could make anyone’s head spin, every model requires high-powered supercomputers to churn through the arithmetic and come up with a reasonably accurate solution. Each model also has a different formulation, and each varies in the computing power available to run it.

If you watch weather reports on TV or get your meteorological content online, you hear a lot about these bastions of weather forecasting. The most common models we typically hear about are the European ECMWF (European Centre for Medium-Range Weather Forecasts, or “Euro” for short), the American Global Forecast System (GFS) and North American Mesoscale model (NAM), and even Canada’s Canadian Meteorological Centre model (CMC). All of them serve a purpose to forecasters, working alongside a meteorologist’s pattern recognition and meteorological knowledge to help make predictions.

As that knowledge grows and meteorologists get tested by every situation (because let’s face it, when is weather ever consistent?), so do the computer simulations that predict the weather. Like human forecasters, models go through processes of verification and diagnostics to see where they can improve. It is with this in mind that every one of these guidance systems gets a little refresher or upgrade now and then to make these electronic prognosticators a bit more accurate.

ECMWF

The European model, known by the acronym ECMWF and dubbed the “Euro,” is widely known and statistically verified as one of the world’s top-performing mainstream models. Given how well it executes on its forecasts, one would think that the European’s way of looking into the atmospheric future needs very little maintenance. However, even it needs a bit of T.L.C. every once in a while. The latest improvements were run experimentally from the end of 2016 into early 2017 before being put into full operational use in June 2017.

One of the most important improvements was to how observational data is assimilated, achieved by significantly increasing resolution. One of the key ways in which models crunch this data is by dividing the atmosphere into a grid. The grid cells are essentially boxes of air, and the smaller the boxes, the higher the resolution. Since a larger volume of air carries more uncertainty about its condition, larger grid spacing often results in a less accurate prediction. With smaller volumes to measure, conditions can be resolved more accurately, leading to a more precise forecast.
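A quick back-of-the-envelope calculation shows why finer grids demand so much more computer power. The spacings below are placeholders for illustration, not the ECMWF’s actual configuration; the point is that halving the horizontal spacing roughly quadruples the number of grid columns on every model level.

```python
# Rough illustration of how grid spacing drives computational cost.
# The spacings are illustrative placeholders, not real ECMWF settings.

EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth in km^2

def columns_per_level(spacing_km: float) -> float:
    """Approximate number of grid columns for a given horizontal spacing."""
    return EARTH_SURFACE_KM2 / (spacing_km ** 2)

for spacing in (25, 13, 9):  # hypothetical coarse-to-fine spacings, in km
    print(f"{spacing:>2} km spacing -> ~{columns_per_level(spacing):,.0f} columns per level")

# And a model usually needs a shorter time step at finer spacing,
# so the true cost grows even faster than the column count alone.
```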

GFS Going to FV3

The American flagship model, the GFS, is getting an upgrade of its own in an attempt to keep up with the elite predictive systems. One of the key features of the upgraded American model is right in line with the ECMWF’s: a change in grid spacing and resolution. To make such an upgrade worthwhile, there will also be a major increase in computing power, something the old system increasingly lacked as the years went by. Soon, perhaps as early as 2019, you may hear the American model referred to exclusively as the FV3, a much shorter acronym for its new Finite-Volume Cubed-Sphere dynamical core.


The above image shows the new FV3 GFS model (known as a "parallel") versus the currently operational GFS. Time will tell, but the hope is that the FV3 will be a major improvement on the current operational GFS - Courtesy of Tropical Tidbits

It serves a dire need for what has been widely seen as a model falling behind in the race to get it right. It is argued that even the new FV3 will struggle to match the ECMWF in accuracy, since it will still fall short of its foreign counterpart’s resolution and computing power, but any effort to improve the model is welcome regardless. Tightening up the current grid spacing should reduce the noise issues that often undermine its forecasts. Much like a “butterfly effect,” any serious issues at model initialization grow into major errors as the forecast hours go on. One of the hopes for the new upgrade is to mitigate the effects of convection (the noise of thunderstorms) at initialization, which has led to the unrealistic scenarios the GFS has often offered in the past. Reduce the noise, increase the clarity, and improve the plausibility of the predicted weather.
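That butterfly effect can be demonstrated with the classic Lorenz-63 toy system, a staple of chaos theory rather than anything running inside the GFS. The two simulations below start nearly identically, differing by one part in a million, and still end up in very different places:

```python
# Toy demonstration of sensitivity to initial conditions (Lorenz, 1963).
# Illustrative chaotic system only -- not part of any operational model.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one step with simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # "control" starting state
b = (1.000001, 1.0, 1.0)     # same state, perturbed by one part in a million

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: difference in x = {abs(a[0] - b[0]):.4f}")

# The microscopic initial difference grows by orders of magnitude, which is
# why noisy or unrealistic initialization can wreck a forecast days later.
```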

In the recent past, the North American Mesoscale model, widely known as the NAM, had upgrades of its own to improve resolution in 2017. In that refresh, the high-resolution NAM went from 4-kilometer to 3-kilometer grid spacing in the hope of better predicting mesoscale features such as thunderstorms. Some improvement has been seen, but so far that is a matter of forecaster impressions rather than hard verification data, as it will take time to fully understand how it is really doing. The same will be true when it eventually comes time to judge the FV3.

Honorable Mentions

Of course, the GFS and ECMWF are not the only tools a meteorologist will use. There are several other models and model platforms to analyze, especially when the major players do not agree. The NAM and Canada’s CMC model are a couple of commonly used ones. Another perceived as a fairly strong forecasting tool is the United Kingdom’s UKMET model, which is often referenced by the National Weather Service in its forecast discussion products. Believe it or not, though, there are a few others that you may not have heard of.

There is also the Short Range Ensemble Forecast (SREF for short). This is one that forecasters keep in their back pockets in a “break glass in case of emergency” sense. Unlike the others in the arsenal, it is not a single deterministic model run. Rather, it is a conglomeration of several short-range model runs averaged together to form a mean. While it can be a bit shaky in a precision sense, its strength is probabilistic forecasting, with a special nod to its ability to pick out the ingredients that define the strength of both wintertime and summertime convection.
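For a rough sense of what an ensemble buys you, the sketch below averages a handful of hypothetical member forecasts into a mean and turns the spread into a probability. The snowfall numbers are invented purely for illustration and have nothing to do with actual SREF output:

```python
# Minimal sketch of ensemble-style averaging, with made-up numbers.
# Each value is one hypothetical member's 24-hour snowfall forecast, in inches.
members = [2.0, 3.5, 4.0, 1.0, 5.5, 3.0, 2.5, 4.5]

ensemble_mean = sum(members) / len(members)

# Probabilistic view: what fraction of members reach a warning-level threshold?
threshold = 4.0
prob = sum(m >= threshold for m in members) / len(members)

print(f"Ensemble mean snowfall: {ensemble_mean:.1f} in")
print(f"Chance of {threshold:.0f} in or more: {prob:.0%}")
```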

Others that have been coming on of late include the RGEM (Canada’s short-range, higher-resolution model and cousin to the GEM / CMC), which is especially useful in winter forecasting. A personal shout-out also goes to Germany’s ICON model, which showed some instances of increased reliability over the 2017-2018 winter. Countless other countries run their own computerized forecasts as well, such as Japan’s JMA, Météo-France’s AROME, Korea’s KMA, and Switzerland’s COSMO model.

Private Industries Getting in on the Race

While weather models are normally government enterprises, that does not mean private industry can’t take a swing at it, especially tech companies with a wealth of resources at their disposal. Companies such as IBM (2016) and Panasonic (2013) have only just begun their ventures into the weather forecasting business and have developed their own weather models. While the success of these players is truly yet to be known, it does buck the trend of government-controlled data, perhaps signaling a path toward making forecast data a tradable commodity. Either way, it only adds to the wealth of resources available to us going forward to make weather prediction as accurate as possible.