Thursday, April 27, 2017

Secrets of Weather Forecasting Models... Exposed! By Jesse Ferrell, Meteorologist/Community Director, AccuWeather.

1. Be wary of forecasts that are based on only one model. If nearly every model is on board with a solution, you can be more confident in your forecast. Look at all the models, preferably on one map.
2. Look at model trends. If the low pressure moved east with this run, what did it do the run before that? For the GFS, look at a couple of days of 00Z and 12Z runs for consistency. Avoid the 06Z and 18Z runs when 00Z or 12Z is available; in the U.S. these runs don't include data from the weather balloon network (balloons are launched only twice per day), so they can be radically different and are more likely to show bias.
3. Remember that accuracy generally decreases with increasing forecast time, with decreasing resolution, and for snowfall (because snow depth is roughly 10 times the liquid rainfall equivalent, so small errors are magnified). Remember this when you're looking at a coarse 15-day forecast of snowfall (read my blog on White Christmas inaccuracy).
4. Ensembles help mitigate inaccuracy. If possible, look at the Ensembles instead of just one model. These exist for the GFS, Canadian, WRF, NMM and SREF models. As I've explained before, Ensembles take the same model and run it several times with slightly different input. This gives a range of possibilities and lets you know how "confident" the model is in itself.
5. The models now have built-in output that does things meteorologists used to have to do in their heads, including precip type, snowfall amounts, and severe weather probability. Don't stay stuck on that 500 mb chart calculating thickness; save time and check out these newer products.
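The ensemble idea in point 4 above can be sketched in a few lines of Python. The "model" here is just a random walk standing in for real physics, and every number (the perturbation size, the day-to-day variability) is an invented placeholder, but it shows the mechanism: run the same model from slightly perturbed initial conditions and read the spread as confidence.

```python
import random

def toy_forecast(initial_temp, days, seed):
    """Stand-in for a real NWP model: a temperature that drifts by a
    random amount each day (a placeholder for actual model physics)."""
    rng = random.Random(seed)
    temp = initial_temp
    for _ in range(days):
        temp += rng.gauss(0.0, 1.5)  # invented daily change, deg C
    return temp

def ensemble_forecast(initial_temp, days, members=50, obs_error=0.5):
    """Run the same model many times from slightly perturbed initial
    conditions, then report the ensemble mean and spread."""
    runs = []
    for m in range(members):
        rng = random.Random(m)
        perturbed = initial_temp + rng.gauss(0.0, obs_error)
        runs.append(toy_forecast(perturbed, days, seed=1000 + m))
    mean = sum(runs) / len(runs)
    spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
    return mean, spread

# The spread grows with lead time: the ensemble is less "confident"
# about day 10 than about day 2.
for days in (2, 10):
    mean, spread = ensemble_forecast(15.0, days)
    print(f"day {days}: mean {mean:.1f} C, spread {spread:.1f} C")
```

A tight spread means the members agree; a wide spread means many outcomes are plausible, which is exactly the "confidence" reading described above.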
I am by no means an expert on forecast modelling, but one question I hear again and again is:

"Why are the models so inaccurate?" The models are inaccurate because of a number of factors. Here are some:

1. LACK OF COMPUTING POWER: Many people believe that the limits of computing power are one problem. When I was in college, my professors told me that we could probably run a model with near-100% accuracy for tomorrow's forecast, but it wouldn't finish running until the day after tomorrow. That might be an exaggeration, but it's hard to believe that our forecasts won't improve as computing power increases, though they could reach a point where they can't get much better without more initialization data (see below). Right now, because of that lack of computing power, we have to pick between high resolution, wide coverage area, and forecast length. For example, it takes about the same time to run the GFS worldwide out to Day 15 at poor resolution as it takes to run the 4-km WRF over half of the U.S. at extremely high resolution. Of course, if you believe in chaos theory, there are limits to what computing power will buy us.
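That trade-off can be put in rough numbers. The sketch below assumes the standard scaling rule that cost grows with grid points times timesteps, and that halving the grid spacing also halves the timestep; the grid spacings, domain sizes, and the 2-day regional forecast length are my illustrative assumptions, not actual NCEP configurations.

```python
def relative_cost(domain_km2, grid_km, forecast_hours):
    """Rough relative compute cost: horizontal grid points times
    the number of timesteps needed to cover the forecast period."""
    horizontal_points = domain_km2 / (grid_km ** 2)
    # Finer grids need proportionally shorter timesteps for stability,
    # so the timestep count scales with forecast length / grid spacing.
    timesteps = forecast_hours / grid_km
    return horizontal_points * timesteps

earth_area = 510e6   # km^2, whole globe
half_us_area = 4e6   # km^2, roughly half the contiguous U.S. (assumed)

# Assumed configurations: a coarse global run vs. a high-res regional run.
gfs_like = relative_cost(earth_area, grid_km=28, forecast_hours=15 * 24)
wrf_like = relative_cost(half_us_area, grid_km=4, forecast_hours=2 * 24)

print(f"global 28-km, 15-day run : {gfs_like:.3g}")
print(f"regional 4-km, 2-day run : {wrf_like:.3g}")
print(f"cost ratio               : {gfs_like / wrf_like:.2f}")
```

Even with made-up numbers, the two costs land within the same order of magnitude, which is the point of the GFS-versus-WRF comparison above.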

2. INITIALIZATION DATA: In the latter part of the 20th century, the models took in data only from a sparse network of upper-air stations, which send up balloons that transmit weather data back to earth and, when put together, form a 3-D picture of the current state of the atmosphere (called the "initialization"). The model then applies algorithms to predict how that atmosphere will behave. As you'll see below, a bad initialization can ruin the forecast; this is what's known in the computer industry as "GIGO" (Garbage In, Garbage Out). Sadly, the upper-air network of balloon releases hasn't improved much over the years, but models now also take surface mesonets, satellite data, airport observations, and more into account when putting together the initialization. One would assume that the more (accurate) data pumped in, the better the initialization, and therefore the better the forecast. But the more data you input, the slower the initialization becomes, and you've run back into #1.
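How badly can a small initialization error hurt? The classic demonstration is the Lorenz (1963) system, a drastically simplified toy "atmosphere" often used to illustrate chaos. This sketch uses a crude Euler integration with the textbook constants: two runs whose starting points differ by one part in a million agree at first, then become completely unrelated.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz 1963 system."""
    x, y, z = state
    return (
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    )

def gap_after(steps, error=1e-6):
    """Worst disagreement in x between two runs whose initializations
    differ by `error` (a stand-in for a small observation error)."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + error, 1.0, 1.0)
    worst = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        worst = max(worst, abs(a[0] - b[0]))
    return worst

print(f"gap after a short run: {gap_after(30):.2e}")    # still microscopic
print(f"gap after a long run:  {gap_after(5000):.2e}")  # as large as the system itself
```

The same sensitivity in a real model is why a bad initialization can wreck the back half of a forecast even when the early hours look fine.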

3. BUGS: Like any computer program, the models are subject to occasional bugs in their hundreds of thousands of lines of code, written by humans capable of typos and other mistakes. The major models are mature enough that this does not affect things over a wide area, but I've been a computer programmer long enough to know that there are probably dozens, if not hundreds, of bugs hidden within these algorithms, causing all sorts of small problems that may add up to inaccuracy.

4. MODEL BIAS: Each model seems to have a "bias" regarding certain weather systems or situations, just as a human might have a biased political view. This could be due to the way the algorithms were originally built (because of the people who built them, or the types of storms they wrote equations for), or to inaccuracies in #2 or #3.
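A known, persistent bias is at least correctable. The sketch below shows the simplest version of the idea behind Model Output Statistics (real MOS uses regression on many predictors, not a single mean offset): compare past forecasts with what actually happened at a station, and subtract the model's average error. All numbers are invented for illustration.

```python
# Hypothetical past high-temperature forecasts and observations (deg C)
# for one station, invented for this example.
past_forecasts = [31.0, 28.5, 30.0, 33.0, 29.5]
past_observed  = [29.0, 27.0, 28.5, 30.5, 28.0]

# Mean error: a positive value means the model runs warm here.
bias = sum(f - o for f, o in zip(past_forecasts, past_observed)) / len(past_forecasts)

def corrected(raw_forecast):
    """Subtract the model's average historical error at this station."""
    return raw_forecast - bias

print(f"estimated warm bias     : {bias:.1f} C")
print(f"raw 32.0 C -> corrected : {corrected(32.0):.1f} C")
```

This is essentially the mental adjustment an experienced forecaster makes ("this model always runs a couple of degrees warm here") written down as arithmetic.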


rohit aroskar said...

Model article


Sir, when will heat return to north India?

sset said...

Rajesh Sir, excellent information. In fact, these are the real challenges faced by data scientists while preparing models (in any domain: weather, medical, financial...). But naughty "Mother Nature" always has its own secrets, which brings tears to scientists when models fail.

Nilay Wankawala said...

Great insight into forecasting - clears misconceptions about forecasting.

Abizer Kachwala said...

Rajesh sir, a very informative article... Some people just give out forecasts which are vague and based only on models, not on any personal initiative. You have been giving accurate forecasts... (more accurately than IMD or others). It's better to forecast late but correctly, rather than forecasting in haste... The thing we can conclude is that Nature is always fully dynamic and very difficult to predict... Vagaries has proven to be a role model for other forecasters.

Nilay Wankawala said...

Credit D D news

2017 to be one of the hottest years: study 

Updated on: 29-04-2017 10:45 AM

2017 will be among the hottest years on record, say scientists who have developed a new method for predicting global mean temperature. 

Researchers at Yale University in the US found that weak El Nino activity from 1998 until 2013, rather than a pause in long-term global warming, was the root cause for slower rates of increased surface temperature. They also found that volcanic activity played only a minor role. 

"From a practical perspective, our method, when combined with El Nino prediction, allows us to predict next-year global mean temperature," said Alexey Fedorov, professor at Yale University.

"Accordingly, 2017 will remain among the hottest years of the observational record, perhaps just a notch colder than 2016 or 2015," Fedorov said. El Nino events contribute to year-to-year variations in global mean temperature by modulating the heat that is released from tropical oceans into the atmosphere, researchers said.

El Nino warms the atmosphere, while the cold phase of the phenomenon, La Nina, cools the atmosphere. The new model closely mirrors global mean surface temperature (GMST) changes since 1880, including the so-called global warming hiatus and the more recent temperature rise. 

"Our main conclusion is that global warming never went away, as one might imply from the term 'global warming hiatus,'" Fedorov said. "The warming can be masked by inter-annual and decadal natural climate variability, but then it comes back with a vengeance," said Fedorov. 

Multiple strong El Nino events occurred in the 1980s and 1990s. This was followed by much weaker El Nino activity, which lasted until 2014. "The recent rapid rise in global temperature mainly resulted from the prolonged 2014-2016 El Nino conditions in the tropics that reached an extreme magnitude in the winter of 2015," said Shineng Hu, first author of the study published in the journal Geophysical Research Letters. 

"The corresponding heat release into the atmosphere, together with the ongoing background global warming trend, made 2014, 2015, and 2016 the three consecutive warmest years of the instrumental record so far," Hu said.