Enhancing Weather Information with Probability Forecasts

The following statement has been updated and replaced.  It is here for historical purposes and does not represent statements of the AMS that are “in force” at this time.

An Information Statement of the American Meteorological Society
(Adopted by AMS Council on 12 May 2008) Bull. Amer. Meteor. Soc., 89

Summary
This information statement describes the current state of the science of probabilistic weather forecasting.  Ideally, all weather forecasts would include information that accurately quantifies their uncertainty.  However, with a number of exceptions, most weather forecasts currently do not contain such information.  The widespread dissemination of this probabilistic information would likely yield substantial economic and social benefits, because users could make better decisions by explicitly accounting for the uncertainty in weather forecasts.

Producing weather forecasts in probabilistic form for many weather parameters will require improvements in, or the implementation of, techniques for quantifying uncertainty, such as ensemble forecasting.   Forecasters will need to be trained not only in how to use probabilistic information in their final forecasts, but also in the diverse requirements of those who use probability forecasts.   In addition, users will require information on how to interpret and use probabilistic forecast information; these needs must be met if the communication of uncertainty in weather forecasts is to be effective.

Current Situation
In the last several decades, weather forecasts over most time and space scales have improved dramatically.  This success is due to a combination of more accurate and higher-resolution satellite and Earth-observation data and improved numerical weather prediction (NWP) forecast models that exploit today’s more powerful computers to represent the atmosphere more realistically.   Concurrently, major sectors of the world economy (including agriculture, energy, transportation, and water supply) have made increasing use of weather forecasts, and have become more sophisticated in integrating these improved forecasts into short- and long-term business planning and decision making.  Yet weather forecasts, by their very nature, involve some uncertainty.  For the most part, users still do not have ready access to information about this uncertainty, and the information that is available is often not effectively communicated to them.

Probability Forecasts
A probability forecast includes a numerical expression of uncertainty about the quantity or event being forecast.  Ideally, all elements (temperature, wind, precipitation, etc.) of a weather forecast would include information that accurately quantifies the inherent uncertainty.  Surveys have consistently indicated that users desire information about uncertainty or confidence of weather forecasts.  The widespread dissemination and effective communication of forecast uncertainty information is likely to yield substantial economic and social benefits, because users can make decisions that explicitly account for this uncertainty.

While much progress has been made in developing methods to create probabilistic forecasts, currently only a small fraction of the elements of weather, hydrologic, and climate forecasts are expressed probabilistically.  Forecasts of the probability of precipitation occurrence have been made for several decades and are well accepted, even if not always properly interpreted.  More recently, in the United States, the NWS2 has issued probability forecasts for a variety of weather phenomena, ranging from daily outlooks of tornado hazard and wind-speed fields in tropical storms to weekly and seasonal outlooks for temperature and precipitation.

Explanation of Probabilities
Although everyone encounters probabilities in many aspects of everyday life, the concept of probability can still be difficult to master.  There is not a single, accepted way to interpret probabilities or to effectively communicate them.  Probability forecasts can be produced by several different methods.  For example, a probability forecast of a weather event can be a forecaster's judgment of the likelihood that the event will occur.  Probability forecasts can also be produced directly from NWP models or from statistical analyses of the output from these models.3

Uncertainty can be expressed in ways other than probabilistic terms, such as odds or frequencies. But studies by social scientists have indicated repeatedly that expressing uncertainty in qualitative terms, such as “likely,” creates unnecessary ambiguity, with one user interpreting the same term as reflecting a higher probability than would another user.

Brief explanations of key probability concepts, in the context of probability forecasting, are provided below.5

  • Frequentist: the forecast reflects the relative frequency of occurrence of a weather event in past circumstances analogous to the situation being addressed.
  • Subjective: the forecast reflects the degree of certainty a forecaster has that a weather event will occur.
  • Climatological: the forecast reflects the long-term relative frequency of occurrence of a weather event.
  • Conditional: a refinement of the climatological probability given other information (e.g., recent conditions or guidance from NWP models) that would make a particular weather event either more or less likely than usual.

Users may be familiar with the climatological probability of a weather event, for example, the probability of precipitation estimated from the historical relative frequency of precipitation at a given location for a particular time.  A “conditional” probability is a modification of this climatological probability, formulated by focusing on the relative frequency with which a particular weather event is associated with a given set of conditions (e.g., an El Niño year).
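To make this distinction concrete, the following sketch uses purely hypothetical historical data to estimate a climatological probability as a relative frequency over all years, and a conditional probability as the relative frequency restricted to El Niño years:

```python
# Minimal sketch (hypothetical data): climatological vs. conditional probability
# of precipitation on a given date, estimated as relative frequencies from a
# 20-year historical record.  All years and outcomes below are illustrative.

years = set(range(1990, 2010))                                  # 20 years of record
rain_years = {1991, 1994, 1995, 1999, 2003, 2006, 2008}         # years when rain occurred
el_nino_years = {1991, 1994, 1997, 2002, 2004, 2006, 2009}      # hypothetical El Niño years

# Climatological probability: long-term relative frequency over all years.
p_climatological = len(rain_years) / len(years)                 # 7 / 20 = 0.35

# Conditional probability: relative frequency restricted to El Niño years.
p_conditional = len(rain_years & el_nino_years) / len(el_nino_years)   # 3 / 7 ≈ 0.43

print(f"Climatological probability: {p_climatological:.2f}")
print(f"Conditional on El Niño:     {p_conditional:.2f}")
```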

One highly desirable property of any probability forecast is that it be “reliable” (or “well-calibrated”).  For instance, over the long term, precipitation should occur on approximately 20% of the occasions for which the forecast probability is 20%.  Unreliable probability forecasts indicate that the uncertainty has not been properly estimated, so that a user will not necessarily be able to select the most appropriate action.  Given reliable probability forecasts, the potential benefits of a forecast system are directly related to the extent to which an individual forecast differs from the climatological probability (a verification measure called “sharpness”).6
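Reliability can be assessed empirically by grouping archived probability forecasts into bins and comparing each bin’s average forecast probability with the observed relative frequency of the event.  The sketch below illustrates the idea with synthetic forecasts and outcomes:

```python
import numpy as np

# Minimal sketch of a reliability (calibration) check: group archived probability
# forecasts into bins and compare each bin's mean forecast probability with the
# observed relative frequency of the event.  The forecasts and outcomes here are
# synthetic and constructed to be reliable.
rng = np.random.default_rng(0)
forecast_prob = rng.uniform(0.0, 1.0, size=5000)                   # issued probabilities
event_occurred = rng.uniform(0.0, 1.0, size=5000) < forecast_prob  # synthetic outcomes

bin_edges = np.linspace(0.0, 1.0, 11)   # ten bins of width 0.1
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = (forecast_prob >= lo) & (forecast_prob < hi)
    if not in_bin.any():
        continue
    print(f"mean forecast {forecast_prob[in_bin].mean():.2f} -> "
          f"observed frequency {event_occurred[in_bin].mean():.2f}")
# For a reliable system the two values track each other, e.g., events forecast
# at about 20% occur on roughly 20% of those occasions.
```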

The definition of the event being forecast must be clearly understood in order for probability forecasts to be communicated effectively and acted upon appropriately.  For example, for a forecast of 30% probability of precipitation for Boston tomorrow, a person may be unsure as to whether that means: (a) it will rain over 30% of the Boston area tomorrow; (b) it will rain for 30% of the time tomorrow somewhere in Boston; (c) there is a 30% probability it will rain somewhere in Boston tomorrow; or (d) at any given location in the Boston area, there is a 30% probability that it will rain tomorrow. The definition of a precipitation event used by the NWS is measurable precipitation within the stated time period at any point in the area for which the forecast is valid (i.e., (d) is the correct answer).

As more weather forecasts begin to include probabilistic information, significant efforts will be required not only to communicate this new information effectively, but also to ensure that users understand the definition of the event being forecast.7 Examples of such efforts include the ongoing discourse among the NWS, emergency managers, and the media regarding new forecast products that provide tropical cyclone wind-speed probabilities and the “cone of uncertainty” for tropical cyclone tracks issued by the National Hurricane Center.  The fact that users often misunderstand the cone of uncertainty illustrates that additional work is still needed to communicate this concept to the public.

Production of Probability Forecasts
Producing probabilistic weather forecasts will require improvements in techniques for quantifying their uncertainty. In principle, uncertainty can be attached to any forecast product through an evaluation of past performance.  The four primary approaches currently used to produce probability forecasts reflect the fact that this uncertainty is situation-dependent:

  • Probabilities for an event based on an ensemble of predictions from NWP models;
  • Statistical post-processing of NWP output from a single model run or the output of ensemble-based NWP;
  • Observationally based techniques that involve analysis of historical weather and climate data to yield statistical relationships between currently observable predictors and unknown future observations (predictands) of interest; and
  • Subjective (i.e., human) interpretation of NWP forecasts and other information.

 

Each of these approaches has certain advantages and disadvantages, with the most appropriate one depending on the particular situation.  In particular, the first and second approaches are often used in combination because of the current limitations of ensemble predictions.

Ensemble forecasting methods.   Ensemble forecasting methods involve evaluating a set of runs from an NWP model, or from different NWP models, starting from the same initial time.  Each of the model runs either begins from subtly different initial conditions (reflecting the incompleteness and uncertainty of the present weather observations) and/or uses different model assumptions and parameters (reflecting imperfect knowledge of atmospheric processes).  Each of the model runs produces a different forecast.  The result is a collection (or “ensemble”) of forecasts.  The differences among these forecasts reflect the uncertainty in the initial conditions and/or in the model physics.  Ensemble methods are now being applied to a variety of prediction problems, from short-range forecasts of thunderstorms and hurricane motion to seasonal predictions of temperature and precipitation.  Because of certain deficiencies, probability forecasts based on ensembles are not yet widely disseminated, especially without post-processing (see below).
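In the simplest ensemble-based approach, the probability of an event is estimated as the fraction of ensemble members in which the event occurs.  A minimal sketch, using a hypothetical 10-member precipitation ensemble:

```python
import numpy as np

# Minimal sketch: estimate an event probability as the fraction of ensemble
# members in which the event occurs.  The values are hypothetical 24-h
# precipitation amounts (mm) from a 10-member ensemble.
member_precip = np.array([0.0, 0.2, 1.5, 3.0, 0.0, 4.2, 0.8, 0.0, 2.6, 5.1])

threshold_mm = 1.0                       # event: more than 1 mm of precipitation
prob_event = float(np.mean(member_precip > threshold_mm))
print(f"Raw ensemble probability of exceeding {threshold_mm} mm: {prob_event:.2f}")  # 0.50

# In practice such raw relative frequencies are usually calibrated by
# statistical post-processing (see below) before being disseminated.
```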

Statistical post-processing or calibration methods.  These methods are typically applied to improve forecasts such as ensemble predictions, which are presently not capable of adequately accounting for all sources of uncertainty, especially imperfections in NWP models.  This process commonly makes use of information from prior forecasts and observations to produce probability forecasts or to improve their reliability.  A variety of statistical and numerical methods are used to calibrate and improve the predictions, with the best approach depending on the weather element being forecast as well as a number of other operationally oriented factors.  A well-known technique, Model Output Statistics (MOS), produces well-calibrated probability forecasts by relating past observations to NWP output.8
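As one illustration of the post-processing idea (illustrative only, not the operational MOS system), a logistic regression fitted to an archive of model output and matching observations can convert new model output into a calibrated probability; the data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of MOS-style statistical post-processing (illustrative only,
# not the operational MOS system): fit a logistic regression relating archived
# NWP output to observed event occurrence, then apply it to new model output to
# obtain a calibrated probability.  All data here are synthetic.
rng = np.random.default_rng(42)

# Predictor: NWP-forecast precipitation (mm); predictand: whether rain occurred.
nwp_precip = rng.gamma(shape=1.5, scale=2.0, size=2000).reshape(-1, 1)
true_prob = 1.0 / (1.0 + np.exp(-(nwp_precip[:, 0] - 2.0)))      # synthetic relationship
rain_observed = (rng.uniform(size=2000) < true_prob).astype(int)

model = LogisticRegression().fit(nwp_precip, rain_observed)

# Calibrated probability of precipitation for a new model forecast of 3.5 mm.
prob_precip = model.predict_proba(np.array([[3.5]]))[0, 1]
print(f"Calibrated probability of precipitation: {prob_precip:.2f}")
```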

Observationally based statistical forecasting methods.  These methods are based on relationships between current observations (predictors) and unknown future observations (predictands).  They are useful at a variety of time scales including very short lead times, for which forecasts from NWP models are unavailable, and lead times of approximately two weeks and longer, for which forecasts from NWP models degrade very substantially due to their inherent inaccuracies and sensitivity to the initial conditions.
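One simple observationally based approach is an analog scheme: find past cases whose observed predictors resemble current conditions, and use the relative frequency of the predictand among those analogs as the forecast probability.  The sketch below uses synthetic data and does not represent any particular operational method:

```python
import numpy as np

# Minimal analog-style sketch of an observationally based probability forecast
# (synthetic data): find historical cases whose observed predictors resemble
# current conditions, and use the relative frequency of the predictand among
# those analogs as the forecast probability.
rng = np.random.default_rng(7)

# Historical record: predictors = (pressure anomaly in hPa, humidity anomaly),
# predictand = whether precipitation occurred in the following 12 h (0 or 1).
past_predictors = rng.normal(size=(5000, 2)) * np.array([5.0, 3.0])
rain_prob = 1.0 / (1.0 + np.exp(past_predictors[:, 0] / 3.0 - past_predictors[:, 1] / 2.0))
past_rain = (rng.uniform(size=5000) < rain_prob).astype(int)

current_obs = np.array([-4.0, 2.0])     # today's observed predictor values

# Use the 100 most similar past cases (Euclidean distance in predictor space).
distances = np.linalg.norm(past_predictors - current_obs, axis=1)
analog_indices = np.argsort(distances)[:100]
prob_precip = past_rain[analog_indices].mean()
print(f"Analog-based probability of precipitation: {prob_precip:.2f}")
```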

Subjective forecasting methods.  These methods are based on the experience, knowledge, and judgment of human forecasters.  The forecasters interpret information from current observations, NWP models, and other sources to decide on the content of the official forecast.  With adequate training and feedback, human forecasters in operational settings consistently demonstrate the ability to produce skillful and reliable probability forecasts.

Probability forecasts are produced by a number of national weather services around the globe as well as the private sector.  Examples of some of the types of probability forecasts that are currently produced by the NWS, together with their temporal characteristics and the methods used to quantify uncertainty, are listed below.

  • Precipitation occurrence (1–5 day lead time): statistical post-processing (e.g., Model Output Statistics); subjective methods.
  • Hurricane track location (1–3 day lead time): analysis of past track errors; subjective methods.
  • Tornado occurrence (1-day lead time): subjective methods.
  • Above/near/below-normal temperature and precipitation (durations of 6–10 days, 8–14 days, one month, and a season): subjective and statistical synthesis from a variety of ensemble forecast, statistical post-processing, and analog tools.

Benefits of Probability Forecasts
Because significant portions of the economy are weather-sensitive, increased dissemination and use of probability forecasts could produce large benefits.  Probability forecasts offer numerous advantages over non-probabilistic forecasts.  When the uncertainty in a forecast is explicitly and effectively communicated, users can weigh that uncertainty against other factors in their decision process, such as the cost of taking a protective action based upon the forecast against the potential loss if no action is taken.  The issuance and effective communication of probability forecasts thus enable the decision maker and the forecaster to work together more effectively, combining the expertise of the forecaster (i.e., knowledge about the limitations of the forecast) with the expertise of the decision maker (i.e., intimate knowledge about the specific nature of the decision faced).

Three examples of the economics of weather-based decision making appear below.  Each example identifies the decision maker, the weather event having an impact, the costs and benefits to compare, and the probabilistic information needed as compared with the non-probabilistic information typically provided.  These examples should be viewed as prototypical, neglecting certain details that would be important in practice.

  • School superintendent; snowfall.  Costs and benefits: cost to cancel school vs. benefit of safety of students.  Probabilistic information: probability of snowfall exceeding a critical depth.  Non-probabilistic information: most likely snow accumulation.
  • Emergency manager; hurricane landfall.  Costs and benefits: cost to evacuate vs. benefit of safety of residents.  Probabilistic information: probability of landfall at a location.  Non-probabilistic information: most likely location of landfall.
  • Reservoir operator; heavy rainfall.  Costs and benefits: cost of releasing water (e.g., reduced irrigation) vs. benefit of reduced flood damage.  Probabilistic information: probability of exceeding a critical rainfall threshold.  Non-probabilistic information: most likely amount of rainfall.

In all three of the above examples, non-probabilistic information in the form of the most likely event can be quite different from the probabilistic information required by the decision maker to adopt the best strategy.  Unfortunately, studies that document the use and economic value of weather forecasts, in general, and probability forecasts, in particular, remain rather limited, with quantitative estimates of economic value only rarely being obtained.9
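A standard textbook way to formalize such decisions, and a simplification of real-world practice rather than anything prescribed by this statement, is the cost-loss model: take protective action whenever the forecast probability exceeds the ratio of the protection cost C to the potential loss L.  The sketch below applies this rule to the hypothetical reservoir example:

```python
# Minimal sketch of the classic cost-loss decision model (a textbook
# simplification).  A decision maker pays cost C to protect against a weather
# event; an unprotected event causes loss L.  Expected expense is minimized by
# protecting whenever the forecast probability exceeds C / L.

def should_protect(forecast_probability: float, cost: float, loss: float) -> bool:
    """Return True if taking protective action minimizes expected expense."""
    return forecast_probability > cost / loss

# Hypothetical reservoir example: releasing water costs 20 (reduced irrigation
# revenue), while flood damage without a release would cost 100, so C/L = 0.2.
cost, loss = 20.0, 100.0
for p in (0.10, 0.25, 0.50):
    print(f"P(heavy rain) = {p:.2f}: protect? {should_protect(p, cost, loss)}")
# Action is worthwhile at 25% and 50% but not at 10%, a distinction that a
# single "most likely rainfall amount" cannot convey.
```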

Challenges and Opportunities
The forecaster and user communities are poised to take advantage of probability forecasts.   However, a number of challenges must be addressed to ensure their optimal formulation, dissemination, and use:

  • Developing approaches for creating ensemble forecasts that represent the full range and all of the sources of forecast uncertainty associated with the forecasting process;
  • Improving methods for the post-processing, calibration, and verification of probability forecasts;
  • Developing improved methods and new tools for communicating uncertainty information (e.g., more effective ways to display such information);
  • Developing awareness among forecasters of the unique needs of specific users (e.g., emergency managers);
  • Assisting users in understanding probability forecasts and exactly what is being forecast;
  • Helping users make optimal use of the uncertainty information; and
  • Documenting the use and economic value of probability forecasts in real-world decision-making situations.

 

Nevertheless, the opportunities are great.  By providing users with uncertainty information communicated in an effective manner, forecasters can enable users to make better decisions, resulting in greater economic and social benefits.

[This statement is considered in force until May 2013 unless superseded by a new statement issued by the AMS Council before this date]

© American Meteorological Society, 45 Beacon Street, Boston, MA 02108-3693

1 Related information can be found in the AMS Information Statement on Weather Analysis and Forecasting (Bull. Amer. Meteor. Soc., 88, 2007); the National Academy of Sciences (NAS) Report “Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts” (NAS, 2006); and the Glossary of Meteorology (AMS, 2000).

2 National Weather Service (NWS), National Oceanic and Atmospheric Administration (NOAA)

3 For information about NWP models, see above-mentioned statement on Weather Analysis and Forecasting.

4 See above-mentioned NAS 2006 report.

5 See above-mentioned NAS 2006 report for more detail.

6 For more about the verification of probability forecasts, see Jolliffe, I.T., and D.B. Stephenson, 2003: Forecast Verification: A Practitioner’s Guide in Atmospheric Science, Wiley, 240 pp.

7 See the above-mentioned NAS 2006 report for a number of specific recommendations about how to improve the communication of uncertainty.

8 Many of these methods are described in Wilks, D.S., 2006: Statistical Methods in Atmospheric Sciences (second edition). Academic Press, 627 pp.

9 For a summary of recent case studies of the economic value of weather and climate forecasts, see http://www.isse.ucar.edu/staff/katz/esig.html.