Introduction to Seasonal Climate Prediction
Liqiang Sun, International Research Institute for Climate and Society (IRI)

Weather forecast – an initial-condition problem
Climate forecast – primarily a boundary-forcing problem

Climate forecasts should:
- be probabilistic → ensembling
- be reliable and skillful → calibration and verification
- address relevant scales and quantities → downscaling

OUTLINE
- Fundamentals of probabilistic forecasts
- Identifying and correcting model errors: systematic errors, random errors, conditional errors
- Forecast verification
- Summary

Fundamentals of Probabilistic Forecasts

Basis of Seasonal Climate Prediction: changes in boundary conditions, such as SST and land-surface characteristics, can influence the characteristics of weather (e.g., its strength, persistence, or absence), and thus influence the seasonal climate.

Influence of SST on the tropical atmosphere

[Schematic: the IRI two-tiered dynamical climate forecast system. Forecast SSTs (tropical Pacific: multi-model, dynamical and statistical; tropical Atlantic and Indian: statistical; extratropics: damped persistence) and persisted global SST anomalies drive global atmospheric models (ECPC (Scripps), ECHAM4.5 (MPI), CCM3.6 (NCAR), NCEP (MRF9), NSIPP (NASA), COLA2, GFDL), with forecast-SST ensembles at 3/6-month lead and persisted-SST ensembles at 3-month lead, followed by post-processing and multi-model ensembling; ocean-atmosphere regional models provide downscaling.]

Probability calculated using the ensemble mean: a contingency table of forecast categories (B_f, N_f, A_f) against observed categories (B_o, N_o, A_o). Example: contingency tables of RSM forecasts vs. observations for 3 subregions of Ceará State at local scales (FMA); for the Coast subregion:

           Obs B   Obs N   Obs A
  RSM B      5       3       2
  RSM N      3       4       3
  RSM A      2       3       5

Probability obtained from ensemble spread:
1) Count the number of ensemble members in each category, e.g., out of 100 ensembles in total: 40 ensembles in category "A", 35 ensembles in category "N", and 25 ensembles in category "B".
2) Calibrate these counted probabilities (e.g., against the historical contingency tables above).
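
As a minimal sketch of the counting step above (names and numbers are hypothetical, not from the IRI system), tercile probabilities can be estimated from an ensemble like this, assuming a climatological sample defines the tercile boundaries:

```python
import numpy as np

def tercile_probabilities(ensemble, climatology):
    """Estimate tercile-category probabilities by counting ensemble members.

    ensemble    : 1-D array of ensemble forecasts for one location/season
    climatology : 1-D array of historical values defining the terciles
    Returns (P_below, P_normal, P_above).
    """
    # Tercile boundaries from the climatological distribution
    lower, upper = np.percentile(climatology, [100 / 3, 200 / 3])
    n = len(ensemble)
    p_below = np.sum(ensemble < lower) / n
    p_above = np.sum(ensemble > upper) / n
    p_normal = 1.0 - p_below - p_above
    return p_below, p_normal, p_above

# Hypothetical example: 30-year climatology, 100-member forecast ensemble
rng = np.random.default_rng(42)
clim = rng.normal(100.0, 20.0, size=30)   # climatological rainfall sample
ens = rng.normal(110.0, 20.0, size=100)   # wet-shifted forecast ensemble
print(tercile_probabilities(ens, clim))   # prints the three counted probabilities
```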

Example of a seasonal rainfall forecast (3-month average and probabilistic)

Why seasonal averages? Rainfall correlation skill: ECHAM4.5 vs. CRU observations. Should we only be forecasting for February for the SW US and N Mexico?

Why seasonal averages? Partial correlation maps for individual months: no independent skill for individual months.

Why seasonal averages?

Why probabilistic? Observed rainfall (SON 2004) vs. model forecast (SON 2004), made Aug 2004: RUN #1 and RUN #4 are two ensemble members from the same AGCM, with the same SST forcing, just different initial conditions. Units are mm/season.

Why probabilistic? Observed rainfall Sep-Oct-Nov 2004 (CAMS-OPI) vs. model forecast (SON 2004), made Aug 2004. Seasonal climate is a combination of a boundary-forced SIGNAL and chaotic NOISE from the internal dynamics of the atmosphere.

Why probabilistic? Observed rainfall Sep-Oct-Nov 2004 (CAMS-OPI) vs. the model forecast ENSEMBLE MEAN (SON 2004), made Aug 2004. The average model response, or SIGNAL, due to prescribed SSTs was for normal to below-normal rainfall over the southern US and northern Mexico in this season. We also need to communicate the fact that some of the ensemble-member predictions were actually wet in this region. Thus, there may be a 'most likely outcome', but there is also a 'range of possibilities' that must be quantified.

Climate Forecast: Signal + Uncertainty. [Figure: forecast distribution shifted relative to the historical distribution about the climatological average, with Below-Normal, Near-Normal, and Above-Normal categories.] The SIGNAL (the shift of the forecast mean) represents the 'most likely' outcome. The NOISE (the spread) represents internal atmospheric chaos, uncertainties in the boundary conditions, and random errors in the models.

Probabilistic Forecasts
Resolution: probabilities should differ from climatology as much as possible, when appropriate.
Reliability: forecasts should "mean what they say".
Reliability diagrams show the consistency between the a priori stated probabilities of an event and the a posteriori observed relative frequencies of that event. Good reliability is indicated by a 45° diagonal.

Identifying and Correcting Model Errors

Optimizing Probabilistic Information
Eliminate the 'bad' uncertainty:
- Reduce systematic errors, e.g., MOS correction, calibration
Reliably estimate the 'good' uncertainty:
- Reduce probability sampling errors, e.g., Gaussian fitting and Generalized Linear Models (GLMs)
- Minimize the random errors, e.g., the multi-model approach (for both response and forcing)
- Minimize the conditional errors, e.g., Conditional Exceedance Probabilities (CEPs)

Systematic Spatial Errors: a systematic error in the location of mean rainfall leads to a spatial error in interannual rainfall variability, and thus to a lack of skill locally.

Systematic Calibration Errors. Dynamical models may have quantitative errors in the mean climate (compare ORIGINAL vs. RESCALED), as well as in the magnitude of its interannual variability (ORIGINAL vs. RECALIBRATED). Statistical recalibration of the model's climate and its response characteristics can improve model reliability.

Reducing Systematic Errors: MOS correction. DJFM rainfall anomaly correlation before and after statistical correction (Tippett et al., 2003, Int. J. Climatol.).

[Figure: RMS error of the probability estimate as a function of ensemble size (N = 8, 16, 24, 39), converging toward the "true" RMS; S = signal-to-noise ratio, N = ensemble size.]

Fitting with a Gaussian. Two types of error:
- the PDF is not really Gaussian!
- sampling error (fit only the mean, or fit mean and variance)
Error(Gaussian fit, N = 24) ≈ Error(counting, N = 40)
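
A minimal sketch of the Gaussian-fitting alternative to counting, assuming SciPy is available and the tercile boundaries are taken as given (function and variable names are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def tercile_probs_gaussian(ensemble, lower, upper):
    """Tercile probabilities from a Gaussian fitted to the ensemble
    (fit mean and variance), instead of counting members per category."""
    mu = ensemble.mean()
    sigma = ensemble.std(ddof=1)
    p_below = norm.cdf(lower, mu, sigma)          # P(X < lower tercile)
    p_above = 1.0 - norm.cdf(upper, mu, sigma)    # P(X > upper tercile)
    return p_below, 1.0 - p_below - p_above, p_above
```

Because only two parameters are estimated, this smooths out the probability sampling error of small ensembles, which is the point of the N = 24 vs. N = 40 comparison above.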

Minimizing Random Errors: multi-model ensembling. Combining models reduces the deficiencies of individual models. Probabilistic skill scores: RPSS for 2-m temperature (JFM).
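
A minimal illustration of the simplest multi-model combination, an equal-weight average of per-model tercile probabilities (the numbers are hypothetical; operational schemes may instead weight models by historical skill):

```python
import numpy as np

# Tercile probabilities (B, N, A) from three hypothetical AGCMs at one grid point
model_probs = np.array([
    [0.20, 0.35, 0.45],
    [0.30, 0.40, 0.30],
    [0.15, 0.30, 0.55],
])

# Equal-weight multi-model combination of the probabilities
combined = model_probs.mean(axis=0)
print(combined)  # -> [0.2167, 0.35, 0.4333]
```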

Reliability: a major goal of probabilistic forecasts. Reliability diagrams show the consistency between the a priori stated probabilities of an event and the a posteriori observed relative frequencies of that event. Good reliability is indicated by a 45° diagonal.

Benefit of increasing the number of AGCMs in the multi-model combination: JAS temperature and JAS precipitation (Robertson et al. 2004).

Correcting Conditional Biases: Methodology

Conditional Exceedance Probabilities. The probability that the observation exceeds the amount forecast depends upon the skill of the model. If the model were perfect, this probability would be constant; if it is imperfect, it will depend on the ensemble member's value. The aim is to identify whether the exceedance probability is conditional upon the value indicated. Generalized linear models with binomial errors can be used, e.g., a logistic regression of the exceedance indicator on the forecast value x:

    logit[ P(obs > x) ] = β0 + β1 x

Tests can be performed on β1 to identify conditional biases: if β1 = 0 then the system is reliable, while β0 can indicate unconditional bias. (Mason et al. 2007, Mon. Wea. Rev.)
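
A hedged sketch of fitting such a binomial GLM with statsmodels; this is not the authors' code, and the logit link is just one common choice for binomial errors:

```python
import numpy as np
import statsmodels.api as sm

def fit_cep(ensemble_values, observations):
    """Fit a CEP model: P(obs exceeds member value x) = logistic(b0 + b1*x).

    ensemble_values : array (years, members) of standardized forecast values
    observations    : array (years,) of standardized observed values
    Returns the fitted (b0, b1) and their p-values (test b1 = 0 for reliability).
    """
    n_years, n_members = ensemble_values.shape
    x = ensemble_values.ravel()
    # Exceedance indicator: did that year's observation exceed this member's value?
    y = (np.repeat(observations, n_members) > x).astype(float)
    X = sm.add_constant(x)
    model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    return model.params, model.pvalues
```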

Idealized CEPs (from Mason et al. 2007, Mon. Wea. Rev.): perfect reliability corresponds to a flat CEP (β1 = 0); positive skill with too strong a SIGNAL gives β1 < 0; positive skill with too weak a SIGNAL gives β1 > 0; the remaining cases show negative skill and no skill (CEP at the climatological probability).

Conditional Exceedance Probabilities (CEPs): use CEPs to determine the biased probability of exceedance (plotted against the standardized anomaly), then shift the model-predicted PDF towards the goal of a 50% exceedance probability. Note that the scale is a parameter determined in minimizing the model-CEP slope.

CEP recalibration can either strengthen or weaken the SIGNAL, but it consistently reduces the MSE.

Effect of Conditional Bias Correction

Forecast Verification

Verification of probabilistic forecasts. How do we know if a probabilistic forecast was "correct"? "A probabilistic forecast can never be wrong!" As soon as a forecast is expressed probabilistically, all possible outcomes are forecast. However, the forecaster's level of confidence can itself be "correct" or "incorrect", i.e., reliable or not: is the forecaster over- or under-confident?

Forecast verification – reliability and resolution. Forecasts are reliable if the probability that the event occurs equals the forecast probability. Forecasts have good resolution if the probability that the event occurs changes as the forecast probability changes.

Reliability diagram
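
A minimal sketch of how the points of a reliability diagram can be computed, assuming arrays of issued event probabilities and binary outcomes (all names are illustrative):

```python
import numpy as np

def reliability_curve(forecast_probs, outcomes, bins=10):
    """Points for a reliability diagram: mean forecast probability vs.
    observed relative frequency within each probability bin.

    forecast_probs : array of issued probabilities for one event (e.g. above-normal)
    outcomes       : array of 0/1 flags, whether the event occurred
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(forecast_probs, edges) - 1, 0, bins - 1)
    mean_fcst, obs_freq = [], []
    for b in range(bins):
        mask = idx == b
        if mask.any():
            mean_fcst.append(forecast_probs[mask].mean())  # x-coordinate
            obs_freq.append(outcomes[mask].mean())         # y-coordinate
    return np.array(mean_fcst), np.array(obs_freq)
```

Plotting obs_freq against mean_fcst and comparing with the 45° diagonal shows whether the forecasts "mean what they say".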

Ranked Probability Skill Score (RPSS). The RPSS measures the cumulative squared error between the categorical forecast probabilities and the observed category, relative to some reference forecast (Epstein 1969). The most widely used reference strategy is "climatology". The RPSS is defined as

    RPSS = 1 − RPS_forecast / RPS_reference, with
    RPS = Σ_{k=1..N} [ Σ_{j=1..k} (f_j − o_j) ]²  (and likewise with r_j for the reference),

where N = 3 for tercile forecasts, and f_j, r_j, and o_j are the forecast probability, reference forecast probability, and observed probability for category j, respectively. The probability distribution of the observation is 100% for the category that was observed and 0 for the other two categories. The reference forecast of climatology assigns 33.3% to each of the tercile categories.
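
A minimal sketch of the RPS/RPSS computation for tercile forecasts, following the definition above (names are illustrative):

```python
import numpy as np

def rps(probs, obs_category, n_cat=3):
    """Ranked probability score for one forecast: cumulative squared error
    between the forecast and observed category probabilities."""
    o = np.zeros(n_cat)
    o[obs_category] = 1.0  # 100% probability for the observed category
    return np.sum((np.cumsum(probs) - np.cumsum(o)) ** 2)

def rpss(forecast_probs, obs_categories):
    """RPSS relative to climatology (1/3, 1/3, 1/3) over many forecasts."""
    clim = np.full(3, 1.0 / 3.0)
    rps_f = np.mean([rps(f, o) for f, o in zip(forecast_probs, obs_categories)])
    rps_c = np.mean([rps(clim, o) for o in obs_categories])
    return 1.0 - rps_f / rps_c

# Example: one sharp, correct forecast of the above-normal category (index 2)
print(rpss([[0.1, 0.2, 0.7]], [2]))  # -> 0.82 relative to climatology
```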

Ranked Probability Skill Score (RPSS). The RPSS gives credit for forecasting the observed category with high probabilities, and penalizes forecasting the wrong category with high probabilities.
- By definition, the maximum RPSS is 100%, which can only be obtained by consistently forecasting the observed category with a 100% probability.
- A score of zero implies no skill in the forecasts; it is the same score one would get by consistently issuing a forecast of climatology. For a three-category forecast, a forecast of climatology implies no information beyond the historically expected 33.3%-33.3%-33.3% probabilities.
- A negative score suggests that the forecasts are underperforming climatology.
- The skill of seasonal precipitation forecasts is generally modest. For example, IRI seasonal forecasts at 0-month lead scored an RPSS of 1.8% and 4.8% for the global and tropical (30°S-30°N) land areas, respectively (Wilks and Godfrey 2002).

Real-Time Forecast Validation

Ranked Probability Skill Score (RPSS): a problem. The expected RPSS with climatology as the reference forecast strategy is less than 0 for any forecast that differs from the climatological probabilities, i.e., a lack of equitability. There are two important implications:
- The expected RPSS can be optimized by issuing climatological forecast probabilities.
- The forecasts may contain some potentially usable information even when the RPSS is less than 0, especially if the sharpness of the forecasts is high.
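
A small Monte Carlo sketch illustrating this lack of equitability: under a no-skill (uniformly random) truth, a sharp but uninformative forecast has a negative expected RPSS, while issuing climatology scores exactly zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasts = 100_000

# No-skill truth: each tercile category is equally likely.
obs = rng.integers(0, 3, size=n_forecasts)

# A "sharp" but uninformative forecast (always 50/30/20) vs. climatology.
sharp = np.array([0.5, 0.3, 0.2])
clim = np.full(3, 1.0 / 3.0)

def mean_rps(probs, obs):
    """Average ranked probability score of a constant forecast over all cases."""
    cum_f = np.cumsum(probs)
    cum_o = np.cumsum(np.eye(3)[obs], axis=1)  # one-hot observed categories
    return np.mean(np.sum((cum_f - cum_o) ** 2, axis=1))

rpss = 1.0 - mean_rps(sharp, obs) / mean_rps(clim, obs)
print(rpss)  # about -0.10: sharp forecasts are penalized when there is no skill
```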

There is no single measure that gives a comprehensive summary of forecast quality.

[Example: verification of GHACOF consensus forecasts for SOND. Map annotations include: +15%, bias because of hedging; -5%, no skill, hedging, serious bias; -20%; good resolution for above-normal (+10%) and below-normal (+6%); 0% resolution because of large biases; weak bias (+5); reasonable sharpness. Are the sharpest forecasts believable?]

Summary  Seasonal forecasts are necessarily probabilistic  The models used to predict the climate are not perfect, but by identifying and minimizing their errors we can maximize their utility  The two attributes of probabilistic forecasts are reliability and resolution. Both these aspects require verification.  Skill in seasonal climate prediction varies with seasons and geographic regions - Requires research!