Can we distinguish wet years from dry years?

Can we distinguish wet years from dry years? Simon Mason simon@iri.columbia.edu Seasonal Forecasting Using the Climate Predictability Tool Bangkok, Thailand, 12 – 16 January 2015

The ROC The ROC answers the question: Can the forecasts distinguish an event from a non-event? Are we more confident it will be dry when it is dry compared to when it is not? Do we forecast less rain when it is dry compared to when it is not dry? Do we issue a higher forecast probability for below-normal when it is below-normal compared to when it is not?

ROC Retroactive forecasts of MAM rainfall for Thailand. Which year are you most confident is a dry year?

ROC The most sensible strategy would be to list the years in order of increasing forecast rainfall. If the forecasts are good, the “dry” years should be at the top of the list.
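As a rough illustration, the sorting step might look like the following Python sketch. The years, forecast rainfall totals and "dry" flags are invented for illustration only; they are not the Thailand retroactive forecasts shown on the slide.

```python
# A minimal sketch of the sorting step, with invented forecast rainfall (mm)
# and invented observed outcomes -- not the Thailand data from the slides.
years     = [2008, 2009, 2010, 2011, 2012]
forecasts = [210.0, 150.0, 95.0, 180.0, 120.0]   # forecast MAM rainfall
observed_dry = {2010, 2012}                      # years that were actually "dry"

# List the years in order of increasing forecast rainfall (driest forecast first).
for fcst, year in sorted(zip(forecasts, years)):
    flag = "DRY" if year in observed_dry else "not dry"
    print(f"{year}: forecast {fcst:6.1f} mm  ->  observed {flag}")
# If the forecasts are good, the DRY years should appear at the top of this list.
```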

ROC For the first guess (the year with the driest forecast): check whether it was a dry year, and score a hit if so or a false alarm if not. Repeat for all forecasts.

ROC

ROC Plot the correct scores (hit rate) against the incorrect scores (false-alarm rate). We want the correct scores to be larger than the incorrect scores, i.e., for the graph to lie above the diagonal.
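One way to picture this construction in code is sketched below, assuming Python with numpy and matplotlib; the forecast values and dry/non-dry outcomes are invented, so the curve will not match the one on the slide. Working down the sorted list, each year adds either a hit (if it was dry) or a false alarm (if it was not), and the cumulative rates give the points of the curve.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented forecast rainfall (mm) and observed "dry" flags -- illustration only.
forecast = np.array([95., 120., 150., 180., 210., 130., 170., 200., 110., 160.])
is_dry   = np.array([True, True, False, False, False, True, False, False, False, False])

order = np.argsort(forecast)                     # driest forecast first
hits         = np.cumsum(is_dry[order])          # dry years warned so far
false_alarms = np.cumsum(~is_dry[order])         # non-dry years warned so far

hit_rate         = np.concatenate(([0.], hits / is_dry.sum()))
false_alarm_rate = np.concatenate(([0.], false_alarms / (~is_dry).sum()))

plt.plot(false_alarm_rate, hit_rate, "r-o", label="forecasts")
plt.plot([0, 1], [0, 1], "k--", label="no skill (diagonal)")
plt.xlabel("False-alarm rate")
plt.ylabel("Hit rate")
plt.legend()
plt.show()
```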

ROC What is the year we are most confident is “below”? Was it “below”? If so, score a hit; if not, score a false alarm. What is the year we are next most confident is “below”? Cross-validation of MAM rainfall for Thailand, using Feb NIÑO4.

ROC What is the year we are most confident is not “below”? Was it “below”? If not, we scored a false alarm; if so, we scored a hit. What is the year we are next most confident is not “below”? Cross-validation of MAM rainfall for Thailand, using Feb NIÑO4.

ROC The bottom left indicates whether the forecasts with strong indications of dry (or wet) are good. Can they indicate that an event will occur? The top right indicates whether the forecasts with strong indications of not dry (or not wet) are good. Can they indicate that an event will not occur?

Relative Operating Characteristics

Two-Alternative Forced Choice Test In which of these two Januaries did El Niño occur (Niño3.4 index >27°C)? What is the probability of getting the answer correct? 50% (assuming that you do not have inside information about ENSO).

Two-Alternative Forced Choice Test In which of these two Januaries did El Niño occur (Niño3.4 index >27°C)? What is the probability of getting the answer correct? That depends on whether we can believe the forecasts. Select the forecast with the higher temperature.

Two-Alternative Forced Choice Test We can ask the same question if the forecasts are probabilistic: In which of these two Januaries did El Niño occur (Niño3.4 index >27°C)? What is the probability of getting the answer correct? That depends on whether we can believe the forecasts. Select the forecast with the higher probability.

Two-Alternative Forced Choice Test Retroactive forecasts of MAM rainfall for Thailand. How well do the forecasts distinguish “dry” years (driest 20%) from other years? Do we forecast less rain when it is dry compared to other years?

Two-Alternative Forced Choice Test It is easier to calculate by sorting the forecasts so that the driest forecasts are at the top. We can then count how many of the non-dry years are lower in the table than each dry year. For 2010: 14 of the 15 non-dry years have wetter forecasts; for 1998: 14 of 15; for 1995: 14 of 15; for 2005: 14 of 15; for 1992: 12 of 15. In total: 68 of 75 ≈ 91%.
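The counting can be written out explicitly, for example as in the Python sketch below. The forecast values are invented (the real retroactive forecasts are in the slide's table), so the count here will not reproduce the 68 of 75; the point is only the pairwise comparison.

```python
# Sketch of the 2AFC count: for every (dry year, non-dry year) pair, check
# whether the dry year received the drier forecast. Values are invented.
dry_forecasts     = [85., 90., 95., 105., 120.]          # the 5 "dry" years
non_dry_forecasts = [110., 130., 140., 150., 155., 160., 165., 170.,
                     175., 180., 185., 190., 195., 200., 210.]  # the 15 other years

pairs   = [(d, n) for d in dry_forecasts for n in non_dry_forecasts]
correct = sum(d < n for d, n in pairs)     # dry-year forecast is the drier of the pair
print(f"{correct} of {len(pairs)} pairs correct -> {correct / len(pairs):.0%}")
```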

Two-Alternative Forced Choice Test If the forecasts could perfectly discriminate the dry years, the forecasts for the dry years would be drier than those for all the non-dry years, and the dry years would be listed at the top of the table. If the forecasts could not discriminate the dry years at all, the dry years would be randomly distributed through the table, and there would be a 50% chance of the forecast for a dry year being drier than that for a non-dry year.
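Those two limiting cases are easy to check numerically. The helper below is a sketch (not CPT code): it scores the fraction of correctly ordered pairs, and shows that perfectly separated forecasts score 1.0 while forecasts unrelated to the outcome average about 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_afc(event_scores, non_event_scores):
    """Fraction of (event, non-event) pairs in which the event has the lower
    score (i.e., the drier forecast); ties count as half."""
    correct = sum((e < n) + 0.5 * (e == n)
                  for e in event_scores for n in non_event_scores)
    return correct / (len(event_scores) * len(non_event_scores))

# Perfect discrimination: every dry-year forecast is drier than every other forecast.
print(two_afc([80., 90., 100.], [110., 120., 130., 140.]))                  # 1.0

# No discrimination: forecasts unrelated to the outcome average about 0.5.
print(round(np.mean([two_afc(rng.normal(size=5), rng.normal(size=15))
                     for _ in range(2000)]), 2))                            # ~0.5
```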

ROC The area beneath the red curve, 0.91, gives us the probability that we will successfully discriminate a “dry” year from a non-dry year. The area beneath the blue curve, 0.85, gives us the probability that we will successfully discriminate a “wet” year from a non-wet year.
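The equivalence between the area under the ROC curve and the 2AFC probability can be checked directly, as in the sketch below (same invented ten-year example as above, not the Thailand forecasts): the trapezoidal area under the curve and the fraction of correctly ordered (dry, non-dry) pairs come out as the same number.

```python
import numpy as np

# Invented example: area under the empirical ROC curve (trapezoidal rule)
# equals the 2AFC probability of picking the dry year out of a (dry, non-dry) pair.
forecast = np.array([95., 120., 150., 180., 210., 130., 170., 200., 110., 160.])
is_dry   = np.array([True, True, False, False, False, True, False, False, False, False])

order = np.argsort(forecast)                       # driest forecast first
hit_rate = np.concatenate(([0.], np.cumsum(is_dry[order]) / is_dry.sum()))
far      = np.concatenate(([0.], np.cumsum(~is_dry[order]) / (~is_dry).sum()))
roc_area = np.sum(np.diff(far) * (hit_rate[1:] + hit_rate[:-1]) / 2)  # trapezoids

dry, wet = forecast[is_dry], forecast[~is_dry]
two_afc = np.mean(dry[:, None] < wet[None, :])     # fraction of correctly ordered pairs
print(f"ROC area = {roc_area:.3f}, 2AFC = {two_afc:.3f}")  # both 0.905 here
```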

ROC score The ROC score indicates how successfully we can distinguish an event (e.g., “below-normal”) from a non-event (“normal” or “above-normal”). How often do we predict less rain when we observe “below-normal” compared to when we observe “normal” or “above-normal”? But the predictions could be in probabilities: How often do we predict a higher chance of below-normal when we observe “below-normal” compared to when we observe “normal” or “above-normal”? … or in categories: How often do we predict a drier category when we observe “below-normal” compared to when we observe “normal” or “above-normal”?
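For probabilistic forecasts the same calculation applies with the issued probability of below-normal as the "score". A small sketch with invented probabilities and outcomes:

```python
import numpy as np

# Invented probabilities of below-normal and invented observed outcomes.
p_below        = np.array([0.50, 0.25, 0.45, 0.20, 0.35, 0.55, 0.30, 0.40, 0.25, 0.60])
observed_below = np.array([True, False, True, False, False, True, False, True, False, False])

event, non_event = p_below[observed_below], p_below[~observed_below]
# Probability that a below-normal year received a higher probability than a
# year that was not below-normal (ties counted as half).
roc_score = np.mean((event[:, None] > non_event[None, :]) +
                    0.5 * (event[:, None] == non_event[None, :]))
print(f"ROC score (2AFC) = {roc_score:.2f}")
```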

Summary The ROC answers the question: Can the forecasts distinguish an event from a non-event? The graph can help identify conditional skill (e.g., can we forecast wet conditions better than dry conditions?) It can be used to verify deterministic (discrete and continuous) and probabilistic forecasts.

Exercises Diagnose the quality of your forecast models by analysing the ROC graphs.

CPT Help Desk web: iri.columbia.edu/cpt/ email: cpt@iri.columbia.edu @climatesociety …/climatesociety