Fuzzy verification of fake cases
Beth Ebert, Center for Australian Weather and Climate Research, Bureau of Meteorology
NCAR, 15 April 2008

Fuzzy (neighborhood) verification
Look in a space / time neighborhood around the point of interest.
– Evaluate using categorical, continuous, or probabilistic scores / methods
– Will only consider the spatial neighborhood for the fake cases
[Figure: frequency distributions of forecast and observed values within a neighborhood spanning times t - 1, t, t + 1]

Fuzzy verification framework
Fuzzy methods use one of two approaches to compare forecasts and observations:
– single observation – neighborhood forecast (user-oriented)
– neighborhood observation – neighborhood forecast (model-oriented)
[Figure: schematic observation and forecast grids for the two approaches]
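To make the two comparison modes concrete, here is a minimal Python sketch (an illustration added for this transcript, not code from the talk). The synthetic fields, the 1.0 mm h⁻¹ event threshold, and the 9-point neighborhood width are all assumptions; the sketches after the following slides reuse obs and fcst from this setup.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, uniform_filter

# Synthetic "truth": a smooth, blobby rain field. The forecast is the same
# field displaced 16 grid points to the right, i.e. a pure location error.
rng = np.random.default_rng(0)
obs = np.clip(gaussian_filter(rng.normal(size=(128, 128)), sigma=8), 0, None) * 40
fcst = np.roll(obs, 16, axis=1)

thresh, n = 1.0, 9  # assumed event threshold (mm/h) and neighborhood width

# Single observation - neighborhood forecast (user-oriented): is the event
# forecast anywhere within the n x n neighborhood of each observation point?
event_forecast_nearby = maximum_filter(fcst, size=n) >= thresh

# Neighborhood observation - neighborhood forecast (model-oriented):
# compare event *fractions* computed over matched n x n neighborhoods.
fcst_fraction = uniform_filter((fcst >= thresh).astype(float), size=n)
obs_fraction = uniform_filter((obs >= thresh).astype(float), size=n)
```

The maximum filter answers the user-oriented question "was the event forecast anywhere nearby?", while the uniform filter turns each field into neighborhood event fractions for the model-oriented comparison.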

Fuzzy verification framework
[Figure: example of fuzzy verification output as a function of neighborhood size and intensity, shaded from good performance to poor performance]

Upscaling
Neighborhood observation – neighborhood forecast
Average the forecast and observations to successively larger grid resolutions, then verify as usual.
[Figure: % change in ETS with upscaling, Weygandt et al. (2004)]
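A sketch of the upscaling method, reusing obs and fcst from the setup above. The block-averaging helper and the standard ETS formula are mine; the particular upscaling factors are illustrative.

```python
import numpy as np

def block_average(field, n):
    """Average over non-overlapping n x n blocks (dimensions divisible by n)."""
    ny, nx = field.shape
    return field.reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

def ets(fcst, obs, thresh):
    """Equitable Threat Score for the event field >= thresh."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    fals = np.sum(f & ~o)
    h_rand = (hits + misses) * (hits + fals) / f.size  # hits expected by chance
    return (hits - h_rand) / (hits + misses + fals - h_rand)

# Verify as usual after averaging both fields to coarser and coarser grids.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d}x coarser: ETS = "
          f"{ets(block_average(fcst, n), block_average(obs, n), 1.0):.3f}")
```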

Fractions skill score
Neighborhood observation – neighborhood forecast
Compare forecast fractions with observed fractions (radar) in a probabilistic way over different sized neighbourhoods.
[Figure: observed (radar) and forecast rain fraction fields]
Roberts and Lean (2008)
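A sketch of the FSS computation over square neighborhoods, again reusing obs and fcst. The score follows the Roberts and Lean (2008) definition, FSS = 1 - MSE / MSE_ref with MSE_ref = mean(Pf²) + mean(Po²); the boundary handling of uniform_filter is an implementation detail that differs slightly from the paper near the domain edges.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, thresh, n):
    """Fractions Skill Score over n x n neighborhoods (Roberts and Lean 2008)."""
    pf = uniform_filter((fcst >= thresh).astype(float), size=n)
    po = uniform_filter((obs >= thresh).astype(float), size=n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst case: no overlap
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# For a displaced forecast, FSS rises with neighbourhood size, revealing
# the scale at which the forecast starts to become useful.
for n in (1, 5, 9, 17, 33, 65):
    print(f"n = {n:2d}: FSS = {fss(fcst, obs, 1.0, n):.3f}")
```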

Spatial multi-event contingency table
Single observation – neighborhood forecast
Vary decision thresholds:
– magnitude (e.g. 1 mm h⁻¹ to 20 mm h⁻¹)
– distance from the point of interest (e.g. within 10 km, ..., within 100 km)
– timing (e.g. within 1 h, ..., within 12 h)
– anything else that may be important in interpreting the forecast
Fuzzy methodology: compute the Hanssen and Kuipers score, HK = POD - POFD, which measures how close the forecast is to the place / time / magnitude of interest.
[Figure: ROC for a single threshold vs the multi-event contingency table]
Atger (2001)
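A sketch of the magnitude and distance dimensions of the multi-event table, reusing obs and fcst (timing is omitted because the toy fields are single-time). The neighborhood widths stand in for the "within X km" criteria and, like the thresholds, are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def hk_nearby(fcst, obs, thresh, n):
    """Hanssen and Kuipers score (HK = POD - POFD) for the multi-event
    question "was >= thresh forecast within the n x n neighborhood?"."""
    o = obs >= thresh
    f = maximum_filter(fcst, size=n) >= thresh  # event forecast anywhere nearby
    pod = np.sum(f & o) / np.sum(o)             # probability of detection
    pofd = np.sum(f & ~o) / np.sum(~o)          # probability of false detection
    return pod - pofd

# Vary the magnitude and distance thresholds of the multi-event table.
for thresh in (1.0, 2.0):
    for n in (1, 9, 17, 33):
        print(f"thresh = {thresh}, n = {n:2d}: HK = "
              f"{hk_nearby(fcst, obs, thresh, n):.3f}")
```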

Practically perfect hindcasts
Single observation – neighborhood forecast
Q: If the forecaster had all of the observations in advance, what would the "practically perfect" forecast look like?
– Apply a smoothing function to the observations to get probability contours, then choose the yes/no threshold that maximizes CSI when verified against the obs.
– Did the actual forecast look like the practically perfect forecast?
– How did the performance of the actual forecast compare to that of the practically perfect forecast?
Fuzzy methodology: compute CSI for the actual forecast and for the practically perfect hindcast.
[Figure: example with CSI(forecast) = 0.34 vs CSI(PracPerf) = 0.48]
Kay and Brooks (2000)
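A sketch of the practically perfect procedure, reusing obs and fcst; the Gaussian smoothing width and the set of candidate probability contours are assumptions, not values from Kay and Brooks (2000).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def csi(fyes, oyes):
    """Critical Success Index = hits / (hits + misses + false alarms)."""
    hits = np.sum(fyes & oyes)
    return hits / (hits + np.sum(~fyes & oyes) + np.sum(fyes & ~oyes))

# Smooth the observed events into a probability-like field, then pick the
# yes/no contour that maximizes CSI against the observations themselves.
oyes = obs >= 1.0
prob = gaussian_filter(oyes.astype(float), sigma=5.0)  # assumed smoothing width
best_csi, best_p = max(
    (csi(prob >= p, oyes), p) for p in np.linspace(0.05, 0.95, 19)
)

print(f"CSI(actual forecast)     = {csi(fcst >= 1.0, oyes):.2f}")
print(f"CSI(practically perfect) = {best_csi:.2f} at contour {best_p:.2f}")
```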

1st geometric case: 50 pts to the right
[Figure: fuzzy scores by neighborhood size and threshold (12.7 mm, 25.4 mm), shaded from bad to good]

2nd geometric case: 200 pts to the right
[Figure: fuzzy scores shaded from bad to good]

5th geometric case: 125 pts to the right and huge
[Figure: fuzzy scores shaded from bad to good]

1st case vs. 5th case
[Figure: score comparison; legend: ~same, Case 1 better, Case 5 better]

Perturbed cases
[Figure: "Observed" field on a 1000 km domain; case (6): shift 12 pts right, 20 pts down, intensity × 1.5; case (4): shift 24 pts right, 40 pts down]
Which forecast is better?

4th perturbed case: 24 pts right, 40 pts down
[Figure: fuzzy scores shaded from bad to good]

6th perturbed case: 12 pts right, 20 pts down, intensity × 1.5
[Figure: fuzzy scores shaded from bad to good]

Difference between cases 6 and 4
Case 4: shift 24 pts right, 40 pts down. Case 6: shift 12 pts right, 20 pts down, intensity × 1.5.
[Figure: Case 6 - Case 4 score differences]

How do fuzzy results for shift + amplification compare to results for shifting only?
Case 6: shift 12 pts right, 20 pts down, intensity × 1.5. Case 3: shift 12 pts right, 20 pts down, no intensity change.
[Figure: Case 6 - Case 3 score differences]
Why does the case with incorrect amplitude sometimes perform better? Baldwin and Kain (2005): when the forecast is offset from the observations, most scores can be improved by overestimating the rain area, provided rain is less common than "no rain". A small demonstration of this effect follows.
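A toy demonstration of the Baldwin and Kain effect, reusing fcst, obs, and the ets() helper from the upscaling sketch. The dilation width is arbitrary; for fields like these, where rain covers well under half the domain, the enlarged displaced forecast typically scores as well as or better than the displaced forecast alone.

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Grow each rain feature in the displaced forecast (i.e. overestimate the
# rain area) and compare ETS with the un-dilated displaced forecast.
fcst_big = maximum_filter(fcst, size=9)  # assumed dilation width

print(f"ETS, shifted only:       {ets(fcst, obs, 1.0):.3f}")
print(f"ETS, shifted + enlarged: {ets(fcst_big, obs, 1.0):.3f}")
```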

Some observations about methods
Traditional
– Measures direct correspondence of forecast and observed values at the grid scale
– Hard to score well unless the forecast is ~perfect
– Requires overlap of forecasts and obs
Entity-based (CRA)
– Measures location error and properties of blobs (size, mean/max intensity, etc.)
– Scores well if the forecast looks similar to the observations
– Does not require much overlap to score well
Fuzzy
– Measures scale- and intensity-dependent similarity of the forecast to the observations
– Forecast can score well at some scales and not at others
– Does not require overlap to score well

Some final thoughts…
Object-based and fuzzy verification seem to have different aims.
Object-based methods focus on describing the error:
– What is the error in this forecast?
– What is the cause of this error (wrong location, wrong size, wrong intensity, etc.)?
Fuzzy neighborhood methods focus on quantifying skill:
– What is the forecast skill at small scales? Large scales? Low/high intensities?
– What scales and intensities have reasonable skill?
Different fuzzy methods emphasize different aspects of skill.

Some final thoughts…
When can each type of method be used?
Object-based methods:
– When rain blobs are well defined (organized systems, longer rain accumulations)
– When it is important to measure how well the forecast predicts the properties of systems
– When the size of the domain >> the size of the rain systems
Fuzzy neighborhood methods:
– Whenever high-density observations are available over a reasonable domain
– When knowing scale- and intensity-dependent skill is important
– When comparing forecasts at different resolutions