Application of the CRA Method. William A. Gallus, Jr., Iowa State University; Beth Ebert, Center for Australian Weather and Climate Research, Bureau of Meteorology

Idealized cases (geometric): Observed; (1) 50 pts to the right; (2) 200 pts to the right; (3) 125 pts to the right and big; (4) 125 pts to the right and rotated; (5) 125 pts to the right and huge. Which forecast is best?

Traditional verification yields the same statistics for cases 1 and 2.
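To see why traditional gridpoint scores cannot separate these cases, consider a minimal numpy sketch (an illustrative stand-in for the idealized cases, not the actual verification code used for the slides): two forecasts that displace the same rain object by different distances, neither overlapping the observations, receive identical RMSE and CSI.

```python
import numpy as np

def rmse(f, o):
    """Root-mean-square error over the whole grid."""
    return np.sqrt(np.mean((f - o) ** 2))

def csi(f, o, thresh=1.0):
    """Critical success index (threat score) for exceedances of thresh."""
    fy, oy = f >= thresh, o >= thresh
    hits = np.sum(fy & oy)
    false_alarms = np.sum(fy & ~oy)
    misses = np.sum(~fy & oy)
    denom = hits + false_alarms + misses
    return hits / denom if denom else float("nan")

# Hypothetical 400 x 400 grid with a 40 x 40 block of 10 mm/h rain.
obs = np.zeros((400, 400))
obs[180:220, 50:90] = 10.0

# Two "forecasts": the same block shifted 50 and 200 points to the right.
fcst_1 = np.roll(obs, 50, axis=1)    # small displacement, no overlap with obs
fcst_2 = np.roll(obs, 200, axis=1)   # large displacement, no overlap with obs

for name, f in (("50-pt shift", fcst_1), ("200-pt shift", fcst_2)):
    print(name, "RMSE:", rmse(f, obs), "CSI:", csi(f, obs))
# Both forecasts score identically: gridpoint measures only count misses and
# false alarms and are blind to how far the rain was displaced.
```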

5th case – traditional verification: THE WINNER by traditional scores.

1st case – CRA verification

2nd case – CRA verification. The CRA technique yields similar results for cases 3 and 4.

5th case – CRA verification

Results are sensitive to the search box used for displacements.

Increasing the size of the search rectangle (an extra 90 pts instead of 30 pts) affects the results.

A further increase from 90 pts to 150 pts does not produce additional change.
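The search-box sensitivity follows from how CRA estimates displacement: the forecast entity is translated within a bounded search area, and the shift that best matches the observations (minimum squared error or maximum correlation) is taken as the displacement. Below is a minimal sketch of such a brute-force shift search, assuming integer grid shifts and a hypothetical max_shift half-width; it is illustrative only, not the operational CRA code.

```python
import numpy as np

def best_shift(fcst, obs, max_shift=30):
    """Brute-force search for the integer (dy, dx) translation of the
    forecast that minimizes the MSE against the observations.

    max_shift is the half-width of the search box; enlarging it (e.g. from
    30 to 90 points) can change the answer if a more distant shift produces
    a slightly lower error, which is the sensitivity shown on these slides."""
    # NOTE: np.roll wraps values around the domain edges; a real
    # implementation must treat points shifted past the boundary explicitly,
    # which is where the edge problems discussed later can arise.
    best, best_mse = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
            mse = np.mean((shifted - obs) ** 2)
            if mse < best_mse:
                best_mse, best = mse, (dy, dx)
    return best, best_mse
```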

1st case vs. 5th case (CRA verification statistics for the observed field, Case 1, and Case 5):
Area (# gridpoints), average rain rate (mm/h), maximum rain rate (mm/h), and rain volume (km³) for the observed field and both forecasts
Displacement east: 2.34° (Case 1), 6.65° (Case 5); displacement north: 0°
RMS error after shift and correlation coefficient after shift for both cases
Displacement error: 100% (Case 1), 12% (Case 5)
Volume error: 0% (Case 1), 47% (Case 5)
Pattern error: 0% (Case 1), 41% (Case 5)
THE WINNER: Case 1, whose error is entirely displacement
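The displacement, volume, and pattern percentages in this comparison come from the standard CRA decomposition of total MSE (Ebert and McBride 2000). Here is a minimal sketch, assuming the optimally shifted forecast has already been found (for example with a shift search like the one above); the exact handling in the authors' software may differ.

```python
import numpy as np

def cra_decomposition(fcst, obs, shifted_fcst):
    """CRA error decomposition following Ebert and McBride (2000):

        MSE_total        = mean((F  - X)^2)
        MSE_shifted      = mean((F* - X)^2)      # F* = optimally shifted forecast
        MSE_displacement = MSE_total - MSE_shifted
        MSE_volume       = (mean(F*) - mean(X))^2
        MSE_pattern      = MSE_shifted - MSE_volume

    Returns each component as a fraction of MSE_total, i.e. the displacement,
    volume and pattern error percentages listed above."""
    mse_total = np.mean((fcst - obs) ** 2)
    mse_shifted = np.mean((shifted_fcst - obs) ** 2)
    mse_displacement = mse_total - mse_shifted
    mse_volume = (shifted_fcst.mean() - obs.mean()) ** 2
    mse_pattern = mse_shifted - mse_volume
    return {
        "displacement": mse_displacement / mse_total,
        "volume": mse_volume / mse_total,
        "pattern": mse_pattern / mse_total,
    }
```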

Perturbed cases (1000 km domain): "Observed"; (1) shift 24 pts right, 40 pts down; (2) shift 12 pts right, 20 pts down, intensity x 1.5. Which forecast is better?

1st case – traditional verification: (1) shift 24 pts right, 40 pts down

2nd case – traditional verification: (2) shift 12 pts right, 20 pts down, intensity x 1.5. THE WINNER by traditional scores.

CRA verification with a threshold of 5 mm/h: Case 1 vs. Case 2 (three figure slides).
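The CRA entities compared here are defined by the rain-rate threshold (5 mm/h): contiguous regions where the forecast or the observed rain exceeds the threshold, so that a displaced forecast and its verifying observations fall in a single entity. A minimal sketch using scipy connected-component labelling follows; the 8-connectivity choice is an assumption, and the original CRA software may use different rules.

```python
import numpy as np
from scipy import ndimage

def find_cras(fcst, obs, thresh=5.0):
    """Label contiguous rain areas (CRAs): connected regions where the
    forecast OR the observed rain rate exceeds the threshold, so that a
    displaced forecast and its verifying observations end up in one entity."""
    mask = (fcst >= thresh) | (obs >= thresh)
    eight_conn = np.ones((3, 3), dtype=bool)   # assume diagonal neighbours connect
    labels, n_cras = ndimage.label(mask, structure=eight_conn)
    return labels, n_cras
```

Each labelled region can then be verified separately with the shift search and error decomposition sketched above.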

The system far from the boundary in Case 1 (small shift) behaves as expected. Problem?

A system closer to the boundary yields unexpected results – not all of the error is attributed to displacement.

The problem is more serious for a smaller system at the edge of the domain.

A system in the center of the domain works well through medium displacements.

But a larger displacement yields odd results.

Summary
- The CRA requirement that forecast and observed systems be contiguous may limit some applications
- Problems occur for systems near the domain boundaries; it is not yet clear what causes them

Results from a separate study using object-oriented techniques to verify ensembles. Both CRA and MODE have been applied to 6-h forecasts from two 15-km, 8-member WRF ensembles integrated for 60 h for 72 cases. This gives 10 x 16 x 72 = 11,520 evaluations (plots, tables, ...) from each approach. Results were compared with the Clark et al. (2008) study.

Clark et al. study. Clark et al. (2008) examined two 8-member WRF ensembles, one using mixed ICs/LBCs, the other mixed physics/dynamic cores. Spread and skill may initially have been better in the mixed-physics ensemble than in the IC/LBC one, but spread grew much faster in the IC/LBC ensemble, and it performed better than the mixed-physics ensemble at later times (after h) in these 120-h integrations.

Areas under ROC curves for both ensembles (Clark et al. 2007): skill is initially better in the mixed ensemble, but the IC/LBC ensemble becomes better at later hours (panels for different rain thresholds, including 2.5 mm).

Variance continues to grow in the IC/LBC ensemble but levels off after hour 30 in the mixed ensemble. MSE is always worse for the mean of the mixed ensemble, and its performance worsens with time relative to the IC/LBC ensemble. A diurnal cycle is evident in the figure.

The spread ratio also shows dramatically different behavior, with increasing spread in the IC/LBC ensemble but little or no growth in the mixed ensemble after the first 24 hours (panels for 0.5 mm and 2.5 mm thresholds).

Questions:
- Do the object parameters from the CRA and MODE techniques show the different behaviors of the Mix and IC/LBC ensembles?
- Do the object parameters from the CRA and MODE techniques show an influence from the diurnal trends in observed precipitation?

Rain rate standard deviation (in.; the mean is usually around 0.5 inch) as a function of forecast hour for Mix-CRA, IC/LBC-CRA, Mix-MODE, and IC/LBC-MODE, with wet times in blue. The diurnal signal is not pronounced; there is only a weak hint of the IC/LBC tendency toward increasing spread with time, and only in the MODE results.

Standard deviation of rain volume (km³) for Mix-CRA, IC/LBC-CRA, Mix-MODE, and IC/LBC-MODE; MODE values are multiplied by 10 (mean ~1). There is no diurnal signal, and it is hard to see different trends between the two ensembles.

Areal coverage standard deviation (number of points above 0.25 inch; mean ~800 pts) for Mix-CRA, IC/LBC-CRA, Mix-MODE, and IC/LBC-MODE. The CRA results show growing spread in both ensembles, with faster growth in IC/LBC.
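The spread curves in these figures are standard deviations, across the ensemble members at each forecast hour, of an object attribute (rain rate, rain volume, areal coverage) diagnosed by CRA or MODE. A minimal sketch of that bookkeeping, assuming a hypothetical array attr of shape (members, forecast hours) already extracted from the object output:

```python
import numpy as np

def object_attribute_spread(attr):
    """Ensemble spread of an object attribute as a function of forecast hour.

    attr : array of shape (n_members, n_hours), e.g. the areal coverage
           (points above 0.25 inch) of the matched system from each member
           at each 6-h forecast time; NaN where a member has no object.
    Returns the per-hour sample standard deviation across members, i.e. the
    kind of spread curve plotted in these figures (NaNs are ignored)."""
    return np.nanstd(attr, axis=0, ddof=1)
```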

Mix-MODE, IC/LBC-MODE, Mix-CRA, IC/LBC-CRA: no clear diurnal signal; both CRA and MODE show a maximum over the same range of forecast hours.

Mix-CRA, Mix-MODE, IC/LBC-CRA, IC/LBC-MODE: no diurnal signal, and no obvious differences in behavior between Mix and IC/LBC.

Other questions:
- Is the mean of the ensemble's distribution of object-based parameters a good forecast (better than the ensemble mean field put into CRA/MODE)?
- Does an increase in spread imply less predictability?
- How should a forecaster handle a case where only a subset of members shows an object?
These questions have been examined using CRA results.

Mix ensemble: in general, a slight positive bias in rain rate, with the Probability Matching (PM) forecast slightly less intense than the mean of the rates from the members (PM is usually better, but not by much). Only during one period does the observed rate not fall within the forecast range.
IC/LBC ensemble: usually too dry in rain rate (at all but a few hours); the Probability Matching forecast exhibits much more variable behavior, but again its performance is comparable to the mean of the rates of the members.
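The Probability Matching (PM) forecast referred to here is the probability-matched ensemble mean: the spatial pattern of the ensemble mean combined with the pooled intensity distribution of the individual members. A minimal sketch of the usual procedure (after Ebert 2001); the function name is hypothetical and the authors' implementation details may differ.

```python
import numpy as np

def probability_matched_mean(members):
    """Probability-matched ensemble mean of precipitation.

    members : array of shape (n_members, ny, nx) of rain rates.
    The ensemble-mean field supplies the spatial pattern (the ranking of the
    grid points); the pooled, sorted member values supply the magnitudes,
    thinned to one value per grid point. This keeps the mean's placement of
    rain while restoring the sharper intensity distribution of the members."""
    n_members, ny, nx = members.shape
    ens_mean = members.mean(axis=0)

    # Pool all member values, sort from largest to smallest, and keep every
    # n_members-th value so the result has exactly ny*nx values.
    pooled = np.sort(members.ravel())[::-1][::n_members]

    # Rank the ensemble-mean grid points from wettest to driest and assign
    # the pooled values in the same order.
    order = np.argsort(ens_mean.ravel())[::-1]
    pm = np.empty(ny * nx)
    pm[order] = pooled
    return pm.reshape(ny, nx)
```

The design choice is that averaging alone smears and weakens rain maxima, so PM keeps the averaged placement but borrows realistic intensities from the members.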

Notice that at all times the observed rain rate falls within the range of values from the full 16-member ensemble, indicating potential value for forecasting.

Mix ensemble: a clear diurnal signal, with usually too much rain volume except at the times of the observed peak, when it is too small. Probability Matching is equal in skill to the mean of the member volumes.
IC/LBC ensemble: also a clear diurnal signal, with less volume than the Mix ensemble; Probability Matching is usually a little wetter but generally comparable to the mean of the members.

Mix, IC/LBC, Mix-PM, IC/LBC-PM: NOTE that even with all 16 members there are still times when the observed volume does NOT fall within the range of predictions (indicated with a red bar); there is not enough spread.

Percentage of times the observed value fell within the min/max of the ensemble, shown for areal coverage, rain rate, and rain volume from both the Mix and IC/LBC ensembles.
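The statistic in this figure, the percentage of times the observed object attribute falls inside the ensemble min/max envelope, is straightforward to compute. A minimal sketch, with hypothetical array names for the per-time observed values and per-member forecasts:

```python
import numpy as np

def pct_within_envelope(obs_values, member_values):
    """Percentage of verification times at which the observed object
    attribute lies within the ensemble min/max envelope.

    obs_values    : shape (n_times,)
    member_values : shape (n_members, n_times)"""
    lo = member_values.min(axis=0)
    hi = member_values.max(axis=0)
    inside = (obs_values >= lo) & (obs_values <= hi)
    return 100.0 * inside.mean()
```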

Skill (MAE) as a function of spread (cases with spread > 1.5*SD vs. cases with spread < 0.5*SD) for rain rate (*10), volume, and area (/1000), with CRA applied to the Mix ensemble (IC/LBC is similar).
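The spread-skill stratification in this figure conditions MAE on how large the ensemble spread was relative to its typical value (cases above 1.5 times the mean spread versus cases below half of it). A minimal sketch under that interpretation of the slide's thresholds; array names are hypothetical.

```python
import numpy as np

def mae_by_spread(abs_errors, spreads):
    """Compare MAE for high-spread vs. low-spread cases.

    abs_errors : per-case absolute error of the forecast object attribute
    spreads    : per-case ensemble standard deviation of the same attribute
    A case counts as 'big SD' when its spread exceeds 1.5 x the mean spread
    and as 'low SD' when it is below 0.5 x the mean spread."""
    abs_errors = np.asarray(abs_errors, dtype=float)
    spreads = np.asarray(spreads, dtype=float)
    typical = spreads.mean()
    big = spreads > 1.5 * typical
    low = spreads < 0.5 * typical
    return {
        "MAE, big SD": abs_errors[big].mean() if big.any() else np.nan,
        "MAE, low SD": abs_errors[low].mean() if low.any() else np.nan,
    }
```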

It thus appears that total system rain volume and total system areal coverage of rainfall show a clear signal of better skill when spread is smaller. Rain rate does not show such a clear signal (especially when 4 bins of SDs are examined). Perhaps the average rain rate for systems is not as big a problem in the forecasts as the areal coverage (and thus the volume)? The errors appear to be roughly 5-10% for rate and 10-20% for area, with volume errors at least comparable.

Summary
- Ensemble spread for object-oriented parameters may not behave like traditional ensemble spread measures
- There are some similarities but also some differences between the output from CRA and MODE
- There is some suggestion that ensembles may give useful information on the probability of systems having a particular size, intensity, or volume

Acknowledgments
Thanks to Eric (and others?) for organizing the workshop. Thanks to John Halley-Gotway and Randy Bullock for help with the MODE runs for the ensemble work, and to Adam Clark for the precipitation forecast output. Partial support for this work was provided by NSF Grant ATM.

Gulf near-boundary system with increased rainfall