
1 Forecast Verification Research
Barbara Brown, NCAR. With thanks to Beth Ebert and Laurie Wilson. S2S Workshop, 5-7 Feb 2013, Met Office.

2 Verification working group members
Beth Ebert (BOM, Australia); Laurie Wilson (CMC, Canada); Barb Brown (NCAR, USA); Barbara Casati (Ouranos, Canada); Caio Coelho (CPTEC, Brazil); Anna Ghelli (ECMWF, UK); Martin Göber (DWD, Germany); Simon Mason (IRI, USA); Marion Mittermaier (Met Office, UK); Pertti Nurmi (FMI, Finland); Joel Stein (Météo-France); Yuejian Zhu (NCEP, USA)

3 Aims
Verification component of WWRP, in collaboration with WGNE, WCRP, and CBS (“Joint” between WWRP and WGNE):
- Develop and promote new verification methods
- Training on verification methodologies
- Ensure forecast verification is relevant to users
- Encourage sharing of observational data
- Promote the importance of verification as a vital part of experiments
- Promote collaboration among verification scientists, model developers, and forecast providers

4 Relationships / collaboration
Collaborating groups and projects include: WGCM, WGNE, TIGGE, SDS-WAS, HyMeX, Polar Prediction, SWFDP, YOTC, Subseasonal to Seasonal Prediction, CG-FV, WGSIP, SRNWP, COST-731.

5 FDPs and RDPs
- Sydney 2000 FDP
- Beijing 2008 FDP/RDP
- SNOW-V10 RDP
- FROST-14 FDP/RDP
- MAP D-PHASE
- Typhoon Landfall FDP
- Severe Weather FDP
- Other FDPs: Lake Victoria
We intend to establish collaboration with SERA on verification of tropical cyclone forecasts and other high-impact weather warnings.

6 SNOW-V10
Nowcast and regional model verification at observation sites.
- User-oriented verification: tuned to the decision thresholds of VANOC, over the whole Olympic period; a relatively high concentration of data was available for the Olympic period (a sketch of threshold-based scoring follows below).
- Model-oriented verification: model forecasts verified in parallel, January to August 2010.
- Status: significant effort to process and quality-control observations; multiple observations at some sites allow estimation of observation error.
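As a minimal sketch of the threshold-based, user-oriented scoring described above: build a 2x2 contingency table at a decision threshold and compute a few standard categorical scores. The array names and the exceedance convention are illustrative assumptions, not the SNOW-V10 configuration.

```python
import numpy as np

def categorical_scores(fcst, obs, threshold):
    """POD, false alarm ratio, and frequency bias at a decision threshold.

    Uses exceedance events (>= threshold); for quantities like visibility,
    where the event of interest is falling below a threshold, flip the
    comparison. Assumes at least one observed and one forecast event.
    """
    f_event, o_event = fcst >= threshold, obs >= threshold
    hits = np.sum(f_event & o_event)
    misses = np.sum(~f_event & o_event)
    false_alarms = np.sum(f_event & ~o_event)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return pod, far, bias

# e.g. pod, far, bias = categorical_scores(fcst_wind, obs_wind, threshold=10.0)
```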

7 [Figures: wind speed verification (model-oriented); visibility verification (user-oriented)]

8 FROST-14
- User-focused verification: threshold-based, as in SNOW-V10; timing of events (onset, duration, cessation); real-time verification; road weather forecasts?
- Model-focused verification: neighborhood verification of high-resolution NWP (see the sketch below); spatial verification of ensembles; accounting for observation uncertainty.
Anatoly Muravyev and Evgeny Atlaskin came to the Verification Methods Workshop in December and will be working on the FROST-14 verification.
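Since the slide points to neighborhood verification of high-resolution NWP, here is a minimal sketch of one widely used neighborhood method, the Fractions Skill Score (Roberts and Lean 2008). The field names and threshold are illustrative assumptions, not part of the FROST-14 setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions Skill Score of a 2-D forecast field against observations.

    forecast, observed : 2-D gridded fields (e.g., precipitation amounts)
    threshold          : event threshold, same units as the fields
    window             : neighborhood size in grid points
    """
    # Convert fields to binary event masks, then to neighborhood event fractions.
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Example: skill for a 10 mm event at a 5-gridpoint neighborhood scale.
# fss(model_precip, radar_precip, threshold=10.0, window=5)
```

Evaluating FSS over a range of window sizes indicates the smallest spatial scale at which a forecast has useful skill, which is the usual motivation for neighborhood methods.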

9 Promotion of best practice
Recommended methods for evaluating cloud and related parameters. The cloud document is just out! Originally requested by WGNE, it has been in the works for some time. It gives recommendations for standard verification of cloud amount and related variables, such as cloud base height and the vertical profile of cloud amount, using both point-based and spatial observations (satellite, cloud radar, etc.).

10 Promotion of best practice
Verification of tropical cyclone forecasts. Document outline:
- Introduction
- Observations and analyses
- Forecasts
- Current practice in TC verification: deterministic forecasts
- Current verification practice: probabilistic forecasts and ensembles
- Verification of monthly and seasonal tropical cyclone forecasts
- Experimental verification methods
- Comparing forecasts
- Presentation of verification results
JWGFVR is also preparing a document describing methods for verifying tropical cyclone forecasts, in support of GIFS-TIGGE and the WMO Typhoon Landfall FDP. It will include standard methods for assessing track and intensity forecasts, probabilistic and ensemble forecast verification, and a review of recent developments in this field. In addition to track and intensity, we also recommend methodologies for TC-related hazards: wind, heavy precipitation, and storm surge.

11 Verification of deterministic TC forecasts

12 Beyond track and intensity…
[Figures: track error distribution; TC genesis; wind speed; precipitation (MODE spatial method)]
Most tropical cyclone verification (at least operationally) focuses on only two variables: track location and intensity. Since a great deal of the damage associated with tropical storms is related to other factors, this seems overly limiting. Some additional important variables:
- Storm structure and size
- Precipitation
- Storm surge
- Landfall time, position, and intensity
- Consistency
- Uncertainty
- Information to help forecasters (e.g., steering flow)
- Others?
Verification should be tailored to help forecasters with their high-pressure job and multiple sources of guidance information. A sketch of a basic track-error calculation follows below.
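As a concrete illustration of the track-error statistics mentioned above, here is a minimal sketch of the great-circle (haversine) position error between forecast and best-track storm centers. The function and variable names are assumptions for illustration.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def track_error_km(lat_f, lon_f, lat_o, lon_o):
    """Haversine distance (km) between forecast and observed storm centers.

    Accepts scalars or arrays of positions at matching lead times.
    """
    lat_f, lon_f, lat_o, lon_o = map(np.radians, (lat_f, lon_f, lat_o, lon_o))
    dlat, dlon = lat_o - lat_f, lon_o - lon_f
    a = np.sin(dlat / 2) ** 2 + np.cos(lat_f) * np.cos(lat_o) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Errors for a set of, say, 48-h forecasts; summarizing the full distribution
# (not just the mean) shows the spread emphasized on the slide.
# errors = track_error_km(fcst_lat, fcst_lon, obs_lat, obs_lon)
```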

13 Promotion of best practice
Verification of forecasts from mesoscale models (early draft):
- Purposes of verification
- Choices to be made: surface and/or upper-air verification? Point-wise and/or spatial verification?
- Proposal for a 2nd Spatial Verification Intercomparison Project, in collaboration with Short-Range NWP (SRNWP)

14 Spatial Verification Method Intercomparison Project
International comparison of many new spatial verification methods.
- Phase 1 (precipitation) completed: methods applied by researchers to the same datasets (precipitation; perturbed cases; idealized cases); subjective forecast evaluations; Weather and Forecasting special collection.
- Phase 2 in planning stage: complex terrain; MAP D-PHASE / COPS dataset; wind and precipitation; timing errors.

15 Outreach and training
- Verification workshops and tutorials: on-site and travelling; SWFDP (e.g., East Africa)
- EUMETCAL training modules
- Verification web page
- Sharing of tools

16 5th International Verification Methods Workshop, Melbourne 2011
Tutorial: 32 students from 23 countries; lectures and exercises (students took the tools home); group projects presented at the workshop.
Workshop: ~120 participants. Topics:
- Ensembles and probabilistic forecasts
- Seasonal and climate
- Aviation verification
- User-oriented verification
- Diagnostic methods and tools
- Tropical cyclones and high impact weather
- Weather warning verification
- Uncertainty
Special issue of Meteorol. Applications in early 2013. THANKS FOR WWRP'S SUPPORT!!
We had some trouble with participants getting their visas on time; some countries missed out (Ethiopia; China came late). We could use advice/help from WMO on this.

17 Seamless verification
Seamless forecasts: consistent across space/time scales; a single modelling system or blended; likely to be probabilistic / ensemble.
[Figure: schematic of spatial scale (local point, regional, global) versus forecast aggregation time (minutes, hours, days, weeks, months, years, decades), spanning nowcasts, very short range, NWP, sub-seasonal, seasonal, decadal prediction, and climate change.]
Which scales / phenomena are predictable? There are different user requirements at different scales (timing, location, …).

18 "Seamless verification" – consistent across space/time scales
Modelling perspective – is my model doing the right thing? Process approaches LES-style verification of NWP runs (first few hours) T-AMIP style verification of coupled / climate runs (first few days) Single column model Statistical approaches Spatial and temporal spectra Spread-skill Marginal distributions (histograms, etc.) Seamless verification It was not clear to the group how to define seamless verification, and the WG had a lively discussion on this topic. One possible interpretation is consistent verification across a range of scales by for example applying the same verification scores to all forecasts being verified to allow comparison. This would entail greater time and space aggregation as longer forecast ranges are verified. Averaging could be applied to the EPS medium range and monthly time range, as these two forecast ranges have an overlapping period. Similarly the concept of seamless verification could be applied to the EPS medium range forecast and seasonal forecast. For example, verification scores could be calculated using tercile exceedance and the ERA Interim could be used as the reference system. Verification across scales could involve conversion of forecast types, for example, from precipitation amounts (weather scales) to terciles (climate scales). A probabilistic framework would likely be the best approach to connect weather and climate scales. Perkins et al., J.Clim. 2007
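A minimal sketch of the tercile-exceedance idea from the notes above: reduce both the ensemble forecasts and the verifying values to an upper-tercile event defined against a reference climatology (e.g., ERA-Interim), then apply an ordinary probabilistic score. The names and the choice of the Brier score are illustrative assumptions.

```python
import numpy as np

def upper_tercile_brier(ens_fcst, obs, climatology):
    """Brier score for the event "verifying value falls in the upper tercile".

    ens_fcst    : (n_times, n_members) ensemble forecast values
    obs         : (n_times,) verifying observations or analyses
    climatology : (n_clim,) reference sample defining the tercile boundary
    """
    upper = np.percentile(climatology, 100 * 2 / 3)  # upper tercile boundary
    p_fcst = np.mean(ens_fcst > upper, axis=1)       # forecast event probabilities
    o_event = (obs > upper).astype(float)            # binary outcomes
    return np.mean((p_fcst - o_event) ** 2)

# The same call applies unchanged to medium-range, monthly, or seasonal
# forecasts, which is the sense of "seamless" discussed above.
```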

19 "Seamless verification" – consistent across space/time scales
User perspective – can I use this forecast to help me make a better decision? Neighborhood approaches - spatial and temporal scales with useful skill Generalized discrimination score (Mason & Weigel, MWR 2009) consistent treatment of binary, multi-category, continuous, probabilistic forecasts Calibration - accounting for space-time dependence of bias and accuracy? Conditional verification based on larger scale regime Extreme Forecast Index (EFI) approach for extremes JWGFVR activity Proposal for research in verifying forecasts in weather-climate interface Assessment component of UK INTEGRATE project Models may be seamless – but user needs are not! Nowcasting users can have very different needs for products than short-range forecasting users (more localized in space and time; wider range of products which are not standard in SR NWP and may be difficult to produce with an NWP model; some products routinely measured, others not; …) Temporal/spatial resolution go together. On small spatial /temporal scales modelling/verification should be inherently probabilistic. The predictability of phenomena generally decreases (greatly) from short to very short time/spatial scales. How to assess/show such limits to predictability in verification? Need to distinguish “normal” and “extreme” weather? Nowcasting more than SR forecasting is interested not just in intensities of phenomena, but also in exact timing/duration and location. Insight in errors of timing/location is needed. Different demands on observations, possibly not to be met with the same data sources? From Marion: We have two work packages kicking off this FY (i.e. now or soon). I am co-chair of the assessment group for INTEGRATE which is our 3-year programme for improving our global modelling capability. The INTEGRATE project follows on from the CAPTIVATE project. INTEGRATE project pages are hosted on the collaboration server. A password is needed (as UM partners you have access to these pages). The broad aim of INTEGRATE is to pull through model developments from components of the physical earth system (Atmosphere, Oceans, Land, Sea-Ice and Land-Ice, and Aerosols) and integrate them into a fully coupled global prediction system, for use across weather and climate timescales. The project attempts to begin the process of integrating coupled atmosphere-ocean (COA) forecast data into a conventional weather forecast verification framework, and consider the forecast skill of surface weather parameters in the existing operational seasonal COA system, GloSea4 and 5, over the first 2 weeks of the forecast. Within that I am focusing more on applying weather-type verification tools on global, longer time scales, monthly to seasonal. A part of this is a comparison of atmosphere-only (AO) and coupled ocean-atmosphere (COA) forecasts for the first 15 days (initially). Both are approaching the idea of seamless forecasting, i.e. can we used COA models to do NWP-type forecasts for the first 15 days, and seamless verification, i.e. finding some common ground in the way we can compare longer simulations and short-range NWP.
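Returning to the generalized discrimination score listed above: for the simplest case of continuous deterministic forecasts, it reduces to the fraction of observation pairs that the forecasts rank in the correct order (Mason & Weigel, MWR 2009, give variants for ensemble and probabilistic forecasts). A minimal sketch for that case, with illustrative names:

```python
import numpy as np

def discrimination_score(fcst, obs):
    """Fraction of observation pairs ranked correctly by the forecasts.

    0.5 indicates no discrimination, 1.0 perfect discrimination;
    tied forecasts count half, tied observations are skipped.
    """
    correct = total = 0.0
    n = len(obs)
    for i in range(n):
        for j in range(i + 1, n):
            if obs[i] == obs[j]:
                continue  # ranking undefined for tied observations
            total += 1
            prod = (fcst[i] - fcst[j]) * (obs[i] - obs[j])
            if prod > 0:
                correct += 1      # forecasts ordered like the observations
            elif prod == 0:
                correct += 0.5    # tied forecasts: no information either way
    return correct / total if total else np.nan
```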

20 Questions
- What should be the role of JWGFVR in S2S? Defining protocols? Metrics? Guidance on methods? Participation in activities? Linking forecasting and applications?
- What should be the interaction with other WMO verification activities, e.g., the Standardized Verification System for Long-range Forecasts (SVS-LRF) and the WGNE/WGCM Climate Metrics Panel?
- How do metrics need to change for S2S? How do we cope with small sample sizes (one approach is sketched below)?
- Is a common set of metrics required for S2S?
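On coping with small sample sizes: one standard approach is to attach bootstrap confidence intervals to whatever score is chosen, using block resampling to respect the serial correlation typical of S2S data. A minimal sketch under those assumptions; the function and parameter names are illustrative.

```python
import numpy as np

def bootstrap_ci(fcst, obs, score_fn, n_boot=1000, block=5, alpha=0.05, seed=0):
    """Percentile confidence interval for score_fn(fcst, obs).

    Uses a moving-block bootstrap (block length `block` time steps) to
    preserve serial correlation. Assumes len(obs) >= block.
    """
    rng = np.random.default_rng(seed)
    n = len(obs)
    n_blocks = int(np.ceil(n / block))
    scores = np.empty(n_boot)
    for b in range(n_boot):
        # Resample contiguous blocks of indices, then trim to length n.
        starts = rng.integers(0, n - block + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        scores[b] = score_fn(fcst[idx], obs[idx])
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])

# e.g. a CI on the anomaly correlation of weekly-mean forecasts:
# lo, hi = bootstrap_ci(fcst_anom, obs_anom,
#                       lambda f, o: np.corrcoef(f, o)[0, 1])
```

Wide intervals from such a resampling exercise make the small-sample problem visible rather than hiding it behind a single score value.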

21 Database comments
The database should be designed to allow easy access for applications and verification.
- Observations will be needed for evaluations and applications. Will these (or links to these) be included in the database? Lack of obs can be a big challenge and a detriment to use of the database.
- Access to data: for applications and verification, users often will not want a whole field or set of fields; they may also want to examine time series of forecasts at points. Data formats and access can limit uses.

22 Opportunities!
New challenges:
- Methods for evaluating extremes
- Sorting out some of the thorny problems (small sample sizes, limited observations, etc.)
- Defining meaningful metrics associated with research questions
- Making a useful connection between forecast performance and forecast usefulness/value
- Application areas (e.g., precipitation onset in Africa)
A new research area:
- Using spatial methods for evaluation of S2S forecast patterns

23 Thank you

