Presentation transcript: "Forecast Verification Research"
Slide 1: Forecast Verification Research
Barbara Brown, NCAR
With thanks to Beth Ebert and Laurie Wilson
S2S Workshop, 5-7 Feb 2013, Met Office
Slide 2: Verification working group members
- Beth Ebert (BOM, Australia)
- Laurie Wilson (CMC, Canada)
- Barb Brown (NCAR, USA)
- Barbara Casati (Ouranos, Canada)
- Caio Coelho (CPTEC, Brazil)
- Anna Ghelli (ECMWF, UK)
- Martin Göber (DWD, Germany)
- Simon Mason (IRI, USA)
- Marion Mittermaier (Met Office, UK)
- Pertti Nurmi (FMI, Finland)
- Joel Stein (Météo-France)
- Yuejian Zhu (NCEP, USA)
Slide 3: Aims
Verification component of WWRP, in collaboration with WGNE, WCRP and CBS ("joint" between WWRP and WGNE):
- Develop and promote new verification methods
- Training on verification methodologies
- Ensure forecast verification is relevant to users
- Encourage sharing of observational data
- Promote the importance of verification as a vital part of experiments
- Promote collaboration among verification scientists, model developers and forecast providers
Slide 4: Relationships / collaboration
WGCM, WGNE, TIGGE, SDS-WAS, HyMeX, Polar Prediction, SWFDP, YOTC, Subseasonal to Seasonal Prediction, CG-FV, WGSIP, SRNWP, COST-731
Slide 5: FDPs and RDPs
- Sydney 2000 FDP
- Beijing 2008 FDP/RDP
- SNOW-V10 RDP
- FROST-14 FDP/RDP
- MAP D-PHASE
- Typhoon Landfall FDP
- Severe Weather FDP
- Other FDPs: Lake Victoria
We intend to establish collaboration with SERA on verification of tropical cyclone forecasts and other high-impact weather warnings.
Slide 6: SNOW-V10
Nowcast and regional model verification at observation sites.
- User-oriented verification: tuned to the decision thresholds of VANOC, over the whole Olympic period
- Model-oriented verification: model forecasts verified in parallel, January to August 2010
A relatively high concentration of data was available for the Olympic period.
Status: significant effort to process and quality-control observations; multiple observations at some sites allow observation error to be characterized.
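The threshold-based, user-oriented verification described above can be sketched with standard contingency-table scores. This is only an illustration, not the SNOW-V10 code: the 2 cm snowfall threshold and the data below are invented, and POD, FAR and CSI are the usual measures computed from hits, misses and false alarms.

```python
# Sketch of threshold-based (user-oriented) verification.
# Threshold and data are illustrative, not actual VANOC decision thresholds.

def contingency_counts(forecasts, observations, threshold):
    """Count hits, misses, false alarms and correct negatives for one event threshold."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecasts, observations):
        fc_event, ob_event = f >= threshold, o >= threshold
        if fc_event and ob_event:
            hits += 1
        elif not fc_event and ob_event:
            misses += 1
        elif fc_event and not ob_event:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def scores(hits, misses, false_alarms, correct_negatives):
    """Probability of detection, false alarm ratio, critical success index."""
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi

# Example: 6-h snowfall forecasts vs. observations (cm); event = snowfall >= 2 cm
fc = [0.0, 3.1, 2.5, 0.4, 5.0, 1.9]
ob = [0.0, 2.8, 0.5, 2.2, 4.1, 0.0]
pod, far, csi = scores(*contingency_counts(fc, ob, threshold=2.0))
```

A user whose decision hinges on the 2 cm threshold cares about these event-based scores rather than, say, mean error of the snowfall amount.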
Slide 8: FROST-14
User-focused verification:
- Threshold-based, as in SNOW-V10
- Timing of events: onset, duration, cessation
- Real-time verification
- Road weather forecasts?
Model-focused verification:
- Neighborhood verification of high-resolution NWP
- Spatial verification of ensembles
- Account for observation uncertainty
Anatoly Muravyev and Evgeny Atlaskin came to the Verification Methods Workshop in December and will be working on the FROST-14 verification.
Slide 9: Promotion of best practice
Recommended methods for evaluating cloud and related parameters.
The cloud document is just out! Originally requested by WGNE, it has been in the works for some time. It gives recommendations for standard verification of cloud amount and related variables, such as cloud-base height and the vertical profile of cloud amount, using both point-based and spatial observations (satellite, cloud radar, etc.).
Slide 10: Promotion of best practice
Verification of tropical cyclone forecasts (document outline):
- Introduction
- Observations and analyses
- Forecasts
- Current practice in TC verification: deterministic forecasts
- Current verification practice: probabilistic forecasts and ensembles
- Verification of monthly and seasonal tropical cyclone forecasts
- Experimental verification methods
- Comparing forecasts
- Presentation of verification results
The JWGFVR is also preparing a document describing methods for verifying tropical cyclone forecasts, in support of GIFS-TIGGE and the WMO Typhoon Landfall FDP. It will include standard methods for assessing track and intensity forecasts, probabilistic and ensemble forecast verification, and a review of recent developments in this field. In addition to track and intensity, we also recommend methodologies for TC-related hazards: wind, heavy precipitation and storm surge.
Slide 12: Beyond track and intensity…
Track error distribution; TC genesis; wind speed.
Most tropical cyclone verification (at least operationally) focuses on only two variables: track location and intensity. Since a great deal of the damage associated with tropical storms is related to other factors, this seems overly limiting. Some additional important variables:
- Storm structure and size
- Precipitation (MODE spatial method)
- Storm surge
- Landfall time, position and intensity
- Consistency
- Uncertainty
- Information to help forecasters (e.g., steering flow)
- Other?
Verification should be tailored to help forecasters with their high-pressure job and multiple sources of guidance information.
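As a sketch of where a track error distribution starts: each sample is the great-circle distance between a forecast storm centre and the best-track position at the same lead time. The positions below are invented for illustration; the haversine formula itself is standard.

```python
# Sketch: great-circle track error (km) between forecast and best-track
# TC positions, the building block of a track-error distribution.
import math

def track_error_km(lat_f, lon_f, lat_o, lon_o, radius_km=6371.0):
    """Haversine distance between forecast and observed storm centres (degrees in, km out)."""
    phi1, phi2 = math.radians(lat_f), math.radians(lat_o)
    dphi = math.radians(lat_o - lat_f)
    dlam = math.radians(lon_o - lon_f)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Illustrative 48-h forecast position vs. best-track position (not real data)
err = track_error_km(21.5, 128.0, 22.0, 127.2)
```

Collecting such errors over many cases and lead times gives the distribution (not just the mean) that the slide argues for.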
Slide 13: Promotion of best practice
Verification of forecasts from mesoscale models (early DRAFT):
- Purposes of verification
- Choices to be made: surface and/or upper-air verification? Point-wise and/or spatial verification?
Proposal for a 2nd Spatial Verification Intercomparison Project, in collaboration with Short-Range NWP (SRNWP).
Slide 14: Spatial Verification Method Intercomparison Project
International comparison of many new spatial verification methods.
Phase 1 (precipitation) completed:
- Methods applied by researchers to the same datasets (precipitation; perturbed cases; idealized cases)
- Subjective forecast evaluations
- Weather and Forecasting special collection
Phase 2 in planning stage:
- Complex terrain
- MAP D-PHASE / COPS dataset
- Wind and precipitation, timing errors
Slide 15: Outreach and training
- Verification workshops and tutorials: on-site and travelling; SWFDP (e.g., east Africa)
- EUMETCAL training modules
- Verification web page
- Sharing of tools
Slide 16: 5th International Verification Methods Workshop, Melbourne 2011
Tutorial:
- 32 students from 23 countries
- Lectures and exercises (students took tools home)
- Group projects, presented at the workshop
Workshop:
- ~120 participants
- Topics: ensembles and probabilistic forecasts; seasonal and climate; aviation verification; user-oriented verification; diagnostic methods and tools; tropical cyclones and high-impact weather; weather warning verification; uncertainty
Special issue of Meteorological Applications in early 2013.
THANKS FOR WWRP'S SUPPORT!!
We had some trouble with participants getting their visas on time; some countries missed out (Ethiopia; China arrived late). We could use advice/help from WMO on this.
Slide 17: Seamless verification
Seamless forecasts are consistent across space/time scales:
- A single modelling system, or blended
- Likely to be probabilistic / ensemble
[Figure: spatial scale (point, local, regional, global) versus forecast aggregation time (minutes, hours, days, weeks, months, years, decades), spanning nowcasts, very-short-range NWP, sub-seasonal, seasonal and decadal prediction, and climate change]
Which scales / phenomena are predictable?
Different user requirements at different scales (timing, location, …)
Slide 18: "Seamless verification" – consistent across space/time scales
Modelling perspective: is my model doing the right thing?
Process approaches:
- LES-style verification of NWP runs (first few hours)
- T-AMIP-style verification of coupled / climate runs (first few days)
- Single-column model
Statistical approaches:
- Spatial and temporal spectra
- Spread-skill
- Marginal distributions (histograms, etc.)
It was not clear to the group how to define seamless verification, and the WG had a lively discussion on this topic. One possible interpretation is consistent verification across a range of scales, for example applying the same verification scores to all forecasts being verified so that they can be compared; this would entail greater time and space aggregation as longer forecast ranges are verified. Averaging could be applied to the EPS medium range and the monthly time range, since these two forecast ranges overlap; similarly, the concept could link the EPS medium-range forecast and the seasonal forecast. For example, verification scores could be calculated using tercile exceedance, with ERA-Interim as the reference system. Verification across scales could also involve converting forecast types, for example from precipitation amounts (weather scales) to terciles (climate scales). A probabilistic framework would likely be the best approach to connect weather and climate scales.
Perkins et al., J. Clim. 2007
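The tercile-exceedance idea in the notes above can be sketched in a few lines: convert an ensemble precipitation forecast into a probability of exceeding the climatological upper tercile, then verify those probabilities with the Brier score. Everything below is an invented minimal illustration; in a real application the tercile would come from a reference climatology such as ERA-Interim, and the sample would be far larger.

```python
# Sketch: tercile-exceedance probabilities from an ensemble, verified
# with the Brier score. All data are illustrative.

def upper_tercile(climatology):
    """Upper-tercile threshold via a simple sorted-rank pick (no interpolation)."""
    s = sorted(climatology)
    return s[int(len(s) * 2 / 3)]

def exceedance_prob(ensemble, threshold):
    """Fraction of ensemble members above the threshold."""
    return sum(m > threshold for m in ensemble) / len(ensemble)

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and binary outcome."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

clim = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]  # past weekly rainfall (mm), invented
thr = upper_tercile(clim)
ensembles = [[10, 12, 7, 11], [2, 3, 5, 1]]     # two forecast cases, 4 members each
obs = [11.0, 4.0]                               # verifying totals
probs = [exceedance_prob(e, thr) for e in ensembles]
outcomes = [float(o > thr) for o in obs]
bs = brier_score(probs, outcomes)
```

Because the same tercile-based score applies equally to a medium-range ensemble and a seasonal forecast, it is one concrete way to make verification "seamless" across ranges.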
Slide 19: "Seamless verification" – consistent across space/time scales
User perspective: can I use this forecast to help me make a better decision?
- Neighborhood approaches: find the spatial and temporal scales with useful skill
- Generalized discrimination score (Mason & Weigel, MWR 2009): consistent treatment of binary, multi-category, continuous and probabilistic forecasts
- Calibration: accounting for space-time dependence of bias and accuracy?
- Conditional verification based on the larger-scale regime
- Extreme Forecast Index (EFI) approach for extremes
JWGFVR activity:
- Proposal for research on verifying forecasts at the weather-climate interface
- Assessment component of the UK INTEGRATE project
Models may be seamless, but user needs are not!
Nowcasting users can have very different needs from short-range forecasting users: products more localized in space and time; a wider range of products that are not standard in short-range NWP and may be difficult to produce with an NWP model; some products routinely measured, others not. Temporal and spatial resolution go together: on small spatial/temporal scales, modelling and verification should be inherently probabilistic. The predictability of phenomena generally decreases (greatly) from short to very short time/space scales; how do we assess and show such limits to predictability in verification? Do we need to distinguish "normal" and "extreme" weather? Nowcasting, more than short-range forecasting, is interested not just in the intensity of phenomena but also in their exact timing, duration and location, so insight into timing and location errors is needed. There are also different demands on observations, possibly not to be met with the same data sources.
From Marion: We have two work packages kicking off this financial year (i.e., now or soon). I am co-chair of the assessment group for INTEGRATE, our 3-year programme for improving our global modelling capability. The INTEGRATE project follows on from the CAPTIVATE project. INTEGRATE project pages are hosted on the collaboration server.
A password is needed (as UM partners you have access to these pages). The broad aim of INTEGRATE is to pull through model developments from components of the physical Earth system (atmosphere, ocean, land, sea ice, land ice and aerosols) and integrate them into a fully coupled global prediction system for use across weather and climate timescales. The project begins the process of integrating coupled ocean-atmosphere (COA) forecast data into a conventional weather forecast verification framework, and considers the forecast skill of surface weather parameters in the existing operational seasonal COA systems, GloSea4 and 5, over the first two weeks of the forecast. Within that, I am focusing on applying weather-type verification tools on global, longer time scales, monthly to seasonal. Part of this is a comparison of atmosphere-only (AO) and coupled ocean-atmosphere (COA) forecasts for the first 15 days (initially). Both approach the idea of seamless forecasting (can we use COA models to do NWP-type forecasts for the first 15 days?) and of seamless verification (finding common ground in how we compare longer simulations and short-range NWP).
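The neighborhood approach listed above can be illustrated with the fractions skill score (FSS; Roberts and Lean, 2008) on a toy binary grid. This is only a sketch: real applications use gridded precipitation-exceedance fields and sweep over a range of neighborhood widths to find the smallest scale with useful skill.

```python
# Sketch: fractions skill score (FSS) on a small binary event grid.
# The 3x3 fields below are invented; the forecast rain band is displaced
# one column east of the observed one.

def neighborhood_fractions(field, n):
    """Fraction of event points in a (2n+1)x(2n+1) window around each grid point
    (windows are truncated at the domain edges)."""
    rows, cols = len(field), len(field[0])
    frac = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            pts, ev = 0, 0
            for di in range(-n, n + 1):
                for dj in range(-n, n + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        pts += 1
                        ev += field[ii][jj]
            frac[i][j] = ev / pts
    return frac

def fss(fcst, obs, n):
    """FSS = 1 - MSE(fractions) / reference MSE."""
    pf = neighborhood_fractions(fcst, n)
    po = neighborhood_fractions(obs, n)
    num = den = 0.0
    for rf, ro in zip(pf, po):
        for f, o in zip(rf, ro):
            num += (f - o) ** 2
            den += f ** 2 + o ** 2
    return 1.0 - num / den if den else float("nan")

fcst = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # forecast rain band, one column east
obs  = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]  # observed rain band
```

At the grid scale (n = 0) the displaced forecast scores zero because the fields never overlap; widening the neighborhood to n = 1 rewards the near miss with a substantially positive FSS, which is exactly the "scale with useful skill" question.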
Slide 20: Questions
- What should be the role of JWGFVR in S2S? Defining protocols? Metrics? Guidance on methods? Participation in activities? Linking forecasting and applications?
- What should be the interaction with other WMO verification activities, e.g., the Standardized Verification System for Long-range Forecasts (SVS-LRF) and the WGNE/WGCM Climate Metrics Panel?
- How do metrics need to change for S2S?
- How do we cope with small sample sizes?
- Is a common set of metrics required for S2S?
Slide 21: Database comments
- The database should be designed to allow easy access for applications and verification.
- Observations will be needed for evaluations and applications. Will these (or links to these) be included in the database? Lack of observations can be a big challenge and a detriment to use of the database.
- Access to data: applications and verification often will not want a whole field or set of fields; users may also want to examine time series of forecasts at points. Data formats and access can limit uses.
Slide 22: Opportunities!
New challenges:
- Methods for evaluating extremes
- Sorting out some of the thorny problems (small sample sizes, limited observations, etc.)
- Defining meaningful metrics associated with research questions
- Making a useful connection between forecast performance and forecast usefulness/value
- Application areas (e.g., precipitation onset in Africa)
A new research area:
- Using spatial methods for evaluation of S2S forecast patterns
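One standard response to the small-sample problem flagged above is resampling. The sketch below computes a percentile-bootstrap confidence interval for a mean forecast error; the error values are invented, and since this version resamples cases independently, serially correlated S2S forecasts would need a block bootstrap instead.

```python
# Sketch: percentile-bootstrap confidence interval for a mean error,
# the kind of uncertainty estimate S2S verification needs when only a
# few dozen cases exist. Error values are illustrative.
import random

def bootstrap_ci(values, n_boot=10000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean of `values` (independent resampling)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

errors = [0.8, -0.2, 1.1, 0.5, -0.4, 0.9, 0.3, 0.7]  # invented forecast errors
lo, hi = bootstrap_ci(errors)
```

Reporting the interval rather than the bare mean makes clear when an apparent skill difference between S2S systems is within sampling noise.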