Presentation on theme: "Forecast Verification Research" — Presentation transcript:
Slide 1: Forecast Verification Research
Beth Ebert, Bureau of Meteorology
Laurie Wilson, Meteorological Service of Canada
WWRP-JSC, Geneva, April 2012
Slide 2: Verification working group members
- Beth Ebert (BOM, Australia)
- Laurie Wilson (CMC, Canada)
- Barb Brown (NCAR, USA)
- Barbara Casati (Ouranos, Canada)
- Caio Coelho (CPTEC, Brazil)
- Anna Ghelli (ECMWF, UK)
- Martin Göber (DWD, Germany)
- Simon Mason (IRI, USA)
- Marion Mittermaier (Met Office, UK)
- Pertti Nurmi (FMI, Finland)
- Joel Stein (Météo-France)
- Yuejian Zhu (NCEP, USA)
Slide 3: Aims
- Verification component of WWRP, in collaboration with WGNE, WCRP, CBS
- Develop and promote new verification methods
- Training on verification methodologies
- Ensure forecast verification is relevant to users
- Encourage sharing of observational data
- Promote the importance of verification as a vital part of experiments
- Promote collaboration among verification scientists, model developers and forecast providers
Slide 4: Relationships / collaboration
WGCM, WGNE, TIGGE, SDS-WAS, HyMeX, Polar Prediction, SWFDP, YOTC, Subseasonal to Seasonal Prediction, CG-FV, WGSIP, SRNWP, COST-731
Slide 5: FDPs and RDPs
- Sydney 2000 FDP
- Beijing 2008 FDP/RDP
- SNOW-V10 RDP
- FROST-14 FDP/RDP
- MAP D-PHASE
- Other FDPs: Lake Victoria
- Typhoon Landfall FDP
- Severe Weather FDP
We intend to establish collaboration with SERA on verification of tropical cyclone forecasts and other high-impact weather warnings.
Slide 6: SNOW-V10
Nowcast and regional model verification at observation sites.
- User-oriented verification: tuned to the decision thresholds of VANOC, covering the whole Olympic period
- Model-oriented verification: model forecasts verified in parallel, January to August 2010
- A relatively high concentration of data was available for the Olympic period
Status: significant effort has gone into processing and quality-controlling observations; multiple observations at some sites provide an estimate of observation error.
Slide 8: FROST-14
User-focused verification:
- Threshold-based, as in SNOW-V10 (see the sketch after this slide)
- Timing of events: onset, duration, cessation
- Real-time verification
- Road weather forecasts?
Model-focused verification:
- Neighborhood verification of high-resolution NWP
- Spatial verification of ensembles
- Account for observation uncertainty
Anatoly Muravyev and Evgeny Atlaskin came to the Verification Methods Workshop in December and will be working on the FROST-14 verification.
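A minimal sketch of the threshold-based (contingency table) verification idea used in SNOW-V10 and planned for FROST-14. The scores shown (POD, FAR, frequency bias) are standard dichotomous measures; the variable, threshold, and sample values are illustrative assumptions, not the projects' actual configuration.

```python
# Threshold-based verification sketch: build a 2x2 contingency table for
# a user decision threshold and compute standard categorical scores.
import numpy as np

def contingency_scores(fcst, obs, threshold):
    """Dichotomous scores for a single decision threshold."""
    f = np.asarray(fcst) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    return {"POD": pod, "FAR": far, "bias": bias}

# Illustrative example: 6 h snowfall (cm) against a 5 cm decision threshold
fcst = [2.0, 7.5, 0.0, 6.1, 4.9]
obs  = [1.5, 6.0, 0.2, 3.0, 5.5]
print(contingency_scores(fcst, obs, threshold=5.0))
```

In user-focused mode the threshold would come from the user's decision point (e.g., a VANOC operating criterion) rather than from a standard climatological value.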
Slide 9: Promotion of best practice
Recommended methods for evaluating cloud and related parameters. Contents:
- Introduction
- Data sources
- Designing a verification or evaluation study
- Verification methods
- Reporting guidelines
- Summary of recommendations
The cloud document is just out! Originally requested by WGNE, it has been in the works for some time. It contains recommendations for standard verification of cloud amount and related variables such as cloud base height and the vertical profile of cloud amount, using both point-based and spatial observations (satellite, cloud radar, etc.).
Slide 10: Promotion of best practice
Verification of tropical cyclone forecasts. Contents:
- Introduction
- Observations and analyses
- Forecasts
- Current practice in TC verification: deterministic forecasts
- Current verification practice: probabilistic forecasts and ensembles
- Verification of monthly and seasonal tropical cyclone forecasts
- Experimental verification methods
- Comparing forecasts
- Presentation of verification results
The JWGFVR is also preparing a document describing methods for verifying tropical cyclone forecasts, in support of GIFS-TIGGE and the WMO Typhoon Landfall FDP. It will include standard methods for assessing track and intensity forecasts, probabilistic and ensemble forecast verification, and a review of recent developments in the field. In addition to track and intensity, we also recommend methodologies for TC-related hazards: wind, heavy precipitation, and storm surge.
Slide 12: Beyond track and intensity…
Track error distribution; TC genesis; wind speed; precipitation (MODE spatial method).
Most tropical cyclone verification (at least operationally) focuses on only two variables: track location and intensity. Since a great deal of the damage associated with tropical storms is related to other factors, this seems overly limiting. Some additional important variables:
- Storm structure and size
- Precipitation
- Storm surge
- Landfall time, position, and intensity
- Consistency
- Uncertainty
- Information to help forecasters (e.g., steering flow)
- Other?
Verification should be tailored to help forecasters with their high-pressure job and multiple sources of guidance information. (A minimal track-error sketch follows below.)
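A minimal sketch of how a track error distribution is built: the great-circle distance between forecast and observed storm-centre positions at each lead time. The haversine formula is standard; the sample positions are illustrative assumptions, and operational practice verifies against best-track data.

```python
# Track error sketch: haversine great-circle distance between forecast
# and best-track TC centre positions, one value per lead time.
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/lon points, in kilometres."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Illustrative forecast vs. best-track positions at successive lead times
fcst_track = [(18.0, 130.0), (19.2, 128.5), (20.5, 127.0)]
best_track = [(18.1, 130.2), (19.6, 128.0), (21.3, 126.1)]
errors = [great_circle_km(*f, *o) for f, o in zip(fcst_track, best_track)]
print([round(e, 1) for e in errors])  # track errors (km) per lead time
```

Collecting these errors over many cases gives the track error distribution, which conveys far more than the mean error alone.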
Slide 13: Verification of probabilistic TC forecasts
[Figures: TIGGE ensemble intensity error before and after bias correction. Courtesy Yu Hui (STI).]
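To illustrate what the before/after comparison on this slide involves, here is a minimal mean-bias-correction sketch. This is an assumption about the general technique only; the actual correction scheme applied to the TIGGE ensemble intensities may differ.

```python
# Simple lead-time-wise mean bias correction sketch for ensemble-mean
# TC intensity forecasts (illustrative, not the STI/TIGGE scheme).
import numpy as np

def mean_bias_correct(train_fcst, train_obs, new_fcst):
    """Subtract the training-period mean error from new forecasts."""
    bias = np.mean(np.asarray(train_fcst) - np.asarray(train_obs))
    return np.asarray(new_fcst) - bias

# Illustrative example: ensemble-mean maximum wind (m/s) at one lead time
train_fcst = [38.0, 42.0, 35.0, 40.0]   # past forecasts
train_obs  = [42.0, 45.0, 39.0, 44.0]   # matching best-track intensities
print(mean_bias_correct(train_fcst, train_obs, new_fcst=[37.0, 41.0]))
```

Verifying both raw and corrected forecasts, as in the slide, separates systematic intensity bias from the remaining random error.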
Slide 14: Issues in TC verification
- Observations contain large uncertainties
- Some additional important variables: storm structure and size; rapid intensification; landfall time, position, and intensity; precipitation; storm surge; consistency; uncertainty; information to help forecasters (e.g., steering flow)
- Tailoring verification to help forecasters with their high-pressure job and multiple sources of guidance information
- False alarms (including forecast storms outliving the actual storm) and misses (unforecast storms) are currently ignored
- How best to evaluate ensemble TC predictions?
Slide 15: Promotion of best practice
Verification of forecasts from mesoscale models (early DRAFT). Contents:
- Purposes of verification
- Choices to be made: surface and/or upper-air verification? Point-wise and/or spatial verification?
Proposal for a 2nd Spatial Verification Intercomparison Project in collaboration with Short-Range NWP (SRNWP).
Slide 16: Spatial Verification Method Intercomparison Project
International comparison of many new spatial verification methods.
Phase 1 (precipitation) completed:
- Methods applied by researchers to the same datasets (precipitation; perturbed cases; idealized cases)
- Subjective forecast evaluations
- Weather and Forecasting special collection
Phase 2 in planning stage:
- Complex terrain
- MAP D-PHASE / COPS dataset
- Wind and precipitation, timing errors
(A sketch of one neighborhood method follows below.)
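As a concrete example of the neighborhood family of methods compared in the project, here is a minimal Fractions Skill Score (FSS; Roberts and Lean 2008) sketch. The field sizes, threshold, and window size are illustrative assumptions.

```python
# Fractions Skill Score sketch: compare fractional exceedance coverage
# of forecast and observed fields within a square neighborhood.
import numpy as np

def fractions(field, threshold, n):
    """Fraction of threshold exceedances in an n x n window at each point."""
    binary = (field >= threshold).astype(float)
    pad = n // 2
    padded = np.pad(binary, pad, mode="constant")
    out = np.empty_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(fcst, obs, threshold, n):
    """FSS = 1 - MSE(fractions) / worst-case (no-overlap) MSE."""
    pf = fractions(fcst, threshold, n)
    po = fractions(obs, threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
fcst = rng.gamma(2.0, 2.0, size=(50, 50))   # toy precipitation field
obs = np.roll(fcst, 3, axis=1)              # same field, displaced east
print(round(fss(fcst, obs, threshold=5.0, n=9), 3))
```

Evaluating FSS over increasing window sizes n reveals the smallest spatial scale at which a high-resolution forecast has useful skill, which is exactly the question point-wise scores cannot answer.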
Slide 17: Outreach and training
- Verification workshops and tutorials (on-site, travelling)
- EUMETCAL training modules
- Verification web page
- Sharing of tools
Slide 18: 5th International Verification Methods Workshop, Melbourne 2011
Tutorial:
- 32 students from 23 countries
- Lectures and exercises (students took the tools home)
- Group projects, presented at the workshop
Workshop:
- ~120 participants
- Topics: ensembles and probabilistic forecasts; seasonal and climate; aviation verification; user-oriented verification; diagnostic methods and tools; tropical cyclones and high-impact weather; weather warning verification; uncertainty
- Special issue of Meteorological Applications in early 2013
THANKS FOR WWRP'S SUPPORT!!
We had some trouble with participants getting their visas on time; some countries missed out (Ethiopia), and the Chinese participants arrived late. We could use advice/help from WMO on this.
Slide 19: Seamless verification
Seamless forecasts are consistent across space/time scales:
- single modelling system or blended
- likely to be probabilistic / ensemble
[Schematic: spatial scale (point, local, regional, global) versus forecast aggregation time (minutes, hours, days, weeks, months, years, decades), spanning nowcasts, very-short-range and short-range NWP, sub-seasonal, seasonal, decadal prediction, and climate change.]
Which scales / phenomena are predictable? Different user requirements apply at different scales (timing, location, …).
20"Seamless verification" – consistent across space/time scales Modelling perspective – is my model doing the right thing?Process approachesLES-style verification of NWP runs (first few hours)T-AMIP style verification of coupled / climate runs (first few days)Single column modelStatistical approachesSpatial and temporal spectraSpread-skillMarginal distributions (histograms, etc.)Seamless verificationIt was not clear to the group how to define seamless verification, and the WG had a lively discussion on this topic.One possible interpretation is consistent verification across a range of scales by for example applying the same verification scores to all forecasts being verified to allow comparison. This would entail greater time and space aggregation as longer forecast ranges are verified. Averaging could be applied to the EPS medium range and monthly time range, as these two forecast ranges have an overlapping period. Similarly the concept of seamless verification could be applied to the EPS medium range forecast and seasonal forecast. For example, verification scores could be calculated using tercile exceedance and the ERA Interim could be used as the reference system. Verification across scales could involve conversion of forecast types, for example, from precipitation amounts (weather scales) to terciles (climate scales). A probabilistic framework would likely be the best approach to connect weather and climate scales.Perkins et al., J.Clim. 2007
21"Seamless verification" – consistent across space/time scales User perspective – can I use this forecast to help me make a better decision?Neighborhood approaches - spatial and temporal scales with useful skillGeneralized discrimination score (Mason & Weigel, MWR 2009)consistent treatment of binary, multi-category, continuous, probabilistic forecastsCalibration - accounting for space-time dependence of bias and accuracy?Conditional verification based on larger scale regimeExtreme Forecast Index (EFI) approach for extremesJWGFVR activityProposal for research in verifying forecasts in weather-climate interfaceAssessment component of UK INTEGRATE projectModels may be seamless – but user needs are not!Nowcasting users can have very different needs for products than short-range forecasting users (more localized in space and time; wider range of products which are not standard in SR NWP and may be difficult to produce with an NWP model; some products routinely measured, others not; …)Temporal/spatial resolution go together. On small spatial /temporal scales modelling/verification should be inherently probabilistic. The predictability of phenomena generally decreases (greatly) from short to very short time/spatial scales. How to assess/show such limits to predictability in verification?Need to distinguish “normal” and “extreme” weather?Nowcasting more than SR forecasting is interested not just in intensities of phenomena, but also in exact timing/duration and location. Insight in errors of timing/location is needed.Different demands on observations, possibly not to be met with the same data sources?From Marion:We have two work packages kicking off this FY (i.e. now or soon). I am co-chair of the assessment group for INTEGRATE which is our 3-year programme for improving our global modelling capability. The INTEGRATE project follows on from the CAPTIVATE project. INTEGRATE project pages are hosted on the collaboration server. A password is needed (as UM partners you have access to these pages). The broad aim of INTEGRATE is to pull through model developments from components of the physical earth system (Atmosphere, Oceans, Land, Sea-Ice and Land-Ice, and Aerosols) and integrate them into a fully coupled global prediction system, for use across weather and climate timescales. The project attempts to begin the process of integrating coupled atmosphere-ocean (COA) forecast data into a conventional weather forecast verification framework, and consider the forecast skill of surface weather parameters in the existing operational seasonal COA system, GloSea4 and 5, over the first 2 weeks of the forecast. Within that I am focusing more on applying weather-type verification tools on global, longer time scales, monthly to seasonal. A part of this is a comparison of atmosphere-only (AO) and coupled ocean-atmosphere (COA) forecasts for the first 15 days (initially). Both are approaching the idea of seamless forecasting, i.e. can we used COA models to do NWP-type forecasts for the first 15 days, and seamless verification, i.e. finding some common ground in the way we can compare longer simulations and short-range NWP.
Slide 22: Final thoughts
JWGFVR would like to strengthen its relationship with the WWRP Tropical Meteorology WG:
- Typhoon Landfall FDP
- YOTC
- TIGGE
- Subseasonal to Seasonal Prediction
- CLIVAR
"Good will" participation (beyond advice) in WWRP and THORPEX projects is getting harder to provide:
- Videoconferencing
- Capacity building of "local" scientists
- Include a verification component in funded projects
Slide 24: Summary of recommendations for cloud verification
- We recommend that the purpose of a verification study be considered carefully before commencing. Depending on the purpose:
- For user-oriented verification we recommend that at least the following cloud variables be verified: total cloud cover and cloud base height (CBH). If possible, low, medium, and high cloud should also be considered. An estimate of spatial bias is highly desirable, e.g., through the use of satellite cloud masks.
- More generally, we recommend the use of remotely sensed data such as satellite imagery for cloud verification. Satellite analyses should not be used at short lead times, because of a lack of independence.
- For model-oriented verification there is a preference for comparing simulated and observed radiances, but ultimately what is used should depend on the pre-determined purpose. The range of parameters of interest is more diverse, and the purpose will dictate the parameter and choice of observations, but we strongly recommend that vertical profiles be considered in this context.
- We also recommend the use of post-processed cloud products created from satellite radiances for user- and model-oriented verification, but these should be avoided for model inter-comparisons if the derived satellite products require model input, since the model used to derive the product could be favoured.
- We recommend that verification be done against both: (a) gridded observations and vertical profiles (model-oriented verification), with model inter-comparison done on a common latitude/longitude grid that accommodates the coarsest resolution; and (b) surface station observations (user-oriented verification). The use of cloud analyses should be avoided because of model-specific "contamination" of the observation data sets.
- For synoptic surface observations we recommend that all observations be used, but if different observation types exist (e.g., automated and manual) they should not be mixed. Automated cloud base height observations should be used for low thresholds (which are typically those of interest, e.g., for aviation).
- We recognize that a combination of observations is required when assessing the impact of model physics changes. We recommend the use of cloud radar and lidar data as available, but recognize that this may not be a routine activity.
- We recommend that verification data and results be stratified by lead time, diurnal cycle, season, and geographical region.
- The recommended set of metrics is listed in Section 4. Higher priority should be given to those labeled with three stars; the optional measures are also desirable.
- We recommend that the verification of climatology forecasts be reported along with the forecast verification. The verification of persistence forecasts, and the use of model skill scores with respect to persistence, climatology, or random chance, is highly desirable.
- For model-oriented verification in particular, we recommend that all aggregate verification scores be accompanied by 95% confidence intervals; reporting the median and inter-quartile range for each score is highly desirable. (A minimal bootstrap sketch follows below.)
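A minimal sketch of one common way to attach the recommended 95% confidence intervals to an aggregate score: a percentile bootstrap over cases. The score (mean cloud-cover error in oktas) and the synthetic data are illustrative assumptions; serially correlated verification data would call for block resampling instead of the simple case resampling shown here.

```python
# Percentile-bootstrap confidence interval sketch for an aggregate
# verification score, resampling forecast/observation pairs.
import numpy as np

def bootstrap_ci(score_fn, fcst, obs, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for any paired verification score."""
    rng = np.random.default_rng(seed)
    fcst, obs = np.asarray(fcst), np.asarray(obs)
    n = len(obs)
    scores = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        scores[b] = score_fn(fcst[idx], obs[idx])
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return score_fn(fcst, obs), (lo, hi)

mean_error = lambda f, o: np.mean(f - o)   # bias in oktas

rng = np.random.default_rng(0)
obs = rng.integers(0, 9, size=200).astype(float)          # cloud cover, oktas
fcst = np.clip(obs + rng.normal(0.5, 1.5, size=200), 0, 8)
score, (lo, hi) = bootstrap_ci(mean_error, fcst, obs)
print(f"bias = {score:.2f} oktas, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The same resampling loop works unchanged for any paired score (ETS, RMSE, Brier score), and the stored bootstrap scores also yield the median and inter-quartile range recommended above.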