Bookmaker or Forecaster? By Philip Johnson. Jersey Meteorological Department.

Why use probabilities?
- What was wrong with the "old" methods?
- Can categorical forecasts be improved?
- Do the public understand percentages?
- How can we produce the odds?
- Do we know if we are getting it right?

The product that started us in probability forecasting.

Producing the forecast
- Various model outputs from different centres
- MOS (Model Output Statistics)
- Climatology
- Local knowledge
- Feedback of forecast assessment

UKMO forecast chart

American MRF model

American MRF model

MSL pressure from an American ensemble forecast.

Verification
- To assess the output
- To provide quality assurance
- Feedback of results to improve the forecast

Verification Techniques
- Reliability diagrams
- Brier scores
- Skill scores

Reliability Diagrams
- Number of forecasts
- Brier % score
- Under- and over-forecasting
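A reliability diagram plots the observed frequency of the event against the forecast probability, bin by bin, with the number of forecasts in each bin shown alongside. The sketch below is illustrative Python only, not the Jersey Meteorological Department's verification code; the function and variable names are made up, and it assumes probabilities in [0, 1] and 0/1 observations:

```python
import numpy as np

def reliability_table(probs, outcomes, n_bins=10):
    """Per-bin forecast counts and observed event frequencies.

    probs    : forecast probabilities in [0, 1]
    outcomes : 1 if the event occurred, else 0
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Map each forecast to a probability bin; fold 1.0 into the top bin.
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    hits = np.bincount(idx, weights=outcomes, minlength=n_bins)
    # Observed frequency per bin (NaN where a bin holds no forecasts).
    observed = np.where(counts > 0, hits / np.maximum(counts, 1), np.nan)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, observed, counts
```

Plotting observed frequency against bin centre and drawing the diagonal gives the diagram: a curve below the diagonal indicates over-forecasting, one above it under-forecasting.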

Brier and Skill Scores
The Brier score uses the same information as the reliability diagrams: it is the mean squared error of the probability forecasts, where 0 is a perfect score and 1 is the worst possible. The skill score compares two Brier scores, here the forecast against a reference such as climatology.
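Both scores can be written down in a few lines. A minimal sketch, continuing the hypothetical code above:

```python
def brier_score(probs, outcomes):
    """Mean squared error of the probability forecasts: 0 perfect, 1 worst."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

def brier_skill_score(bs_forecast, bs_reference):
    """Fractional improvement over a reference such as climatology.

    Positive means the forecast beats the reference; zero means no skill.
    """
    return 1.0 - bs_forecast / bs_reference

# Example: rain forecast at 70%, 10%, 30%; rain fell only on the first day.
bs = brier_score([0.7, 0.1, 0.3], [1, 0, 0])  # (0.09 + 0.01 + 0.09) / 3 ≈ 0.063
```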

Brier and skill score versus the sample-climate Brier score.

Hedging your bets. What if you only ever forecast the climate value? Keep every forecast around 28% and you must be right!
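The algebra behind this slide is standard (the 28% climate value is the slide's own figure). If the event occurs on a fraction $c$ of occasions and every forecast is the constant $c$, the Brier score works out to

$$\mathrm{BS}_{\text{const}} = c\,(1-c)^2 + (1-c)\,c^2 = c\,(1-c) \approx 0.28 \times 0.72 \approx 0.20,$$

which is exactly the Brier score of the climatology reference itself, so the skill score $1 - \mathrm{BS}/\mathrm{BS}_{\text{clim}}$ is zero: perfectly reliable, but with no sharpness and no skill.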

Forecasting only 25% every time: a reliable forecast, but with no sharpness.

A single-value forecast: the Brier score goes up and the skill score goes down.
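To make the point concrete with the hypothetical functions sketched earlier (the numbers are invented for illustration, not the Jersey verification data):

```python
# Assume the event's climatological frequency is about 28% (the slide's figure).
rng = np.random.default_rng(0)
outcomes = rng.random(1000) < 0.28

# Reference: forecast the sample climate every time.
bs_clim = brier_score(np.full(outcomes.size, outcomes.mean()), outcomes)
# Hedged alternative: a fixed 25% forecast every time.
bs_const = brier_score(np.full(outcomes.size, 0.25), outcomes)

print(brier_skill_score(bs_const, bs_clim))  # at best zero: hedging cannot beat climate
```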

Over the last six years we have assessed our forecasts and compared them with MOS output, using both sample and fixed climatology for evaluation. What results have we had?

Brier results. These show that the forecaster used a good range of percentage values and achieved good skill, especially up to T+78. From T+90 the forecasts showed an increase in Brier score, in line with model variability.

Skill scores. Compared with climate, the forecasts show good positive skill. Model and forecast skill both deteriorate noticeably from T+90.

Brier and skill score versus climate: Brier score for a fixed 25% forecast.

Did the forecaster make it as a successful Bookmaker? YES

What next?
- Continued feedback of results
- Improved modelling
- More use of ensemble forecasts
- Neural networks

Bookmaker or Forecaster? By Philip Johnson. Jersey Meteorological Department.