
1 Multi Model Ensembles
CTB Transition Project Team Report
Suranjana Saha, EMC (chair)
Huug van den Dool, CPC
Arun Kumar, CPC
February 2007

2 TWO STUDIES WERE CONDUCTED USING THE CFS AND EUROPEAN DEMETER DATA TO EVALUATE THE FOLLOWING:
1. How extensive (long) should hindcasts be?
2. Do the NCEP CFS forecasts add to the skill of the European DEMETER-3 forecasts to produce a viable International Multi Model Ensemble (IMME)?

3 How extensive (long) should hindcasts be?
Huug van den Dool, Climate Prediction Center, NCEP/NWS/NOAA
Suranjana Saha, Environmental Modeling Center, NCEP/NWS/NOAA

4 Explained Variance (%); Feb 1981-2001; lead 3 (Nov starts); monthly T2m (US, CD data)
Explained Variance = square of the Anomaly Correlation
SEC: Systematic Error Correction; EW: Equal Weights

MODEL                  CFS         EC    PLA   METF   UKM   INGV   LOD   CERF   MME8 (EW, all models)   MME3 (EW, CFS+EC+UKM)
SEC0  (no SEC)          2.1        1.2   0.0          0.4   0.2    0.0   0.2    0.9
SEC8  (last 8 years)    4.3        7.1   1.4          7.5   1.4    0.4   2.2    3.8                      8.6
SEC21 (all 21 years)   11.2        8.0   0.4          8.6   0.6    0.1   0.5    2.0                     17.0
                       (0.33 cor)

CFS = CFS, USA; EC = ECMWF; PLA = Max Planck Inst., Germany; METF = MeteoFrance, France;
UKM = UK Met Office; INGV = INGV, Italy; LOD = LODYC, France; CERF = CERFACS, France
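To make the quantities in this table concrete, here is a minimal NumPy sketch of the computation, assuming simple (nyears, npoints) arrays; the variable names, shapes and demo data are hypothetical, not the team's actual code. The systematic error correction (1st moment) removes the hindcast-mean bias estimated from the chosen training years, and explained variance is the squared anomaly correlation of the corrected forecasts.

```python
import numpy as np

def explained_variance_pct(fcst, obs, obs_clim, train_years=None):
    """Minimal sketch (assumed array layout, not the team's actual code).

    fcst, obs   : (nyears, npoints) hindcast and verifying monthly T2m
    obs_clim    : (npoints,) observed climatology defining anomalies
    train_years : year indices used to estimate the systematic error
                  (e.g. the last 8 years for SEC8, all 21 for SEC21,
                  None for no correction, as in SEC0)
    """
    obs_anom = obs - obs_clim
    fcst_anom = fcst - obs_clim
    if train_years is not None:
        # 1st-moment systematic error correction: remove the mean bias
        # estimated from the training years only.
        fcst_anom = fcst_anom - fcst_anom[train_years].mean(axis=0)

    # Anomaly correlation aggregated over all years and grid points,
    # squared and expressed in percent (explained variance).
    num = np.sum(fcst_anom * obs_anom)
    den = np.sqrt(np.sum(fcst_anom**2) * np.sum(obs_anom**2))
    return 100.0 * (num / den) ** 2

# Hypothetical demo with random data: 21 years, 100 grid points.
rng = np.random.default_rng(0)
f, o = rng.standard_normal((21, 100)), rng.standard_normal((21, 100))
print(explained_variance_pct(f, o, np.zeros(100), train_years=np.arange(13, 21)))  # SEC8-style

# An equal-weight MME is formed by averaging the corrected anomalies of the
# participating models before verifying, e.g. for MME3:
# mme3_anom = np.mean([anom_cfs, anom_ec, anom_ukm], axis=0)
```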

5 Anomaly Correlation (%); Feb 1981-2001; lead 3 (Nov starts); monthly T2m (US, CD data)
[Map panels: with SEC8, with SEC21, SEC8 minus SEC21]  (SEC: Systematic Error Correction)
Need more years to determine the SEC where/when the interannual standard deviation is large.

6 CONCLUSIONS
Without SEC (systematic error correction) there is no skill by any method (for presumably the best month: Feb).
With SEC (1st moment only), there is skill for only a few models (5 out of 8 are still useless).
The MME is not good when the quality of the models varies too much.
MME3 works well when using just the three good models.

7 CONCLUSIONS (contd)
The CFS improves the most from extensive hindcasts (21 years is noticeably better than 8) and has the most skill. The other models have far less skill, even with all years included.
Cross-validation (CV) is problematic (leave 3 years out when doing an 8-year-based SEC?).
Need more years to determine the SEC where/when the interannual standard deviation is large.
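The cross-validation concern can be made concrete with a leave-N-out loop: the systematic error for each target year is estimated from the remaining hindcast years only. The sketch below is illustrative only; the array layout and the centred 3-year withheld block are assumptions.

```python
import numpy as np

def cv_corrected_anomalies(fcst, obs_clim, n_leave_out=3):
    """Leave-n-out systematic error correction (illustrative sketch only).

    fcst     : (nyears, npoints) hindcasts
    obs_clim : (npoints,) observed climatology
    For each target year, a block of n_leave_out years centred on it is
    withheld, and the mean bias is estimated from the remaining years.
    """
    nyears = fcst.shape[0]
    out = np.empty_like(fcst, dtype=float)
    for y in range(nyears):
        start = y - n_leave_out // 2
        withheld = set(range(start, start + n_leave_out))
        keep = [k for k in range(nyears) if k not in withheld]
        bias = fcst[keep].mean(axis=0) - obs_clim
        out[y] = fcst[y] - obs_clim - bias   # cross-validated anomaly for year y
    return out
```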

8 15-member CFS reforecasts

9 Does the NCEP CFS add to the skill of the European DEMETER-3 to produce a viable International Multi Model Ensemble (IMME)?
Huug van den Dool, Climate Prediction Center, NCEP/NWS/NOAA
Suranjana Saha and Åke Johansson, Environmental Modeling Center, NCEP/NWS/NOAA

10 DATA USED
DEMETER-3 (DEM3) = ECMWF + METFR + UKMO
CFS
IMME = DEM3 + CFS
1981-2001
4 initial-condition months: Feb, May, Aug and Nov
Leads 1-5
Monthly means

11 DATA/DEFINITIONS USED (contd)
Anomaly Correlation (deterministic) and Brier Score (probabilistic)
Ensemble mean and PDF
T2m and Prate
Europe and United States
Verification data: T2m: Fan and van den Dool; Prate: CMAP
"NO consolidation, equal weights, NO cross-validation"
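"Equal weights, no consolidation" is read here as giving each model, not each member, the same weight in the combination; that reading is an assumption. A minimal sketch under that assumption, with hypothetical dictionary keys, member counts and grid size:

```python
import numpy as np

def equal_weight_mme(member_fields):
    """Equal-weight multi-model combination (illustrative sketch).

    member_fields : dict of model name -> array (nmembers, nyears, npoints)
    Each model's own ensemble mean is averaged with equal weight, so an
    ensemble with more members does not automatically dominate.
    """
    model_means = [m.mean(axis=0) for m in member_fields.values()]
    return np.mean(model_means, axis=0)

# Hypothetical usage with placeholder data (member counts, years, points assumed):
rng = np.random.default_rng(0)
dem3 = {name: rng.standard_normal((9, 21, 100)) for name in ("ECMWF", "METFR", "UKMO")}
dem3_mean = equal_weight_mme(dem3)
imme_mean = equal_weight_mme({**dem3, "CFS": rng.standard_normal((15, 21, 100))})
```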

12 BRIER SCORE FOR 3-CLASS SYSTEM
1. Calculate tercile boundaries from observations 1981-2001 (1982-2002 for longer leads) at each grid point.
2. Assign departures from the model's own climatology (based on 21 years, all members) to one of three classes: Below (B), Normal (N) and Above (A), and find the fraction of forecasts (F) among all participating ensemble members in each class, denoted FB, FN and FA respectively, such that FB + FN + FA = 1.
3. Denoting the observations by O, calculate the Brier Score (BS) as BS = {(FB-OB)^2 + (FN-ON)^2 + (FA-OA)^2}/3, aggregated over all years and all grid points. (For example, when the observation is in the B class, (OB, ON, OA) = (1, 0, 0), etc.)
4. BS for a random deterministic prediction: 0.444. BS for "always climatology" (1/3, 1/3, 1/3): 0.222.
5. RPS: the same as the Brier Score, but for the cumulative distribution (no skill = 0.148).
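Written out directly, the recipe above amounts to the following sketch. The array names and shapes are assumed, and, as a simplification of step 2, a single set of tercile boundaries (computed here from the observed anomalies) is applied to both forecasts and observations.

```python
import numpy as np

def three_class_scores(fcst_anom, obs_anom):
    """3-class Brier Score and RPS following slide 12 (illustrative sketch).

    fcst_anom : (nens, nyears, npoints) member anomalies w.r.t. the model's
                own hindcast climatology
    obs_anom  : (nyears, npoints) observed anomalies
    """
    lo = np.percentile(obs_anom, 100.0 / 3.0, axis=0)   # lower tercile boundary
    hi = np.percentile(obs_anom, 200.0 / 3.0, axis=0)   # upper tercile boundary

    def class_probs(x, member_axis=None):
        below = (x < lo)
        above = (x > hi)
        if member_axis is not None:                      # fraction of members per class
            below, above = below.mean(member_axis), above.mean(member_axis)
        else:                                            # 0/1 occurrence for observations
            below, above = below.astype(float), above.astype(float)
        return np.stack([below, 1.0 - below - above, above])   # (B, N, A)

    F = class_probs(fcst_anom, member_axis=0)            # forecast fractions FB, FN, FA
    O = class_probs(obs_anom)                            # observed classes OB, ON, OA

    bs = np.mean(np.sum((F - O) ** 2, axis=0) / 3.0)
    rps = np.mean(np.sum((np.cumsum(F, axis=0) - np.cumsum(O, axis=0)) ** 2, axis=0) / 3.0)
    return bs, rps
```

The no-skill references quoted in items 4 and 5 (0.222 and 0.222/0.148) follow from these same formulas when the forecast probabilities are fixed at (1/3, 1/3, 1/3).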

13 Number of times IMME improves upon DEM-3, out of 20 cases (4 ICs x 5 leads):

Region                 EUROPE            USA
Variable               T2m     Prate     T2m     Prate
Anomaly Correlation     9       1         4
Brier Score            16      18.5      19      20
RPS                    14      15        19.5    20

"The bottom line"

14 Frequency of being the best model in 20 cases, in terms of Anomaly Correlation of the Ensemble Mean

               CFS   ECMWF   METFR   UKMO
T2m   USA       4      5       5       6
T2m   EUROPE    3      5       6       5
Prate USA       7      3       3       6
Prate EUROPE   11      0       0       5

"Another bottom line"

15 Frequency of being the best model in 20 cases, in terms of Brier Score of the PDF

               CFS   ECMWF   METFR   UKMO
T2m   USA      11      2       1       5
T2m   EUROPE   10      3       1       3
Prate USA      17      0       0       2
Prate EUROPE   17      0       1       1

"Another bottom line"

16 Frequency of being the best model in 20 cases, in terms of Ranked Probability Score (RPS) of the PDF

               CFS   ECMWF   METFR   UKMO
T2m   USA       9      4       1       6
T2m   EUROPE    9      3       4       3
Prate USA      19      0       0       1
Prate EUROPE   18      0       0       1

"Another bottom line"

17 CONCLUSIONS
Overall, the NCEP CFS contributes to the skill of the IMME (relative to DEM3) for equal weights. This is especially so in terms of the probabilistic Brier Score, and for precipitation.
When the skill of a model is low, consolidation of forecasts (based on a priori skill estimates) will reduce the chance that this model is included in the IMME, and may thus improve on the skill of the IMME obtained with equal weighting.

18 CONCLUSIONS (contd)
In comparison to ECMWF, METFR and UKMO, the CFS as an individual model does:
well in deterministic scoring (AC) for Prate
very well in probability scoring (BS) for Prate and T2m, over both the USA and EUROPE domains.

19 CONCLUSIONS (contd)
The weakness of the CFS is in deterministic scoring (AC) for T2m (where it is near the average of the other models), over both EUROPE and the USA.
While the CFS contributes to the IMME, it is questionable whether all the other models add to the CFS.

20 CONCLUSIONS (contd)
Skill (if any) over EUROPE or the USA is very modest for any model, or any combination of models.
The AC of the ensemble mean gives a more "positive" impression of skill than the Brier Score, which rarely improved over climatological probabilities in this study.

21 EUROPEAN IMME UPDATE
RESULTS OF THIS STUDY WERE SENT TO ECMWF. THE DIRECTOR OF ECMWF SHOWED INTEREST, BUT WANTED HIS OWN SCIENTISTS TO CARRY OUT THE EVALUATION. DR. DOBLAS-REYES (ECMWF) HAS DOWNLOADED THE CFS RETROSPECTIVE DATA FROM THE CFS SERVER AND IS IN THE PROCESS OF EVALUATING THE IMME, BUT USING THEIR LATEST EUROSIP DATA (INSTEAD OF THE DEMETER DATA).
RISKS: THE EUROPEANS MAY WELL WANT TO KEEP THEIR MME EUROPEAN. THEIR OPERATIONAL SEASONAL FORECAST PRODUCTS ARE NOT RELEASED IN REAL TIME (ONLY TO MEMBER STATES). BILATERAL AGREEMENTS MAY HAVE TO BE MADE TO OBTAIN THESE IN REAL TIME FOR ANY OPERATIONAL USE IN AN IMME WITH THE CFS.

22 OTHER COUNTRIES IN IMME UPDATE
BMRC, AUSTRALIA: THE AUSTRALIANS ARE IN THE PROCESS OF COMPLETING THE RETROSPECTIVE FORECASTS WITH THEIR COUPLED MODEL. WHEN THESE FORECASTS ARE COMPLETED, A SIMILAR STUDY WILL BE CONDUCTED TO EVALUATE WHETHER THE AUSTRALIAN MODEL FORECASTS WILL BRING ADDITIONAL SKILL TO THE CFS FORECASTS.
BCC, BEIJING, CHINA: A SIMILAR SITUATION PERTAINS TO THE CHINA METEOROLOGICAL ADMINISTRATION. WHEN THEY HAVE COMPLETED THE RETROSPECTIVE FORECASTS WITH THEIR COUPLED MODEL, WE WILL EVALUATE WHETHER THE CHINESE MODEL FORECASTS WILL BRING ADDITIONAL SKILL TO THE CFS FORECASTS.

23 NATIONAL MME UPDATE
GFDL: HINDCAST DATA FOR 4 INITIAL MONTHS (APR, MAY, OCT, NOV) HAS BEEN OBTAINED FROM GFDL. THIS DATA IS BEING PROCESSED AND TRANSFERRED TO NCEP GRIDS FOR COMPARISON AND INCLUSION IN AN MME WITH THE CFS.
NASA: NOT READY TO START THEIR HINDCASTS.
NCAR: NOT READY TO START THEIR HINDCASTS. BEN KIRTMAN (COLA) HAS DONE A FEW HINDCASTS WITH THE NCAR MODEL WHICH SHOW PROMISE. A FULL HINDCAST NEEDS TO BE DONE FOR EVALUATION IN AN MME WITH THE CFS.

