

1 CONSENS Priority Project Status report COSMO year 2009/2010 Involved scientists: Chiara Marsigli, Andrea Montani, Tiziana Paccagnella, Tommaso Diomede (ARPA-SIMC) Andrea Corigliano (Uni BO), Michele Salmi (Uni FE) Flora Gofa, Petroula Louka (HNMS)

2 Overview
- Task 1: Running of the COSMO-SREPS suite: suite maintenance; implementation of the back-up suite
- Task 2: Model perturbations: perturbation of physics parameters; perturbation of soil fields
- Task 3: Ensemble merging: COSMO-LEPS vs COSMO-SREPS comparison; multi-clustering
- Task 4: Calibration

3 COSMO-SREPS
- Driving global models: IFS (15 km, ECMWF), GME (30 km, DWD), UM (UKMO), GFS (50 km, NCEP)
- INT2LM (v 1.14) and COSMO (v 4.12)
- 00 UTC and 12 UTC runs, 7 km resolution, 40 levels, 48 h range
- 16 members, 16 physics perturbations

4 T1 - Running of the COSMO-SREPS suite (C. Marsigli)
- Maintenance of the COSMO-SREPS suite at ECMWF
- Implementation of the back-up suite:
  - The work also involves DWD (even if implicitly!)
  - A BC suite is being implemented by DWD at ECMWF to provide BCs to COSMO-DE-EPS
  - The BC suite will provide the 4 control members to COSMO-SREPS
  - Direct nesting on the global models
  - Domain enlargement and resolution increase (7 km)
  - 12 members are available every day (IFS, GME, GFS branches)

5 Suite set-up. Perturbed physics parameters:
- convection scheme: 0 = Tiedtke, 1 = Kain-Fritsch
- tur_len: maximal turbulent length scale
- pat_len: length scale of thermal surface patterns
- rlam_heat: scaling factor of the laminar layer depth
- rat_sea: ratio of laminar scaling factors for heat over sea
- crsmin: minimal stomata resistance
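A minimal sketch of how such physics-perturbation configurations could be assembled, using the COSMO namelist parameter names listed above; the control values and perturbed values below are illustrative placeholders, not the operational settings of the suite:

```python
# Illustrative sketch: building physics-perturbation configurations for an
# ensemble suite. Parameter names follow the COSMO namelist entries listed on
# this slide; the numerical values are placeholders, not operational settings.

CONTROL = {
    "itype_conv": 0,      # 0 = Tiedtke, 1 = Kain-Fritsch
    "tur_len":    500.0,  # maximal turbulent length scale [m]
    "pat_len":    500.0,  # length scale of thermal surface patterns [m]
    "rlam_heat":  1.0,    # scaling factor of the laminar layer depth
    "rat_sea":    20.0,   # ratio of laminar scaling factors for heat over sea
    "crsmin":     150.0,  # minimal stomata resistance [s/m]
}

# Each member overrides a small subset of the control parameters.
PERTURBATIONS = [
    {},                         # control
    {"itype_conv": 1},          # Kain-Fritsch convection
    {"tur_len": 1000.0},
    {"pat_len": 10000.0},
    {"rlam_heat": 0.1},
    {"rat_sea": 1.0},
    {"crsmin": 200.0},
]

def member_namelist(member_id: int) -> dict:
    """Return the full physics parameter set for one ensemble member."""
    cfg = dict(CONTROL)
    cfg.update(PERTURBATIONS[member_id % len(PERTURBATIONS)])
    return cfg

if __name__ == "__main__":
    for m in range(len(PERTURBATIONS)):
        print(m, member_namelist(m))
```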

6 The new COSMO-SREPS suite – first results
- Direct nesting of COSMO at 10 km (!) on IFS (15 km) and GME (30 km)
- Analysis for MAM 2010 (76 dates; the suite has been running since mid-March)
- Scores computed for: total precipitation; 2m temperature and dew-point temperature
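For reference, the deterministic scores shown on the following slides are the mean error (BIAS) and the mean absolute error (MAE) of the model value at the grid point nearest to each station. A minimal sketch of these two scores; the station and grid data below are synthetic placeholders:

```python
import numpy as np

def nearest_grid_values(grid_lat, grid_lon, grid_field, stat_lat, stat_lon):
    """Pick, for each station, the value at the nearest model grid point
    (plain Euclidean distance in lat/lon; adequate for a small domain)."""
    values = np.empty(len(stat_lat))
    for i, (la, lo) in enumerate(zip(stat_lat, stat_lon)):
        dist2 = (grid_lat - la) ** 2 + (grid_lon - lo) ** 2
        values[i] = grid_field.ravel()[np.argmin(dist2)]
    return values

def bias_and_mae(forecast, observed):
    """BIAS = mean(fcst - obs); MAE = mean(|fcst - obs|)."""
    err = np.asarray(forecast) - np.asarray(observed)
    return err.mean(), np.abs(err).mean()

if __name__ == "__main__":
    # Synthetic 2m-temperature example over a Northern-Italy-like box
    rng = np.random.default_rng(0)
    glat, glon = np.meshgrid(np.linspace(44, 47, 40),
                             np.linspace(7, 13, 80), indexing="ij")
    field = 285.0 + rng.normal(0, 1.5, glat.shape)         # model 2m T [K]
    slat, slon = rng.uniform(44, 47, 100), rng.uniform(7, 13, 100)
    obs = 284.5 + rng.normal(0, 1.5, 100)                   # station obs [K]
    fcst = nearest_grid_values(glat.ravel(), glon.ravel(), field, slat, slon)
    print("BIAS = %.2f K, MAE = %.2f K" % bias_and_mae(fcst, obs))
```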

7 2m T – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; IFS-driven vs GME-driven members. [figure]

8 2m T – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; Tiedtke vs Kain-Fritsch members. [figure]

9 2m T – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; members with perturbed pat_len (increased), tur_len (decreased and increased), rlam_heat (decreased), rat_sea (decreased), crsmin (increased). [figure]

10 2m Td – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; IFS-driven vs GME-driven members. [figure]

11 2m Td – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; Tiedtke vs Kain-Fritsch members. [figure]

12 2m Td – deterministic scores (BIAS, MAE), MAM 2010. Northern Italy data, nearest grid point; members with perturbed pat_len (increased), tur_len (decreased and increased), rlam_heat (decreased), rat_sea (decreased), crsmin (increased). [figure]

13 24h precipitation (0-24h): BSS and ROC area, MAM 2010. Northern Italy network, averages over 0.5 x 0.5 deg boxes. [figure]

14 24h precipitation (24-48h): BSS and ROC area, MAM 2010. Northern Italy network, averages over 0.5 x 0.5 deg boxes. [figure]
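The probabilistic precipitation scores on the two slides above are the Brier Skill Score (BSS) and the ROC area for exceedance of a precipitation threshold. A minimal sketch of the BSS against a sample-climatology reference; the ensemble and observations below are synthetic, so the resulting BSS is merely illustrative (with uncorrelated data it will be near or below zero):

```python
import numpy as np

def brier_skill_score(prob_fcst, obs_event):
    """BSS = 1 - BS/BS_clim, where BS is the Brier score of the probability
    forecasts and BS_clim the Brier score of the sample climatology."""
    prob_fcst = np.asarray(prob_fcst, dtype=float)
    obs_event = np.asarray(obs_event, dtype=float)   # 1 if event observed, else 0
    bs = np.mean((prob_fcst - obs_event) ** 2)
    clim = obs_event.mean()                          # sample base rate
    bs_clim = np.mean((clim - obs_event) ** 2)
    return 1.0 - bs / bs_clim

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic 16-member ensemble, 500 cases; event = 24h precipitation > 10 mm
    precip = rng.gamma(shape=0.6, scale=8.0, size=(500, 16))
    prob = (precip > 10.0).mean(axis=1)              # ensemble probability
    obs = (rng.gamma(shape=0.6, scale=8.0, size=500) > 10.0).astype(float)
    print("BSS =", round(brier_skill_score(prob, obs), 3))
```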

15 Remarks for COSMO-SREPS
- IFS- and GME-driven runs are of similar quality in terms of T and Td, but have a different BIAS (especially for Td)
- For precipitation forecasts, a "well-mixed" 4-member ensemble is as skilful as the full 8-member ensemble, even if the members are of different quality
- The runs with physics perturbations have similar scores; the main differences are in Td

16 T2.1 - Model perturbations: parameters (F. Gofa, P. Louka, C. Marsigli)
- New parameter perturbations are tested in a dedicated test suite (CSPERT), in which ICs and BCs are not perturbed (IFS operational run)
- BUs are provided by Italian Special Projects
- New runs of the CSPERT suite were performed from spring 2009 to spring 2010
- Analysis of the results for MAM and SON 2009

17 The CSPERT suite

member  conv  pat_len  rlam_heat  rat_sea  crsmin  cloud     mu_rain  gscp
1       T     500      0.1        20       200     5.00e+08  0.5      4
2       KF    500      0.1        20       200     5.00e+08  0.5      4
3       T     500      1          1        200     5.00e+08  0.5      4
4       KF    500      1          1        200     5.00e+08  0.5      4
5       T     500      1          20       150     5.00e+07  0.5      4
6       KF    500      1          20       150     5.00e+07  0.5      4
7       T     500      1          20       150     5.00e+08  0        4
8       KF    500      1          20       150     5.00e+08  0        4
9       T     500      1          20       150     5.00e+08  0.5      3 (no graupel)
10      KF    500      1          20       150     5.00e+08  0.5      3 (no graupel)
11      T     10000    1          20       150     5.00e+07  0.5      4
12      KF    10000    1          20       150     5.00e+07  0.5      4
13      T     500      1          20       150     5.00e+07  0        4
14      KF    500      1          20       150     5.00e+07  0        4
15      T     500      1          20       150     5.00e+08  0.5      4
16      KF    500      1          20       150     5.00e+08  0.5      4

Member 15: control (Tiedtke); member 16: control (Kain-Fritsch)

18 6h precipitation – Northern Italy, MAM 2009; thresholds tp > 1 mm/6h and tp > 10 mm/6h. [figure]

19 Precipitation – Greece, MAM 2009. [figure]

20 T and Td – Greece. [figure]

21 6h precipitation – Northern Italy, SON 2009; thresholds tp > 1 mm/6h and tp > 10 mm/6h. [figure]

22 Remarks from the CSPERT suite
- mu_rain = 0: less precipitation for the low threshold; improves the high thresholds, especially for the Tiedtke member
- cloud_num = 5e+07: no strong impact
- pat_len = 10000: increases the precipitation, especially for the Tiedtke member; little POD improvement, with small effect on false alarms
- The set crsmin = 200 (largest) and rat_sea = 1 (smallest) seems to "improve" the bias for T and Td (over Greece)

23 T2.2 - Model perturbations: developing perturbations for the lower boundary (F. Gofa, P. Louka)
Aim: implement a technique for perturbing the soil moisture conditions and explore its impact.
Reasoning: the lack of spread is typically worse near the surface than higher in the troposphere. Moreover, soil moisture is of primary importance in determining the partition of energy between the surface heat fluxes, thus affecting surface temperature forecasts.
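As a rough illustration of the idea only (the technique actually being developed in T2.2 is not specified here), a minimal sketch of one possible way to perturb a soil moisture initial field: smooth, spatially correlated random perturbations clipped to physical bounds. All amplitudes and smoothing choices below are assumptions:

```python
import numpy as np

def perturb_soil_moisture(w_so, rel_amplitude=0.1, smooth_iter=10, seed=None):
    """Add a smooth, spatially correlated perturbation to a 2D soil-moisture
    field and clip it to stay non-negative. rel_amplitude is the perturbation
    standard deviation relative to the local value; smooth_iter controls the
    (crude) box-averaging used to introduce spatial correlation."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, w_so.shape)
    # crude spatial smoothing by repeated neighbour averaging
    for _ in range(smooth_iter):
        noise = 0.25 * (np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
                        + np.roll(noise, 1, 1) + np.roll(noise, -1, 1))
    noise /= noise.std() + 1e-12          # re-normalise after smoothing
    perturbed = w_so * (1.0 + rel_amplitude * noise)
    return np.clip(perturbed, 0.0, None)

if __name__ == "__main__":
    w_so = np.full((50, 50), 0.25)        # uniform volumetric soil moisture
    w_pert = perturb_soil_moisture(w_so, seed=0)
    print("perturbed field: min %.3f, max %.3f" % (w_pert.min(), w_pert.max()))
```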

24 T3.1 - Ensemble merging: comparison of the methodologies (C. Marsigli)
- COSMO-LEPS (EPS downscaling + physics perturbations) and COSMO-SREPS (ICs and BCs from a multi-model + physics perturbations) are compared:
  - 12 UTC runs over SON 2009 (34 runs, 12 members each)
  - spread/error relationship
  - precipitation scores
- During the last year of the project a cleaner comparison has been scheduled: 16 runs of both systems available every day, same model version, same namelists, same perturbations of the physics parameters
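For the spread/error relationship, spread is taken here as the standard deviation of the members around the ensemble mean and the error as the RMSE of the ensemble mean against the verifying data; a minimal sketch with synthetic fields (the numbers are placeholders):

```python
import numpy as np

def spread_and_rmse_em(members, analysis):
    """members: array (n_members, n_points); analysis: array (n_points,).
    Returns (domain-mean ensemble spread, RMSE of the ensemble mean)."""
    members = np.asarray(members, dtype=float)
    ens_mean = members.mean(axis=0)
    spread = members.std(axis=0, ddof=1).mean()
    rmse_em = np.sqrt(np.mean((ens_mean - np.asarray(analysis)) ** 2))
    return spread, rmse_em

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = rng.normal(5500.0, 80.0, 1000)                  # e.g. a Z500-like field [m]
    members = truth + rng.normal(0.0, 30.0, (16, 1000))     # synthetic 16-member ensemble
    s, e = spread_and_rmse_em(members, truth)
    print(f"spread = {s:.1f} m, RMSE_EM = {e:.1f} m")
```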

25-28 COSMO-LEPS vs COSMO-SREPS comparison at +24h and +48h, med05 and max05. [figures]

29 T3.2 - Ensemble merging: development of the COSMO-LEPS clustering (A. Montani, A. Corigliano)
- Aim: perform a dynamical downscaling in which the driving members for COSMO are taken from more than one global ensemble
- ECMWF EPS and UKMO MOGREPS have been considered
- The cluster analysis is applied to different sets of members coming from the global ensembles (initial conditions from EPS and from MOGREPS)

30 Issues
- What does the spread/skill relationship of the mixed global ensembles look like?
- Where do the best (and the worst) elements of the REDU ensembles come from? How do they score depending on their "origin"?
- How do the "REDU" ensembles rank with respect to the corresponding full ensembles?
- Is weighting according to the cluster population rewarding?
- Is there added value with respect to the single-model ensembles:
  - BEFORE downscaling
  - AFTER downscaling -> not done

31 Forecast and analysis datasets
- Data from the TIGGE portal (everything in GRIB2)
- 90 days (MAM 2009) of ECMWF-EPS and UKMO-MOGREPS, runs at 00 and 12 UTC
- All fields are interpolated to 0.5° x 0.5° (about 50 km) over Central and Southern Europe (30-60N, 10W-30E)
- Z500 at fc+96h is used as the clustering variable
- Verifying analysis: "consensus analysis" (average of the UKMO and ECMWF high-resolution analyses)
- The following global ensembles are generated:
  - EPS (50+1): 51 members
  - MOGREPS (23+1): 24 members
  - MINI-MIX (EPS24 + MOGREPS24): 48 members
  - MEGA-MIX (EPS51 + MOGREPS24): 75 members
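A minimal sketch of the clustering step, assuming a complete-linkage hierarchical clustering of the Z500 forecast fields (the clustering variable named above) and selection of one representative member (RM) per cluster as the member closest, on average, to the other members of its cluster; the operational COSMO-LEPS clustering may differ in its details:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def select_representative_members(z500, n_clusters):
    """z500: array (n_members, n_points) of Z500 forecast fields at fc+96h.
    Returns (cluster label per member, indices of the representative members)."""
    dist = pdist(z500, metric="euclidean")          # pairwise member distances
    labels = fcluster(linkage(dist, method="complete"),
                      n_clusters, criterion="maxclust")
    dmat = squareform(dist)
    rms = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # RM = member with the smallest mean distance to the others in its cluster
        rms.append(int(idx[np.argmin(dmat[np.ix_(idx, idx)].mean(axis=1))]))
    return labels, rms

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    z500 = 5500.0 + rng.normal(0.0, 60.0, (75, 500))  # synthetic 75-member set (MEGAMIX-like)
    labels, rms = select_representative_members(z500, n_clusters=16)
    print("cluster sizes:", np.bincount(labels)[1:])
    print("representative members:", rms)
```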

32 The global ensembles
- Single-model ensembles: EC-EPS 51 (operational ECMWF EPS, 51 members); EC-EPS 24 (24 members of the ECMWF EPS, random choice); MOGREPS 24 (operational MOGREPS, 24 members)
- Multi-model ensembles: MINIMIX 48 = EC-EPS 24 + MOGREPS 24; MEGAMIX 75 = EC-EPS 51 + MOGREPS 24

33 Performance of models: spread-skill relation

Single-model ensembles:
- EC-EPS 24:   spread = 29.8 m, RMSE_EM = 31.6 m (RMSE_EM - spread = 1.8 m)
- MOGREPS 24:  spread = 24.8 m, RMSE_EM = 35.6 m (RMSE_EM - spread = 10.8 m)
- EC-EPS 51:   spread = 30.1 m, RMSE_EM = 31.5 m (RMSE_EM - spread = 1.4 m)

Multi-model ensembles:
- MINIMIX 48:  spread = 30.1 m, RMSE_EM = 31.2 m (RMSE_EM - spread = 1.1 m)
- MEGAMIX 75:  spread = 30.8 m, RMSE_EM = 30.7 m (RMSE_EM - spread = -0.1 m)

MINIMIX 48 performs similarly to ECMWF-EPS 51. MEGAMIX 75 has the lowest RMSE_EM, and its RMSE_EM is almost equal to the spread.

34 Performance of models: spread-skill relation; MOGREPS 24 vs MEGAMIX 75. [figure]

35 Where do the best (and the worst) elements come from? Percentage and RMSE of best and worst members and RMs, MINIMIX. [figure]

36 Where do the best (and the worst) elements come from? Percentage and RMSE of best and worst members and RMs, MEGAMIX. [figure]

37 Impact of ensemble reduction MEGAMIX 75: RMSE_EM = 30.7 m REDU-MEGAMIX: RMSE_EM = 32.4 m

38 Impact of RM weighting MEGAMIX 75: RMSE_EM = 30.7 m REDU-MEGAMIX: RMSE_EM = 32.4 m REDU-MEGAMIX weighted: RMSE_EM_W = 31.8 m
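The RM weighting tested above weights each representative member by the population of its cluster when computing the ensemble mean; a minimal sketch (fields and cluster sizes below are synthetic placeholders):

```python
import numpy as np

def weighted_ensemble_mean(rm_fields, cluster_sizes):
    """rm_fields: array (n_rm, n_points) of representative-member forecasts;
    cluster_sizes: how many global members each RM represents.
    Returns the cluster-population-weighted ensemble mean field."""
    w = np.asarray(cluster_sizes, dtype=float)
    w /= w.sum()                                   # normalise the weights
    return np.tensordot(w, np.asarray(rm_fields, dtype=float), axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    rm_fields = 5500.0 + rng.normal(0.0, 50.0, (16, 200))
    sizes = rng.integers(1, 12, size=16)           # cluster populations (illustrative)
    em = weighted_ensemble_mean(rm_fields, sizes)
    print("weighted ensemble mean, first values:", np.round(em[:3], 1))
```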

39 Conclusions
- The multi-model MINIMIX 48 performs similarly to ECMWF-EPS 51
- MEGAMIX 75 has the best spread-error relationship
- The best and worst members and RMs come from both ensembles (with comparable RMSE)
- There is no substantial loss of information going from the full to the reduced ensembles
- There is a relation between the cluster population and the score of the RM
- For the REDU ensembles, the weighted ensemble mean has a lower RMSE than the unweighted one

40 Future plans
- Continue the work outside the CONSENS project (since no further work can be scheduled within the project at this stage)
- Implement the dynamical downscaling: nest the COSMO model in the selected RMs and generate a "hybrid" COSMO-LEPS using boundaries from members of different global ensembles
- For a number of cases, compare the operational COSMO-LEPS and the "hybrid" COSMO-LEPS

41 T4 - Calibration (T. Diomede)
- Data collection:
  - Data over Switzerland, provided by MeteoSwiss (originally more than 450 stations, interpolated with the SYMAP method onto the 417 COSMO-LEPS grid points covering Switzerland)
  - Data over Germany, provided by DWD (1038 stations, interpolated with an inverse-squared-distance weighting method onto the 3566 grid points covering Germany)
- Calibration over Switzerland and Germany, also on sub-areas
- Test on the use of the specific humidity at 700 hPa for performing the analogue search
- Test on the application of calibration functions which are specific for underestimation and overestimation model conditions over Emilia-Romagna (ER)
- Comparison among the results obtained for different lengths of the reforecast dataset over Switzerland and Emilia-Romagna
- Verification of the calibration process by coupling the QPFs with a hydrological model (implemented for the Reno river basin, Emilia-Romagna)
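A minimal sketch of the inverse-squared-distance weighting used to bring the German station data onto the model grid points; coordinates are treated as plane coordinates for simplicity, and the station and grid positions below are synthetic:

```python
import numpy as np

def idw_to_grid(stat_xy, stat_val, grid_xy, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation of station values to grid points.
    stat_xy: (n_stat, 2), stat_val: (n_stat,), grid_xy: (n_grid, 2)."""
    stat_xy = np.asarray(stat_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    d2 = ((grid_xy[:, None, :] - stat_xy[None, :, :]) ** 2).sum(axis=-1)
    w = 1.0 / (d2 ** (power / 2.0) + eps)          # weight ~ 1 / distance^power
    return (w * np.asarray(stat_val, float)).sum(axis=1) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    stations = rng.uniform(0, 100, (1038, 2))       # e.g. 1038 stations
    precip = rng.gamma(0.7, 6.0, 1038)              # 24h precipitation [mm]
    grid = rng.uniform(0, 100, (3566, 2))           # e.g. 3566 grid points
    gridded = idw_to_grid(stations, precip, grid)
    print("gridded precipitation: mean %.2f mm, max %.2f mm"
          % (gridded.mean(), gridded.max()))
```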

42 Calibration – choice of the methods
- Choice of methodologies which enable a calibration of the quantitative precipitation forecasts themselves, not only of the probabilities of exceeding a threshold
- Aims: improve the COSMO-LEPS output (QPF); hydrological applications
- Methods chosen so far:
  - Cumulative Distribution Function (CDF) based
  - Linear regression
  - Analogues, based on the similarity of forecast fields
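Of the chosen methods, the CDF-based approach amounts to a quantile mapping: a forecast value is replaced by the observed value having the same cumulative probability in the training (reforecast) period. A minimal sketch under that assumption, not necessarily the exact implementation used in T4; the training data below are synthetic:

```python
import numpy as np

def cdf_calibrate(fcst_value, train_fcst, train_obs):
    """Quantile-mapping calibration of a quantitative precipitation forecast:
    map the forecast value to its quantile in the training forecast
    distribution, then read that quantile off the observed distribution."""
    train_fcst = np.sort(np.asarray(train_fcst, float))
    train_obs = np.sort(np.asarray(train_obs, float))
    # empirical non-exceedance probability of the forecast value
    p = np.searchsorted(train_fcst, fcst_value, side="right") / len(train_fcst)
    p = min(max(p, 1.0 / len(train_obs)), 1.0)
    return np.quantile(train_obs, p)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    train_fcst = rng.gamma(0.8, 5.0, 3000)     # reforecast QPF [mm/24h]
    train_obs = rng.gamma(0.6, 7.0, 3000)      # matching observed precipitation
    for raw in (1.0, 10.0, 30.0):
        cal = cdf_calibrate(raw, train_fcst, train_obs)
        print(f"raw {raw:5.1f} mm -> calibrated {cal:5.1f} mm")
```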

43 Calibration – analogues. Example of the methodology used for the analogue search in terms of geopotential at 700 hPa; domains used for the analogue search over Germany, Emilia-Romagna and Switzerland. [figure]
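A minimal sketch of the analogue search: among the reforecast dates, find those whose 700 hPa geopotential forecast over the search domain is closest (here, in RMS field difference) to the current forecast; the calibration then exploits the precipitation behaviour of those analogue dates. The number of analogues and the similarity metric below are illustrative assumptions:

```python
import numpy as np

def find_analogues(current_z700, archive_z700, n_analogues=10):
    """archive_z700: (n_dates, n_points) reforecast geopotential fields over
    the analogue-search domain; current_z700: (n_points,) current forecast.
    Returns the indices of the n_analogues most similar past forecasts."""
    diff = np.asarray(archive_z700, float) - np.asarray(current_z700, float)
    rms = np.sqrt((diff ** 2).mean(axis=1))     # RMS field difference per date
    return np.argsort(rms)[:n_analogues]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    archive = 3000.0 + rng.normal(0.0, 40.0, (900, 400))   # synthetic reforecast archive
    current = 3000.0 + rng.normal(0.0, 40.0, 400)           # synthetic current forecast
    best = find_analogues(current, archive)
    print("best analogue dates (indices):", best)
```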

44 Calibration over Germany – autumn, 80th percentile, lead times +18-42h and +66-90h. [figure]

45 Calibration over Germany – summer, 80th percentile, lead times +18-42h and +66-90h. [figure]

46 Calibration over Germany – summer and autumn, 95th percentile, lead time +66-90h. [figure]

47 Calibration over Switzerland – summer, 80th percentile, lead times +18-42h and +66-90h. [figure]

48 Calibration over Switzerland – autumn, 80th percentile, lead times +18-42h and +66-90h. [figure]

49 Differences among COSMO regions – summer and autumn, 80th and 95th percentiles. [figure]

50 Impact of using a reduced reforecast data-set – summer and autumn, lead times +20-44h and +68-92h. [figure]

51 Calibration specific for over- and under-estimation (summer, autumn, winter; 95th percentile). A predictor is used to identify whether the current forecast will fall in the underestimation or in the overestimation category: the forecast of a given field is compared against the best analogue of the same field, which identifies the category. [figure]
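A minimal sketch of that category selection: the current forecast of the predictor field is compared with the same field from its best analogue, and the sign of the difference decides whether the under- or over-estimation correction function is applied. The comparison statistic, the fields and the correction functions below are illustrative assumptions, not the fitted functions used in T4:

```python
import numpy as np

def choose_correction(current_field, best_analogue_field,
                      corr_under, corr_over, raw_qpf):
    """Apply the under- or over-estimation correction function depending on how
    the current predictor field compares with its best analogue."""
    diff = float(np.mean(current_field) - np.mean(best_analogue_field))
    category = "over" if diff > 0.0 else "under"
    corr = corr_over if category == "over" else corr_under
    return category, corr(raw_qpf)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    current = rng.normal(1.0, 0.5, 100)      # predictor field (illustrative)
    analogue = rng.normal(0.8, 0.5, 100)     # same field from the best analogue

    def corr_under(qpf):                     # correction fitted on underestimation cases (illustrative)
        return 1.3 * qpf + 0.5

    def corr_over(qpf):                      # correction fitted on overestimation cases (illustrative)
        return 0.8 * qpf

    print(choose_correction(current, analogue, corr_under, corr_over, raw_qpf=12.0))
```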

52 Upper mountainous macro-areas, overestimation and underestimation cases; wind S-SW-W [m2/s2]; linear regression; autumn. [figure]

53 Impact on hydrological predictions – autumn, 90th and 95th percentiles; missed events and false alarms. [figure]

54 Remarks
- The performance of the calibration methodologies is very much dependent on the geographic area
- The lack of improvement over Emilia-Romagna can be ascribed to the lack of a strong relationship between forecast and observed data (it is necessary to generate correction functions which are weather-type specific)
- The effect of the calibration should be evaluated from the user's point of view

55 Next developments
- Multi-variable approach based on the evaluation of upper-air fields at different pressure levels and times of the day
- Complete the calibration over all the COSMO countries included in the domain (Greece, Italy, Poland, Romania), if dense and long precipitation data series are available

56 Final Remarks

57 Next milestones
- The back-up suite has been implemented, with 12 members. During the next season it will move to 16 members, probably using only the 3 global models that are fully available (IFS, GME, GFS)
- The new microphysics perturbations will be added to the suites within autumn 2010
- Test the soil moisture perturbation technique in the COSMO-SREPS suite over a period (two seasons)

58 Next milestones
- Carry on the intercomparison between COSMO-LEPS and COSMO-SREPS for a period (from now to February 2011): 16 runs of both systems available every day; same model version; same namelists; same perturbations of the physics parameters; the EPS now having EnDA+SVs
- Decide about the implementation of the calibration of the COSMO-LEPS outputs

59 Hints for discussion
- COSMO-SREPS:
  - Problems with the UM boundary conditions
  - Use of only 3 sets of global models (but still 16 members)
  - What are the needs for BCs to run convection-permitting ensembles in the COSMO countries?
- Calibration:
  - The performance of the calibration methodology depends on the precipitation threshold and on the considered area => different calibration methods for different areas?
  - Difficulty in "catching the bias" of precipitation over Emilia-Romagna, which depends on the weather type

