WFM 6311: Climate Change Risk Management

1 WFM 6311: Climate Change Risk Management
Lecture 6: Approaches to Select GCM Data. AKM Saiful Islam, Institute of Water and Flood Management (IWFM), Bangladesh University of Engineering and Technology (BUET). 29 June 2016

2 Approaches for selecting a Global Climate Model for an Impact Study

3 The IPCC has a guidance document of interest…
IPCC-TGICA, 2007: "General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment", Version 2, June 2007. Prepared by T.R. Carter with contributions from other authors, for the Task Group on Data and Scenario Support for Impact and Climate Assessment (TGICA) of the IPCC. This PDF is provided on the CCCSN Training DVD.

4 From the Range of Projections…
The IPCC recommends* using more than just ONE model or scenario projection (one should use an 'ensemble' approach) – we saw why earlier. Using a limited number of models or scenarios provides no information on the uncertainty involved in climate modelling. An alternative to an 'ensemble approach' is to select model/scenario combinations which 'bound' the max/min of reasonable model projections, as in the sketch below (this approach was used in the IJC Lake Ontario–St. Lawrence Regulatory Study). *(IPCC-TGICA, 2007)
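A minimal Python sketch (not from the lecture) of that 'bounding' alternative: given projected changes for several model/scenario combinations, keep the two at the extremes of the range. The model names and values below are placeholders, not real projections.

```python
# Placeholder projected annual temperature changes (deg C) for several
# hypothetical model/scenario combinations.
projections = {
    "ModelA.A1B": 1.8,
    "ModelB.A2":  3.3,
    "ModelC.B1":  2.6,
}

# The bounding pair: the combinations with the smallest and largest change.
low = min(projections, key=projections.get)
high = max(projections, key=projections.get)
print(f"Bounding pair: {low} ({projections[low]} C) to {high} ({projections[high]} C)")
```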

5 Two Tests for the selection of a Model:
TEST 1: How well does a model reproduce the historical climate? (commonly called 'model validation'). TEST 2: How does the model compare with all other models for future projections?

6 First test: Baseline (historical) climate
We can test how well a model reproduces the historical baseline climate (model VALIDATION). A model should reproduce the past (baseline) climate accurately as a criterion for further consideration. This requires reliable, long-term observed climate data from the location of interest, OR we could use GRIDDED global datasets at the same scale as the models. IMPORTANT: remember we are comparing a site-specific record to a grid-cell average, so an exact match is not to be expected.

7 A model should not be an outlier in the community of model results
Second test: Future Projections. We can check how a model performs in comparison with many others for a future projection. Five criteria are outlined by the IPCC:
1. Consistency with other model projections
2. Physical plausibility (is it realistic?)
3. Applicability for use (correct variables? timescale?)
4. Representativeness
5. Accessibility of data
A model should not be an outlier in the community of model results.

8 Check Maps – CGCM3 – Temperature
[Maps of mean ANNUAL TEMPERATURE: OBS stations, NCEP gridded data, CGCM3T47.]
Reasonable pattern, with the model slightly cold.

9 Example: CGCM3 – Time Series in the Historical Period (Temperature)
[Figure: modelled vs. observed annual temperature over the historical period.]
The model is too cold, but the TREND is good.

10 Check Maps – CGCM3 – Precipitation
[Maps of mean ANNUAL PRECIPITATION: OBS stations, NCEP gridded data, CGCM3T47; units here are mm/day.]
Pattern not quite right.

11 Example: CGCM3 – Time Series in the Historical Period (Precipitation)
[Figure: modelled vs. observed annual precipitation over the historical period.]
The model is too wet; the TREND is reasonable.

12 Test 1: Baseline Methodology:
Comparison of annual, seasonal, and monthly means over the same historical period. Use the variables of interest – most commonly precipitation and temperature from the archive. Keep in mind that we are comparing a single site location (meteorological station) against a gridded value; an improved method would also include other nearby stations with long records in the analysis. We then obtain the model baseline values for the same location from CCCSN using the SCATTERPLOT.

13 Test 1 (continued): Compare the annual values and the distribution of temperature over the year. Models which best match the annual mean and the monthly distribution pattern can be identified (a minimal sketch of this comparison follows below). NOTE: it does not matter which emission scenario we select, since for the historical period the models all use the same baseline.
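As a hypothetical illustration of the Test 1 comparison, the sketch below computes each model's annual-mean bias and monthly-climatology RMSE against a station record. The file names and column layout are assumptions for illustration, not the CCCSN download format.

```python
import pandas as pd

# Hypothetical inputs: monthly climatologies over the same baseline period.
# station_monthly.csv -> index 'month' (1-12), column 'tmean' (deg C)
# model_baseline.csv  -> index 'month' (1-12), one column per model
obs = pd.read_csv("station_monthly.csv", index_col="month")
models = pd.read_csv("model_baseline.csv", index_col="month")

# Annual-mean bias of each model against the station record (remember:
# site vs. grid cell, so an exact match is not expected).
annual_bias = models.mean() - obs["tmean"].mean()

# Monthly-pattern agreement: RMSE of each model's monthly climatology.
monthly_rmse = ((models.sub(obs["tmean"], axis=0)) ** 2).mean() ** 0.5

print(annual_bias.sort_values())   # closest to zero = best annual match
print(monthly_rmse.sort_values())  # smallest = best monthly distribution
```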

14 Test 1: Baseline Methodology…
[Scatterplot: each model's annual temperature vs. annual precipitation plotted against the observed means; quadrants mark too warm/too cold and too wet/too dry.]

15 Test 1: Baseline Methodology… Looking at Temp and Precip together
Again, use the SCATTERPLOT on CCCSN – simply select BOTH variables at the same time and all models, or combine the two initial results in a single spreadsheet. [Scatterplot annotations: the 'perfect' model sits at the observed means.] Almost all models are too wet; most models are too cold. Outliers can be identified.

16 Lowest Score Model is Closest to Baseline
Test 1: Baseline Methodology… Rank the models for the baseline period – ANNUAL (a sketch of this scoring follows below):

Model   Temperature rank   Precipitation rank   Total score
A       rank of A          rank of A            sum of A's ranks
B       rank of B          rank of B            sum of B's ranks
C       rank of C          rank of C            sum of C's ranks
D       rank of D          rank of D            sum of D's ranks
E       rank of E          rank of E            sum of E's ranks
F       rank of F          rank of F            sum of F's ranks

Lowest score model is closest to baseline.
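A minimal sketch of this rank-sum scoring, assuming each model's absolute baseline errors are already known (the numbers below are placeholders, not real results). The same scoring applies in Test 2, with distance to the all-model median in place of baseline error.

```python
import pandas as pd

# Placeholder absolute errors against the baseline:
# temperature in deg C, precipitation in mm.
errors = pd.DataFrame(
    {"temp_err": [0.4, 1.2, 0.9, 2.1, 0.6, 1.5],
     "precip_err": [60, 15, 90, 30, 120, 45]},
    index=list("ABCDEF"),
)

# Rank each variable separately (1 = smallest error), then sum the ranks.
scores = errors.rank().sum(axis=1)
print(scores.sort_values())  # lowest total score = closest to baseline
```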

17 This method is best used to reject models (models with largest scores)
Test 1: Baseline Methodology. The same analysis can be done on a monthly and seasonal basis – this can be very important. This method is best used to reject models (those with the largest scores): we effectively remove from consideration the models with the lowest agreement. The moderating effect of lakes, local elevation effects, and lake-induced precipitation are all complicating factors.

18 Test 2: Future Projections
Here there are no complications from observed data! We look at the range of model projections for the same location and see how they vary. Models with outlier projections (excessive anomalies, too large or too small) are best rejected; a hedged sketch of this check follows below. Finding the anomalies is a simple process using the SCATTERPLOT on CCCSN.
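A hedged sketch of that outlier check: flag models whose projected change sits far from the all-model median. The one-standard-deviation cutoff mirrors the scatterplot ellipse shown later, but both the cutoff and the values below are our assumptions.

```python
import numpy as np

# Placeholder projected annual temperature changes (deg C) for six models.
changes = {"A": 1.8, "B": 2.4, "C": 2.6, "D": 2.7, "E": 3.1, "F": 5.0}

vals = np.array(list(changes.values()))
median, sd = np.median(vals), vals.std()

for name, dt in changes.items():
    if abs(dt - median) > sd:  # assumed rejection threshold
        print(f"{name}: {dt:+.1f} C is an outlier candidate")
```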

19 Test 2: Future Projections
Which historical period should be used as the baseline? Which projection period are we interested in? (The 2050s is a common period for planning purposes.) Is an annual, seasonal, or monthly projection needed? That depends on the study.

20 Annual Temperature/Precipitation Change Scatterplot for Toronto Grid Cell: 2050s (SRES only)
[Scatterplot showing the median T and P for all models/scenarios and a 1 standard deviation ellipse.]

21 What do all the models and emission scenarios tell us for this gridcell?
Toronto Pearson A:
Median annual temperature change in the 2050s (observed normal 7.2 °C):
  LOWER +1.8 °C | MEDIAN +2.6 °C | UPPER +3.3 °C
Median annual precipitation change in the 2050s (observed normal 780.8 mm):
  LOWER +0.4% | MEDIAN +5.0% | UPPER +9.7%
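As an illustration, a LOWER/MEDIAN/UPPER summary like the one above can be derived from the full set of model/scenario projections roughly as below. The values are placeholders, and the 10th/90th-percentile choice for the bounds is our assumption, not necessarily what CCCSN reports.

```python
import numpy as np

# Placeholder 2050s annual temperature changes (deg C) across an ensemble.
dT = np.array([1.8, 2.1, 2.4, 2.6, 2.8, 3.0, 3.3])

# Lower bound, median, and upper bound across all models/scenarios.
lower, median, upper = np.percentile(dT, [10, 50, 90])
print(f"Temperature change: +{lower:.1f} / +{median:.1f} / +{upper:.1f} C")
```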

22 Lowest Score Model is Closest to ALL MODEL MEDIAN
TEST 2: Which models are closest to the median projection? Rank the models for the 2050s projections – ANNUAL, exactly as in the Test 1 table above: each model's temperature rank plus its precipitation rank gives its total score (Models A–F). Lowest score model is closest to the ALL-MODEL MEDIAN.

23 TEST 1 (baseline) vs. TEST 2 (projections)
Is there a 'best' model for both tests?

Models resulting from TEST 1 (baseline):
NCARCCSM3, HADCM3, INMCM3.0, GISSAOM, CGCM3T47-Mean, CGCM3T63, GISSE-R, CNRMCM3, HadGEM1

Model/scenario combinations resulting from TEST 2 (projections):
FGOALS-g1.0.SR-A1B, CSIROMk3.0.SR-A2, MRI-CGCM2.3.2a.SR-A1B, GISSAOM.SR-A1B, CGCM3T63.SR-B1, GFDLCM2.0.SR-B1, GFDLCM2.1.SR-A2, HADCM3.SR-A2, BCM2.0.SR-A1B, BCM2.0.SR-B1, MRI-CGCM2.3.2a.SR-A2

Best models from both TESTS: HADCM3, GISSAOM, CGCM3T63

24 ‘Extreme variables’ have greater uncertainty than normals
The caveats:
We have only considered ANNUAL values, not SEASONAL or MONTHLY, for either the baseline (TEST 1) or the projections (TEST 2); the seasonal and monthly options are available in the SCATTERPLOT selector.
'Extreme variables' have greater uncertainty than normals.
Models can show good ANNUAL agreement with the baseline and with the all-model projections, yet still have incorrect seasonal or monthly distributions.

25 Will Regional Climate Models (RCMs) help?
They offer higher spatial resolution (~50 × 50 km) versus GCM grid cells of several hundred km. The models are driven by an overlying model or gridded data source, so biases in those gridded datasets will also be carried into the RCM. The time requirements and the processing power available mean that fewer emission scenarios are available, i.e. fewer future pathways for consideration. Some investigations will always require further statistical downscaling.

26 Will RCMs Help in TEST 1?
[Annual temperature and annual precipitation scatterplots, as in Test 1.]
Temperature: CRCM3.7.1: … °C; CRCM4.1.1: 4.9 °C; CRCM4.2.2: 6.1 °C – all cold
Precipitation: CRCM3.7.1: … mm too dry; CRCM4.1.1: … mm too dry; CRCM4.2.2: … mm too wet

27 Will RCMs Help in TEST 2?
[Scatterplot as in Test 2: median T and P for all models/scenarios, a 1 standard deviation ellipse, with crcm4.2.0 marked.]

28 Running Scatterplots for all parameters

29 On the CCCSN.CA website, select Scenarios – Visualization

30 Select Scatterplots

31 Get Data
Input lat/long
Select AR4
Select variable: Tmean
Select model(s) validated for Tmean
Click Get Data

32 Website Output: the chart, plus an output table under the chart

33 Get data for all variables including climate extremes
You can select an ensemble of models by using Ctrl-Enter

34 Ensemble of CCCSN.CA Results for Ptotal at Windsor

35 Climate Extremes available for some models

36 Future Consecutive Dry Days at Windsor, Using Output from 3 GCMs
All model results can be averaged to form an ensemble (a hypothetical sketch follows below).
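A hypothetical sketch of that ensemble average, assuming the three GCM series have been exported to a single CSV (the file and column layout are ours, not CCCSN output):

```python
import pandas as pd

# cdd_windsor.csv -> index 'year', one consecutive-dry-days column per GCM.
runs = pd.read_csv("cdd_windsor.csv", index_col="year")

ensemble_mean = runs.mean(axis=1)  # average the three models, year by year
print(ensemble_mean.head())
```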

