
1 Model Performance Metrics, Ambient Data Sets and Evaluation Tools
Center for Environmental Research and Technology / Air Quality Modeling, University of California at Riverside
USEPA PM Model Evaluation Workshop, RTP, NC, February 9-10, 2004
Gail Tonnesen, Chao-Jung Chien, Bo Wang, Youjun Qin, Zion Wang, Tiegang Cao

2 Acknowledgments
Funding from the Western Regional Air Partnership (WRAP) Modeling Forum and VISTAS.
Assistance from EPA and others in gaining access to ambient data.
12 km plots and analysis from Jim Boylan of the State of Georgia.

3 Outline
UCR model evaluation software
–Problems we had to solve
Choice of metrics for clean conditions
Judging performance for high-resolution nested domains

4 Motivation
We needed to evaluate model performance for WRAP annual regional haze modeling:
–across a very large number of sites and days
–for several different ambient monitoring networks.
The evaluation would be repeated many times:
–many iterations of the "base case"
–several model sensitivity/diagnostic cases to evaluate.
Limited time and resources were available to complete the evaluation.

5 Solution
Develop model evaluation software to:
–compute 17 statistical metrics for model evaluation
–generate graphical plots in a variety of formats:
scatter plots (all sites for one month, all sites for a full year, one site for all days, one day for all sites)
time series for each site
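As an illustration of the plotting side, here is a minimal sketch in Python/matplotlib. The actual UCR tools are Java-based; the function and file names here are hypothetical.

```python
# Minimal sketch of the plot-generation idea (hypothetical names, not the
# actual UCR Java tool): one scatter plot of paired model/obs values.
import matplotlib.pyplot as plt

def scatter_plot(obs, mod, species, title, outfile):
    """Scatter plot of paired observed vs. predicted values with a 1:1 line."""
    fig, ax = plt.subplots()
    ax.scatter(obs, mod, s=12)
    lim = max(max(obs), max(mod)) * 1.05
    ax.plot([0, lim], [0, lim], "k--", label="1:1")   # 1:1 reference line
    ax.set(xlabel=f"Observed {species}", ylabel=f"Predicted {species}",
           title=title, xlim=(0, lim), ylim=(0, lim))
    ax.legend()
    fig.savefig(outfile, dpi=150)
    plt.close(fig)

# e.g. one plot pooling all sites for a month, or one plot per site:
# scatter_plot(obs_jan, mod_jan, "SO4 (ug/m3)", "All IMPROVE sites, Jan", "so4_jan.png")
```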

6 Ambient Monitoring Networks
IMPROVE (Interagency Monitoring of Protected Visual Environments)
CASTNET (Clean Air Status and Trends Network)
EPA's AQS (Air Quality System) database
EPA's STN (Speciation Trends Network)
NADP (National Atmospheric Deposition Program)
SEARCH daily & hourly data
PAMS (Photochemical Assessment Monitoring Stations)
PM Supersites

7 Number of Sites Evaluated by Network

8 Overlap Among Monitoring Networks
[Diagram: overlap among networks measuring PM2.5, PM10, O3, SO2, NOx, CO, Pb, VOCs, speciated PM2.5, visibility, HNO3, NO3, and SO4, across EPA PM sites and other monitoring stations from state and local agencies.]

9 Species Mapping
Specify how to compare the model with the data from each network.
A unique species mapping is needed for each air quality model.

10 Model vs. Obs. Species Mapping Table
SO4 -> IMPROVE: SO4 | SEARCH: PCM1_SO4 | STN: M_SO4 | CMAQ: ASO4J + ASO4I
NO3 -> IMPROVE: NO3 | SEARCH: PCM1_NO3 | STN: M_NO3 | CMAQ: ANO3J + ANO3I
NH4 -> IMPROVE: 0.375*SO4 + 0.29*NO3 | SEARCH: PCM1_NH4 | STN: M_NH4 | CMAQ: ANH4J + ANH4I
OC -> IMPROVE: 1.4*(OC1 + OC2 + OC3 + OC4 + OP) | SEARCH: 1.4*PCM3_OC + 1.4*SAF*BackupPCM3_OC | STN: OCM_adj | CMAQ: AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI
EC -> IMPROVE: EC1 + EC2 + EC3 - OP | SEARCH: PCM3_EC | STN: EC_NIOSH | CMAQ: AECJ + AECI
SOIL -> IMPROVE: 2.2*Al + 2.49*Si + 1.63*Ca + 2.42*Fe + 1.94*Ti | SEARCH: PM25_MajorMetalOxides | STN: Crustal | CMAQ: A25I + A25J
CM -> IMPROVE: MT - FM | CMAQ: ACORS + ASEAS + ASOIL
PM25 (a) -> IMPROVE: FM | SEARCH: TEOM_Mass | STN: pm2_5frm or pm2_5mass | CMAQ: ASO4J + ASO4I + ANO3J + ANO3I + ANH4J + ANH4I + AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI + AECJ + AECI + A25J + A25I
PM10 -> IMPROVE: MT | CMAQ: ASO4J + ASO4I + ANO3J + ANO3I + ANH4J + ANH4I + AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI + AECJ + AECI + A25J + A25I + ACORS + ASEAS + ASOIL
Bext_Recon (1/Mm) -> IMPROVE: 10 (b) + 3*f(RH) (c) * (1.375*SO4 + 1.29*NO3) + 4*OC + 10*EC + SOIL + 0.6*CM | CMAQ: 10 (b) + 3*f(RH) (c) * [1.375*(ASO4J + ASO4I) + 1.29*(ANO3J + ANO3I)] + 4*1.4*(AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI) + 10*(AECJ + AECI) + 1*(A25J + A25I) + 0.6*(ACORS + ASEAS + ASOIL)
(Footnote markers a, b, c refer to notes not reproduced in the transcript.)
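Each mapping in the table is just an arithmetic expression over species names from one network or model. A minimal sketch of how such a table could be applied follows; the dictionaries and the map_species helper are hypothetical illustrations, not the UCR tool, though the expressions themselves come from the table above.

```python
# Sketch of applying the species-mapping table (hypothetical helper, not the
# UCR tool itself): each mapping is an arithmetic expression over species names.
CMAQ_MAP = {
    "SO4": "ASO4J + ASO4I",                 # expressions from the table above
    "NO3": "ANO3J + ANO3I",
    "NH4": "ANH4J + ANH4I",
    "EC":  "AECJ + AECI",
}
IMPROVE_MAP = {
    "NH4": "0.375*SO4 + 0.29*NO3",          # NH4 estimated from SO4 and NO3
    "EC":  "EC1 + EC2 + EC3 - OP",
}

def map_species(expr: str, values: dict) -> float:
    """Evaluate a mapping expression against a dict of species concentrations."""
    # eval() with a restricted namespace keeps the table human-readable;
    # a production tool would parse the expressions instead.
    return eval(expr, {"__builtins__": {}}, values)

cmaq_cell = {"ASO4J": 2.1, "ASO4I": 0.3}            # hypothetical ug/m3 values
print(map_species(CMAQ_MAP["SO4"], cmaq_cell))      # -> 2.4 (approximately)
```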

11 Gaseous compounds, wet deposition, and others
O3 (ppmv) -> AQS: O3 | CMAQ: O3
CO (ppmv) -> AQS: CO | CMAQ: CO
NO2 (ppmv) -> AQS: NO2 | CMAQ: NO2
SO2 (ppmv) -> AQS: SO2 | CMAQ: SO2
SO2 (ug/m3) -> CASTNet: Total_SO2 | CMAQ: 2211.5*DENS*SO2
HNO3 (ug/m3) -> CASTNet: NHNO3 | CMAQ: 2176.9*DENS*HNO3
Total_NO3 (ug/m3) -> CASTNet: Total_NO3 | CMAQ: ANO3J + ANO3I + 0.9841*2211.5*DENS*HNO3
SO4_wdep (kg/ha) -> NADP: WSO4 | CMAQ: ASO4J + ASO4I (from WDEP1)
NO3_wdep (kg/ha) -> NADP: WNO3 | CMAQ: ANO3J + ANO3I (from WDEP1)
NH4_wdep (kg/ha) -> NADP: WNH4 | CMAQ: ANH4J + ANH4I (from WDEP1)
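As a consistency check on the DENS-based factors above (this derivation is an addition, not on the slide): converting a mixing ratio $x$ in ppmv to a mass concentration uses $C\,[\mu\mathrm{g/m^3}] = 10^3 \,(MW_{spec}/MW_{air})\,\rho_{air}\,x$, with $\rho_{air}$ = DENS in kg/m3, so

\[
10^3 \times \frac{MW_{\mathrm{SO_2}}}{MW_{\mathrm{air}}} = 10^3 \times \frac{64.07}{28.97} \approx 2211.6,
\qquad
10^3 \times \frac{MW_{\mathrm{HNO_3}}}{MW_{\mathrm{air}}} = 10^3 \times \frac{63.01}{28.97} \approx 2175.0,
\]

which agrees with the table's 2211.5 and 2176.9 to within the rounding of the molecular weights assumed.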

12 Recommended Performance Metrics?
No EPA guidance is available for PM.
Everyone has a personal favorite metric.
Several metrics are non-symmetric about zero, causing over-predictions to be exaggerated relative to under-predictions (see the worked example below).
Is the coefficient of determination (R2) a useful metric?
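To make the asymmetry concrete (a worked example added here, not from the slide): for a factor-of-two over-prediction, $P = 2O$, the mean normalized bias is $(P - O)/O = +100\%$, while for a factor-of-two under-prediction, $P = O/2$, it is only $-50\%$. The fractional forms defined on the next slides are symmetric about zero: $FB = 2(P - O)/(P + O)$ evaluates to $+66.7\%$ and $-66.7\%$ for the same two cases.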

13 Statistical measures used in model performance evaluation
Accuracy of unpaired peak (Au): $A_u = (P_u^{peak} - O^{peak}) / O^{peak}$, where $O^{peak}$ is the peak observation and $P_u^{peak}$ is the unpaired peak prediction within 2 grid cells of the peak observation site.
Accuracy of paired peak (Ap): $A_p = (P^{peak} - O^{peak}) / O^{peak}$, where $P^{peak}$ is the peak prediction paired in time and space.
Coefficient of determination: $R^2 = \frac{\left[\sum_{i=1}^{N}(P_i - \bar{P})(O_i - \bar{O})\right]^2}{\sum_{i=1}^{N}(P_i - \bar{P})^2 \, \sum_{i=1}^{N}(O_i - \bar{O})^2}$, where $P_i$ is the prediction at time and location $i$, $O_i$ the observation, and $\bar{P}$, $\bar{O}$ the arithmetic averages over $i = 1, 2, \ldots, N$.
Normalized Mean Error (NME): $\sum_{i=1}^{N}|P_i - O_i| \,\big/\, \sum_{i=1}^{N} O_i$, reported as %.
Root Mean Square Error (RMSE): $\sqrt{\tfrac{1}{N}\sum_{i=1}^{N}(P_i - O_i)^2}$
Mean Absolute Gross Error (MAGE): $\tfrac{1}{N}\sum_{i=1}^{N}|P_i - O_i|$

14 Statistical measures used in model performance evaluation (continued)
Fractional Gross Error (FE): $\tfrac{2}{N}\sum_{i=1}^{N}\frac{|P_i - O_i|}{P_i + O_i}$, reported as %.
Mean Normalized Gross Error (MNGE): $\tfrac{1}{N}\sum_{i=1}^{N}\frac{|P_i - O_i|}{O_i}$, reported as %.
Mean Bias (MB): $\tfrac{1}{N}\sum_{i=1}^{N}(P_i - O_i)$
Mean Normalized Bias (MNB): $\tfrac{1}{N}\sum_{i=1}^{N}\frac{P_i - O_i}{O_i}$, reported as %.
Mean Fractionalized Bias (Fractional Bias, MFB): $\tfrac{2}{N}\sum_{i=1}^{N}\frac{P_i - O_i}{P_i + O_i}$, reported as %.
Normalized Mean Bias (NMB): $\sum_{i=1}^{N}(P_i - O_i) \,\big/\, \sum_{i=1}^{N} O_i$, reported as %.
Bias Factor (BF): $BF = 1 + MNB$, reported in ratio notation (prediction : observation).
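The formulas above translate directly into code. Below is a minimal sketch in plain Python (an illustrative reimplementation; the UCR tools themselves are Java-based, and this version assumes strictly positive observations so the normalized metrics are defined):

```python
from math import sqrt

def metrics(P, O):
    """Bias/error metrics from slides 13-14 for paired predictions P and observations O."""
    N = len(P)
    d = [p - o for p, o in zip(P, O)]
    out = {
        "MB":   sum(d) / N,                                  # mean bias
        "MAGE": sum(abs(x) for x in d) / N,                  # mean absolute gross error
        "RMSE": sqrt(sum(x * x for x in d) / N),
        "NMB":  100.0 * sum(d) / sum(O),                     # %
        "NME":  100.0 * sum(abs(x) for x in d) / sum(O),     # %
        "MNB":  100.0 / N * sum((p - o) / o for p, o in zip(P, O)),
        "MNGE": 100.0 / N * sum(abs(p - o) / o for p, o in zip(P, O)),
        "MFB":  100.0 / N * sum(2 * (p - o) / (p + o) for p, o in zip(P, O)),
        "FE":   100.0 / N * sum(2 * abs(p - o) / (p + o) for p, o in zip(P, O)),
    }
    out["BF"] = 1.0 + out["MNB"] / 100.0    # bias factor = 1 + MNB (as a fraction)
    return out

# Example with mixed over/under-prediction; BF would be reported as e.g. "1.17:1".
print(metrics(P=[2.0, 1.0, 0.5], O=[1.0, 1.0, 1.0]))
```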

15 Most Used Metrics
Mean Normalized Bias (MNB): -100% to +inf.
Normalized Mean Bias (NMB): -100% to +inf.
Fractional Bias (FB): -200% to +200%
Fractional Error (FE): 0% to +200%
Bias Factor (Knipping ratio) = MNB + 1, reported as a ratio, for example:
–4:1 for over-prediction
–1:4 for under-prediction

16 UCR Java-based AQM Evaluation Tools

17 UCR Java-based AQM Evaluation Tools

18 SAPRC99 vs. CB4 cross comparisons: NO3 (IMPROVE)
Model | FE% | FB%
SAPRC99 | 108.4 | 49.1
CB4 | 107.4 | 45.6
CB4-2002 | 109.2 | 52.0

19 SAPRC99 vs. CB4 cross comparisons: SO4 (IMPROVE)
Model | FE% | FB%
SAPRC99 | 54.9 | 9.4
CB4 | 56.0 | 10.2
CB4-2002 | 56.5 | 12.4

20 Time series plot for CMAQ vs. CAMx at SEARCH site JST (Jefferson St.)

21 [Performance summary table not reproduced in the transcript.]
Footnotes: 1 With 60 ppb ambient cutoff; 2 Using 3*elemental sulfur; 3 No data available in WRAP domain; 4 Measurements available at 3 sites.

22 Viewing Spatial Patterns
Problem: model performance metrics and time-series plots do not identify cases where the model is "off by one grid cell".
Solution: process the ambient data into I/O API format so that the observations can be compared with the model in PAVE.
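Placing observations on the model grid requires a site-to-cell lookup; a minimal sketch follows (the grid parameters are illustrative, not the WRAP domain definition, and the actual tool writes I/O API files for display in PAVE):

```python
# Sketch of the site-to-grid-cell lookup needed before obs can be written on
# the model grid (hypothetical function; illustrative grid parameters only).
def cell_index(x, y, xorig, yorig, dx, dy, ncols, nrows):
    """Return (col, row) of the grid cell containing projected point (x, y)."""
    col = int((x - xorig) // dx)
    row = int((y - yorig) // dy)
    if not (0 <= col < ncols and 0 <= row < nrows):
        return None                      # site falls outside the domain
    return col, row

# 36 km domain with made-up origin/extent, coordinates in meters:
print(cell_index(x=-1500e3, y=500e3, xorig=-2736e3, yorig=-2088e3,
                 dx=36e3, dy=36e3, ncols=148, nrows=112))   # -> (34, 71)
```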

23 IMPROVE SO4, Jan 5

24 IMPROVE SO4, June 10

25 IMPROVE NO3, Jan 5

26 IMPROVE NO3, July 1

27 IMPROVE SOA, Jan 5

28 IMPROVE SOA, June 25

29 Spatially Weighted Metrics
PAVE plots qualitatively indicate error relative to spatial patterns, but do we also need to quantify this? (One possible approach is sketched below.)
–A wind error of 30 degrees can cause the model to miss a peak by one or more grid cells.
–Interpolate the model using surrounding grid cells?
–Use the average of adjacent grid cells?
–Within what distance?
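One way to quantify this, sketched here as an assumption rather than an adopted method: score each observation against the best-matching cell within a small neighborhood, so that a one-cell displacement is not penalized.

```python
# Sketch of a spatially tolerant comparison (an illustration, not the
# workshop's adopted method): smallest error over a (2r+1) x (2r+1) window.
def neighborhood_best(model, col, row, obs, radius=1):
    """Smallest |model - obs| over cells within `radius` of (col, row)."""
    nrows, ncols = len(model), len(model[0])
    best = None
    for r in range(max(0, row - radius), min(nrows, row + radius + 1)):
        for c in range(max(0, col - radius), min(ncols, col + radius + 1)):
            err = abs(model[r][c] - obs)
            if best is None or err < best:
                best = err
    return best

grid = [[1.0, 4.0, 2.0],
        [0.5, 1.5, 3.0],
        [2.0, 2.5, 1.0]]
print(neighborhood_best(grid, col=0, row=0, obs=3.2))  # ~0.8, from the 4.0 cell one column over
```

Averaging the neighborhood instead of taking the best match is the other option raised on the slide; the right choice, and the right radius, depends on how much wind-driven displacement one is willing to forgive.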

30 Judging Model Performance
Many plots and metrics, but what is the bottom line?
We need to stratify the data for model evaluation (a grouping sketch follows):
–Evaluate seasonal performance.
–Group by related types of sites.
–Judge the model for each site or for similar groups.
–How best to group or stratify sites?
We want to avoid wasting time analyzing plots and metrics that are not useful.
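A sketch of one way to stratify site-day records before computing metrics; the grouping keys here are assumptions, since choosing them well is exactly the open question above.

```python
# Sketch of stratifying site-day records before computing metrics
# (hypothetical record layout and site classes).
from collections import defaultdict

SEASON = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
          6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def stratify(records, site_class):
    """Group (site, month, obs, mod) records by (season, site class)."""
    groups = defaultdict(list)
    for site, month, obs, mod in records:
        groups[(SEASON[month], site_class.get(site, "other"))].append((obs, mod))
    return groups

recs = [("YELL", 1, 0.4, 0.6), ("YELL", 7, 1.1, 0.9), ("ATL", 7, 6.0, 4.8)]
classes = {"YELL": "remote/IMPROVE", "ATL": "urban/STN"}
for key, pairs in stratify(recs, classes).items():
    print(key, pairs)   # then compute the slide-13/14 metrics on each group
```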

31 12 km vs. 36 km, Winter SO4
FB%: 36 km = -35, 12 km = -39

32 12 km vs. 36 km, Winter NO3
FB%: 36 km = -34, 12 km = -13

33 Recommended Evaluation for Nests
Comparing performance metrics is not enough:
–Performance metrics show a mixed response.
–It is possible for the better model to have poorer metrics.
Diagnostic analysis is needed to compare the nested-grid model to the coarse-grid model.

34 Example Diagnostic Analysis
Some sites had worse metrics on the 12 km grid.
Analysis by Jim Boylan comparing differences in the 12 km and 36 km results showed major effects from:
–regional precipitation
–regional transport (wind speed and direction)
–plume definition.

35 Sulfate Change (36 km – 12 km)

36 Wet Sulfate on July 9 at 01:00 (36 km grid vs. 12 km grid)

37 Regional Transport (Wind Speed)

38 Sulfate on July 9 at 05:00 (36 km grid vs. 12 km grid)

39 Sulfate on July 9 at 06:00 (36 km grid vs. 12 km grid)

40 Sulfate on July 9 at 07:00 (36 km grid vs. 12 km grid)

41 Sulfate on July 9 at 08:00 (36 km grid vs. 12 km grid)

42 Plume Definition and Artificial Diffusion

43 Sulfate on July 10 at 00:00 (36 km grid vs. 12 km grid)

44 Sulfate on July 10 at 06:00 (36 km grid vs. 12 km grid)

45 Sulfate on July 10 at 09:00 (36 km grid vs. 12 km grid)

46 Sulfate on July 10 at 12:00 (36 km grid vs. 12 km grid)

47 Sulfate on July 10 at 16:00 (36 km grid vs. 12 km grid)

48 Sulfate on July 10 at 21:00 (36 km grid vs. 12 km grid)

49 Sulfate on July 11 at 00:00 (36 km grid vs. 12 km grid)

50 Sulfate Change (36 km – 12 km)

51 Nested Grid Recommendations
Diagnostic evaluation is needed to judge nested-grid performance.
The coarse grid might have compensating errors that produce better performance metrics.
Diagnostic evaluation is resource intensive.
Should we just assume that higher resolution implies better physics?

52 Conclusions – Key Issues
Air quality models should include a model evaluation module that produces performance plots and metrics.
We recommend the bias factor as the best metric for haze.
Much more work is needed to address error relative to spatial patterns.
If different models have similar error, use the model with the best science (even if it is more computationally expensive).

53 Additional Work on Evaluation Tools
Adapt the evaluation software for PAMS and the PM Supersites.
Develop a GUI to facilitate viewing of plots, including open-source tools for spatial animations.
Develop software to produce more useful plots, e.g., contour plots of bias and error.

