U.S. EPA’s Evaluation of Long Range Transport Models
Ralph E. Morris, ENVIRON International Corporation
A&WMA CPANS Conference, University of Calgary, April 23-24, 2012

ENVIRON’s Work for the USEPA LRT Model Evaluation Study
- Mesoscale Model Interface (MMIF) tool
  – Passes WRF/MM5 meteorological model output directly to CALPUFF with minimal modification (CALMET not needed)
- Document the evaluation of CALPUFF and other LRT dispersion models using tracer field study data
  – EPA performed all of the model simulations
- Perform a Consequence Analysis comparing CALPUFF and CAMx for LRT AQ and AQRV impacts
- Evaluate CALPUFF, SCICHEM and CAMx using atmospheric plume chemistry data

MMIF Tool for CALPUFF Met Inputs
- CALPUFF needs meteorological variables defined at grid cell centers
- WRF uses an Arakawa C grid configuration
  – U and V winds are defined on cell edges
  – Other met variables are defined at cell centers
  – Winds therefore cannot be passed through to CALPUFF unchanged
- Some level of interpolation is needed; MMIF elected to interpolate winds to cell centers (see the sketch below)
[Figure: Arakawa C grid staggering]
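The point above is that WRF’s staggered U and V winds must be averaged to the mass (cell-center) points before CALPUFF can use them. The following is a minimal Python/NumPy sketch of that destaggering step; it is illustrative only and is not the actual MMIF code (the function name and toy values are assumptions).

```python
import numpy as np

def destagger_winds(u_stag, v_stag):
    """Average Arakawa C staggered winds to cell (mass-point) centers.

    u_stag : array (ny, nx+1), U on the west/east cell faces
    v_stag : array (ny+1, nx), V on the south/north cell faces
    Returns (ny, nx) arrays of U and V at cell centers.
    """
    u_ctr = 0.5 * (u_stag[:, :-1] + u_stag[:, 1:])  # mean of left/right faces
    v_ctr = 0.5 * (v_stag[:-1, :] + v_stag[1:, :])  # mean of bottom/top faces
    return u_ctr, v_ctr

# Toy 2x2-cell grid; values are made up for illustration
u = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0]])        # shape (2, 3) = (ny, nx+1)
v = np.array([[0.5, 1.0],
              [1.0, 1.5],
              [1.5, 2.0]])             # shape (3, 2) = (ny+1, nx)
u_c, v_c = destagger_winds(u, v)
print(u_c)  # [[1.5 2.5] [2.5 3.5]]
print(v_c)  # [[0.75 1.25] [1.25 1.75]]
```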

Inert Tracer Experiment Evaluation
- LRT transport and dispersion algorithms are evaluated using inert tracer field experiments
  – Release a known amount of inert tracer
  – Measure concentrations at downwind receptor sites
  – Evaluate the LRT models’ ability to predict tracer concentrations
- Examples of past tracer experiment evaluation studies
  – model study
  – 1990 Rocky Mountain Acid Deposition Model Assessment
  – 1998 EPA evaluation of CALPUFF using 2 tracer experiments
  – 1998 IWAQM Phase II recommendations
  – 1994 European Tracer Experiment (ETEX)
    • ATMES = real-time model evaluation
    • ATMES-II = historical model evaluation

USEPA LRT Tracer Test Evaluation
- EPA evaluated CALPUFF using five tracer test field study measurements:
  – 1975 Savannah River Laboratory (SRL75)
  – 1980 Great Plains (GP80)
  – 1983 Cross-Appalachian Tracer Experiment (CAPTEX)
  – 1994 European Tracer Experiment (ETEX)
- Compare CALPUFF GP80/SRL75 results with the 1998 EPA study
  – Determine how advances in CALPUFF improve performance
- For some experiments, compare multiple LRT dispersion models:
  – CALPUFF, HYSPLIT, FLEXPART, SCIPUFF, CAMx, CALGRID

GP80 and SRL75 Tracer Experiments
- GP80
  – Receptor arcs at 100 km and 600 km
  – Evaluate CALPUFF separately on each arc using different configurations
  – Compare with the 1998 study
- SRL75
  – One 100 km receptor arc
  – Evaluate CALPUFF using different configurations
  – Compare with the 1998 study

CAPTEX and ETEX
- CAPTEX (CTEX3 & CTEX5)
  – Many CALMET configurations
  – Evaluate CALMET, MM5 and CALPUFF
  – Evaluate LRT models using default configurations
- ETEX
  – Evaluate LRT models using the default, “out-of-the-box” configuration
  – Additional CAMx, HYSPLIT and CALPUFF sensitivity tests

Number of LRT Dispersion Model Runs
[Table: number of runs per tracer test (GP80 100 km, GP80 600 km, SRL75 100 km, CAPTEX CTEX3a, CTEX5a, CTEX3b, CTEX5b, ETEX, Total) by model (CALPUFF, SCIPUFF, HYSPLIT, FLEXPART, CAMx, CALGRID)]

Cross-Appalachian Tracer Experiment (CAPTEX)
- 5 tracer releases from Dayton, OH or Sudbury, Canada
  – CTEX3 – October 2, 1983
  – CTEX5 – October 25-26, 1983
- Meteorological evaluation
  – Evaluate MM5 and CALMET
  – Identify the best-performing CALMET settings
  – Used to determine USEPA’s recommended CALMET settings
- Six LRT models evaluated
  – CALPUFF & SCIPUFF
  – HYSPLIT & FLEXPART
  – CAMx & CALGRID

CTEX3: CALMET Sensitivity Tests Evaluation
- 31 CALMET sensitivity tests
  – MM5/MM4 input resolution: 80, 36 & 12 km
  – How MM5 is used within CALMET: First Guess, Step 1 Winds, or None
  – CALMET grid resolution: 18, 12 & 4 km
  – RMAX1/RMAX2 (objective analysis):
    • A = 500/1000
    • B = 100/200 (EPA recommended)
    • C = 10/100
    • D = NOOBS
- 3 MMIF sensitivity tests: 36, 12 & 4 km
- CALMETSTAT used to evaluate wind speed and direction
  – Compare against common benchmarks used to interpret MM5/WRF evaluations (see the sketch below)
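As a concrete illustration of the wind speed/direction bias and error statistics referenced above (and shown on the next slide), here is a minimal Python sketch. It is not CALMETSTAT itself; the function name and sample values are assumptions, and the benchmark figures in the comments are commonly cited rules of thumb rather than numbers from this presentation.

```python
import numpy as np

def wind_stats(ws_mod, ws_obs, wd_mod, wd_obs):
    """Paired wind speed (m/s) and direction (deg) bias and gross error.

    Direction differences are wrapped to [-180, 180] deg so that, e.g.,
    350 deg vs 10 deg counts as a 20 deg difference, not 340 deg.
    """
    ws_bias = np.mean(ws_mod - ws_obs)
    ws_error = np.mean(np.abs(ws_mod - ws_obs))
    dwd = (wd_mod - wd_obs + 180.0) % 360.0 - 180.0
    wd_bias = np.mean(dwd)
    wd_error = np.mean(np.abs(dwd))
    return ws_bias, ws_error, wd_bias, wd_error

# Hypothetical hourly model/observation pairs at one surface station
ws_m = np.array([3.2, 4.1, 5.0]);    ws_o = np.array([2.8, 4.5, 4.6])
wd_m = np.array([350.0, 10.0, 180.0]); wd_o = np.array([10.0, 355.0, 170.0])
print(wind_stats(ws_m, ws_o, wd_m, wd_o))

# Commonly cited mesoscale benchmarks (approximate, for context only):
#   |WS bias| <= 0.5 m/s, WS error <= 2 m/s, |WD bias| <= 10 deg, WD error <= 30 deg
```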

WS & WD Bias/Error: CALMET with 12 km MM5
EXP4 & EXP6 = 12 & 4 km CALMET; A, B, C, D: RMAX1/RMAX2 = 500/1000, 100/200, 10/100, 0/0
[Figure: four panels – WS bias, WD bias, WS error, WD error]

Conclusions: CTEX3 CALMET Model Performance
- CTEX3 MM5 and CALMET wind model performance:
  – The EPA-recommended settings of RMAX1/RMAX2 (100/200, “B” series) produced the best wind model performance
  – Use of 4 km grid resolution in CALMET tended to produce better wind model performance than 12 or 18 km
    • Nothing can be said about grid resolutions finer than 4 km
  – CALMET wind model performance was better using the 12 and 36 km MM5 data as input than using the 80 km MM4 data
- CTEX5 also found that RMAX1/RMAX2 = 100/200 gave the best wind performance
- Note: this is not an independent evaluation, since the WS/WD observations were also used as input to most CALMET tests

CTEX3: CALPUFF/CALMET Sensitivity Tests
- CALPUFF evaluation using the 25 CALMET sensitivity tests
  – CALPUFF runs for the CTEX3 tracer release
  – Evaluate CALPUFF using the 12 ATMES-II statistical performance metrics
  – Determine which CALMET configuration results in the best CALPUFF performance
- CALPUFF evaluation using the 3 MMIF met inputs
- Evaluate using ATMES-II statistical performance metrics:
  – Spatial, temporal & global
  – Bias and error
  – Scatter
  – Correlation
  – Cumulative distribution

Two Composite Statistics for Ranking Models
- RANK combines statistics for correlation (R²), bias (FB), spatial skill (FMS) and cumulative distribution (KS), with a perfect model scoring 4.0 (see the sketch below)
  – The RANK statistic needs to be revised; it is proposed to replace FB with NMSE
- AVERAGE averages the N models’ rankings from 1 to N across the 11 ATMES-II statistics, with the best model having the lowest score, closest to 1.0
  – 1.0 = the model is the highest-ranked model across all 11 ATMES-II statistical metrics
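The slide does not spell out the RANK formula. Assuming the Draxler-style composite commonly used in the LRT evaluation literature, the four normalized components sum so that a perfect model scores 4.0; a minimal sketch under that assumption:

```python
def rank_score(r2, fb, fms, ks):
    """Composite RANK score; a perfect model scores 4.0.

    r2  : squared correlation coefficient (0..1)
    fb  : fractional bias (-2..2)
    fms : figure of merit in space, percent (0..100)
    ks  : Kolmogorov-Smirnov parameter, percent (0..100)
    """
    return r2 + (1.0 - abs(fb) / 2.0) + fms / 100.0 + (1.0 - ks / 100.0)

print(rank_score(1.0, 0.0, 100.0, 0.0))   # perfect model -> 4.0
print(rank_score(0.4, 0.6, 45.0, 30.0))   # illustrative mediocre model -> 2.25
```

Replacing FB with a normalized NMSE term, as proposed on the slide, would simply swap out the second term of this sum.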

CALPUFF Model Performance Evaluation (MPE) using 12 km MM5
EXP4 & EXP6 = 12 & 4 km CALMET; A, B, C, D: RMAX1/RMAX2 = 500/1000, 100/200, 10/100, 0/0
[Figure: CALPUFF performance statistics by CALMET configuration]

CTEX3 CALPUFF Evaluation Conclusions
- CALPUFF/MMIF was the best-performing configuration for CTEX3 (worst for CTEX5)
- CALPUFF/CALMET with RMAX1/RMAX2 = 100/200 was the worst-performing RMAX1/RMAX2 configuration
  – In contrast to it being the best-performing CALMET configuration for WS/WD
- CALPUFF/CALMET with higher-resolution MM5 data (36 and 12 km) performs better than with the 80 km MM4 data
- Generally, CALPUFF/CALMET using 4 km CALMET resolution performs better

CTEX3 Six LRT Model Evaluation
- Use common 36 km MM5 meteorology in all models
- Run the 6 LRT models “out-of-the-box” using their default configurations
- Evaluate using the ATMES-II statistical metrics
- 6 LRT dispersion models evaluated:
  – Two Lagrangian puff models: CALPUFF (with MMIF, best performing for CTEX3) and SCIPUFF
  – Two Lagrangian particle models: FLEXPART and HYSPLIT
  – Two Eulerian grid models: CAMx and CALGRID

CTEX3 Tracer Test LRT Model Evaluation (best performing model has the lowest value)
[Figure: evaluation statistics for the six LRT models]

CTEX3 Tracer Test LRT Model Evaluation (best performing model has the highest value)
[Figure: evaluation statistics for the six LRT models]

Conclusions of EPA LRT Evaluation (1 of 3)
- GP80 tracer field experiment
  – Using different valid CALMET configurations, the maximum CALPUFF concentrations vary by a factor of 3
    • A standardized model configuration is needed for regulatory modeling
  – With fewer options, there is less variation using MMIF
  – The CALPUFF “SLUG” near-field option was needed to reproduce the “good” model performance on the 600 km arc from the 1998 EPA study
    • This raises questions about the validity of the 1998 EPA study, since SLUG is typically not used for LRT dispersion modeling
- SRL75 tracer field experiment
  – The fitted Gaussian plume evaluation approach can be flawed
    • The a priori assumption that the observed plume has a Gaussian distribution may not be valid at LRT distances

Conclusions of EPA LRT Evaluation (2 of 3)
- CAPTEX tracer field experiment
  – RMAX1/RMAX2 = 100/200 (the EPA recommendation) produces the best CALMET WS/WD performance but the worst CALPUFF tracer performance
    • EPA needs help from the modeling community to identify regulatory CALMET settings
  – CALPUFF/MMIF performs better than CALPUFF/CALMET for CTEX3 but worse for CTEX5
  – CTEX3: CAMx/SCIPUFF perform best, followed by CALPUFF/FLEXPART, with HYSPLIT/CALGRID worst
  – CTEX5: CAMx/HYSPLIT perform best, followed by SCIPUFF/FLEXPART, with CALPUFF/CALGRID worst

Conclusions of EPA LRT Evaluation (3 of 3)
- ETEX tracer field experiment
  – The objective was to evaluate the models run with their default configurations using common 36 km MM5 inputs
    • The exception was to use puff splitting every hour in CALPUFF rather than the default of once per day
  – CAMx, HYSPLIT & SCIPUFF perform best
  – FLEXPART & CALPUFF perform worst
  – Sensitivity tests for CALPUFF, CAMx and HYSPLIT
    • More aggressive puff splitting in CALPUFF did not help
    • For CAMx, the default configuration was best performing
    • For HYSPLIT, some hybrid particle/puff configurations perform better than the particle-only default

Questions?
Full draft LRT tracer report available on the EPA SCRAM website (consequence analysis & plume evaluation to come)