How well can we model air pollution meteorology in the Houston area?
Wayne Angevine, CIRES / NOAA ESRL
Mark Zagar, Met. Office of Slovenia
Jerome Brioude, Robert Banta, Christoph Senff, HyunCheol Kim, Daewon Byun

Orientation
• Surface sites to be used for temperature and wind comparisons
• LaPorte wind profiler in green
(Map: Galveston Bay and Gulf of Mexico; 55 km and 50 km scale markers)

Orientation
• Satellite image on 1 September, LST
• Coasts low and sandy, little elevation change or terrain

Measurements and simulations
• Texas Air Quality Study II (August–October 2006)
• Surface meteorological and pollution monitoring sites
• Mixing heights and winds from a radar wind profiler at LaPorte (on land)
• WRF simulations

How can we tell if one model run is better than another?
• Need metrics that clearly show improved performance
• Several approaches:
–Traditional bulk statistics
–Case studies
–Sea breeze and stagnation frequency
–Plume locations
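
As an illustration of the "traditional bulk statistics" approach, the sketch below (not the authors' code; variable names are hypothetical) computes bias and random error (standard deviation of the error) for wind speed and direction, with direction differences wrapped to ±180° before averaging.

```python
# Minimal sketch: bias and random error for wind speed and direction.
import numpy as np

def wind_stats(model_spd, obs_spd, model_dir, obs_dir):
    """Return bias and standard deviation of error for speed and direction."""
    spd_err = np.asarray(model_spd, dtype=float) - np.asarray(obs_spd, dtype=float)
    # Wrap direction differences so that e.g. 350 deg vs 10 deg counts as -20 deg
    dir_err = (np.asarray(model_dir, dtype=float)
               - np.asarray(obs_dir, dtype=float) + 180.0) % 360.0 - 180.0
    return {
        "speed_bias": spd_err.mean(),
        "speed_random_error": spd_err.std(ddof=1),
        "dir_bias": dir_err.mean(),
        "dir_random_error": dir_err.std(ddof=1),
    }

# Example with made-up numbers
print(wind_stats([3.2, 4.1, 5.0], [2.8, 4.5, 4.0],
                 [190, 355, 10], [170, 5, 350]))
```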

WRF simulations
• 75 days, 1 August – 14 October 2006
• 5 km inner grid spacing
• Three styles:
–FDDA of 3 wind profilers, reduced soil moisture, and hourly SST
–FDDA of 3 wind profilers and reduced soil moisture
–Reduced soil moisture only
• All with ECMWF initialization every 24 hours (at 0000 UTC)
• Retrospective runs, not forecasts

Impact of FDDA on wind profile
• Full run, all hours
• FDDA reduces random error in direction
• Note this is not an independent comparison (these data were assimilated)
• Red is FDDA run
• Blue has FDDA, 1-h SST, and reduced soil moisture
• Green has reduced soil moisture only

Impact of FDDA on surface winds
• Full run, all hours
• FDDA reduces random error in direction (more clearly seen if only daytime hours are considered)
• ECMWF has less speed bias at C35 and C45 and less random error in speed at all sites
• ECMWF has similar direction bias and random error to WRF runs over all hours, but WRF w/FDDA is better in daytime
• Red is FDDA run
• Blue has FDDA, 1-h SST, and reduced soil moisture
• Green has reduced soil moisture only
• Black is ECMWF

Impact of FDDA and soil moisture on surface winds
• Episode days (17) only
• Site C45, southeast of Houston, very near Galveston Bay
• FDDA improves random error in both speed and direction
• 1-h SST improves random error in the afternoon, but makes it worse at night
• ECMWF has different but comparable errors, but WRF w/FDDA is better at hours 18 and 21 (and worse at hour 3)
• Red is FDDA run
• Blue has FDDA, 1-h SST, and reduced soil moisture
• Green has reduced soil moisture only
• Black is ECMWF

Impact of FDDA and soil moisture on surface temperatures
• When are the errors worst?
• 10 days have at least one hour with temperature difference > 5 K at site C35 (28 hours total) in the FDDA run
• All differences > 5 K have model > measurement (model too warm)
• All 10 days have convection or a cold front in reality
• Model also has clouds and fronts, but with different amount, timing, or location
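
A minimal sketch of the kind of error tally described on this slide, assuming hourly model and observed temperature series (the data here are synthetic, not the campaign data): it counts the hours, and the days, on which the model is more than 5 K warmer than the observation.

```python
# Sketch: tally hours and days with model warmer than observation by > 5 K.
import numpy as np
import pandas as pd

# Hourly model and observed 2-m temperature at one site (toy, synthetic data)
idx = pd.date_range("2006-08-01", periods=96, freq="h")
obs = pd.Series(300.0 + 5.0 * np.sin(np.arange(96) / 24.0 * 2.0 * np.pi), index=idx)
model = obs.copy()
model.iloc[30:34] += 6.0      # pretend the model misses a cloudy or frontal period

warm_err = (model - obs) > 5.0                       # hours with model too warm by > 5 K
hours_exceeding = int(warm_err.sum())
days_exceeding = int(warm_err.groupby(warm_err.index.date).any().sum())
print(f"{hours_exceeding} hours on {days_exceeding} day(s) exceed +5 K")
```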

New metrics: Sea breeze frequency
• How often does a sea breeze occur in the simulation AND measurement?
• Definition: Northerly component >1 m/s between 0600 and 1200 UTC and southerly >1 m/s after 1200 UTC
• FDDA or FDDA+1hSST run closer to measurement at all 7 sites (at least a little)
• Results not sensitive to threshold
Red is FDDA run
Blue has FDDA, 1-h SST, and reduced soil moisture
Green has reduced soil moisture only
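
The sea-breeze criterion stated above can be expressed roughly as follows (a sketch with a hypothetical data layout, not the authors' code): northerly meridional wind stronger than 1 m/s at some hour between 0600 and 1200 UTC, followed by southerly wind stronger than 1 m/s after 1200 UTC; the metric then counts days on which both the simulation and the measurement meet the criterion.

```python
# Sketch of the sea-breeze day criterion; v is the meridional wind (m/s, + = southerly).
import numpy as np

def is_sea_breeze_day(hours_utc, v, threshold=1.0):
    hours_utc = np.asarray(hours_utc)
    v = np.asarray(v, dtype=float)
    morning = (hours_utc >= 6) & (hours_utc <= 12)
    afternoon = hours_utc > 12
    northerly_early = np.any(v[morning] < -threshold)
    southerly_late = np.any(v[afternoon] > threshold)
    return bool(northerly_early and southerly_late)

def joint_sea_breeze_frequency(days_model, days_obs):
    """days_model, days_obs: lists of (hours_utc, v) tuples, one per day."""
    both = sum(is_sea_breeze_day(*m) and is_sea_breeze_day(*o)
               for m, o in zip(days_model, days_obs))
    return both / len(days_obs)

# Toy example: northerly in the morning, southerly in the afternoon -> True
hours = np.arange(24)
v = np.where(hours < 12, -2.0, 2.0)
print(is_sea_breeze_day(hours, v))
```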

New metrics: Net trajectory distance
• Trajectories starting midway along the Ship Channel at 1400 UTC each day, extending for 10 hours at 190 m AGL
• WRF run w/FDDA
• Comparing total distance to net distance
• A rough measure of recirculation
• The lower left portion of the diagram is of most interest
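
A sketch of the net-versus-total distance calculation, assuming hourly trajectory positions in latitude and longitude (the helper names are hypothetical): the total path length sums the hop-by-hop great-circle distances, while the net distance is the single start-to-end displacement, so a small net distance with a large total distance indicates recirculation.

```python
# Sketch: total path length vs. net start-to-end displacement for one trajectory.
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def trajectory_distances(lats, lons):
    """Return (total path length, net displacement) in km."""
    lats, lons = np.asarray(lats, dtype=float), np.asarray(lons, dtype=float)
    total = haversine_km(lats[:-1], lons[:-1], lats[1:], lons[1:]).sum()
    net = haversine_km(lats[0], lons[0], lats[-1], lons[-1])
    return total, net

# Toy example: a trajectory that loops back toward its start
lats = [29.74, 29.80, 29.85, 29.82, 29.76, 29.73]
lons = [-95.10, -95.00, -95.05, -95.15, -95.18, -95.12]
total, net = trajectory_distances(lats, lons)
print(f"total = {total:.1f} km, net = {net:.1f} km")
```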

New metrics: Net trajectory distance
• Net distance was found by Banta et al. to correlate well with maximum ozone
• Also holds for trajectories from WRF simulated winds, shown here
• r = -0.85, r² = 0.72
• Run with FDDA
• Run with 1-h SST about the same
• Total distance correlation much worse (r = -0.57)
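
The kind of correlation quoted here can be reproduced in form (with made-up numbers, not the campaign data) as a simple Pearson correlation between daily net distance and daily maximum ozone; the anticorrelation (short net distance, high ozone) gives a negative r.

```python
# Sketch with synthetic values: Pearson r between net distance and max ozone.
import numpy as np

rng = np.random.default_rng(2)
net_km = rng.uniform(10.0, 150.0, 17)                      # one value per episode day
max_o3 = 140.0 - 0.6 * net_km + rng.normal(0.0, 8.0, 17)   # ppbv, synthetic

r = np.corrcoef(net_km, max_o3)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```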

New metrics: Vector average wind
• Averaging u and v vs. averaging speed
• Over 10 hours UTC
• Interesting points are those below the 1:1 line, since they have significant curvature
• Run with FDDA and 1-h SST
• Correlates well with measured wind (r > 0.9) in either run with FDDA
• Non-FDDA run not as good (r < 0.85)
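
A sketch of the distinction drawn above, with hypothetical inputs: the vector average takes the mean of u and v first and then the magnitude, so a wind that turns during the window yields a much smaller value than the scalar mean speed and falls below the 1:1 line.

```python
# Sketch: vector-average wind speed vs. scalar-average speed over a window.
import numpy as np

def vector_and_scalar_mean(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    vector_mean_speed = np.hypot(u.mean(), v.mean())   # average components, then magnitude
    scalar_mean_speed = np.hypot(u, v).mean()          # average of hourly speeds
    return vector_mean_speed, scalar_mean_speed

# Toy 10-hour window in which the wind turns from southerly to northerly
u = np.zeros(10)
v = np.linspace(4.0, -4.0, 10)
vec, sca = vector_and_scalar_mean(u, v)
print(f"vector mean = {vec:.2f} m/s, scalar mean = {sca:.2f} m/s")
```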

New metrics: Vector average wind
• Good correlation with max ozone from airborne measurements
• r = -0.91, r² = 0.83
• Run with FDDA and 1-h SST
• Runs without 1-h SST about the same
• Without FDDA, results are much worse
• Scalar speed correlation slightly worse (?) (r = -0.88), but still better than net trajectory distance

Lagrangian plume comparisons
• FLEXPART dispersion model with real emissions
• Met fields from WRF (red) and ECMWF (blue)
• SO2 measurements from NOAA aircraft (black)
• WRF result has much better resolution and plume locations, even if averaged to the same grid
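
The "averaged to the same grid" comparison might look roughly like this (synthetic values, not FLEXPART output or the aircraft data): modeled and measured SO2 samples are averaged onto a common coarse latitude/longitude grid and differenced cell by cell.

```python
# Sketch: average point values onto a common coarse grid and compare.
import numpy as np

def grid_mean(lats, lons, values, lat_edges, lon_edges):
    """Mean of `values` in each lat/lon cell (NaN where a cell is empty)."""
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    sums, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges], weights=values)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, sums / counts, np.nan)

lat_edges = np.arange(29.0, 30.51, 0.25)
lon_edges = np.arange(-95.8, -94.29, 0.25)

# Hypothetical model and aircraft samples: (lat, lon, SO2 in ppbv)
rng = np.random.default_rng(1)
n = 500
model = grid_mean(rng.uniform(29.0, 30.5, n), rng.uniform(-95.8, -94.3, n),
                  rng.lognormal(0.5, 1.0, n), lat_edges, lon_edges)
obs = grid_mean(rng.uniform(29.0, 30.5, n), rng.uniform(-95.8, -94.3, n),
                rng.lognormal(0.5, 1.0, n), lat_edges, lon_edges)
diff = model - obs   # cell-by-cell difference on the common grid
print("mean grid difference:", np.nanmean(diff))
```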

Conclusions
• ECMWF model used for initialization is already quite good, making it difficult to demonstrate improvement with high-resolution simulations
• Traditional statistics (bias and std. dev.) don't crisply display differences between runs, although they generally indicate improvement with FDDA
–Different sites show different results
• Looking at the distribution of errors is useful
–Large errors in temperature (> 5 K) occur when moist convection is present
• New metric of sea breeze correspondence shows improvement at all 7 surface sites with FDDA
• Net trajectory distance correlates better with ozone than total distance
• Vector average wind correlates still better with ozone; scalar average wind speed is almost as good
• Average wind (vector or scalar) shows clearly that FDDA makes an important improvement under high-ozone conditions
• Improvement above the surface is easy to demonstrate (e.g., by comparison with wind profiler data)
• Lagrangian plume model provides clear information about directly relevant performance of the model, but how to encapsulate it?
• Uncertainty analysis is needed
• How good is good enough?
• What if we know we have improved the model, but can't show that we have improved the results?

Thanks to:
• Bryan Lambeth, Texas Commission on Environmental Quality
• NOAA P3 scientists
• Richard Pyle and Vaisala, Inc. for funding
• and many others

New metrics: Sea breeze frequency
• How often does a sea breeze occur in the simulation or measurement?
• Definition: Northerly component >1 m/s between 0600 and 1200 UTC and southerly >1 m/s after 1200 UTC
• FDDA or FDDA+1hSST run closer to measurement at 4 of 7 sites
Red is FDDA run
Blue has FDDA, 1-h SST, and reduced soil moisture
Green has reduced soil moisture only
Black is surface site measurement

New metrics: Stagnation frequency
• How often does stagnation occur in the simulation or measurement?
• Definition: Wind speed < 1 m/s at any hour between 1500 and 2300 UTC
• FDDA or FDDA+1hSST run closer to measurement at 3 of 7 sites
Red is FDDA run
Blue has FDDA, 1-h SST, and reduced soil moisture
Green has reduced soil moisture only
Black is surface site measurement
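
The stagnation criterion stated above, as a sketch with a hypothetical data layout (not the authors' code): a day is flagged as stagnant if the wind speed drops below 1 m/s at any hour between 1500 and 2300 UTC.

```python
# Sketch of the stagnation-day criterion from hourly wind speed.
import numpy as np

def is_stagnant_day(hours_utc, speed, threshold=1.0):
    hours_utc = np.asarray(hours_utc)
    speed = np.asarray(speed, dtype=float)
    window = (hours_utc >= 15) & (hours_utc <= 23)
    return bool(np.any(speed[window] < threshold))

# Toy example: hourly speeds for one day (m/s)
hours = np.arange(24)
speed = np.abs(3.0 * np.sin(hours / 24.0 * np.pi)) + 0.2
print(is_stagnant_day(hours, speed))
```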

New metrics: Stagnation frequency
• How often does stagnation occur in the simulation AND measurement?
• Definition: Wind speed < 1 m/s at any hour between 1500 and 2300 UTC
• No clear improvement with FDDA or FDDA+1hSST
• Results not sensitive to threshold
Red is FDDA run
Blue has FDDA, 1-h SST, and reduced soil moisture
Green has reduced soil moisture only

New metrics: Sea breeze and stagnation
• Other things we can learn from these metrics:
–Sea breeze correspondence is good at C45, closest to the Bay and Gulf, with high frequency
–Even better sea breeze correspondence at C81, with the lowest frequency
–C45 has the lowest stagnation frequency
Red is FDDA run
Blue has FDDA, 1-h SST, and reduced soil moisture
Green has reduced soil moisture only