Validation of MTSAT-1R SST for the TWP+ Experiment
Leon Majewski 1, Jon Mittaz 2, George Kruger 1,3, Helen Beggs 3, Andy Harris 2, Sandra Castro 4
1 Observations & Engineering Branch, Bureau of Meteorology, Australia
2 Earth System Science Interdisciplinary Center, University of Maryland, USA
3 CAWCR, Bureau of Meteorology, Australia
4 University of Colorado, USA

MTSAT-1R
− LRIT: June 2005 – June 2006
− Cross-talk correction: April 2006
− HRIT: June 2006 – June 2010
Cloud Clearing Approach
GBCS library (U. Edinburgh), modified by U. MD & NOAA; uses the CRTM library
NWP: GASP to late 2009; ACCESS late 2009 – present
Sea Surface Temperature Approach
Regression against drifting buoys, with the option of a physical retrieval
[Processing-chain diagram labels: Sea Surface Temperature from MTSAT, NOAA, GBCS, CRTM, McIDAS, GHRSST SSES netCDF L2P, HRIT, ADDE, MODEL, ICE, GAMSSA, MDB, AREA, L3U, Subset: TWP+, Blacklist]

Sea Surface Temperature from MTSAT
Matchup database
SST based on regression requires a matchup database. MDB rules are important: observations must be
− within 1 hour of each other
− co-located: within 4–10 km of each other
− over 10 km from cloud
− not blacklisted (Météo-France)
Error statistics (SSES)
Bias and standard deviation for each quality level / proximity confidence, from in situ observations within a 30-day window prior to the satellite observation
Mapping
mapx; nearest neighbour to preserve the link to observations
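As a sketch, the MDB acceptance rules above can be expressed as a simple filter. The function and field names below are illustrative, not the Bureau's actual code; the co-location radius is left as a parameter because the slide quotes a 4–10 km range.

```python
# Hypothetical sketch of the matchup-database (MDB) acceptance rules:
# buoy and satellite observations must be within 1 hour of each other,
# co-located within a chosen radius (4-10 km), over 10 km from detected
# cloud, and the buoy must not be blacklisted. All names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

@dataclass
class Obs:
    time: datetime
    lat: float
    lon: float
    buoy_id: str = ""
    cloud_dist_km: float = 1e9  # distance of the satellite pixel from nearest cloud

def accept_matchup(sat: Obs, buoy: Obs, blacklist: set,
                   coloc_km: float = 10.0) -> bool:
    """Apply the four MDB rules; coloc_km may be tightened toward 4 km."""
    if buoy.buoy_id in blacklist:
        return False
    if abs((sat.time - buoy.time).total_seconds()) > 3600:
        return False
    if haversine_km(sat.lat, sat.lon, buoy.lat, buoy.lon) > coloc_km:
        return False
    if sat.cloud_dist_km <= 10.0:
        return False
    return True
```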

Sea Surface Temperature from MTSAT
Algorithm development using drifting buoys
Period: 15 July 2006 – 01 June 2008
Location: 60S – 60N, 100E – 160W
Regression coefficients a0 – a6, fitted separately for day and night
[Table of day and night coefficient values not recovered]
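The coefficient tables did not survive extraction, but regression against drifting buoys is commonly of a nonlinear SST (NLSST) form. The sketch below shows that generic form with placeholder coefficients; the actual MTSAT-1R equation, its extra day/night terms, and the fitted values may differ.

```python
# Illustrative NLSST-style regression of the kind fitted to drifting-buoy
# matchups. The coefficients here are placeholders, NOT the MTSAT-1R values.
import math

def nlsst(t11, t12, sst_guess, sat_zenith_deg, coeffs):
    """Generic NLSST form:
    a0 + a1*T11 + a2*(T11-T12)*SSTguess + a3*(T11-T12)*(sec(theta)-1)
    where T11/T12 are brightness temperatures and theta is satellite zenith."""
    a0, a1, a2, a3 = coeffs
    sec_term = 1.0 / math.cos(math.radians(sat_zenith_deg)) - 1.0
    return (a0 + a1 * t11
            + a2 * (t11 - t12) * sst_guess
            + a3 * (t11 - t12) * sec_term)
```

Separate day and night coefficient sets (as on the slide) would simply be two tuples passed as `coeffs`, selected by solar zenith angle.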

Validation of MTSAT SST
Product validation using drifting buoys
Period: 15 July 2006 – June 2010
Region: 60S – 60N, 100E – 160W
Night: Bias: , St. Dev: (N = 96572)
Day: Bias: , St. Dev: (N = 56981)
Three-way comparison (AVHRR, buoy, MTSAT-1R)
Simple statistics can hide some complex issues; performance is not uniform spatially or temporally
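The three-way comparison rests on a standard result: with three independent estimates A, B, C of the same SST, the individual error variances follow from the variances of the pairwise differences, e.g. var(eA) = (V_AB + V_AC − V_BC) / 2. A minimal sketch of that calculation (with synthetic data, not the slide's matchups):

```python
# Three-way (triple-collocation style) error analysis: recover each
# sensor's error standard deviation from pairwise difference variances,
# assuming the three error sources are independent and unbiased.
from statistics import pvariance

def three_way_error_std(a, b, c):
    """Return (std_a, std_b, std_c) from collocated sample lists a, b, c."""
    v_ab = pvariance([x - y for x, y in zip(a, b)])
    v_ac = pvariance([x - y for x, y in zip(a, c)])
    v_bc = pvariance([x - y for x, y in zip(b, c)])
    # Clamp at zero in case sampling noise makes a variance slightly negative.
    sa = max(0.0, (v_ab + v_ac - v_bc) / 2) ** 0.5
    sb = max(0.0, (v_ab + v_bc - v_ac) / 2) ** 0.5
    sc = max(0.0, (v_ac + v_bc - v_ab) / 2) ** 0.5
    return sa, sb, sc
```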

[Map] Morning (local time): west of 100E ~ -1.5 K

[Map] Afternoon (local time): west of 100E ~ -1.5 K

[Map] Night (local time): west of 100E ~ -0.5 K

[Map] Night (local time): west of 100E ~ -0.2 K

Validation of MTSAT SST
Performance in the TWP+ domain
Night: Bias: , St. Dev: 0.468, Robust St. Dev:
Local time 19 – 06; sun zenith > 100; sensor zenith < ; points
Day: Bias: , St. Dev: 0.749, Robust St. Dev:
Local time 08 – 17; sun zenith < 80; sensor zenith < ; points
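A "robust" standard deviation of this kind is typically a scaled median absolute deviation (MAD), which resists outliers from residual cloud contamination; the 1.4826 factor makes it consistent with the ordinary standard deviation for Gaussian errors. This is a sketch of that convention, not necessarily the exact statistic used here.

```python
# Robust standard deviation via scaled MAD: outliers (e.g. cloud-contaminated
# matchups) barely move the result, unlike the ordinary standard deviation.
from statistics import median

def robust_std(residuals):
    """Scaled MAD: 1.4826 * median(|x - median(x)|)."""
    m = median(residuals)
    return 1.4826 * median(abs(x - m) for x in residuals)
```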

Validation of MTSAT SST
Performance in the TWP+ domain
Average difference from analysis over a 4-month period
East–West bias: K/deg. longitude; about 0.1 K over the TWP domain
Smaller than expected diurnal variability signals
What is causing this? Not GAMSSA; probably calibration
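An East–West gradient quoted in K per degree of longitude can be estimated by a least-squares fit of the satellite-minus-analysis difference against longitude; the slope is the gradient. A minimal sketch with synthetic data (the function name and inputs are illustrative):

```python
# Quantify an East-West bias gradient: ordinary least-squares slope of
# (satellite - analysis) SST differences as a function of longitude.
def fit_slope(lons, diffs):
    """OLS slope of diffs vs lons, in K per degree of longitude."""
    n = len(lons)
    mx = sum(lons) / n
    my = sum(diffs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(lons, diffs))
    sxx = sum((x - mx) ** 2 for x in lons)
    return sxy / sxx
```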

MTSAT-1R Calibration
Calibration issues
Difference from CRTM, , 30S – 30N

How fit for purpose is MTSAT?
Tropical Warm Pool Experiment
− Assessment of satellite sea surface temperature products
− Characterisation of observed diurnal warm-layer events
− Assessment of diurnal warming models
[Maps: MTSAT-1R, RAMSSA, and (MTSAT-1R − RAMSSA) difference]

How fit for purpose is MTSAT?
[Map: (MTSAT-1R − RAMSSA) difference]

Observations
Acceptable standard deviation ( K), small cold bias ( K)
Variability is greater than the uncertainty
− Diurnal variability, < 0.5 K, may be hidden by noise/calibration

Observations
Acceptable standard deviation ( K), small cold bias ( K)
Variability is greater than the uncertainty
− Diurnal variability, < 0.5 K, may be hidden by noise/calibration
− Diurnal warming, > 1 K, can be observed
Area of interest is usually small compared to the spatial scales of calibration issues
Algorithm differences (day/night) can be problematic
[Map annotation: 1 K colder than RAMSSA]

Summary
MTSAT-1R SST
JAMI (MTSAT-1R) provides hourly observations of SST
Acceptable standard deviation ( K), small cold bias ( K)
Temporal and spatial variability due to calibration
− Not perfect; be aware of limitations
MTSAT-1R SST can be used for diurnal warming studies
Diurnal warming can be detected using analysis fields
− False positives occur where the analysis is cloud affected
Has been used to test simplified models and parameterizations for diurnal warming – see Castro et al.
Future Developments
Calibration – discussions this week
New software / physical retrieval
Still need methods to improve cloud screening
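Detecting diurnal warming against an analysis field, as summarised above, amounts to flagging pixels where the satellite skin SST exceeds a foundation-style analysis by more than a threshold during local daytime. The sketch below uses an illustrative 1 K threshold and hypothetical names; as the slide notes, false positives can occur where the analysis itself is cloud affected.

```python
# Hedged sketch of diurnal-warming detection against an analysis field:
# flag pixels where daytime (satellite - analysis) exceeds a threshold.
# Threshold, daytime window, and names are illustrative assumptions.
def diurnal_warming_flags(sat_sst, analysis_sst, local_hour, threshold_k=1.0):
    """Return per-pixel flags: True where daytime warming exceeds threshold_k."""
    flags = []
    for sst, foundation, hour in zip(sat_sst, analysis_sst, local_hour):
        daytime = 8 <= hour <= 17
        flags.append(daytime and (sst - foundation) > threshold_k)
    return flags
```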

Statistics: TWP+ Region
Note that the moored and drifting buoys have a similar bias signal, but the moored buoys appear to be cold by ~0.12 K when compared to the drifting-buoy statistics – I'll have to double-check this with co-located drifters

Statistics: TWP+ Region
[Hour-by-hour table: Hour | Bias | St Dev; values not recovered]
Reasonably consistent cold bias: K
Reasonably consistent standard deviation: 0.45 K (night), 0.7 K (day)

Statistics: TWP+ Region
Use moored buoys for verification:
Day: Bias: , St. Dev: 0.801, Robust St. Dev:
Local time 08 – 17; sun zenith < 80; sensor zenith < ; points
Night: Bias: , St. Dev: 0.423, Robust St. Dev:
Local time 19 – 06; sun zenith > 100; sensor zenith < ; points
Reasonably consistent standard deviation: 0.45 K (night), 0.7 K (day)
All the moored statistics look similar, so we can group them together for an hour-by-hour analysis
The bias moves around with local time
If we use the daytime algorithm at night, we see a similar level of bias (though positive), but the standard deviation increases to ~1 K; it would, however, provide a bit more consistency. I think I'll provide both methods in the future