1 Vicarious Calibration of Sentinel-3 Toward the Blending of Methods
GSICS Annual Meeting, 20-24 March 2017, Madison, Wisconsin
Vicarious Calibration of Sentinel-3: Toward the Blending of Methods
Bertrand Fougnie, Camille Desjardins, with support from CNES-DNO/OT/PE

2 Summary
Sentinel-3 optical sensors
S3A nominal on-board calibration
The vicarious calibration tool box
Calibration results for OLCI and SLSTR:
Cross-calibration over desert sites
Interband and absolute calibration
Assessing the temporal monitoring
Feedback: combination of methods
Questions and support for discussion

3 Vicarious Calibration of Sentinel-3 Optical Sensors

4 Sentinel-3
ESA mission serving the operational needs of the European Copernicus Programme: oceanography and land monitoring
S3A: launched in February 2016
S3B: to be launched in November 2017
Altimetry + 2 optical sensors, OLCI and SLSTR (follow-on of MERIS and AATSR)
OLCI = 21 bands from 410 to 1020 nm, 300 m, 1270 km swath
SLSTR = 11 bands from 550 nm (VISNIR/SWIR) up to the TIR, 500 m/1 km, 1400 km swath
On-board diffusers + black body (nominal) + vicarious calibration methods
[Image: Sentinel-3A, with OLCI and SLSTR]

5 OLCI - Ocean and Land Colour Imager
OLCI is a follow-on of MERIS
21 VISNIR 300 m bands derived from a spectrometer
5 cameras, 1270 km swath; reduced sunglint thanks to a 12.6° tilt
Calibration wheel: 2 diffusers + a spectral diffuser

6 SLSTR – Sea and Land Surface Temperature Radiometer
SLSTR is a follow-on of the ATSR-1&2 and AATSR instruments
Scanning radiometer with 2 views: nadir (1400 km, OLCI overlap) and 55° backward (740 km)
3 VISNIR + 3 SWIR 500 m bands
3 TIR + 2 fire 1 km bands
Blackbody + VISCAL calibration targets

7 Nominal Calibration
Nominal calibration = on-board calibration
OLCI:
Spectral diffuser (monthly): spectral calibration using the Erbium-doped "pink" diffuser and the Fraunhofer lines; checks the spectrometer response and adjusts the definition of the bands
Primary diffuser (every 2 weeks): optical PTFE panel covering the field of view; field-of-view characterization + reflectance calibration
Secondary diffuser (every 3 months): same optical PTFE; degradation monitoring
SLSTR:
VISCAL unit (every orbit at sunrise): same optical PTFE as OLCI; calibration and monitoring of the visible bands
Black bodies (every scan): 2 highly stable references, hot (305 K) and cold (265 K); calibration of the TIR bands
[Image: OLCI calibration wheel]

8 Validation of the radiometry
Additional validation will be performed to:
Secure the nominal calibration
Detect early anomalies in the calibration parameters
Detect possible improvements in calibration or radiometry
Validate the accuracy over a wide range of targets
Multi-method approach: while every method tries to evaluate the same instrumental calibration, each method is also sensitive to different instrumental radiometric behaviors, e.g. variation within the field of view, variation with time, spectral consistency, linearity, straylight, spectral knowledge, saturation…
Good consistency at the end would contribute to the validation of the whole radiometric instrumental behavior, including of course the absolute calibration

9 Operational Configuration for Sentinel-3
Operational environment = S3ETRAC + SADE + MUSCLE
1/ S3ETRAC = extraction and selection of measurements = PREPROCESSING
Reading of S3 images, selection of relevant data, extraction of data
2/ SADE = measurement & calibration data repository = DATABASE
3 levels of content: measurements // elementary calibrations // syntheses
Various methods for the VIS-NIR-SWIR range
Easy data management & traceability: product identifier, calibration version, SADE identifier; acquisition conditions (dates, geometries, meteorological data); tool version, processing date and parameters…
3/ MUSCLE = multiple-method calibration tools (front-end graphic) = CALIBRATION
3 steps: insertion // calibration // synthesis
Common calibration tools for all sensors
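To make the three-stage flow concrete, here is a minimal, self-contained sketch in Python. Every name and field below is a hypothetical assumption: the real S3ETRAC/SADE/MUSCLE tools are CNES-internal and their interfaces are not described in this presentation; this only mirrors the preprocessing / database / calibration split.

```python
# Hypothetical mini-model of the S3ETRAC -> SADE -> MUSCLE flow.
# All names and fields are illustrative assumptions, not the real interfaces.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Measurement:
    product_id: str   # traceability back to the level-1 product
    site: str         # e.g. a PICS desert site
    band: str         # e.g. "Oa8-665"
    measured: float   # extracted TOA reflectance (S3ETRAC, preprocessing)
    simulated: float  # simulated TOA reflectance for the same conditions

@dataclass
class Sade:
    """DATABASE stage: stores measurements and calibration results."""
    records: list = field(default_factory=list)

    def insert(self, recs):
        self.records.extend(recs)

    def query(self, band):
        return [r for r in self.records if r.band == band]

def muscle_synthesis(db, band):
    """CALIBRATION stage: one coefficient per band = mean measured/simulated."""
    return mean(r.measured / r.simulated for r in db.query(band))

db = Sade()
db.insert([Measurement("S3A_OL_1_…", "desert-site", "Oa8-665", 0.410, 0.400)])
print(muscle_synthesis(db, "Oa8-665"))  # -> 1.025
```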

10 Map of Calibration Approaches
in SADE/MUSCLE
Pseudo-Invariant Calibration Sites (PICS)
Use of 20 desert sites in Africa/Arabia [Lachérade et al., 2013]
Reference = one sensor (e.g. MODIS or MERIS) or one date
Geometrical matching (no simultaneity required) + spectral interpolation [Lachérade et al., 2013]
Cross-calibration/trending for all REFLECTIVE bands
Additional use of 4 snowy sites (inc. Dome C) for VIS-NIR bands [Six et al., 2004]
Oceanic Oligotrophic Sites (very clear, non-turbid scenes)
Rayleigh [Hagolle et al., 1999; Fougnie et al., 2010]
Strict selection: very clear + non-turbid situations for atmosphere + surface
Reference = Rayleigh scattering (~90% of the TOA signal after selection, hence predictable)
Absolute calibration over a wide range of the fov (exc. sunglint) for the VISIBLE range
Sunglint [Hagolle et al., 2004; Fougnie, 2016]
Reference = one red spectral band (e.g. 620 nm for OLCI, 659 nm for SLSTR)
Interband for all REFLECTIVE bands wrt the reference band
Cloudy Sites (DCC, deep convective clouds)
Strict selection of DCC [Fougnie and Bach, 2009; Fougnie, 2016]
Interband for VISNIR bands
All are statistical approaches; the result is a measured/simulated ratio (a sketch follows)
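Since every approach above ends in a measured/simulated ratio, a minimal sketch of one desert cross-calibration matchup can illustrate the mechanics. The function name, the bracketing wavelengths and the reflectance values are assumptions for illustration; the operational method [Lachérade et al., 2013] also handles geometrical matching and the site BRDF.

```python
# Illustrative sketch (assumed values): one desert matchup, where the
# reference sensor's reflectances are spectrally interpolated to the
# target band and the result is the measured/simulated ratio.
import numpy as np

def cross_calibration_ratio(target_wl_nm, target_rho, ref_wls_nm, ref_rhos):
    """Ratio of target reflectance to the spectrally interpolated
    reference reflectance, for one site and matched geometry."""
    rho_ref = np.interp(target_wl_nm, ref_wls_nm, ref_rhos)
    return target_rho / rho_ref

# Example: a 665 nm target band against two bracketing reference bands.
ratio = cross_calibration_ratio(665.0, 0.412, [620.0, 681.0], [0.405, 0.415])
print(f"measured/simulated = {ratio:.3f}")  # ~1.00 means calibrations agree
```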

11 Validation of OLCI and SLSTR
Validation of OLCI & SLSTR calibration through SADE/MUSCLE
All methods were already tested on many sensors, including MERIS
OLCI and SLSTR: deserts, sunglint, clouds (DCC)
OLCI (additional): snow, Rayleigh

12 Chronology
Launch in mid-February 2016
First data available for calibration: beginning of March 2016
Mid-Term Review (MTR) at the end of April 2016
MTR: only 1.5 months available for vicarious calibration
A first evaluation was possible
These preliminary results were very relevant: absolute bias, interband consistency, temporal evolution
These conclusions were confirmed and consolidated at IOCR
In-Orbit Calibration Review (IOCR) on 1 July 2016
Confirmation of the results on ~3 months of data
Note: the best time series available for desert sites
Swap of level-1 processing from the prototype (IPFP, operated at ESTEC) to the operational level-1 processing chain (IPF, operated at EUMETSAT and ESRIN)
Since IOCR: time series are longer; no significant changes in the vicarious results; re-analysis of the early weeks of the mission to track ageing → successful

13 Temporal consistency
Stability as seen by cross-calibration over desert sites
Very good stability after the adjustment made before the Mid-Term Review (dashed line)
[Figure: desert time series for OLCI Oa8-665 and Oa18-885 and for SLSTR S1-555, S3-865, S5A-1600 and S6A-2225, with MTR and IOCR marked]

14 Temporal consistency
Stability of the radiometry after IOCR
[Figure: SLSTR S2-659 nm time series, with MERIS and MODIS as references]

15 Temporal consistency
Ageing observed by the diffuser
Can we trust the diffuser and implement the correction as the baseline?
Noise on the absolute trending (diffuser BRDF)
Some bands are increasing, others decreasing
Can we use vicarious calibration to conclude?
[Figure: trending derived from the primary on-board diffuser; 1% scale indicated]

16 Temporal consistency
Is it possible to extract a very small instrumental drift from desert time series?
Usual time series of calibration results: the dispersion is too high!
Interband approach wrt a reference band (620 nm)
[Figure: OLCI 7-620 time series; OLCI is the time reference; time series arbitrarily shifted up/down for clarity; time unit = 4 days]

17 Temporal consistency
Is it possible to extract a very small instrumental drift from desert time series?
Usual time series of calibration results: the dispersion is too high!
Interband approach wrt a reference band (620 nm): limitations
On-going analysis based on an "interband to closest band" approach: spectral correlation (a minimal sketch of the interband idea follows)
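A minimal sketch, on synthetic data, of the interband trick described above: dividing a band's desert time series by the 620 nm reference cancels most of the shared site/atmosphere variability, leaving a small drift that a linear fit can recover. The noise levels and the injected drift are invented for illustration.

```python
# Synthetic illustration: recover a small drift from a noisy desert series
# by ratioing against a stable reference band before fitting the trend.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 360.0, 4.0)                       # days; one point per 4 days
common = 1.0 + 0.02 * rng.standard_normal(t.size)    # shared site/atmo noise
ref_620 = common * (1.0 + 0.005 * rng.standard_normal(t.size))
band = common * (1.0 - 1e-5 * t) * (1.0 + 0.005 * rng.standard_normal(t.size))

ratio = band / ref_620                               # interband time series
slope = np.polyfit(t, ratio, 1)[0]                   # drift per day
print(f"fitted drift: {100 * slope * 365.25:+.2f} %/year (injected: -0.37)")
```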

18 Temporal consistency
How to improve the result? Combination! (that's not yet blending)
MTR = some updates of the CCDB (parameters), so the time series were not homogeneous
Before MTR:
Desert: nice coverage of the time series → very suitable to assess the trending
DCC: only 2 dates → not possible to assess the trending
→ Use of desert sites
After MTR:
Desert: ~no data before October → poor information for the trending
DCC: not so many matchups, but at different dates in the time series → better for the trending estimation
→ Use of DCC

19 Temporal consistency
Results: camera #4
Same observation for cameras 3 and 5

20 Consistency with other sensors
PICS desert sites, references:
MERIS (best spectral interpolation for VISNIR) + MODIS (GSICS reference)
S2A-MSI (Sentinel) + AATSR (consistency with Envisat)
Classical approach [Lachérade et al., 2013] + double difference when there are few matchups (see the sketch below)
VISNIR: bias between OLCI/SLSTR/AATSR and MERIS/MODIS/S2A-MSI
SWIR: bias between SLSTR and MODIS/S2A-MSI/AATSR
[Figures: OLCI and SLSTR results]
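A sketch of the double difference mentioned above: when two sensors have few direct matchups, each is first compared to a common reference (e.g. MERIS over the same desert sites), and the relative bias is the ratio of ratios. The numerical values are placeholders, not the presentation's results.

```python
# Double difference: infer A/B from A/ref and B/ref measured over the
# same sites, cancelling the common reference from the comparison.
def double_difference(a_over_ref: float, b_over_ref: float) -> float:
    return a_over_ref / b_over_ref

# Placeholder example: A/MERIS = 1.025 and B/MERIS = 1.010 would imply
# an A/B relative bias of about 1.5%.
print(f"A/B = {double_difference(1.025, 1.010):.4f}")
```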

21 Spectral consistency
The various methods agree very well
Interband methods (clouds, sunglint) and desert/Rayleigh: all renormalized
OLCI: near-perfect spectral consistency, <1% (exc. absorption bands, and perhaps 1020 nm)
SLSTR: within 2% for VISNIR, but a very important bias on the SWIR bands
[Figures: OLCI, normalization = mean of 7-620, 8-665 and two further bands; SLSTR, normalization = S2-659]

22 Consistency between all methods
Multi-method comparison
Interband methods (clouds, sunglint) and desert/Rayleigh: all renormalized
OLCI: absolute residue ~2% (exc. absorption bands, and perhaps 1020 nm)
SLSTR: residue of 3% for VISNIR, but a very important bias on the SWIR bands (10 to 40%)
[Figures: OLCI, normalization for interband = 1.025; SLSTR, normalization for interband = 1.042]

23 Consistency within field-of-view
No large variation detected within the field of view
OLCI = pushbroom with 5 cameras; SLSTR = scanner
Interband over a white target validated: some residues remain to be explained
Nevertheless, the variation of the ISR with detector number for OLCI has to be considered to explain residues on Rayleigh scattering and DCC
[Figure panels: OLCI Oa2-412, Oa6-560, Oa8-865, Oa16-779 and SLSTR S1-555, S3-865, from DCC, Rayleigh, Desert-MERIS and Desert-MODIS]

24 Consistency within field-of-view
Comparison between methods would help to identify errors (statistics needed); a synthetic sketch follows
[Figure panels: bands 8-665 and 4-490 for Rayleigh, DCC and Desert]
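One way to gather those statistics is to bin measured/simulated results by detector index and look for steps between cameras. The geometry below (5 cameras of 740 detectors, MERIS-like) and the 0.5% offset injected in camera 4 are assumptions for illustration, not actual OLCI characteristics.

```python
# Synthetic field-of-view check: average calibration ratios per camera
# to reveal inter-camera steps (a 0.5% offset is injected in camera 4).
import numpy as np

rng = np.random.default_rng(1)
N_CAM, N_DET = 5, 740                       # assumed MERIS-like layout
detector = rng.integers(0, N_CAM * N_DET, size=5000)
ratio = 1.0 + 0.01 * rng.standard_normal(detector.size)
camera = detector // N_DET                  # 0-based camera index
ratio[camera == 3] += 0.005                 # injected offset, camera 4

for cam in range(N_CAM):
    mu = ratio[camera == cam].mean()
    print(f"camera {cam + 1}: mean measured/simulated = {mu:.4f}")
```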

25 Combining methods as seen by S3A
Results derived from the various methods were important at IOCR
For OLCI & SLSTR: confirm a light absolute bias in VISNIR (about 2.5%) [Desert, Rayleigh]
OLCI:
Provide confidence in the on-board diffuser [Desert, DCC]
Check the temporal stability of the instrument [Desert, DCC]
Provide high confidence in the spectral consistency [Desert, DCC, sunglint]
Investigation of the variation within the fov [Rayleigh, Desert, DCC]
SLSTR: confirm a large absolute bias in SWIR (about 10 to 30%) [Desert, Sunglint]
Additional results are foreseen:
Check the temporal stability [Desert, Sunglint]
Check the variation nadir/oblique [Rayleigh, Desert]
But that's combination, still not merging!

26 Combination of Methods toward the Blending ?

27 Observation
Several calibration methods could be available: Cloud-DCC, Moon, PICS-Desert, Rayleigh, Sunglint, PICS-Antarctica, SNO, Ray-matching…
Implementing several methods will provide various results which will in general differ: sometimes consistent, sometimes not at all
What is the definition of « consistent » for GSICS needs? GSICS needs to define the target in terms of « self-consistency »
Self-consistency of calibration results will differ because of:
The theoretical performance of the method: bias and noise properties / representativity of the matchups
Known contributions from the sensor that are considered in the calibration method (e.g. SBAF)
Known contributions from the sensor that are not considered (e.g. homogeneity of the ISR)
Unknown contributions from the sensor (or level-1)

28 Observation
A radiometric artifact which is not a calibration error is often called a "calibration error": straylight, spectral rejection, non-linearity, variation with scan, polarization…
These are radiometric errors, but not calibration errors
They are radiometric biases, but they vary with every different situation; they are not full instrumental biases
What do we mean by "calibration"? For some of us, it means "global adjustment of level-1"
We may empirically correct the bias, but it may not be purely a calibration error
A good instrument: all radiometric artifacts are well controlled (this often means good engineering)
A good calibration: consistency of results wrt the prime calibration (this often means a good instrument)
GSICS: to be considered when selecting a reference (metric and scoring)
Blending is not a fate: a single method may remain the best estimation
GSICS has to decide how to take this into account in order to provide users with the most relevant results; it is difficult to adopt a standardized unique method (approach?)

29 GSICS Blending Strategy
[Flowchart] Calibration inputs: DCC, Moon, Rayleigh, … calibration units
1/ Blending Template: GSICS standard, i.e. blind blending or GPRC-modulated blending
2/ Optimized Blending: evaluation & scoring, leading to full consistency or limitations
3/ Investigation Loop: diagnosis of limitations or discrepancies as radiometric artifacts or method artifacts

30 The Synergy Matrix
To be evaluated for each band of the sensor to calibrate
Methods: DCC, Moon, PICS-desert, PICS-snow, SNO, Rayleigh, Sunglint
Crossed against:
Uncertainty from the implemented method (depending on data sampling)
Uncertainty from the sensor: spectral response knowledge, straylight, linearity, polarization, radiometric noise
Characterization to be addressed: trending, absolute, interband, cross-calibration

31 The Synergy Matrix
Same matrix (methods vs. method uncertainty, sensor uncertainty and characterization to be addressed), evaluated for each band
GSICS output: one calibration coefficient per band per date
Taken as the best method or a weighted mean, with a radiometric explanation (a sketch of the weighted mean follows)
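A minimal sketch of the "best method or weighted mean" output: per-method coefficients combined with inverse-variance weights, where each uncertainty would come from the synergy-matrix budget (method sampling + sensor characterization). The coefficients and sigmas below are placeholders, not S3A results.

```python
# Inverse-variance weighted blend of per-method calibration coefficients.
import numpy as np

methods = {                  # coefficient, 1-sigma uncertainty (placeholders)
    "PICS-desert": (1.024, 0.010),
    "Rayleigh":    (1.028, 0.015),
    "DCC":         (1.020, 0.012),
}
c = np.array([v[0] for v in methods.values()])
s = np.array([v[1] for v in methods.values()])
w = 1.0 / s**2
blend = np.sum(w * c) / np.sum(w)
sigma = 1.0 / np.sqrt(np.sum(w))
print(f"blended coefficient = {blend:.4f} +/- {sigma:.4f}")
```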

32 Combination of Methods

33 Combination of Methods
[Cartoon: DIY calibration activities assembled from PICS-Desert, Moon, DCC, SNO, Rayleigh, PICS-Snow and Sunglint]
« Calibration, that's hobby… » (Dave Doelling)

34 DIY is fine, but GSICS needs operationality
[Cartoon: the GSICS « house-blend » blended method]

35 Discussion – Questions
Derive results from various methods:
How to define the mean value, or a weighted value? How to define the corresponding weights?
Consider the theoretical accuracy of the methods; consider the sensor behavior
If results are consistent between methods:
Do we keep the best theoretical method?
Do we derive a blend through a weighted mean in order to reduce the residual bias?
This is the job of every agency or GPRC
If results are not consistent between methods (see the sketch after this list):
How will discrepancies be interpreted? Ignored?
Will GSICS try to understand what the radiometric artifacts could be?
If not solved/solvable, do we present all results and let users make their own selection? Ideally not recommended, but it may help various users
→ rename to « GSICS adjustment » instead of « calibration » (because we do not adjust the calibration but something else)
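One hedged way to make "consistent between methods" operational, as asked above: a reduced chi-square of the per-method coefficients against their weighted mean. Values near or below 1 suggest the methods agree within their stated uncertainties; much larger values flag a discrepancy (or an unmodelled radiometric artifact) to investigate before blending. Inputs are placeholders.

```python
# Reduced chi-square as a simple cross-method consistency score.
import numpy as np

coeffs = np.array([1.024, 1.028, 1.020])   # per-method results (placeholders)
sig = np.array([0.010, 0.015, 0.012])      # their stated 1-sigma uncertainties
w = 1.0 / sig**2
wmean = np.sum(w * coeffs) / np.sum(w)
chi2_red = np.sum(((coeffs - wmean) / sig) ** 2) / (coeffs.size - 1)
print(f"reduced chi-square = {chi2_red:.2f}")  # << 1 here: consistent
```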

36 Blending methods as seen by S3A
Which calibration result could we recommend, based on the analysis of OLCI calibration during the 1st year in orbit?
Inter-calibration over desert (MERIS): if we want to optimize continuity with MERIS and the historical archive
Calibration over Rayleigh scattering: if we want to optimize performance for ocean colour applications
Calibration over DCC: if we want to optimize performance for cloud retrievals
Blending of ~50% desert and ~50% DCC: if we want to optimize the level-1 calibration at the state of the art
Even for OLCI/S3A, which can be considered a « very good » sensor, the wrap-up is not obvious

37 Wrap-up
In general, deriving results from various (accurate) methods provides different information about the radiometry
In general, the results will slightly or largely differ
Probably no universal method can be defined in advance for blending, but an approach was proposed
At least at the beginning, this would be a case-by-case analysis and conclusion
GSICS must define how the various calibration sets will be used
Unclear information toward users may endanger the future of the calibration corrections proposed by GSICS
« This is not Science, this is Art… » (again Dave Doelling)

38 That’s all Folks !

39 The GSICS paradigm
[Photos: Tim's car; GSICS's vehicle when the WG meets in the US]

