
1  The Status of the Multi-satellite Precipitation Analysis and Insights Gained from Adding New Data Sources
G.J. Huffman (1,2), R.F. Adler (1), D.T. Bolvin (1,2), E.J. Nelkin (1,2)
1: NASA/GSFC Laboratory for Atmospheres   2: Science Systems and Applications, Inc.
Outline
1. MPA Status
2. Satellite Observation Noise
3. Estimating Error
4. (Validation Data Issues)
5. Summary

2  1. MPA STATUS
[Flow chart: instantaneous SSM/I, TRMM, AMSR, and AMSU estimates are calibrated to the "best" High-Quality (HQ) estimates using 30-day HQ coefficients; the HQ estimates are merged into a 3-hourly merged HQ field; merged HQ is matched against hourly IR Tb to generate 30-day IR coefficients; applying the IR coefficients gives hourly HQ-calibrated IR precip; the IR and merged HQ estimates are merged into the 3-hourly multi-satellite (MS) product; MS plus monthly gauges yield the monthly satellite-gauge combination (SG); the 3-hourly MS is then rescaled to the monthly SG.]
- The MPA has been upgraded to produce both an improved real-time (3B42RT) and a new post-real-time (3B42) data set
- Code to include AMSR-E and AMSU-B precip estimates in the MPA is in operational testing
- The "old" real-time is available for February 2002 – present
- The post-real-time is available for January 1998 – December 1998, with reprocessing continuing at 5x real time
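
One concrete piece of the cascade is the final rescaling box: the 3-hourly MS fields are scaled so that, accumulated over the month, they match the monthly SG combination. The sketch below illustrates that idea only; it is not the operational MPA code (which, among other things, limits extreme ratios), and the array shapes, units (monthly-accumulated mm assumed throughout), and treatment of zero-MS boxes are illustrative assumptions.

```python
# Hedged sketch of "Rescale 3-hourly MS to monthly SG"; not the operational MPA code.
import numpy as np

def rescale_ms_to_sg(ms_3hourly, monthly_sg):
    """ms_3hourly: (n_times, ny, nx) 3-hourly MS accumulations; monthly_sg: (ny, nx) monthly total."""
    ms_monthly = ms_3hourly.sum(axis=0)                       # month rebuilt from the 3-hourly MS
    with np.errstate(divide="ignore", invalid="ignore"):
        # per-gridbox ratio; boxes with no MS precip are left unscaled (assumption for this sketch)
        ratio = np.where(ms_monthly > 0, monthly_sg / ms_monthly, 1.0)
    return ms_3hourly * ratio                                  # broadcast the ratio over all times
```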

3  2. SATELLITE OBSERVATION NOISE
Different sensors "see" different physical scenes
- Microwave "sees" hydrometeors along the front
- IR "sees" clouds ahead of the front
- The inferred precip is in different places that are each synoptically consistent, but the microwave is better

4  2. SAT. OBS. NOISE (cont.)
IR vs. microwave at full resolution: 3-hr, 0.25°x0.25°; 00, 03, …, 21Z 15 Feb 2002; latitude band 30°N-S
- Errors are equitably distributed on either side of the 1:1 line by design of the IR calibration
- But the details of the IR and microwave patterns differ
- Scene classification might be helpful (Sorooshian et al.)

5  2. SAT. OBS. NOISE (cont.)
So, we try to get as many microwave sensors as possible (i.e., do GPM). But details in the microwave observations can cause noise in the precip estimates if they're not properly handled.
[Figure legend: TRMM PR (red), TRMM TMI (cyan), SSM/I (3 sat.; yellow), AMSR-E (blue), AMSU-B (3 sat.; green), IR (black)]

6  2. SAT. OBS. NOISE (cont.)
Coincident 0.25°-gridbox GPROF-AMSR and -TMI estimates for February 2004
[Scatter plots of TMI precip vs. AMSR precip (mm/h) for ±15-, ±30-, and ±60-minute coincidence windows]
The "standard" 3-hr time window for coincidence introduces error
- same grid box for spatial coincidence; ±15-, ±30-, ±60-minute windows for time coincidence
- points near the axes at ±60 min result from advection into/out of the box, and/or growth/decay
- limiting the window decreases the microwave data in each period
- time interpolation, such as in morphing, helps avoid this error
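
A minimal sketch of the kind of match-up behind these panels: pair two sensors' grid-box estimates when they share a box and their observation times differ by less than the chosen window. This is not the comparison code used for the slide; the record layout ("lat_idx", "lon_idx", "time", "rate") is a hypothetical placeholder.

```python
# Pair grid-box estimates from two sensors within a +/- time-coincidence window.
from datetime import timedelta

def coincident_pairs(obs_a, obs_b, window_minutes=15):
    """Return (rate_a, rate_b) pairs sharing a grid box within +/- window_minutes."""
    window = timedelta(minutes=window_minutes)
    by_box = {}
    for ob in obs_b:                                   # index sensor B by grid box
        by_box.setdefault((ob["lat_idx"], ob["lon_idx"]), []).append(ob)
    pairs = []
    for oa in obs_a:
        for ob in by_box.get((oa["lat_idx"], oa["lon_idx"]), []):
            if abs(oa["time"] - ob["time"]) <= window:
                pairs.append((oa["rate"], ob["rate"]))
    return pairs

# Widening window_minutes from 15 to 60 adds pairs, but also adds cases where
# advection or growth/decay means the two sensors actually saw different rain.
```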

7  2. SAT. OBS. NOISE (cont.)
Conic scanners (SSM/I, TMI, AMSR-E)
- Scan lines are segments of a cycloidal pattern; the along-track separation is the same everywhere, but the curvature causes oversampling at the edges.
- Pixels at the scan edges uniquely represent an area about 40% smaller than at scan center.
[Diagram: curved conical scan lines, with an arrow indicating satellite motion]

8  2. SAT. OBS. NOISE (cont.)
Cross-track scanners (IR, AMSU)
- Pixels grow as the viewing angle grows away from nadir.
- Also, oversampling in the along-track direction occurs at the scan edges.
- Changing pixel size changes the observed precipitation rates.
[Diagram: straight cross-track scan lines with pixels growing from scan center to scan edges; arrow indicating satellite motion]
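
As a rough illustration of the pixel-growth point, a flat-Earth approximation (Earth curvature and instrument specifics ignored) gives the familiar 1/cos and 1/cos² stretching of a cross-track footprint away from nadir. The altitude and beamwidth below are round illustrative numbers, not any particular instrument.

```python
# Flat-Earth illustration of cross-track footprint growth with scan angle.
import math

H_KM = 830.0           # assumed satellite altitude (illustrative)
BEAMWIDTH_DEG = 1.1    # assumed beamwidth (illustrative)

def footprint_km(scan_angle_deg):
    """Approximate (along-scan, along-track) footprint size at a given scan angle."""
    beam = math.radians(BEAMWIDTH_DEG)
    a = math.radians(scan_angle_deg)
    along_scan  = H_KM * beam / math.cos(a) ** 2   # slant range plus projection stretching
    along_track = H_KM * beam / math.cos(a)        # slant range stretching only
    return along_scan, along_track

for angle in (0, 20, 40, 50):
    along, across = footprint_km(angle)
    print(f"scan angle {angle:2d} deg: ~{along:.0f} x {across:.0f} km")
# The edge-of-scan pixel covers several times the nadir area, so the same rain
# averaged over it yields smoother, generally lower, apparent rates.
```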

9  3. ESTIMATING ERROR
Bowman, Phillips, North (2003, GRL): validation by TOGA TAO gauges
- 4-year average of Version 5 TRMM TMI and PR
- 1°x1° satellite, 12-hr gauge, each centered on the other; each point is a buoy
- wind bias in the gauges is not corrected
- the behavior seems nearly linear over the entire range
[Scatter plots with fitted slopes of 0.96 and 0.68]

10  3. ESTIMATING ERROR (cont.)
Monthly accumulations of GPCP Version 2 versus Pacific atolls for 2.5°x2.5° boxes
- more spread than in the 4-year average
- part of the spread is due to gauge uncertainty (Gebremichael et al. 2003; Steiner et al. 2003)
- the basis of the bias is still uncertain

11  3. ESTIMATING ERROR (cont.)
Daily accumulations of MPA (3B42RT) versus the CPC analysis for 0.25°x0.25° boxes, 13Z 30 July – 12Z 31 July 2004, from the CPC validation site
- Mean = 3.2 mm/d; Bias Ratio = 1.04; MAE = 5.3 mm/d
- the correlation continues to go down, as expected
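
The bulk scores quoted on the slide follow their standard definitions; the short sketch below applies them to toy paired daily totals. The numbers are made up for illustration and are not the data behind the slide.

```python
# Standard bulk scores for paired daily totals (mm/d); toy values for illustration.
import numpy as np

sat = np.array([0.0, 2.5, 10.0, 4.0, 0.5])   # satellite (e.g., 3B42RT) daily totals
obs = np.array([0.0, 2.0,  9.0, 4.0, 0.5])   # matching validation (e.g., CPC analysis) totals

bias_ratio = sat.mean() / obs.mean()          # > 1 means the satellite is wetter overall
mae        = np.abs(sat - obs).mean()         # mean absolute error, mm/d
corr       = np.corrcoef(sat, obs)[0, 1]      # linear correlation
print(f"bias ratio {bias_ratio:.2f}, MAE {mae:.2f} mm/d, correlation {corr:.2f}")
```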

12  3. ESTIMATING ERROR (cont.)
Which "satellite" estimate matches the "observations" better?
[Schematic time series of amount vs. time for obs., sat.1, and sat.2]
The uncertainties are multi-scale
- sat.1 is better than sat.2
- the usual χ² = (sat − obs)² yields the same bad score for both
- the improvement can be revealed with "some" averaging, but how much? The answer depends on the averaging.
- what does the user want to know?
- fine-scale forecasts have the same problem
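
A toy demonstration of the scoring problem, with made-up numbers rather than the slide's schematic: an estimate that reproduces the observed pulse but one step late is penalized by the pointwise squared error at least as badly (here, worse) than an estimate with no rain at all, while block averaging reveals which one is actually better.

```python
# Double-penalty illustration: pointwise squared error vs. block-averaged error.
import numpy as np

obs  = np.array([0, 0, 4, 0, 0, 0, 0, 0], dtype=float)   # observed pulse at t = 2
sat1 = np.array([0, 0, 0, 4, 0, 0, 0, 0], dtype=float)   # same pulse, one step late
sat2 = np.zeros(8)                                        # no rain at all

def sse(est, ref):
    return float(((est - ref) ** 2).sum())

print(sse(sat1, obs), sse(sat2, obs))        # 32.0 vs 16.0: the realistic estimate scores worse

def block_mean(x, n=4):
    """Average consecutive blocks of n time steps."""
    return x.reshape(-1, n).mean(axis=1)

print(sse(block_mean(sat1), block_mean(obs)))  # 0.0: sat1 now matches the observations
print(sse(block_mean(sat2), block_mean(obs)))  # 1.0: sat2 still misses the rain
```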

13  3. ESTIMATING ERROR (cont.)
At the monthly scale there are a few bulk formulae for estimating random error (Huffman; Gebremichael and Krajewski)
- even these need information that not all data sets provide
- better schemes are needed that separately represent sampling and algorithmic error
An estimator is needed for bias on coarse scales
- Tom Smith is working on this
- the sticking point is possible dependence on weather regime
- Implication: regime-dependent bias would look like extra random error when the regimes aren't represented
At "fine" time/space scales we have a lot to do
- the cleanest possible match-ups are critical
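
The slide does not reproduce the bulk formulae. As a generic, hedged reminder of their flavor only (not the published Huffman or Gebremichael–Krajewski expressions), the random error of a monthly grid-box value is commonly split into a sampling piece that shrinks with the number of effectively independent overpasses and an algorithmic (retrieval) piece:

```latex
% Generic decomposition of monthly random error; illustrative, not a published formula.
\[
  \sigma_{\mathrm{sampling}}^{2} \;\approx\; \frac{\sigma_{\mathrm{inst}}^{2}}{N_{\mathrm{eff}}},
  \qquad
  \sigma_{\mathrm{random}}^{2} \;\approx\; \sigma_{\mathrm{sampling}}^{2} + \sigma_{\mathrm{algorithm}}^{2},
\]
% where \sigma_inst is the variability of the instantaneous rates in the box and
% N_eff is the number of effectively independent satellite overpasses in the month.
```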

14  3. ESTIMATING ERROR (cont.)
There is no practical approach for averaging up the fine-scale errors to provide a consistent estimate of the coarser-scale errors.
- should there be separate estimates of correlated and uncorrelated errors on the fine scale?
Speculation: accounting for weather regime and underlying surface type will turn out to be important for getting clean answers.
Validating combination estimates has the additional challenge that the relative weight given different inputs fluctuates, and the different inputs usually have different statistical properties.

15  3. ESTIMATING ERROR (cont.)
The precip error in no-rain areas needs to be explicitly estimated
- the error is certainly not zero for every zero-rain estimate
- some locations are very certain not to contain rain, while the no-rain estimate is much less certain in others
- error estimates in zero-rain areas might be helpful in merging different rain estimates
- what does the user want to know?
- this is likely an algorithm-dependent calculation – GPROF is heading towards this in Version 7
[Schematic: a "Rain Estimate" map alongside a "Possible Estimate of Error" map]

16  4. VALIDATION DATA ISSUES
Validation data are lacking even at the 2.5° monthly scale.
- A standard monthly gauge analysis provides ≥5 gauges only in some land areas.
- We can't assume correct monthly validation in the rainforests!

17  4. VALIDATION DATA ISSUES (cont.)
We need to pursue the best in situ technologies
- redundant gauge siting (Krajewski, TRMM Office)
- dual-polarization radar
- revisit optical rain gauges? (Weller, Bradley, Lukas [2004 J. Tech.] think they've figured out the TOGA COARE data)
- acoustic rain gauges (Nystuen)
- solid precipitation in general – solid precipitation is the next frontier for satellites; validation is a substantial issue
We need to develop more surface validation sites
- ensure that the data get shared
- sample additional climate regimes: mid-latitude ocean, snowy land
- develop long-term strategies without breaking the bank – IPWG working with continental-scale validation efforts (Ebert - Australia, Janowiak - U.S., Kidd - Europe)

18  5. SUMMARY
The MPA is ready to include "all" the standard microwave data.
The original satellite data have features that can cause "noise" if they're not properly handled.
- IR doesn't respond to hydrometeors per se
- wide time windows mix non-coincident data
- different pixels along a scan represent different things
Error estimation remains a substantial problem.
- finer-scale match-ups are intrinsically more noisy
- we need concepts and methodology for making and inter-relating quantitative estimates of error across the range of scales
- in particular, we need to develop bias estimates and estimates of error in non-raining areas
Surface observations can help us understand the behavior of the satellite estimates. We need to:
- develop more data sites, including areas with snow
- emphasize clean match-ups of surface and global data

20  3. ESTIMATING ERROR
Precipitation is
- non-negative
- intermittent
- highly variable over the known range of time scales
- loosely coupled to larger-scale controls
The usual notion of error is
    P_est(x,y,t) = [ P_true(x,y,t) + r(x,y,t) ] B(x,y)
- P_est: estimated precipitation (what we actually see)
- P_true: true precipitation (what validation is supposed to tell us)
- r: random error (zero-mean random parameter); results from algorithmic error or sampling error
- B: bias error (persists when time averaging should have damped out the random error); results from algorithmic error or sampling error
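
Restating the slide's error model compactly, together with the time-averaging step it implies (straightforward algebra under the model's own assumptions):

```latex
% The slide's error model, with the averaging argument behind the random/bias split.
\[
  P_{\mathrm{est}}(x,y,t) \;=\; \bigl[\, P_{\mathrm{true}}(x,y,t) + r(x,y,t) \,\bigr]\, B(x,y),
  \qquad \langle r \rangle_t \approx 0 .
\]
% Averaging over a long enough interval damps the zero-mean random term, leaving
% (approximately) the multiplicative bias:
\[
  \langle P_{\mathrm{est}} \rangle_t \;\approx\; B(x,y)\, \langle P_{\mathrm{true}} \rangle_t ,
  \qquad\text{so}\qquad
  B(x,y) \;\approx\; \frac{\langle P_{\mathrm{est}} \rangle_t}{\langle P_{\mathrm{true}} \rangle_t}.
\]
```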

21  2. SAT. OBS. NOISE (cont.)
- Both TMI and AMSU-B have a problem detecting light precipitation over ocean; AMSU-B is worse
- AMSU-B compensates for the low occurrence of precip by having more high rates
- Probability matching can control the rates, but can't invent rain in zero-rain areas
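
A minimal sketch of the probability (quantile) matching idea named on the slide: each nonzero rate from the sensor being adjusted is mapped to the rate with the same cumulative probability in a reference sensor's distribution, while zero-rain boxes stay at zero. This is an illustration, not the operational MPA code; the input arrays are placeholders for coincident rain-rate samples.

```python
# Probability (quantile) matching of one sensor's rain rates to a reference sensor.
import numpy as np

def probability_match(rates_to_adjust, reference_rates, source_sample):
    """Adjust nonzero rates so their distribution matches the reference distribution."""
    src = np.sort(np.asarray(source_sample, dtype=float))    # sample from the sensor being adjusted
    ref = np.sort(np.asarray(reference_rates, dtype=float))  # sample from the reference sensor
    out = np.array(rates_to_adjust, dtype=float)
    wet = out > 0                                             # zero-rain boxes stay zero:
    if wet.any():                                             # matching cannot invent rain there
        # empirical CDF position of each wet rate within the source sample ...
        quantiles = np.searchsorted(src, out[wet]) / len(src)
        # ... mapped to the same quantile of the reference sample
        out[wet] = np.quantile(ref, np.clip(quantiles, 0.0, 1.0))
    return out
```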

22  3. ESTIMATING ERROR (cont.)
How are these two "satellite" estimates best merged?
[Schematic time series of amount vs. time for sat.1 and sat.2]
Any linear weighting scheme will damage the statistics:
- fractional coverage will be too high
- maximum and conditional rain rates will be too low
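
A toy demonstration (made-up numbers, not the slide's schematic) of exactly those two effects: averaging two estimates that place the same pulse at slightly different times doubles the fractional coverage and halves the peak and conditional rates.

```python
# Why a simple linear merge smears rain: coverage goes up, intensity goes down.
import numpy as np

sat1 = np.array([0, 0, 6, 0, 0, 0], dtype=float)   # pulse at t = 2
sat2 = np.array([0, 0, 0, 6, 0, 0], dtype=float)   # same pulse, one step later
merged = 0.5 * (sat1 + sat2)                        # equal-weight linear merge

def stats(x):
    wet = x > 0
    return {"frac_coverage": wet.mean(),
            "max": x.max(),
            "conditional_mean": x[wet].mean() if wet.any() else 0.0}

print(stats(sat1))    # coverage 1/6, max 6, conditional mean 6
print(stats(merged))  # coverage 2/6, max 3, conditional mean 3 -> too wide, too weak
```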

23  3. ESTIMATING ERROR (cont.)
Real rain patterns are messy!
[Map: rainfall for the D.C. area, July 1994]
- Convective rain has very short correlation distances – even for a month
- The original D.C. is 50% of a 0.25° grid box at latitude 40°
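
As a hedged illustration (not from the presentation) of how a correlation distance like the one mentioned here can be quantified: correlate monthly totals between all station pairs, bin by separation distance, and see how quickly the binned correlation falls off. The inputs are placeholders for a dense gauge network.

```python
# Binned inter-station correlation vs. separation distance, for estimating a
# correlation distance; illustrative sketch with hypothetical inputs.
import numpy as np

def correlation_vs_distance(coords_km, monthly_totals, bin_km=5.0):
    """coords_km: (n_stations, 2) x/y positions; monthly_totals: (n_months, n_stations)."""
    n = coords_km.shape[0]
    corr_matrix = np.corrcoef(monthly_totals.T)            # station-by-station correlations
    dists, corrs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords_km[i] - coords_km[j]))
            corrs.append(corr_matrix[i, j])
    dists, corrs = np.array(dists), np.array(corrs)
    bins = (dists // bin_km).astype(int)
    # mean correlation in each distance bin, keyed by the bin's lower edge (km)
    return {float(b * bin_km): corrs[bins == b].mean() for b in np.unique(bins)}
```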

24  3. ESTIMATING ERROR (cont.)
Satellite-buoy validation
[Figure: Buoy vs. TMI precipitation]

25  4. VALIDATION DATA ISSUES (cont.)
The primary difficulties are
- lighter precipitation rates
- snowy/icy/frozen surfaces defeat current microwave schemes, which prevents direct estimates and calibration for IR
- IR tends to be decoupled from precipitation processes
- surface calibration/validation data are sparse
"Complex terrain" can induce variations the satellites miss
- strong variations over short distances
- "warm rain enhancement" on windward slopes is not retrievable
Sounding channels – TOVS, AIRS – are the current best choice
- GPCP SG and 1DD both use TOVS at high lat./alt.
- group funded to put sounder data in the MPA globally
GPM (and others) have driven recent work
- evaluating additional channels
- evaluating deployment of sounder channels that don't see the sfc.

