
1 Remote Sensing of Optically Shallow Waters: Retrieval of Bathymetry, Bottom Classification, and Water Optical Properties from Hyperspectral Imagery
Curtis D. Mobley, Sequoia Scientific, Inc., Bellevue, Washington, USA
Work underway with W. Paul Bissett et al., Florida Environmental Research Institute, Tampa, Florida

2 Overview
● What's the problem?
● How the terrestrial RS people solve a similar problem
● Why the terrestrial solution doesn't work for the ocean
● Airborne hyperspectral remote sensing
● One way to attack the ocean problem via a spectrum-matching and look-up-table (LUT) methodology
● One example analysis
● Error estimates for the retrieved information

3 The Problem
Many areas of the coastal ocean are difficult to map (shallow water, dangerous coral reefs, denied access) and to monitor at high spatial and temporal resolution (for detection of storm effects on bathymetry and bottom vegetation, or of ecosystem changes in water quality or bottom vegetation due to human inputs or climate change).
We need to extract environmental information (bathymetry, bottom type, water-column inherent optical properties) from remote-sensing reflectances R_rs.

4 Terrestrial Thematic Mapping
Generation of maps of vegetation type, land usage, population density, etc. is called "thematic mapping." It has been intensively studied for terrestrial mapping for >30 years (e.g., the Landsat multispectral sensors).
www.csc.noaa.gov/crs/rs_apps/sensors/landsat.htm

5 A Common Terrestrial Solution (1)
Build a library of measured R_rs spectra for various surface types (bare soil, grasslands, forest, pavement, healthy crops, diseased crops, etc.).
Simple example: 2 wavelengths and 3 classes.
[Figure: scatter plot of R_rs(λ1) vs. R_rs(λ2), with clusters of points for the bare soil, water, and forest classes]

6 A Common Terrestrial Solution (2)
Compute the class mean R_rs spectra and covariance matrices.
[Figure: the same R_rs(λ1) vs. R_rs(λ2) scatter plot, now showing the mean and spread of each class]

7 Math Details
The mean spectrum for class m, having N_m spectra each with K wavelengths, is (dropping the rs subscript from R_rs)

\mu_m(i) = \frac{1}{N_m} \sum_{n=1}^{N_m} R_n(i), \qquad i = 1, \ldots, K

The K × K covariance matrix for class m is

\Sigma_m(i,j) = \frac{1}{N_m - 1} \sum_{n=1}^{N_m} \left[ R_n(i) - \mu_m(i) \right] \left[ R_n(j) - \mu_m(j) \right], \qquad i, j = 1, \ldots, K

Σ_m(i,j) tells how R_rs at wavelength i covaries with R_rs at wavelength j [units of 1/sr² for R_rs]; Σ_m(i,i) is the variance at wavelength i.
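
A minimal NumPy sketch of these class statistics (the function and array names are mine; spectra_m is assumed to be an N_m × K array of one class's training spectra):

```python
import numpy as np

def class_statistics(spectra_m):
    """Return the mean spectrum (K,) and the K x K covariance matrix of one class."""
    mu_m = spectra_m.mean(axis=0)            # mean R_rs at each of the K wavelengths
    # rowvar=False: rows are the N_m observed spectra, columns are wavelengths;
    # np.cov uses the N_m - 1 normalization, matching the formula above
    sigma_m = np.cov(spectra_m, rowvar=False)
    return mu_m, sigma_m
```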

8 A Common Terrestrial Solution (3)
Let I be an image R_rs spectrum (i.e., R_rs for a particular image pixel, after good atmospheric correction). In supervised classification, the object is to assign I to one of the predetermined classes of R_rs spectra.
One powerful way to do this is maximum likelihood estimation (MLE), which says (trust me, you don't want to see the derivation) that I most likely belongs to the class m having the smallest value of

D^2_{\rm MLE}(m) = \ln|\Sigma_m| + (I - \mu_m)^T \, \Sigma_m^{-1} \, (I - \mu_m)

D²_MLE(m) is the "distance" between I and the mean spectrum for class m (|Σ_m| is the determinant of Σ_m; Σ_m^{-1} is its inverse).
Note: we compare the image spectrum I only with the mean spectrum from each predetermined class; this is very fast since the matrices are precomputed.
For the details see Richards, J. A. and X. Jia, 2006. Remote Sensing Digital Image Analysis: An Introduction, Fourth Edition. Springer.
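
A matching sketch of the classification rule, assuming the class means and covariances were precomputed as above (all names are illustrative, not taken from Richards & Jia):

```python
import numpy as np

def mle_classify(I, mus, sigmas):
    """Assign image spectrum I (length K) to the class with the smallest D2_MLE."""
    d2 = []
    for mu, sigma in zip(mus, sigmas):
        diff = I - mu
        _, logdet = np.linalg.slogdet(sigma)          # ln|Sigma_m|, computed stably
        # D2_MLE(m) = ln|Sigma_m| + (I - mu_m)^T Sigma_m^{-1} (I - mu_m)
        d2.append(logdet + diff @ np.linalg.solve(sigma, diff))
    return int(np.argmin(d2))                         # index of the most likely class
```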

9 A Common Terrestrial Solution (4)
In summary:
● use predetermined classes of surface types (trees, grass, etc.)
● use supervised classification
● use MLE to determine the "best fit" of the image spectra to the allowed classes
This works very well for classification of land surfaces, so now let's do the same thing for mapping of optically shallow waters.

10 Oceanic Thematic Mapping
Can I extract bottom depth, bottom type, and water IOPs from imagery of optically shallow waters?
[Image acquired by NRL-DC as part of the CoBOP program]

11 MLE Does NOT Work for the Ocean Problem
(or, perhaps, I'm just not smart enough to figure out how to make it work)
We cannot define simple "sand," "coral," etc. bottom classes because of the effects of depth and water-column IOPs on the bottom reflectance spectra. Every combination of a bottom type, depth, and water IOPs is an individual class.

12 Terrestrial vs. Shallow-water Remote Sensing
● Terrestrial thematic mapping retrieves only the surface type. For shallow waters, I need the bottom type AND the water depth AND the water IOPs, which is asking for much more and is thus a much more difficult problem.
● Land surfaces are often very reflective (R = 0.20-0.80), so atmospheric correction is less critical (surface-reflected radiance is a bigger part of the measured total radiance). Oceans are generally very dark (R < 0.05), so very good atmospheric correction is required to obtain accurate R_rs.
● I thus need all of the information I can get, e.g., well-calibrated, hyperspectral R_rs.

13 Hyperspectral Airborne Imagery
Hyperspectral imagery acquired from aircraft allows rapid, high-resolution observation of coastal waters.
● rapid: imagery over large areas (>100 km²) can be acquired and processed in days
● high resolution: ground resolution is ~1 m to a few meters, as desired
● cost: much less than data collection from small boats or diver observations
Hyperspectral: 30 or more bands with 10 nm or better resolution; such sensors typically have >100 bands with ~5 nm resolution.

14 The PHILLS Sensor
PHILLS: Portable Hyperspectral Imager for Low-Light Spectroscopy (developed in the 1990s by NRL-DC).
A pushbroom scanning spectrometer: it records calibrated hyperspectral radiances along a line perpendicular to the flight direction.
There are several similar systems now in use (CASI, SAMPSON, etc.).

15 PHILLS Optical Design
● camera optics image the ground scene onto the focal plane
● a slit selects the across-track spatial dimension
● a prism disperses the light, 400-1000 nm
● a 2D CCD records radiance as a function of across-track position (1024 pixels) and wavelength (128 or 256 bands)
● radiance (and remote-sensing reflectance, after atmospheric correction) is built up as a function of (x, y, λ) as the plane flies
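
To make the pushbroom idea concrete, here is an illustrative sketch of how the 2D CCD frames stack into an (x, y, λ) cube; this is not PHILLS flight software, and read_next_frame is a hypothetical stand-in for the sensor readout:

```python
import numpy as np

n_lines, n_xtrack, n_bands = 899, 1024, 128   # along-track lines, across-track pixels, bands

def read_next_frame():
    # Hypothetical stand-in for one CCD readout: random numbers in place of
    # real calibrated radiances, shaped (across-track pixels) x (bands).
    return np.random.rand(n_xtrack, n_bands).astype(np.float32)

cube = np.empty((n_lines, n_xtrack, n_bands), dtype=np.float32)
for y in range(n_lines):
    cube[y] = read_next_frame()   # each step along the flight line adds one across-track line
# cube now holds radiance as a function of (y: along-track, x: across-track, lambda)
```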

16 PHILLS in Use

17 Spectrum Matching and Look-Up-Table R_rs Inversion
(Mobley et al., 2005. Applied Optics, 44(17), 3576-3592)
[Flow chart: pixel R_rs extraction → database of R_rs spectra → database search → spectrum match → LUT retrieval: depth 2.75 m, 80% sand / 20% grass, IOP set #17]
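
A minimal sketch of the database-search step, assuming a db_rrs array of N spectra × K wavelengths and a parallel list db_env of the known conditions that generated each spectrum (these names and shapes are assumptions, not the actual code of Mobley et al. 2005):

```python
import numpy as np

def lut_retrieve(pixel_rrs, db_rrs, db_env):
    """Return the environment of the closest database spectrum (smallest D2)."""
    d2 = np.sum((db_rrs - pixel_rrs) ** 2, axis=1)   # Euclidean D2 over all K bands
    best = int(np.argmin(d2))
    return db_env[best]

# A retrieval might come back as, e.g.,
# {'depth_m': 2.75, 'bottom': '80% sand, 20% grass', 'iop_set': 17}
```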

18 Spectrum Matching Is Nothing New
Google Scholar: 973 refs with "spectrum matching" (all fields, not just earth remote sensing). Authors in oceanography include Bachmann, Bissett, Davis, Goetz, Hoyer, Lee, Liu, Louchard, Lyzenga, Mobley, Sandidge, to name a few…
There are extensive spectral reflectance libraries for terrestrial and manmade surface types, but not for the ocean. I have to build my library of R_rs spectra using HydroLight. Each R_rs spectrum corresponds to a known bottom depth, bottom reflectance (either a pure spectrum or a mixture of various bottom types), and a, b, and b_b spectra. (I currently have ~10^6 R_rs spectra.)

19 Example: PHILLS Horseshoe Reef Image
NRL-DC PHILLS image from the ONR CoBOP program, May 2000; 501×899 pixels at ~1.3 m resolution; Lee Stocking Island, Bahamas.
[Image labels: Horseshoe Reef; ooid sand; mixed sediment, corals, turf algae, seagrass; dense seagrass]

20 Unconstrained and Constrained Inversions
● Unconstrained inversions: I know nothing about the environment, so I do a simultaneous retrieval of everything: bathymetry, bottom reflectance and type, and water-column absorption, scattering, and backscattering.
● Constrained inversions: I know the bathymetry and/or water optical properties, so I retrieve only what I don't know. Adding (correct) constraints adds information, so presumably the retrievals of the remaining unknowns will be improved (see the sketch after this list).
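
One simple way to implement a depth constraint is to filter the LUT before matching; a hedged sketch, with the depth tolerance and all variable names being my assumptions:

```python
import numpy as np

def depth_constrained_retrieve(pixel_rrs, db_rrs, db_depths, known_depth, tol=0.25):
    """Match only database spectra whose depth is within tol (m) of the known depth."""
    idx = np.flatnonzero(np.abs(db_depths - known_depth) <= tol)
    d2 = np.sum((db_rrs[idx] - pixel_rrs) ** 2, axis=1)
    return idx[int(np.argmin(d2))]    # index of the best match in the full database
```

Restricting the search this way also previews the speedup reported on slide 37: far fewer spectra need to be compared per pixel.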

21 Unconstrained Bathymetry Retrieval
Black: NRL acoustic survey for the ONR CoBOP program. Color: LUT unconstrained depth retrieval.
The acoustic bathymetry coverage is a few meters along track with ~10 m cross-track resolution; it is interpolated to the pixel level for the depth-constrained retrievals.

22 Unconstrained LUT vs. Acoustic Bathymetry
These "LUT errors" also include errors due to latitude-longitude calculations in mapping acoustic ping locations to image pixels (horizontal errors of several meters or more due to failure of the built-in navigation instrument).

23 Unconstrained LUT Bottom Classification
[Photo of dense seagrass over a sand substrate]

24 Depth-Constrained Bottom Classification
[Maps: unconstrained inversion vs. depth-constrained inversion]
In the depth-constrained result, some sand substrate with sparse vegetation changed to pure sediments.

25 Depth-Constrained Bottom Classification
[Maps: unconstrained inversion vs. depth-constrained inversion]
In the depth-constrained result, some dense vegetation changed to pure corals or mixtures.

26 IOP-Constrained Retrievals
Dots and squares: two sets of ac9 data from the Horseshoe Reef area. Lines: similar a and b from the LUT IOP database; the four backscatter curves have particle backscatter fractions of 0.01, 0.02, 0.03, and 0.04.
To constrain the IOPs, assume that a and b are constant over the image area (probably wrong: CDOM decreases going offshore, and resuspended sediment is likely higher near shore).

27 IOP-Constrained Bathymetry
[Maps: unconstrained inversion vs. IOP-constrained inversion]

28 IOP-Constrained Bathymetry
[Plots: unconstrained inversion vs. IOP-constrained inversion]
Constraining the IOPs gave slightly greater depths on average but did not greatly improve the bathymetry retrieval.

29 Depth- and IOP-Constrained Bottom Classification for Horseshoe Reef
Divers laid down transects along the reef (within the polygon) and measured the bottom coverage via photography (very laborious).

30 Depth- and IOP-Constrained Bottom Classification for Horseshoe Reef
(from Lesser & Mobley, Coral Reefs, accepted)

Bottom coverage (%) by class, and depth errors (mean % error / rms error (m) / % within ±1 m / % within ±25%):

retrieval database | bare sand | dark sediment or sand and sparse grass | sediment mixed with grass, turf, macrophytes | pure coral | sediment mixed w/ coral and algae | depth errors
7 (a,b) × 4 B_p; unconstrained depths | 3.3 | 8.5 | 70.1 | 10.3 | 7.8 | -4.5 / 1.16 / 66 / 92
4 (a,b) × 4 B_p; unconstrained depths | 3.3 | 8.5 | 69.9 | 10.4 | 7.9 | -4.6 / 1.20 / 65 / 90
4 (a,b), B_p = 0.02; unconstrained depths | 3.3 | 8.3 | 67.3 | 12.0 | 9.1 | -5.0 / 1.24 / 65 / 90
1 (a,b), B_p = 0.02; unconstrained depths | 3.1 | 9.4 | 49.4 | 12.1 | 26.0 | -1.3 / 1.39 / 61 / 86
7 (a,b) × 4 B_p; constrained depths | 3.2 | 6.8 | 70.8 | 15.4 | 3.8 | NA
4 (a,b) × 4 B_p; constrained depths | 3.2 | 6.8 | 70.7 | 15.5 | 3.8 | NA
4 (a,b), B_p = 0.02; constrained depths | 2.8 | 5.9 | 68.4 | 16.0 | 6.9 | NA
1 (a,b), B_p = 0.02; constrained depths | 2.9 | 7.7 | 63.6 | 16.7 | 9.2 | NA
measured (along diver transects) | ~2.5 (hardpan) | ~14 (turf + algae) | 69.8 | 13.3 | NA | NA

31 kNN Error Analysis
● The previous figures were generated using the one "best fit" or "closest" database R_rs for each pixel, i.e., the smallest Euclidean distance D² = Σ_j [R_rs,db(λ_j) − R_rs,im(λ_j)]².
● Many spectra in the database are very similar and correspond to slightly different environmental conditions (depths, bottom reflectances, IOPs).
● Noise in the image R_rs may cause different database spectra to be the closest match, and thus give different retrievals.
● Rather than using just the closest match, find the k closest matching database spectra (k Nearest Neighbors, kNN) and "vote" on the retrieval (a sketch follows below).
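
A minimal sketch of such a kNN retrieval with voting, assuming the same hypothetical database arrays as earlier plus per-spectrum depth and bottom-type labels:

```python
import numpy as np

def knn_retrieve(pixel_rrs, db_rrs, db_depths, db_bottoms, k=50):
    """Retrieve depth statistics and a bottom-type vote from the k nearest spectra."""
    d2 = np.sum((db_rrs - pixel_rrs) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                   # indices of the k closest database spectra
    depth_mean = db_depths[nn].mean()         # retrieved depth
    depth_std = db_depths[nn].std(ddof=1)     # spread = quantitative depth-error estimate
    bottoms, counts = np.unique(db_bottoms[nn], return_counts=True)
    return depth_mean, depth_std, bottoms[np.argmax(counts)]   # majority-vote bottom type
```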

32 kNN Error Analysis
[Plot: PHILLS R_rs (blue); best-fit (k = 1) database spectrum (red); the k = 50 closest database spectra (green)]

33 kNN Error Analysis
[Plot: depth distribution for the k = 50 closest spectra (red); best-fit Gaussian depth distribution (equal area; blue)]

34 kNN Error Analysis
Best-fit (k = 1) retrieval: depth z_b = 6.00 m; bottom type = pure sea grass; IOPs = database set 49.
k = 50 NN retrievals:
● depth: Gaussian fit gives mean z_b = 5.89 m, std dev = 0.42 m
● bottoms: 44 pure sea grass; 4 10% sand + 90% grass; 1 turf algae; 1 sargassum --> the bottom is dense sea grass with good confidence
● IOPs: 16 set 42; 8 set 43; 15 set 49; 11 from 4 other sets --> the IOPs were probably close to database IOP sets 42, 43, or 49 (all of which are similar)
Ground truth: depth = 5.78 m (acoustic); bottom type: dense sea grass (visual); IOPs were not measured at this pixel and time, but the retrievals are consistent with IOPs measured in this area.

35 kNN Error Analysis
On average, statistical estimation based on k ≈ 50 NN gives a ~25% reduction in rms depth errors (preliminary finding).
kNN analysis also gives quantitative estimates of the errors in the retrievals, which is VERY IMPORTANT.

36 Conclusions (1)
For this image:
● Unconstrained retrievals of depth, bottom reflectance, and water-column IOPs are consistent with available ground truth.
● Constraining the bathymetry does not greatly change the bottom classification and IOP retrievals. Why not? Because the unconstrained bathymetry was already close to correct.
● Constraining the IOPs does not greatly change the retrieved bathymetry and bottom classification, because the unconstrained IOPs were already close to correct.
● Constraining both bathymetry and IOPs does not greatly change the retrieved bottom classification (ditto).
This indicates that non-uniqueness was not a problem: the LUT did not find combinations of wrong depth, wrong bottom reflectance, and wrong IOPs that gave almost the same R_rs spectrum as the correct solution.

37 Conclusions (2)
Adding constraints does, however, greatly improve the image processing time, because less of the LUT R_rs database needs to be searched for each pixel. For the Horseshoe Reef image (on a 2 GHz PC):
● unconstrained inversion: 71 minutes (>10^10 R_rs comparisons)
● depth-constrained inversion: 25 min
● IOP-constrained inversion: 27 min
● depth- and IOP-constrained inversion: 3.5 min
Statistical estimation techniques based on kNN matching can provide quantitative error estimates on the retrieved information.

38 What is the Future of Remote Sensing?
Ocean color remote sensing has a long history of multispectral satellite imagers (CZCS, SeaWiFS, MODIS, and (let us pray for success) NPOESS/VIIRS, and others from other countries).
Hyperspectral sensors so far have been flown only on aircraft, and hyperspectral imagery is proving useful for RS of optically shallow waters and coastal waters (a very active research area in many countries).
Why no hyperspectral sensors on satellites?
● It's technologically risky
● It's politically risky
● The advantage of HS over MS has not been convincingly presented
IMHO, hyperspectral is not just hype. The future of ocean RS lies in hyperspectral sensors and new classes of retrieval algorithms that make use of both spectral magnitude and shape (neural networks, spectrum matching, etc.). And don't forget polarization….

39 Go forth now into your optical oceanography careers with great self-confidence!!

