
1 Information Content Tristan L'Ecuyer

2 Degrees of Freedom Using the expression for the state vector that minimizes the cost function, it is relatively straightforward to show that

$d_s = \operatorname{tr}(\mathbf{A})$ and $d_n = \operatorname{tr}\!\left[\mathbf{I}_m - \mathbf{K}\mathbf{S}_a\mathbf{K}^T\left(\mathbf{K}\mathbf{S}_a\mathbf{K}^T + \mathbf{S}_y\right)^{-1}\right] = m - d_s$

where $\mathbf{I}_m$ is the m x m identity matrix and A is the averaging kernel. NOTE: Even if the number of retrieval parameters is equal to or less than the number of measurements, a retrieval can still be under-constrained if noise and redundancy are such that the number of degrees of freedom for signal is less than the number of parameters to be retrieved.
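A minimal sketch of this calculation (illustrative NumPy code, not from the slides; the function and variable names are assumptions) computes the degrees of freedom directly from the Jacobian K, the prior covariance S_a, and the measurement-error covariance S_y:

```python
import numpy as np

def degrees_of_freedom(K, S_a, S_y):
    """Illustrative helper: d_s = tr(A), d_n = m - d_s."""
    S_y_inv = np.linalg.inv(S_y)
    # Posterior covariance of the optimal-estimation solution
    S_hat = np.linalg.inv(K.T @ S_y_inv @ K + np.linalg.inv(S_a))
    # Averaging kernel A = S_hat K^T S_y^-1 K
    A = S_hat @ K.T @ S_y_inv @ K
    d_s = np.trace(A)        # degrees of freedom for signal
    d_n = K.shape[0] - d_s   # degrees of freedom for noise
    return d_s, d_n
```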

3 Entropy-based Information Content The Gibbs entropy is the logarithm of the number of discrete internal states of a thermodynamic system,

$S = -k \sum_i p_i \ln p_i$

where $p_i$ is the probability of the system being in state i and k is the Boltzmann constant. The information theory analogue has k = 1 and the $p_i$ representing the probabilities of all possible combinations of retrieval parameters. More generally, for a continuous distribution (e.g. Gaussian):

$S(P) = -\int P(x) \log_2 P(x)\, dx$
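A minimal sketch of the discrete (k = 1) case, assuming NumPy; the helper name is illustrative:

```python
import numpy as np

def discrete_entropy(p, base=2):
    """S = -sum_i p_i log(p_i) with k = 1 (bits when base=2)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) -> 0 by convention
    return -np.sum(p * np.log(p)) / np.log(base)

# A uniform distribution over four states carries log2(4) = 2 bits:
print(discrete_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```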

4 Entropy of a Gaussian Distribution For the Gaussian distributions typically used in optimal estimation we have:

$S(P) = \frac{1}{2}\log_2\left(2\pi e\,\sigma^2\right)$

For an m-variable Gaussian distribution with covariance $\mathbf{S}$:

$S(P) = \frac{m}{2}\log_2(2\pi e) + \frac{1}{2}\log_2|\mathbf{S}|$
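A minimal sketch of the m-variable formula, again assuming NumPy (names are illustrative); slogdet gives a numerically stable log-determinant:

```python
import numpy as np

def gaussian_entropy_bits(S):
    """S(P) = (m/2) log2(2*pi*e) + (1/2) log2 |S| for covariance S."""
    S = np.atleast_2d(np.asarray(S, dtype=float))
    m = S.shape[0]
    _, logdet = np.linalg.slogdet(S)   # stable log-determinant
    return 0.5 * m * np.log2(2.0 * np.pi * np.e) + 0.5 * logdet / np.log(2.0)

# Scalar case: sigma^2 = 1 gives (1/2) log2(2*pi*e), about 2.05 bits
print(gaussian_entropy_bits([[1.0]]))
```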

5 Information Content of a Retrieval The information content of an observing system is defined as the difference in entropy between an a priori set of possible solutions, $S(P_1)$, and the subset of these solutions that also satisfy the measurements, $S(P_2)$:

$H = S(P_1) - S(P_2)$

If Gaussian distributions are assumed for the prior and posterior state spaces, as in the optimal estimation approach, this can be written

$H = \frac{1}{2}\log_2|\mathbf{S}_a| - \frac{1}{2}\log_2|\hat{\mathbf{S}}| = -\frac{1}{2}\log_2\left|\hat{\mathbf{S}}\,\mathbf{S}_a^{-1}\right|$

since, after minimizing the cost function, the covariance of the posterior state space is

$\hat{\mathbf{S}} = \left(\mathbf{K}^T\mathbf{S}_y^{-1}\mathbf{K} + \mathbf{S}_a^{-1}\right)^{-1}$
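A minimal sketch combining these two expressions (illustrative code, not from the slides; the constant $2\pi e$ terms cancel because prior and posterior share the same dimension):

```python
import numpy as np

def information_content(K, S_a, S_y):
    """H = (1/2) log2 |S_a| - (1/2) log2 |S_hat|, in bits."""
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_y) @ K + np.linalg.inv(S_a))
    # Equivalent to -1/2 log2 |S_hat S_a^-1|
    _, logdet = np.linalg.slogdet(S_hat @ np.linalg.inv(S_a))
    return -0.5 * logdet / np.log(2.0)
```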

6 Interpretation Qualitatively, information content describes the factor by which knowledge of a quantity is improved by making a measurement. Using Gaussian statistics, we see that the information content provides a measure of how much the 'volume of uncertainty' represented by the a priori state space is reduced after measurements are made. Essentially this is a generalization of the scalar concept of 'signal-to-noise' ratio.

7 Liquid Cloud Retrievals Blue → a priori state space. Green → state space that also matches the MODIS visible channel (0.64 μm). Red → state space that matches both the 0.64 and 2.13 μm channels. Yellow → state space that matches all 17 MODIS channels. [Figure: LWP (g m⁻²) versus R_e (μm) state spaces for the prior; 0.64 μm (H = 1.20); 0.64 & 2.13 μm (H = 2.51); 17 channels (H = 3.53).]

8 Measurement Redundancy Using multiple channels with similar sensitivities to the parameters of interest merely adds redundant information to the retrieval. While this can have the benefit of reducing random noise, it cannot remove biases introduced by forward-model assumptions, which often affect such channels in similar ways.

9 Channel Selection The information content of individual channels in an observing system can be assessed via

$H_j = \frac{1}{2}\log_2\!\left(1 + \mathbf{k}_j^T \mathbf{S}_i\, \mathbf{k}_j\right)$

where $\mathbf{k}_j$ is the row of K corresponding to channel j (normalized by that channel's measurement noise). The channels providing the greatest amount of information can then be sequentially selected by adjusting the covariance matrix via

$\mathbf{S}_{i+1} = \left(\mathbf{I} - \frac{\mathbf{S}_i \mathbf{k}_j \mathbf{k}_j^T}{1 + \mathbf{k}_j^T \mathbf{S}_i \mathbf{k}_j}\right)\mathbf{S}_i$

10 Method
1. Evaluate S_y.
2. Compute K.
3. Establish prior information (S_a).
4. Evaluate the information content of each channel, H_j, with respect to the a priori, S_a.
5. Select the channel that provides the most information and update the covariance matrix using the appropriate row of K.
6. Recompute the information content of all remaining channels with respect to this new error covariance, S_1.
7. Select the channel that provides the most additional information.
8. Repeat this procedure until the signal-to-noise ratio of all remaining channels is less than 1, i.e. until $\mathbf{k}_j^T \mathbf{S}_i \mathbf{k}_j < 1$ for every remaining channel j. (A sketch of this selection loop follows.)
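A minimal sketch of the selection loop, assuming NumPy and a Jacobian whose rows have already been normalized by the measurement noise (all names here are illustrative, not from the slides):

```python
import numpy as np

def select_channels(K_tilde, S_a):
    """K_tilde: (m, n) noise-normalized Jacobian; S_a: (n, n) prior covariance."""
    S = S_a.copy()
    remaining = list(range(K_tilde.shape[0]))
    order = []
    while remaining:
        # Information spectrum of the remaining channels
        H = {j: 0.5 * np.log2(1.0 + K_tilde[j] @ S @ K_tilde[j]) for j in remaining}
        best = max(H, key=H.get)
        # Stop once every remaining channel has k^T S k < 1 (SNR below 1)
        if K_tilde[best] @ S @ K_tilde[best] < 1.0:
            break
        order.append((best, H[best]))
        k = K_tilde[best]
        Sk = S @ k
        S = S - np.outer(Sk, Sk) / (1.0 + k @ Sk)   # shrink posterior covariance
        remaining.remove(best)
    return order, S
```

Because each update shrinks the covariance only along the direction the selected channel constrains, channels redundant with an earlier pick automatically lose information content on the next pass.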

11 Optimizing Retrieval Algorithms GOAL: Select the optimal channel configuration that maximizes retrieval information content for the least possible computational cost by limiting the amount of redundancy in the observations. APPROACH: Use the Jacobian of the forward model, combined with appropriate error statistics, to determine the set of measurements that provides the most information concerning the geophysical parameters of interest for the least computational cost.

12 Information Spectra Relative to the a priori, the 11 μm channel provides the most information due to its sensitivity to cloud height and its lower uncertainty relative to the visible channels. Once the information this channel carries is added to the retrieval, the information content of the remaining IR channels is greatly reduced and two visible channels are chosen next. [Case: IWP = 100 g m⁻², R_e = 16 μm, C_top = 9 km.]

13 Unrealistic Errors When a uniform 10% measurement uncertainty is assumed, the visible/near-IR channels are weighted unrealistically strongly relative to the IR. [Case: IWP = 100 g m⁻², R_e = 16 μm, C_top = 9 km; uniform 10% errors.]

14 Thin Cloud (IWP = 10 g m⁻²) For very thin clouds, the improved accuracy of IR channels relative to those in the visible increases their utility in the retrieval. [Cases: IWP = 10 g m⁻² versus the baseline IWP = 100 g m⁻²; both with R_e = 16 μm, C_top = 9 km.]

15 Larger Crystals (R_e = 40 μm) At large effective radii, both the visible and IR channels lose sensitivity to effective radius. Two IR channels are chosen primarily for retrieving cloud height and optical depth. [Cases: R_e = 40 μm versus the baseline R_e = 16 μm; both with IWP = 100 g m⁻², C_top = 9 km.]

16 High Cloud (C_top = 14 km) The enhanced contrast between cloud-top temperature and the surface increases the signal-to-noise ratio of the IR channels. [Cases: C_top = 14 km versus the baseline C_top = 9 km; both with IWP = 100 g m⁻², R_e = 16 μm.]

