The Noise Propagator for Laser Tomography Adaptive Optics Don Gavel NGAO Telecon October 8, 2008.


Noise Propagator Issue
- Simulations are showing that the "law of averages" is not working as expected with multiple laser guidestars:
  - Dividing a fixed amount of laser power over a larger number of guidestars results in an increase in the noise in the solution.
  - Increasing the number of guidestars with fixed laser power per guidestar results in no decrease in noise in the solution.
- The law of averages (sqrt(n) noise reduction) only holds if the guidestars are overlapping or very close to overlapping.

Analysis of the Noise Propagator
- An analytic approach was taken to understand this problem independently of the numerical simulations.
- The answer is a basic consequence of linear algebra.
- Definition: the noise propagator is the ratio of the standard deviation of the noise in the estimate of the wavefront along the direction of a guidestar to the standard deviation of the noise in the measurement.
- Determination:
  - One way to determine the noise propagator is to form the RSS difference between an estimate and the zero-noise case.
  - Another way, for linear systems, is to simply set the atmospheric index fluctuation to zero (r0 = infinity) and assess the response to measurement noise.

LTAO is a linear system of equations
- We (LAOS, TSW, etc.) model LTAO as a linear system

      y = A x + n

  where
  - x is a vector of all the delta-indices of the "voxels" in the atmospheric volume (n_subaps x n_layers),
  - y is the vector of all the phase measurements (n_subaps x n_guidestars),
  - A is the linear relation between them, representing the accumulation of delta-index times dz to get accumulated optical path distance,
  - n is the noise in the measurement.
- Note: I'm skipping the phase-to-slope and slope-to-phase operations. These operations are also assumed linear and don't change the nature of the argument (for example, 50 mas of noise-equivalent angle equals ~35 nm of phase error).
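The linear model above can be sketched numerically. The following is a toy stand-in, not the LAOS or TSW code: a small 2-D (x, z) geometry in which each guidestar's measurement sums the layer screens along sheared ray paths. The sizes and the integer shift-per-layer values standing in for guidestar angles are illustrative assumptions.

```python
import numpy as np

# Toy 2-D (x, z) geometry: n_layers phase screens sampled at n_subaps
# points in x, probed by guidestars at different field angles.  A ray
# from guidestar g crosses layer l at a shifted x position; here the
# shift is a whole number of subapertures per layer for simplicity.
n_subaps, n_layers, n_gs = 32, 8, 3
shifts = [-1, 0, 1]            # illustrative shift (subaps per layer) per guidestar

def forward_matrix(n_subaps, n_layers, shifts):
    """A maps the stacked layer vector x (n_subaps*n_layers) to the
    stacked measurements y (n_subaps*n_guidestars) by summing each
    layer along the (sheared) ray path."""
    m, n = n_subaps * len(shifts), n_subaps * n_layers
    A = np.zeros((m, n))
    for g, s in enumerate(shifts):
        for l in range(n_layers):
            for i in range(n_subaps):
                j = (i + s * l) % n_subaps      # periodic in x for simplicity
                A[g * n_subaps + i, l * n_subaps + j] = 1.0
    return A

A = forward_matrix(n_subaps, n_layers, shifts)
rng = np.random.default_rng(0)
x = rng.standard_normal(n_subaps * n_layers)     # delta-index "voxels"
n = 0.01 * rng.standard_normal(n_subaps * n_gs)  # measurement noise
y = A @ x + n                                    # the linear model y = A x + n
```

Each row of A sums exactly one sample from each layer, which is the "accumulate delta-index along the ray" operation in discrete form.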

Minimum variance solution and noise propagator
- The LTAO problem is underdetermined because n, the number of unknowns (voxels), exceeds m, the number of measurements.
  - Aside: when n > m, the difference n - m is the number of blind modes.
- The minimum variance solution is

      x_hat = P A^T (A P A^T + N0)^{-1} y

  where P is the a-priori covariance of the solutions (e.g. Kolmogorov spectrum and Cn2 profile), and N0 is the assumed covariance of the measurement noise.
- The noise propagator is found by setting y = n with noise covariance N = I_{m x m} and solving for the covariance of the estimate projected back along the guidestar directions, A x_hat.
- In the case where the "signal" A P A^T is much greater than the assumed noise N0, the noise propagator A P A^T (A P A^T + N0)^{-1} is nearly the identity I_{m x m}! This is the case for LTAO: sqrt(N0) is 35 nm, compared to a sqrt(A P A^T) of several microns in a typical atmosphere.
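The near-identity behavior can be checked with a minimal numerical sketch. The dimensions are made up, and an identity matrix stands in for the Kolmogorov/Cn2 prior P; the point is only that when A P A^T dwarfs N0, the propagator A P A^T (A P A^T + N0)^{-1} is essentially the identity.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60                       # measurements, unknowns (n > m: underdetermined)
A = rng.standard_normal((m, n))
P = np.eye(n)                       # a-priori covariance (identity for this sketch)
N0 = 1e-4 * np.eye(m)               # assumed noise covariance, << signal

# Minimum variance reconstructor: x_hat = P A^T (A P A^T + N0)^{-1} y
S = A @ P @ A.T                     # "signal" covariance A P A^T
R = P @ A.T @ np.linalg.inv(S + N0)

# Noise propagator back to the guidestar directions:
# H = A R = A P A^T (A P A^T + N0)^{-1}.
H = A @ R

# When A P A^T >> N0, H is nearly the identity: measurement noise passes
# straight through to the wavefront estimate along the guidestar direction.
print(np.max(np.abs(H - np.eye(m))))
```

Algebraically, H - I = -N0 (S + N0)^{-1}, so the deviation from identity scales with the assumed noise level, which is tiny here by construction.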

Noise propagator and the law of averages
- Why doesn't the noise propagator follow a law of averages when more guidestars are added? Because although more equations are added, more unknowns are added too.
- As long as n >= m, the equations are non-redundant, and the solution is unconstrained, the noise propagator is the identity.
- Overlapping guidestars introduce redundant equations: i.e. more equations without adding more unknowns.
- The law of averages starts to apply when n' < m, where n' = the number of degrees of freedom you can measure = m - (the redundancy of the measurements) - (the number of observable a-priori constraints on the solution) = rank(A P A^T). Then the noise propagator goes as sqrt(n'/m).
- This is consistent with what was observed in the LAOS runs done by Chris Neyman (of Aug 26) and consistent with the subsequent example runs explained in the next few slides.
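The redundancy argument can be verified with a toy case: k perfectly overlapping guidestars looking through a single layer give k redundant copies of each equation, and the minimum variance estimate averages them, recovering the sqrt(k) noise reduction. The dimensions and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, k = 16, 4                  # k perfectly overlapping guidestars, one layer
A1 = np.eye(n_sub)                # one guidestar measures the layer directly
A = np.vstack([A1] * k)           # k overlapping guidestars: redundant rows
m = A.shape[0]                    # m = k * n_sub equations, only n_sub unknowns
sigma = 1.0                       # actual measurement noise std
P = np.eye(n_sub)
N0 = 1e-2 * np.eye(m)             # assumed noise covariance (keeps the inverse
                                  # well conditioned: a form of regularization)

R = P @ A.T @ np.linalg.inv(A @ P @ A.T + N0)

# Propagate pure measurement noise (y = n) and compare output to input std.
noise = sigma * rng.standard_normal((m, 10000))
x_hat = R @ noise
ratio = x_hat.std() / sigma
print(ratio)   # near 1/sqrt(k) = 0.5: the law of averages returns
```

By symmetry the reconstructor simply averages the k redundant measurements of each subaperture (with a slight shrinkage from N0), so the noise std drops by nearly sqrt(k).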

About redundancy in measurements
- Redundant equations result in the matrix (A P A^T + N0) becoming singular, or nearly singular if N0 is relatively small. Numerical inversion then generates large gains in the reconstructor in its valiant attempt to remain consistent with all measurements.
- It is better to use a pseudo-inverse via the singular value decomposition with thresholds on the singular values ("regularization"). This allows redundancies to be suppressed without causing large gains, and brings back the law of averages!
- Increasing N0 to keep the matrix full rank (also a form of regularization) has roughly the same effect.
- Redundancy in LTAO happens when a newly added guidestar does not improve the resolution of layers: when the ray separation (theta_1 - theta_2) z_max is too small to resolve the n_layers layers.
- We can force the law of averages through choices in the model: e.g. limiting the number of layers, or assuming a "severe" Cn2 profile that limits the a-priori uncertainty to just a few layers (and using regularization).
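A thresholded SVD pseudo-inverse of the kind described can be sketched as follows. The 1% relative threshold matches the example runs described on the next slide; the nearly singular test matrix is made up to show why the threshold matters.

```python
import numpy as np

def svd_pinv(M, rel_threshold=0.01):
    """Pseudo-inverse of M keeping only singular values above
    rel_threshold * (largest singular value).  The discarded directions
    are the (near-)redundant ones, so reconstructor gains stay bounded."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s > rel_threshold * s[0]          # s is sorted descending
    inv_s = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return (Vt.T * inv_s) @ U.T

# A nearly singular system: two almost-redundant rows.
M = np.array([[1.0, 0.0],
              [1.0, 1e-6]])
print(np.linalg.cond(M))      # huge: a naive inverse would have enormous gains
Minv = svd_pinv(M)            # the near-redundant direction is suppressed
print(np.max(np.abs(Minv)))   # gains stay modest
```

A naive `np.linalg.inv(M)` here would contain entries of order 1e6; the thresholded pseudo-inverse simply refuses to invert the near-null direction, which is exactly the "suppress redundancies without large gains" behavior on the slide.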

Examples
- I ran a couple of cases independent of LAOS and TSW, just using matrix arithmetic in IDL, to illustrate these points.
- Case runs: 2-dimensional (x and z); 3 to 5 guidestars over 30 arcsec; 32 subapertures across x; 3 to 32 layers over z; various Cn2 profiles. Regularization is an SVD pseudo-inverse thresholded at 1% (ratio of singular value to largest singular value).
- Case 1: 32 layers; 3 guidestars at -30, 0, +30 arcsec; Cn2 uniform over altitude.
  [Figures: distribution of the estimate covariance over the volume (altitude z vs. telescope pupil position x) in response to unit measurement noise, and the noise propagator back to the WFS vs. telescope pupil position x.]

More examples
- Case 2: 5 guidestars at -30, -15, 0, +15, +30 arcsec; 32 layers; uniform Cn2.
- Case 3: 5 guidestars and 3 layers.
  [Figures: distribution of the estimate covariance over the volume in response to unit measurement noise, and the noise propagator back to the WFS.]

Last example: forcing ground-layer averaging
- Case 4: 3 guidestars at -10, 0, +10 arcsec; 3 layers; Cn2 = exp(-z / 0.73 km).
  [Figures: distribution of the estimate covariance over the volume (altitude z vs. telescope pupil position x) in response to unit measurement noise, and the noise propagator back to the WFS vs. telescope pupil position x.]

Conclusion
- Adding more guidestars can serve one of two purposes:
  1. Introducing denser sampling of the atmosphere, in which case the noise propagator remains unity (increasing noise with decreasing power per guidestar).
  2. Introducing measurement redundancy, in which case the noise propagator follows the law of averages.
  But it can't do both.
- Noise regularization is essential:
  - to prevent high reconstructor gains,
  - to recognize redundancy and apply the law of averages to it.