Determining asteroid masses from planetary range: a short course in parameter estimation. Petr Kuchynka, NASA Postdoctoral Program Fellow, Jet Propulsion Laboratory, California Institute of Technology. © 2012 California Institute of Technology. Government sponsorship acknowledged.

What is ranging data?
- ranging data: measurements of the distance between a spacecraft and a DSN antenna; in practice, the round-trip light-time
- planetary ranging data: measurements of the distance between a planet and a DSN antenna

Why do we want to determine asteroid masses?
- a precious source of information about the asteroids themselves: determination of porosity, material composition, collisional evolution, etc.
- a limiting factor in the prediction capacity of planetary ephemerides

Mars ranging: more than 10 years of data with accuracy ~ 1 m, from Mars Global Surveyor (MGS), Mars Odyssey (MO), and Mars Reconnaissance Orbiter (MRO); data with lower accuracy is available since the 1980s. Up to about 400 asteroids could induce perturbations > 2 m.

OBJECTIVE: extract asteroid mass information from Mars range data. A general introduction to parameter estimation, with a practical application to asteroid mass determination.

How to extract information from the ranging data? The naive approach:
- build a model able to reproduce the data: orbits of the Earth and Mars around the Sun; perturbers (Moon, planets, asteroids); other effects (solar oblateness, solar plasma, etc.)
- find the parameters of the model so that range prediction = range data

[Figure: Earth-Mars distance vs. time as the model parameters p1-p5 are adjusted; the legend distinguishes the real value of a parameter (unknown) from the value of a parameter in the model]

The parameters can be recovered by simply exploring the parameter space.
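The naive exploration above can be sketched in a toy setting. Everything here is illustrative: one made-up parameter with an assumed linear effect on the range, no noise, and arbitrary grid bounds.

```python
import numpy as np

# Toy stand-in for the naive approach: one parameter, a linear effect on the
# Earth-Mars range, no noise (all values here are illustrative).
partial = 3.0                       # assumed sensitivity of the range to the parameter
p_real = 0.42                       # real value of the parameter (unknown to the search)
data = partial * p_real             # "observed" range residual

# Recover the parameter by simply exploring the parameter space
grid = np.linspace(-1.0, 1.0, 2001)
p_best = grid[np.argmin(np.abs(partial * grid - data))]
```

With noise-free data the scan lands exactly on the real value; the next slide shows what noise does to this picture.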

If the data is noisy, many model predictions are satisfactory: uncertainty in the observations induces uncertainty in the recovery process.

[Figure: Earth-Mars distance vs. time; several different parameter sets p1-p5 fit the noisy data equally well]

How to extract information from the ranging data? A more formal approach.

Notation: y = range data, p = model parameters, A = matrix of partials (partial derivatives of the range with respect to the parameters). If each model parameter is a correction with respect to a reference value, the range dependence on the model parameters is linear:

    range prediction = A p

The distance between the data and the prediction can be estimated with the L2 norm ||y - A p||^2. Without noise, the norm reaches zero at the real parameters; with noise, its minimum defines the least-squares solution:

    p_hat = (A^T A)^-1 A^T y
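A minimal synthetic sketch of the least-squares recovery; the matrix of partials, parameter values, and noise level are illustrative stand-ins, not the real Mars-range model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the range problem: A = matrix of partials,
# p_true = real parameter corrections (unknown in practice).
n_obs, n_par = 500, 5
A = rng.normal(size=(n_obs, n_par))
p_true = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
sigma = 0.1                                     # 1-sigma measurement noise
y = A @ p_true + sigma * rng.normal(size=n_obs)

# Least-squares solution: minimizes ||y - A p||^2
p_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The recovered corrections differ from the true ones by an amount set by the noise level and the number of observations.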

Constraint on parameters: the interior of an ellipsoid!
- center defined by the least-squares solution
- size given by the noise
- relative sizes of the axes given by the eigenvalues of the covariance matrix C = sigma^2 (A^T A)^-1
- orientation given by the eigenvectors of C

If C is diagonal, the ellipsoid is aligned with the parameter axes (p1, p2, p3); if not, it is tilted with respect to them. The parameter 1-sigma uncertainties, sqrt(diag C), correspond to the sides of a bounding box around the ellipsoid.
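The ellipsoid geometry follows directly from the covariance matrix; a sketch with an illustrative random matrix of partials:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(300, 3))     # illustrative matrix of partials
sigma = 0.2                       # assumed noise level

# Covariance of the least-squares estimate: C = sigma^2 (A^T A)^-1
C = sigma**2 * np.linalg.inv(A.T @ A)

# Eigenvalues set the relative sizes of the ellipsoid axes,
# eigenvectors its orientation
evals, evecs = np.linalg.eigh(C)
semi_axes = np.sqrt(evals)

# 1-sigma parameter uncertainties = sides of the bounding box = sqrt(diag C)
box = np.sqrt(np.diag(C))
```

Each bounding-box half-width is never larger than the longest ellipsoid semi-axis, which is why quoting only the box sides can hide strong correlations between parameters.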

If the noise on the data follows a Gaussian distribution with variance sigma^2, the probability of finding the real parameters at a given point of the parameter space follows a multivariate Gaussian distribution. The multivariate distribution can be represented by an ellipsoid that has a 70% chance of containing the real parameters.
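The 70% figure depends on the number of parameters: for 3 parameters, roughly 70% coverage corresponds to a k ≈ 1.9 sigma ellipsoid (a standard chi-square fact, not stated on the slide). A Monte Carlo check with an illustrative covariance:

```python
import numpy as np

rng = np.random.default_rng(2)

C = np.array([[1.0, 0.3, 0.1],     # illustrative 3-parameter covariance
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.8]])

# Draw samples of the "real parameters" around the least-squares center
samples = rng.multivariate_normal(np.zeros(3), C, size=200_000)

# The k-sigma ellipsoid is {p : p^T C^-1 p <= k^2}; count how many samples fall inside
Cinv = np.linalg.inv(C)
d2 = np.einsum('ij,jk,ik->i', samples, Cinv, samples)
coverage = np.mean(d2 <= 1.9**2)   # close to 0.70 for 3 parameters
```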

Using the L2 norm constrains the parameters to an ellipsoid: this is least squares. What about other norms? There are many ways to estimate parameters from data! Regularization methods include Tikhonov (and high-order Tikhonov), Lasso and weighted Lasso, Elastic Net, the Dantzig selector (DS), Truncated Singular Value Decomposition (TSVD), and Bounded Variable Least Squares (BVLS).

How is the knowledge of the parameters improved by the data? Knowledge = information = a probability density.

Notation: P(X) is the probability density of variable X; P(X | Y) is the probability density of X given Y. The quantity we are looking for is P(p | y), the density of the parameters given the data.

Bayes formula:

    P(p | y) = P(y | p) P(p) / P(y)

With Gaussian noise, the most probable set of parameters is the one that minimizes ||y - A p||^2, i.e. the least-squares solution, and P(p | y) is a multivariate Gaussian distribution. The least-squares solution is therefore the optimal method for extracting information.

If the a priori knowledge of the individual parameters is given by Gaussian distributions, then the prior is a multivariate Gaussian distribution, the data likelihood is a multivariate Gaussian distribution, and the updated knowledge (their product) is again a multivariate Gaussian distribution.

The updated knowledge is a multivariate Gaussian distribution with
- center: p_hat = C A^T y / sigma^2
- parameter 1-sigma uncertainties: sqrt(diag C), where C = (A^T A / sigma^2 + P0^-1)^-1 and P0 is the covariance of the priors

With prior information, the solution is still given by least squares (only true for Gaussian priors). It is equivalent to Tikhonov regularization, i.e. minimizing ||y - A p||^2 / sigma^2 + p^T P0^-1 p. Any form of regularization can be interpreted as accounting for additional information; this is very important to know in order to use regularization correctly.
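A sketch of the prior-augmented (Tikhonov) solution with illustrative sizes and prior widths; none of the numbers come from the actual fit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 50, 5
A = rng.normal(size=(n_obs, n_par))
sigma = 1.0                                        # measurement noise (1-sigma)
prior_sigma = np.array([0.5, 0.5, 2.0, 2.0, 2.0])  # per-parameter prior 1-sigma
p_true = rng.normal(scale=prior_sigma)             # truth drawn from the prior
y = A @ p_true + sigma * rng.normal(size=n_obs)

# Gaussian priors -> Tikhonov-regularized least squares:
#   C = (A^T A / sigma^2 + P0^-1)^-1,  p_hat = C A^T y / sigma^2
P0_inv = np.diag(1.0 / prior_sigma**2)
C_post = np.linalg.inv(A.T @ A / sigma**2 + P0_inv)
p_hat = C_post @ (A.T @ y) / sigma**2
post_sigma = np.sqrt(np.diag(C_post))              # posterior 1-sigma uncertainties
```

Because the posterior precision is the sum of the prior precision and the data precision, the posterior uncertainty of each parameter can never exceed its prior uncertainty: the data can only tighten the knowledge.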

SUMMARY:
- least squares are the optimal method for extracting information from observations in the presence of Gaussian noise
- least squares constrain the parameters to an ellipsoid
- any improvement requires additional information
- prior information on the parameters can easily be accounted for if the prior distributions are Gaussian; the solution is then given by Tikhonov regularization
- Tikhonov regularization also constrains the parameters to an ellipsoid
- any form of regularization can be interpreted as accounting for additional information; this interpretation is necessary in order to apply the regularization correctly

OBJECTIVE: extract asteroid mass information from Mars range data.

Data: MGS, MO, and MRO Mars ranging observations.
Model: approximately the model used to build the DE423 planetary ephemeris (Folkner 2010).
Adjusted parameters:
- 343 asteroid masses
- 12 initial conditions for the orbits of Earth and Mars
- other (solar plasma correction, constant biases)
A total of about 400 parameters.

The matrix of partials A is obtained by finite differences; the measurement noise is sigma ≈ 1 m. The constraints on the parameters are given by least squares: p_hat = (A^T A)^-1 A^T y and C = sigma^2 (A^T A)^-1.

Fitting the model parameters by simple least squares yields huge uncertainties; in practice, the inversion needed to obtain the covariance matrix is impossible (A^T A is close to singular). The range data alone provides no information regarding asteroid masses: more information is needed.
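The near-singularity can be illustrated with two synthetic parameters whose range signatures are almost identical, mimicking strongly correlated asteroid perturbations (the signals and the perturbation size are assumptions for the demo):

```python
import numpy as np

# Two parameters with nearly identical signatures in the range data
t = np.linspace(0.0, 1.0, 400)
a1 = np.sin(2 * np.pi * t)
a2 = np.sin(2 * np.pi * t) + 1e-4 * np.cos(2 * np.pi * t)
A = np.column_stack([a1, a2])

N = A.T @ A                        # normal matrix, close to singular
cond = np.linalg.cond(N)           # enormous condition number

# Formal 1-sigma uncertainties blow up: sigma * sqrt(diag(N^-1))
sigma = 1.0
unc = sigma * np.sqrt(np.diag(np.linalg.inv(N)))
```

The two columns are almost parallel, so the data constrain only their sum; the individual uncertainties become huge even though the fit to the data is excellent.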

What do we know about the 343 asteroid masses?
- they are positive
- they depend on diameters and densities; asteroid diameters in turn depend on absolute magnitude and albedo (Bowell et al. 1989)

Radiometric diameter catalogs: WISE (Masiero et al.), MIMPS (Tedesco et al., ~100 diameters), SIMPS (Tedesco et al., ~1000 diameters).
- 300 / 343 diameters are known to 10%
- 40 / 343 diameters are known to < 35%
- only 4 / 343 diameters are undetermined

Classical approach to introduce prior information: splitting the asteroids into major and minor objects (introduced by Williams).
- ~20 major asteroids: masses fitted individually
- ~320 minor asteroids: masses determined from the diameter and the taxonomy class (C/S/M), assuming constant density within the 3 taxonomy classes; only the 3 densities are fitted

The number of parameters is reduced, but the approach involves 2 hypotheses and the uncertainty estimation is not obvious.

Prior information on masses, for asteroids with determined diameters. The asteroid mass density distribution is approximately Gaussian: mean ≈ 2.2 g cm^-3, deviation ≈ 1 g cm^-3 (Psyche is labeled in the distribution). The nominal mass is computed from the radiometric diameter and the mean density; its 1-sigma uncertainty is 0.5 × the nominal mass, so the true mass can reasonably be at 0 or at twice the nominal mass. The prior is therefore a truncated Gaussian distribution.
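A sketch of the prior construction in standard-library Python. The 200 km diameter is an arbitrary example, and `nominal_GM` and `truncated_gaussian` are hypothetical helper names, not from the slides.

```python
import math
import random

G = 6.674e-11                  # gravitational constant [m^3 kg^-1 s^-2]

def nominal_GM(diameter_km, density_g_cm3=2.2):
    """Nominal GM [km^3 s^-2] from radiometric diameter and mean density."""
    r_m = diameter_km * 1e3 / 2.0
    volume = 4.0 / 3.0 * math.pi * r_m**3          # m^3
    mass = volume * density_g_cm3 * 1e3            # kg (g/cm^3 -> kg/m^3)
    return G * mass * 1e-9                         # m^3 s^-2 -> km^3 s^-2

def truncated_gaussian(mean, sigma, rng):
    """Sample the prior: a Gaussian truncated to positive masses (rejection)."""
    while True:
        x = rng.gauss(mean, sigma)
        if x > 0:
            return x

# Example: a 200 km body at the mean density (illustrative only)
gm = nominal_GM(200.0)
rng = random.Random(0)
sample = truncated_gaussian(gm, 0.5 * gm, rng)     # 1-sigma = 50% of nominal
```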

What do we know about the 343 asteroid masses? Tikhonov regularization with Gaussian priors and Tikhonov regularization with truncated Gaussian priors, applied to the two groups of asteroids (N = 300 with well-determined diameters, N = 4 without), the truncated case being solved with the NNLS algorithm (Lawson & Hanson 1974). The constraints on the parameters are again given by least squares (center p_hat and covariance C).
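The slides use the Lawson & Hanson active-set NNLS; as a compact stand-in, here is a projected-gradient sketch of non-negative least squares (the function name and the synthetic problem are illustrative; in practice one would use a library NNLS implementation):

```python
import numpy as np

def nnls_pg(A, y, n_iter=5000):
    """Non-negative least squares: min ||A x - y||^2 subject to x >= 0,
    via projected gradient descent (simple stand-in for Lawson-Hanson NNLS)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(0.0, x - grad / L)  # gradient step, then project onto x >= 0
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 4))
x_true = np.array([1.2, 0.0, 0.7, 0.0])    # some "masses" sit at the boundary
y = A @ x_true
x_hat = nnls_pg(A, y)
```

The projection enforces positivity at every step, so parameters whose best unconstrained value would be negative end up pinned at zero.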

Fitting the model parameters to Mars range data. Post-fit residuals: MGS 1σ = 1.1 m, ODY 1σ = 0.9 m, MRO 1σ = 1.0 m.

Asteroid masses and uncertainties: comparing the prior 1σ uncertainty with the posterior 1σ uncertainty shows that the range data constrains relatively few masses.

30 asteroid masses are determined to better than 35%. [Table: GM [km^3 s^-2] and uncertainty (%) of the adjusted mass for each asteroid] The determined asteroids are 532 Herculina, Melpomene, Amphitrite, Thisbe, Davida, Egeria, Thalia, Euphrosyne, Julia, Diotima, Zelinda, Eva, Sophrosyne, Cava, Daphne, Vesta, Ceres, Pallas, Juno, Hygiea, Bamberga, Iris, Interamnia, Eunomia, Irene, Fortuna, Flora, Hebe, Psyche, and Europa. Asteroids whose masses were determined by other methods are marked: spacecraft or multiple system, Mars ranging (classic approach; Konopliv et al.), and close encounters (Baer et al.).

Comparison with other mass determinations: DAWN tracking (A. Konopliv) and the observation of the Daphne binary system (Conrad et al.), each shown with +1σ / -1σ intervals.

Comparison with close-encounter determinations (Baer et al.), shown with +/- 1σ and +/- 2σ intervals.

Our masses compare well with determinations obtained by others (Konopliv et al., Mars ranging with the classic approach, +/- 1σ). Tikhonov regularization performs at least as well as the classic approach while relying only on available knowledge of asteroid diameters and densities (no taxonomy information, no assumption of constant density, no selection of individually adjusted parameters).

CONCLUSION: Tikhonov regularization appears as a good alternative to the classical approach.
- It offers a rigorous framework to treat prior information: it avoids the choices / hypotheses necessary in the classic approach and guarantees that we cannot do better without additional information.
- It performs well: 30 asteroid masses can be determined from range data to better than 35%, and they compare well with estimates obtained elsewhere.

THANK YOU!

Acknowledgements: D. K. Yeomans and W. M. Folkner, advisers at JPL; the NASA Postdoctoral Program.