
1
Determining asteroid masses from planetary range: a short course in parameter estimation
Petr Kuchynka, NASA Postdoctoral Program Fellow
Jet Propulsion Laboratory, California Institute of Technology
petr.kuchynka@jpl.nasa.gov
© 2012 California Institute of Technology. Government sponsorship acknowledged.

2
what is ranging data?
- ranging data: measurements of the distance between a spacecraft and a DSN antenna; in practice, round-trip light-time
- planetary ranging data: measurements of the distance between a planet and a DSN antenna
why do we want to determine asteroid masses?
- a precious source of information about the asteroids themselves: determination of porosity, material composition, collisional evolution, etc.
- a limiting factor in the prediction capacity of planetary ephemerides

3
Mars ranging: more than 10 years of data with accuracy ~ 1 m
- Mars Global Surveyor (MGS): 1999 - 2007
- Mars Odyssey (MO): 2002 - now
- Mars Reconnaissance Orbiter (MRO): 2006 - now
data with lower accuracy is available since the 1980s
up to about 400 asteroids could induce perturbations > 2 m
OBJECTIVE: extract asteroid mass information from Mars range data; a general introduction to parameter estimation, followed by a practical application to asteroid mass determination

4
how to extract information from the ranging data?.. the naive approach
build a model able to reproduce the data:
- orbits of the Earth and Mars around the Sun
- perturbers: Moon, planets, asteroids
- other (solar oblateness, solar plasma, etc.)
then find the parameters of the model so that range prediction = range data
[figure: Earth-Mars distance vs. time for candidate values of parameters p1 ... p5; real parameter values are unknown, model values are adjusted]
the parameters can be recovered by simply exploring the parameter space
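The "exploring the parameter space" idea on this slide can be sketched in a few lines. The sinusoidal range model and all the numbers below are invented for illustration; they are not the ephemeris model from the talk.

```python
import numpy as np

# Toy "naive approach": recover one model parameter by scanning the
# parameter space and keeping the value whose predicted range best
# matches the data.
t = np.linspace(0.0, 1.0, 50)
p_true = 0.7                                   # real value (unknown in practice)
data = p_true * np.sin(2 * np.pi * t)          # noise-free "range data"

candidates = np.linspace(0.0, 2.0, 201)        # explored parameter values
misfits = [np.sum((p * np.sin(2 * np.pi * t) - data) ** 2) for p in candidates]
p_best = candidates[int(np.argmin(misfits))]   # best-fitting parameter
```

With noise-free data the scan recovers the parameter exactly (up to the grid spacing); the next slide shows why noise complicates this.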

5
if the data is noisy, many model predictions are satisfactory: uncertainty in the observations induces uncertainty in the recovery process
[figure: Earth-Mars distance vs. time for several satisfactory sets of parameters p1 ... p5]

6
how to extract information from the ranging data?.. a more formal approach
notation: the range prediction depends linearly on the model parameters; if each model parameter is a correction with respect to a reference value, the range data is related to the parameters through the matrix of partials (the partial derivatives of the range prediction with respect to the parameters)
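The slide's equations were images and did not survive extraction; the standard form of the linear relation described here, in notation assumed for this writeup:

```latex
\underbrace{\mathbf{y}}_{\text{range data}}
\;\approx\;
A\,\mathbf{p},
\qquad
A_{ij} = \frac{\partial r_i}{\partial p_j},
```

where \(\mathbf{p}\) is the vector of parameter corrections with respect to their reference values, \(r_i\) is the range prediction at epoch \(i\), and \(A\) is the matrix of partials.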

7
the distance between data and prediction can be estimated with the L2 norm; minimizing it yields the least-squares solution
[figure: residuals vs. time, without noise and with noise]
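In the matrix notation assumed above for the linear model \( \mathbf{y} \approx A\mathbf{p} \) (the slide's own formulas were lost as images), the least-squares solution reads:

```latex
\hat{\mathbf{p}}
= \arg\min_{\mathbf{p}} \,\bigl\lVert \mathbf{y} - A\mathbf{p} \bigr\rVert_2^{2}
= \left(A^{\top} A\right)^{-1} A^{\top} \mathbf{y}
```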

8
constraint on the parameters: the interior of an ellipsoid!
- center defined by the least-squares solution
- size given by the noise
- relative sizes of the axes given by the eigenvalues of the parameter covariance matrix
- orientation given by its eigenvectors
if the matrix is diagonal, the ellipsoid is aligned with the parameter axes; if not, it is not
the parameter uncertainties correspond to the sides of a bounding box around the ellipsoid
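A hedged numerical sketch of this slide's construction ("proper values / proper vectors" in the original are the eigenvalues / eigenvectors); the toy matrix of partials and the noise level are invented:

```python
import numpy as np

# Toy matrix of partials: 100 observations, 3 parameters (invented numbers)
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
sigma = 1.0                               # assumed 1-sigma data noise

cov = sigma**2 * np.linalg.inv(A.T @ A)   # parameter covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # axes of the constraint ellipsoid

semi_axes = np.sqrt(eigvals)              # relative sizes of the ellipsoid axes
box_sigmas = np.sqrt(np.diag(cov))        # bounding-box sides = individual
                                          # parameter uncertainties
```

If `cov` happened to be diagonal, `eigvecs` would be the identity and the ellipsoid would be aligned with the parameter axes, as the slide notes.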

9
if the noise on the data follows a Gaussian distribution with variance σ², the probability of finding the real parameters at a given point of the parameter space follows a multivariate Gaussian distribution
the multivariate distribution can be represented by an ellipsoid that has a 70% chance of containing the real parameters

10
using the L2 norm constrains the parameters to an ellipsoid = least squares; what about other norms?
there are many ways to estimate parameters from data!
- regularization: Tikhonov, high-order Tikhonov
- Lasso, Weighted Lasso, Elastic Net
- Dantzig selector (DS)
- Bounded Variable Least Squares (BVLS)
- Truncated Singular Value Decomposition (TSVD)

11
how is the knowledge of the parameters improved by the data?
knowledge = information = probability density
notation: p(X) denotes the probability density of variable X; p(X | Y) the probability density of X given Y
Bayes' formula gives the density of the parameters given the data, which is what we are looking for
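Bayes' formula, shown on the slide as an image, in the notation the slide introduces (with \(\mathbf{p}\) the parameters and \(\mathbf{y}\) the data):

```latex
p(\mathbf{p} \mid \mathbf{y})
= \frac{p(\mathbf{y} \mid \mathbf{p})\; p(\mathbf{p})}{p(\mathbf{y})}
```

The posterior \(p(\mathbf{p} \mid \mathbf{y})\) is what we are looking for; \(p(\mathbf{y} \mid \mathbf{p})\) is the likelihood of the data and \(p(\mathbf{p})\) the prior knowledge of the parameters.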

12
for Gaussian noise, the most probable set of parameters is the one that minimizes the L2 misfit, i.e. the least-squares solution, and the posterior is a multivariate Gaussian distribution
the least-squares solution is therefore the optimal method for extracting information

13
if the a priori knowledge on the individual parameters is given by Gaussian distributions, the updated knowledge is a multivariate Gaussian distribution

14
updated knowledge = multivariate Gaussian distribution, with a center (the parameter estimates) and 1σ parameter uncertainties
with prior information, the solution is still given by least squares (this is only true for Gaussian priors) and is equivalent to Tikhonov regularization
any form of regularization can be interpreted as accounting for additional information; this interpretation is very important in order to use regularization correctly
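A minimal numerical sketch of the equivalence stated on this slide, assuming zero-mean Gaussian priors; the toy system, noise level, and prior sigmas are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4))                 # toy matrix of partials
p_true = np.array([0.5, -1.0, 0.2, 0.8])
y = A @ p_true + 0.1 * rng.normal(size=50)   # noisy "range" data

sigma = 0.1                                  # data noise (1-sigma)
prior_sigma = np.ones(4)                     # Gaussian priors, 1-sigma = 1

# Gaussian priors = Tikhonov regularization: the posterior mean solves
# (A^T A / sigma^2 + Lambda) p = A^T y / sigma^2, Lambda = diag(1/prior^2)
Lam = np.diag(1.0 / prior_sigma**2)
N = A.T @ A / sigma**2 + Lam
p_hat = np.linalg.solve(N, A.T @ y / sigma**2)

# Posterior 1-sigma uncertainties: always tighter than the prior
post_sigma = np.sqrt(np.diag(np.linalg.inv(N)))
```

Setting `Lam` to zero recovers plain least squares, which makes the "regularization = extra information" reading of the slide concrete: the prior term is the only difference.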

15
SUMMARY:
- least squares is the optimal method for extracting information from observations in the presence of Gaussian noise
- least squares constrains the parameters into an ellipsoid; any improvement requires additional information
- prior information on the parameters can easily be accounted for if the prior distributions are Gaussian; the solution is then given by Tikhonov regularization, which also constrains the parameters to an ellipsoid
- any form of regularization can be interpreted as accounting for additional information; this interpretation is necessary in order to apply the regularization correctly

16
objective: extract asteroid mass information from Mars range data
data: approximately 11000 Mars ranging observations from MGS, MO, and MRO, spanning roughly 1999-2011
model: ≈ the model used to build the DE423 planetary ephemeris (Folkner 2010)
adjusted parameters:
- 343 asteroid masses
- 12 initial conditions for the orbits of Earth and Mars
- other (solar plasma correction, constant biases)
a total of about 400 parameters

17
fitting the model parameters by simple least squares:
the matrix of partials is obtained by finite differences; the constraints on the parameters are given by least squares (range accuracy ~ 1 m)
in practice, the inversion of the covariance matrix is impossible (the matrix is close to singular): huge uncertainties
the range data alone provides no information regarding asteroid masses; more information is needed
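A toy illustration (not the actual ephemeris system) of why the inversion fails: when two perturbers leave nearly identical signatures in the range, the normal matrix is close to singular and the formal uncertainties blow up.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
# Two "perturbers" with almost the same range signature (invented signals)
A = np.column_stack([
    np.sin(2 * np.pi * t),
    np.sin(2 * np.pi * t) + 1e-6 * np.cos(2 * np.pi * t),
])

N = A.T @ A                   # normal matrix, nearly singular
cond = np.linalg.cond(N)      # enormous condition number

# Formal 1-sigma uncertainties explode even for modest data noise
cov = np.linalg.inv(N)
sigmas = np.sqrt(np.diag(cov))
```

With ~400 asteroid parameters producing strongly correlated perturbations on Mars, the same effect makes the unregularized fit useless, which motivates the prior information introduced next.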

18
what do we know about the 343 asteroid masses?
- they are positive
- they depend on diameters and densities
asteroid diameters depend on absolute magnitude and albedo (Bowell et al. 1989)
diameter catalogs: WISE (Masiero et al. 2011, ~100000 diameters), SIMPS (Tedesco et al. 2004, ~1000 diameters), MIMPS (Tedesco et al. 2004, ~100 diameters)
- 300 / 343 diameters are known to 10%
- 40 / 343 diameters are known to < 35%
- only 4 / 343 diameters are undetermined
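The magnitude-albedo relation the slide cites (Bowell et al. 1989) is commonly written as D = 1329 p_V^(-1/2) 10^(-H/5); a sketch with illustrative input values of my choosing:

```python
import math

def diameter_km(H, albedo):
    """Asteroid diameter (km) from absolute magnitude H and geometric
    albedo p_V, following the Bowell et al. (1989) relation."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)

# Roughly Ceres-like inputs (H ~ 3.3, p_V ~ 0.09) give a diameter near 950 km
d = diameter_km(H=3.34, albedo=0.09)
```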

19
classical approach to introduce prior information: splitting asteroids into major and minor objects (introduced in Williams 1984)
- ~20 asteroids: masses fitted individually
- ~320 asteroids: masses determined from the diameter and the taxonomy class (C/S/M), assuming a constant density within each of the 3 taxonomy classes; only 3 density parameters are fitted
the number of parameters is reduced, but the approach involves 2 hypotheses and the uncertainty estimation is not obvious

20
asteroid mass densities: the density distribution is approximately Gaussian, with mean ≈ 2.2 g cm⁻³ and deviation ≈ 1 g cm⁻³
prior information on the masses of asteroids with determined diameters: a nominal mass is computed from the radiometric diameter and the mean density; the true mass can reasonably be at 0 or at twice the nominal mass (example: 16 Psyche)
this gives a truncated Gaussian prior with 1σ uncertainty = 0.5 × the nominal mass
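A sketch of the nominal-mass computation described on this slide (a spherical body at the mean density); the constants and example inputs are mine, not from the slide:

```python
import math

G = 6.674e-20  # gravitational constant in km^3 kg^-1 s^-2

def nominal_GM(diameter_km, density_g_cm3=2.2):
    """Nominal GM (km^3 s^-2) of a sphere with the given radiometric
    diameter and bulk density (slide's mean density: 2.2 g cm^-3)."""
    radius_km = diameter_km / 2.0
    volume_km3 = 4.0 / 3.0 * math.pi * radius_km**3
    mass_kg = density_g_cm3 * 1.0e12 * volume_km3  # 1 g/cm^3 = 1e12 kg/km^3
    return G * mass_kg

gm = nominal_GM(940.0)   # a Ceres-sized diameter, for scale
prior_sigma = 0.5 * gm   # slide's truncated-Gaussian 1-sigma = 0.5 x nominal
```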

21
what do we know about the 343 asteroid masses:
the least-squares constraints on the parameters are combined with Tikhonov regularization, using Gaussian priors and truncated Gaussian priors (groups of N = 300 and N = 4 asteroids on the slide), solved with the NNLS algorithm (Lawson & Hanson 1974)
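The positivity constraint can be enforced with the NNLS algorithm the slide names (Lawson & Hanson 1974), available in SciPy; the toy system below is invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 5))                  # toy matrix of partials
x_true = np.array([0.0, 1.5, 0.0, 0.7, 0.0])  # some true "masses" are zero
b = A @ x_true + 0.01 * rng.normal(size=30)

# Non-negative least squares: minimizes ||A x - b|| subject to x >= 0
x_hat, residual_norm = nnls(A, b)
```

Unlike plain least squares, which can return negative masses for weakly constrained asteroids, NNLS pins those components at exactly zero.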

22
fitting the model parameters to Mars range data; post-fit residuals:
- MGS: 1σ = 1.1 m
- ODY: 1σ = 0.9 m
- MRO: 1σ = 1.0 m

23
asteroid masses and uncertainties: comparing the prior 1σ uncertainty with the posterior 1σ uncertainty shows that the range data constrains relatively few masses

24
30 asteroid masses are determined to better than 35% (GM of the adjusted mass in km³ s⁻², uncertainty in %):

  asteroid          GM [km³ s⁻²]   uncertainty (%)
  4   Vesta         17.13           0.6
  1   Ceres         62.53           0.6
  2   Pallas        12.42           3.0
  3   Juno           1.82           6.9
  10  Hygiea         6.85           8.1
  324 Bamberga       0.68           8.8
  7   Iris           1.07          10.2
  704 Interamnia     2.99          14.4
  15  Eunomia        1.53          15.6
  14  Irene          0.52          16.1
  19  Fortuna        0.44          17.5
  8   Flora          0.33          18.1
  6   Hebe           0.70          18.7
  16  Psyche         1.68          18.9
  52  Europa         2.37          20.7
  532 Herculina      0.79          22.3
  18  Melpomene      0.25          26.4
  29  Amphitrite     0.59          27.3
  88  Thisbe         0.76          27.4
  511 Davida         1.94          27.6
  13  Egeria         0.78          27.6
  23  Thalia         0.13          28.1
  31  Euphrosyne     1.83          28.2
  89  Julia          0.30          31.4
  423 Diotima        0.76          31.8
  654 Zelinda        0.19          32.2
  164 Eva            0.13          33.3
  134 Sophrosyne     0.17          33.4
  505 Cava           0.12          33.7
  41  Daphne         0.37          34.1

masses also determined by other methods are marked on the slide: spacecraft tracking or multiple systems, Mars ranging with the classic approach (Konopliv et al. 2011), and close encounters (Baer et al. 2011)

25
comparison with other mass determinations: DAWN tracking (A. Konopliv) and the binary-system observation of 41 Daphne (Conrad et al. 2008)
[figure: adjusted masses with +/- 1σ intervals against these independent determinations]

26
comparison with close-encounter mass determinations (Baer et al. 2011)
[figure: adjusted masses with +/- 1σ and +/- 2σ intervals]

27
our masses compare well with determinations obtained by others (Konopliv et al. 2011: Mars ranging, classic approach; +/- 1σ)
Tikhonov regularization performs at least as well as the classic approach while relying only on the available knowledge of asteroid diameters and densities (no taxonomy information, no assumption of constant density, no selection of individually adjusted parameters)

28
CONCLUSION:
Tikhonov regularization appears as a good alternative to the classical approach
- it offers a rigorous framework to treat prior information: it avoids the choices / hypotheses necessary in the classic approach and guarantees that we cannot do better without additional information
- it performs well: 30 asteroid masses can be determined from range data to better than 35%, and they compare well with estimates obtained elsewhere

29
THANK YOU!
acknowledgements: D. K. Yeomans and W. M. Folkner, advisers at JPL; the NASA Postdoctoral Program
