1 Interplanetary Trajectory Optimization Collin Bezrouk 2-24-2015

2 Discussion Reference
Some of this material comes from Spacecraft Trajectory Optimization (Ch. 7) by Bruce Conway.

3 Optimization Problem Setup
Optimization problems require the following:
1. State (decision) vector
2. Cost (objective) function
3. Constraints and bounds
4. Control law (may be part of the state vector and/or cost function)
5. Algorithm for optimizing

4 State Vector
In general, it is much easier to begin with a known/assumed sequence of flybys.
– Ex: Earth to Jupiter via VEEJ (Venus-Earth-Earth-Jupiter gravity assists).
– Conway and his graduate students have done a lot of work automating this process.
It is typical to structure the state vector as the Julian dates of launch, each flyby, and arrival.
– Ex: [t_launch, t_VGA, t_EGA1, t_EGA2, t_arrival]^T
A deep space maneuver (DSM) would add its time and velocity components to the state vector.
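As a minimal sketch of the state vector above: the dates below are purely illustrative (not from the presentation), but they show the [t_launch, t_VGA, t_EGA1, t_EGA2, t_arrival] layout and how times of flight fall out of it.

```python
# Hypothetical sketch of a trajectory state vector of encounter epochs
# (Julian dates) for a Venus-Earth-Earth gravity assist route to Jupiter.
# The specific dates here are made up for illustration.
import numpy as np

# [t_launch, t_VGA, t_EGA1, t_EGA2, t_arrival]
x = np.array([2457023.5, 2457200.5, 2457500.5, 2458230.5, 2459000.5])

# Times of flight between successive encounters (days) -- what a Lambert
# solver would consume for each leg of the trajectory.
tof = np.diff(x)
print(tof)  # [177. 300. 730. 770.]
```

An optimizer then perturbs the entries of x directly, with bounds keeping each epoch inside its pruned launch/flyby window.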

5 Search Space Pruning
What is a poor flyby candidate for a trajectory en route to Neptune?
What is a good range of dates for a flyby of Jupiter? Of Saturn? Of both?
[Figure: timeline of Jupiter, Saturn, Uranus, and Neptune flyby windows, Jan 1, 2015 to Dec 31, 2025]

6 Search Space Pruning
The flyby sequence should have each successive planet leading the previous one.
Large lead angle:
– Higher turn angle required.
– Lower flyby radius (beware of impact).
– Higher energy/velocity change.
Small lead angle:
– Higher flyby radius (safer).
– Lower energy change from the flyby.
– The flyby can become trivial.

7 Search Space Pruning
Pruning is a heuristic (experience-based) method for reducing your search space.
– It reduces the outer-planet flyby sequences you will consider.
– It provides approximate time ranges for generating useful porkchop plots.
By approximating arrival times at the outer planets, you can also trim flyby sequences of the inner planets.
– Ex: if you need to reach Jupiter two years after launch, you can't enter a 2:1 or 3:1 resonant Earth orbit.

8 Algorithms for Optimizing
Deterministic algorithms:
– Assume the cost function is continuous and differentiable.
– Use gradients, Jacobians, and/or Hessians (numerical or analytical) to optimize.
– Non-linear programming (NLP) algorithms.
Stochastic algorithms:
– Use random variables to sample and minimize the cost function, trading derivative information for more cost function evaluations.
– Particle Swarm Optimization (PSO)
– Genetic algorithms
– Simulated annealing

9 Deterministic Algorithms
Gradient (first derivative): ∇f(x) = [∂f/∂x_1, …, ∂f/∂x_n]^T
Hessian (second derivative): H_ij(x) = ∂²f / (∂x_i ∂x_j)
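Since the slides note that derivatives may be numerical, here is a minimal sketch of central-difference gradients and Hessians as a deterministic optimizer might form them. The quadratic test function is illustrative only.

```python
# Central-difference numerical derivatives -- a common way to supply
# gradient and Hessian information when analytical forms are unavailable.
import numpy as np

def grad(f, x, h=1e-5):
    """Central-difference gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference Hessian of f at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

f = lambda x: x[0]**2 + 3 * x[1]**2   # simple convex test function
print(grad(f, [1.0, 2.0]))            # ~[2, 12]
print(hessian(f, [1.0, 2.0]))         # ~[[2, 0], [0, 6]]
```

Each Hessian entry costs four function evaluations, which is one reason stochastic methods (which skip derivatives entirely) can be attractive for expensive trajectory cost functions.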

10 Non-Linear Programming
Sequential Quadratic Programming (SQP) is the state of the art for NLP.
– Developed in the late '70s and early '80s.
Three stages in the algorithm:
1. Update the estimate of the Hessian matrix.
2. Solve the quadratic approximation of the function with conjugate gradient methods.
3. Take a step along the optimal direction from step 2.
MATLAB's fmincon function is an example of an NLP solver that can use SQP. Fortran and C++ users can use SNOPT (also SQP based).
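The slide names MATLAB's fmincon; a comparable SQP-family solver in Python is SciPy's SLSQP. This tiny constrained problem (minimize x² + y² subject to x + y = 1) is illustrative only, not from the presentation.

```python
# Minimal SQP-style constrained optimization with SciPy's SLSQP solver,
# a Python analogue of fmincon's 'sqp' option.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda x: x[0]**2 + x[1]**2,            # cost function
    x0=[2.0, 0.0],                          # initial guess
    method="SLSQP",
    constraints=[{"type": "eq",             # equality constraint x + y = 1
                  "fun": lambda x: x[0] + x[1] - 1.0}],
)
print(res.x)  # ~[0.5, 0.5]
```

In a trajectory problem, x would be the vector of encounter epochs and the constraints would enforce things like minimum flyby radii.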

11 SQP References
1. Fletcher, R., Practical Methods of Optimization, John Wiley and Sons, 1987.
2. Gill, P. E., W. Murray, and M. H. Wright, Practical Optimization, Academic Press, London, 1981.
3. Hock, W., and K. Schittkowski, "A Comparative Performance Evaluation of 27 Nonlinear Programming Codes," Computing, Vol. 30, p. 335, 1983.
4. Powell, M. J. D., "Variable Metric Methods for Constrained Optimization," in Mathematical Programming: The State of the Art (A. Bachem, M. Grötschel, and B. Korte, eds.), Springer-Verlag, pp. 288–311, 1983.

12 Stochastic Algorithms
– Do not require derivative information, but evaluate the cost function many times.
– Search across the entire domain, which improves the chances of finding the global optimum.
– Have many tuning parameters and a heavy reliance on random number generators.
The following three algorithms are available in MATLAB's Global Optimization Toolbox.

13 Particle Swarm Optimization
Simulates the social behavior of animal groups.
– Flocks of birds, hives of insects, etc.
Procedure:
1. Begin with a population of particles, each with an associated state vector and state vector rate. Each state value and rate for each particle is randomly generated from a uniform distribution within the bounds, much like a Monte Carlo simulation.
2. Evaluate the cost function for all particles.
3. Move each particle based on its rate vector.

14 Particle Swarm Optimization
Procedure (cont.):
4. Each particle's "acceleration" depends on three factors: inertial, cognitive, and social.
– Inertial: the particle partially moves in the direction it was previously moving.
– Cognitive: the particle partially moves toward the best location visited by that particle over all iterations.
– Social: the particle partially moves toward the best location visited by the entire swarm.
5. Update the state of each particle and repeat the procedure until convergence.
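The PSO procedure above can be sketched in a few lines. The coefficients w, c1, c2 are common textbook defaults, and the quadratic test function is illustrative; neither comes from the presentation.

```python
# Minimal particle swarm sketch: uniform random initialization, then
# inertial + cognitive + social velocity updates until the iteration
# budget is spent.
import numpy as np

rng = np.random.default_rng(0)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))                # states
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))   # rates
    pbest = x.copy()                                 # cognitive memory
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()         # social memory
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda x: np.sum((x - 3.0)**2), [-10, -10], [10, 10])
print(best_x, best_f)  # near [3, 3], cost near 0
```

Note how each velocity update is exactly the three-term sum from step 4: inertia (w·v), cognition (pull toward pbest), and social attraction (pull toward gbest).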

15 Example of PSO
[Video: PSO animation, courtesy of the University of Stuttgart Institute of Engineering and Computational Mechanics]

16 Multiple Particle Swarm Optimization
The MPSO variation has several (3–5) swarms independently searching the state space.
– Every k iterations, the swarms swap "ownership": the particles in one swarm become attracted to the best solution from a different swarm.
– Provides some advantages over standard PSO, but is more computationally intensive.
Blackwell, T., and Branke, J., "Multi-Swarm Optimization in Dynamic Environments," Lecture Notes in Computer Science, Vol. 3005, pp. 489–500, 2004.

17 Genetic Algorithms
Simulates evolution through natural selection and breeding. Very similar to PSO.
Procedure:
1. Generate a population with randomly generated, uniformly distributed state values.
2. Evaluate the cost function for each member of the population.
3. Identify "parents" based on their cost function values. Also identify "elites."
4. At the next iteration, elites survive, and parents produce "children" through mutation or crossover.
– Mutate: add a random perturbation to a parent's state.
– Crossover: combine the state vectors of two or more parents.

18 Genetic Algorithms
Procedure (cont.):
5. Replace the current population with the children and elites.
6. Repeat the procedure until convergence.
[Figure: population of crossover, mutation, and elite children converging at 60, 80, 95, and 100 iterations]
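Steps 1–6 above can be sketched as follows. The population size, mutation scale, and parent fraction are illustrative choices, not values from the presentation.

```python
# Minimal genetic-algorithm sketch: uniform random population, elitism,
# fitness-ranked parents, children produced by crossover or mutation.
import numpy as np

rng = np.random.default_rng(1)

def ga(f, lo, hi, pop_size=40, iters=150, n_elite=2, sigma=0.3):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))          # step 1
    for _ in range(iters):
        cost = np.apply_along_axis(f, 1, pop)           # step 2
        order = cost.argsort()
        elites = pop[order[:n_elite]]                   # step 3: elites
        parents = pop[order[:pop_size // 2]]            # step 3: parents
        children = []
        while len(children) < pop_size - n_elite:       # step 4
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < 0.5:                      # crossover
                mask = rng.random(dim) < 0.5
                child = np.where(mask, a, b)
            else:                                       # mutation
                child = a + rng.normal(0.0, sigma, dim)
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elites, children])             # step 5
    cost = np.apply_along_axis(f, 1, pop)
    return pop[cost.argmin()], cost.min()

best_x, best_f = ga(lambda x: np.sum(x**2), [-5, -5], [5, 5])
print(best_x, best_f)
```

Because elites are copied forward unchanged, the best cost found is non-increasing across iterations, which is the practical payoff of elitism.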

19 Simulated Annealing
Simulates annealing (heating and controlled cooling) of a metal to reduce defects, which minimizes internal energy.
Great for discrete problems, and typically useful when you only need a solution that is "good enough," not a global minimum.

20 Simulated Annealing
Procedure:
1. Begin at an initial state x.
2. Compute a neighboring state x'.
3. Define a probability of moving from state x to x': P = P(f(x), f(x'), T), where T is the "temperature" and f(x) is your cost function, representing "energy." Examples of the function P can be found in the literature. The temperature makes state changes sensitive to coarser variations in energy at the beginning and finer variations at the end.
4. Update the temperature based on the remaining iterations.
5. Repeat until the temperature reaches zero.
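A minimal sketch of the procedure above. The acceptance rule P = exp(-(f(x') - f(x)) / T) is the common Metropolis choice, one of the "examples found in the literature"; the linear cooling schedule and step size here are illustrative.

```python
# Minimal simulated-annealing sketch on a 1-D cost function.
import math
import random

random.seed(42)

def anneal(f, x0, t0=10.0, iters=2000, step=0.5):
    x, fx = x0, f(x0)            # step 1: initial state
    best, fbest = x, fx
    for k in range(iters):
        t = t0 * (1 - k / iters)             # step 4: cool toward zero
        if t <= 0:
            break                            # step 5: stop at T = 0
        xp = x + random.uniform(-step, step) # step 2: neighboring state
        fxp = f(xp)
        # step 3: always accept improvements; accept worse states with
        # probability exp(-dE / T), so uphill moves are common early
        # (high T) and rare late (low T).
        if fxp < fx or random.random() < math.exp(-(fxp - fx) / t):
            x, fx = xp, fxp
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

best_x, best_f = anneal(lambda x: (x - 2.0)**2, x0=10.0)
print(best_x, best_f)  # near 2, cost near 0
```

Tracking the best-so-far state separately is a common safeguard, since the walking state is allowed to move uphill and may not end at the best point visited.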

