MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization.

References:
1. Das, I., and Dennis, J., "A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems." Can be found at
2. Chen, W., Wiecek, M., and Zhang, J., "Quality Utility - A Compromise Programming Approach to Robust Design," ASME JMD, 1999, Vol. 121, pp.

Multi-Objective Optimization All the problems that we have considered in this class, as well as in 550, have consisted of a single objective function, perhaps with multiple constraints and design variables:

Minimize: F(x)
Subject to: g(x) <= 0, h(x) = 0

Multi-Objective Optimization In such a case, the problem has a one-dimensional performance space, and the optimum point is the one that is the furthest toward the desired extreme. [Figure: designs plotted along a single performance axis F; the point at the desired extreme is labeled the optimum]

Multi-Objective Optimization What happens when it is necessary (or at least desirable) to optimize with respect to more than one criterion? Now we have additional dimensions in our performance space, and we are seeking the best we can get in all dimensions simultaneously. What does "best in all dimensions" mean?

Multi-Objective Optimization Consider the following 2D performance space: [Figure: designs plotted against axes F1 and F2, both to be minimized; one point is best in both objectives and is labeled the optimum]

Multi-Objective Optimization But what happens in a case like this: [Figure: axes F1 and F2, both to be minimized; two candidate points, each better in one objective and worse in the other. Which is the optimum?]

Multi-Objective Optimization The one on the left is better with respect to F1 but worse with respect to F2. And the one on the right is better with respect to F2 and worse with respect to F1. How does one wind up in such peril?

Multi-Objective Optimization That depends on the relationships that exist between the various objectives. There are three possible interactions that may exist between objectives in a multi-objective optimization problem:
1. Cooperation
2. Competition
3. No relationship

Multi-Objective Optimization What defines a relationship between objectives? How can I recognize that two objectives have any relationship at all? The relationship between two objectives is defined by the variables that they have in common. Two objectives will fight for control of common design variables throughout a multi-objective design optimization process.

Multi-Objective Optimization Just how vicious the fight is depends on which of the three types of interaction we mentioned exists. Let's consider the 1st case: cooperation. Two objectives are said to "cooperate" if they both wish to drive all of their common variables in the same direction (pretty much all the time). In such a case, betterment of one objective typically accompanies betterment of the other.

Multi-Objective Optimization In such a case, the optimum is a single point (or a collection of equally desirable points), like in our first performance plot. [Figure: axes F1 and F2, both to be minimized; a single point is best in both objectives and is labeled the optimum]

Multi-Objective Optimization Now let's consider the 2nd case: competition. Two objectives are said to "compete" if they wish to drive at least some of their common variables in different directions. In such a case, betterment of one objective typically comes at the expense of the other. This is the most interesting case.

Multi-Objective Optimization In such a case, the optimum is no longer a single point but a collection of points called the Pareto Set. It is named for Vilfredo Pareto (1848-1923), the Italian economist and sociologist who established the concept now known as "Pareto Optimality".

Multi-Objective Optimization Pareto optimality - an optimality criterion for optimization problems with multiple objectives. A state (set of parameters) is said to be Pareto optimal if there is no other state that dominates it with respect to the set of objective functions. State A dominates state B if A is better than B in at least one objective function and not worse with respect to all the others.
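The dominance rule above translates directly into code. Here is a minimal sketch (the function names are mine, not the lecture's) that tests dominance between two objective vectors and filters a sampled set of designs down to its non-dominated subset, assuming every objective is to be minimized:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimizing every objective):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def pareto_set(points):
    """Return the non-dominated (Pareto-optimal) subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, `pareto_set([(1, 4), (2, 2), (3, 3), (4, 1)])` keeps (1, 4), (2, 2), and (4, 1) but drops (3, 3), which (2, 2) dominates.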

Multi-Objective Optimization So let's take a look at this: [Figure: axes F1 and F2, both to be minimized; the set of mutually non-dominated points traces out the Pareto frontier]

Multi-Objective Optimization For completeness, we will now consider the case in which there is no relationship between two objectives. When do you think such a thing might occur? Clearly, this only occurs when the two objectives have no design variables in common (each is a function of a different subset of the design variables, and the two subsets have a null intersection).

Multi-Objective Optimization In such a case, we are free to optimize each function individually to determine our optimal design configuration. That is why this case is desirable but uninteresting. So back to competing objectives.

Multi-Objective Optimization Now that we know what we are looking for, that is, the set of non-dominated designs, how are we going to go about generating it? The most common way to generate points along a Pareto frontier is to use a weighted sum approach. Consider the following example:

Multi-Objective Optimization Suppose I wish to minimize both of the following functions simultaneously:

F1 = 750*x1 + 60*(25 - x1)*x2 + 45*(25 - x1)*(25 - x2)
F2 = (25 - x1)*x2

For the typical weighted sum approach, I would assign a weight to each function such that:

w1 + w2 = 1, w1 >= 0, w2 >= 0

Multi-Objective Optimization I would then combine the two functions into a single function as follows and solve:

Minimize: F = w1*F1 + w2*F2

Multi-Objective Optimization The net effect of our weighted sum approach is to convert a multiple-objective problem into a single-objective problem. But this will only provide us with a single Pareto point. How will we go about finding other Pareto points? By altering the weights and solving again.
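One such scalarized solve can be sketched using the two functions from the example above. The variable bounds (x1 and x2 on an integer grid in [0, 25]) and the equal weights are my assumptions for illustration, and brute-force enumeration stands in for whatever optimizer you would actually use:

```python
def f1(x1, x2):
    return 750*x1 + 60*(25 - x1)*x2 + 45*(25 - x1)*(25 - x2)

def f2(x1, x2):
    return (25 - x1)*x2

w1, w2 = 0.5, 0.5  # one choice of weights; vary these to seek other Pareto points
grid = [(x1, x2) for x1 in range(26) for x2 in range(26)]  # assumed variable bounds

# Minimize the single combined objective F = w1*F1 + w2*F2 over the grid.
best = min(grid, key=lambda p: w1*f1(*p) + w2*f2(*p))
```

Re-running this with a different (w1, w2) pair is exactly the "alter the weights and solve again" step described above.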

Multi-Objective Optimization As mentioned, such schemes are very common in multi-objective optimization. In fact, in an ASME paper published in 1997, Dennis and Das made the claim that all common methods of generating Pareto points involved repeated conversion of a multi-objective problem into a single objective problem and solving.

Multi-Objective Optimization Ok, so I march up and down my weights generating Pareto points, and then I've got a good representation of my set. Unfortunately not. As it turns out, it is seldom this easy. There are a number of pitfalls associated with using weighted sums to generate Pareto points.

Multi-Objective Optimization Some of those pitfalls are:
- Inability to generate points in non-convex portions of the frontier
- Inability to generate a uniform sampling of the frontier
- A non-intuitive relationship between the combination parameters (the weights, etc.) and the resulting performances
- Poor efficiency (an excessive number of function evaluations may be required)

Multi-Objective Optimization Let's consider the 1st pitfall: what is a non-convex portion of the frontier? I assume you are all familiar with the concept of convexity, so let's move on to a pictorial.

Multi-Objective Optimization [Figure: a Pareto frontier plotted against F1 and F2, both to be minimized; one section is highlighted: "This is a non-convex region of the frontier"]

Multi-Objective Optimization Ok, so why do weighted sum approaches have difficulty finding these points? As discussed in reference 1, choosing the weights in the manner that we have can be shown to be equivalent to rotating the performance axes by an angle determined from the weights, and then translating those rotated axes until they hit the frontier. The effect of this on a convex frontier can be visualized as follows.

Multi-Objective Optimization [Figure: a convex Pareto frontier in the F1-F2 space, both objectives minimized; a line whose orientation is set by the weights translates toward the frontier and touches it at a single Pareto point]

Multi-Objective Optimization So I think that you can see already what is going to happen when the frontier is not convex. Consider the following animation.

Multi-Objective Optimization [Figure: a non-convex Pareto frontier in the F1-F2 space, both objectives minimized; the translating line touches only the convex portions of the frontier and skips past the non-convex region]

Multi-Objective Optimization So we missed all the points in the non-convex region. This also demonstrates one reason why we may not get a uniform sampling of the Pareto frontier. As it turns out, a uniform sampling is only possible in this way for a Pareto set having a very specific shape. So not even all convex Pareto sets can be sampled uniformly in this fashion. You can read more about this in reference 1.

Multi-Objective Optimization Clearly, if we cannot generate a uniform sampling and we cannot find non-convex regions, then the relationship between changes in the weights and motion along the frontier is non-intuitive. Finally, since with each combination of weights we are completing an entire optimization of our system, you can see how this may result in a great number of system evaluations.