1 Chapter 8: Linearization Methods for Constrained Problems
Book Review presented by Kartik Pandit, July 23, 2010
ENGINEERING OPTIMIZATION: Methods and Applications, by A. Ravindran, K. M. Ragsdell, G. V. Reklaitis

2 Outline
Introduction
Direct Use of Successive Linear Programs
– Linearly Constrained Case
– General Nonlinear Programming Case
Separable Programming
– Single-Variable Functions
– Multivariable Separable Functions
– Linear Programming Solutions of Separable Problems

3 Introduction

4 Efficient algorithms exist for two problem classes:
– Unconstrained problems
– Completely linearly constrained problems
Most approaches to the general problem of a nonlinear objective function with nonlinear constraints exploit techniques that solve these "easy" problems.

5 Introduction
The basic idea:
1. Use linear functions to approximate both the objective function and the constraints (linearization).
2. Employ LP algorithms to solve the resulting linear program.

6 Introduction
Linearization can be achieved in two ways:
1. Any nonlinear function f(x) can be approximated in the vicinity of a point x^(t) by its Taylor expansion, f(x) ≈ f(x^(t)) + ∇f(x^(t))ᵀ(x − x^(t)); x^(t) is called the linearization point.
2. Using piecewise linear approximations and then applying a modified simplex algorithm (separable programming).
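As a concrete illustration of the first approach (not from the text), a minimal Python sketch that builds the Taylor linearization numerically; the test function and expansion point are arbitrary choices:

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x)) / h
    return g

def linearize(f, x0):
    """First-order Taylor approximation of f around the linearization point x0."""
    f0, g0 = f(x0), gradient(f, x0)
    return lambda x: f0 + g0 @ (x - x0)

f = lambda x: x[0]**2 + x[1]**2          # arbitrary test function
x0 = np.array([1.0, 2.0])                # arbitrary linearization point
f_lin = linearize(f, x0)
print(f(x0 + 0.1), f_lin(x0 + 0.1))      # nearly equal close to x0
```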

7 8.1 Direct Use of Successive Linear Programs

8 Using Taylor's expansion, linearize all problem functions at some selected estimate of the solution x^(t). The result is an LP; x^(t) is called the linearization point. With some additional precautions, the LP solution ought to be an improvement over the linearization point. There are two cases to consider:
1. Linearly constrained NLP case
2. General NLP case

Direct Use of Successive Linear Programs: Linearly constrained NLP case

Linearly constrained NLP case
The linearly constrained NLP problem is that of minimizing f(x) subject to Ax ≤ b, x ≥ 0, where f(x) is a nonlinear objective function. The feasible region is a polyhedron; however, the optimal solution can lie anywhere within the feasible region, not only at a corner point.

Linearly Constrained NLP case
Using Taylor's approximation around the linearization point x^(t) and ignoring second- and higher-order terms, we obtain the linear approximation of f around x^(t):
f̃(x; x^(t)) = f(x^(t)) + ∇f(x^(t))ᵀ(x − x^(t)).
So the linearized version becomes: minimize f̃(x; x^(t)) subject to Ax ≤ b, x ≥ 0, which, dropping the constant terms, is the LP: minimize ∇f(x^(t))ᵀx subject to Ax ≤ b, x ≥ 0.
Let x̂^(t) denote the solution of the linearized version. How close is x̂^(t) to the solution of the original NLP? By virtue of minimization it must be true that f̃(x̂^(t); x^(t)) ≤ f̃(x^(t); x^(t)) = f(x^(t)).

Linearly Constrained NLP case
A bit of algebra leads to the result ∇f(x^(t))ᵀ(x̂^(t) − x^(t)) ≤ 0. So the vector d^(t) = x̂^(t) − x^(t) is a descent direction. In Chapter 6 we saw that a descent direction leads to an improved point only if it is coupled with a step-adjustment procedure. All points between x^(t) and x̂^(t) are feasible; moreover, since x̂^(t) is a corner point, any point beyond it on the line lies outside the feasible region. So, to improve upon x^(t), a line search is employed over the segment x = x^(t) + α(x̂^(t) − x^(t)), 0 ≤ α ≤ 1. Minimizing f over this segment finds a point x^(t+1) such that f(x^(t+1)) ≤ f(x^(t)).

Linearly Constrained NLP case will not in general be the optimal solution but it will serve as a linearization point for the next approximating LP. The text book presents the Frank-Wolfe Algorithm that employs this sequence of alternating LP’s and line searches.

Linearly Constrained NLP case: Frank-Wolfe Algorithm (page 339)
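A minimal Python sketch of this alternating LP / line-search loop (an illustration of the scheme, not the book's listing), assuming SciPy; the quadratic objective and single constraint are placeholder choices:

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def frank_wolfe(f, grad, A, b, x, tol=1e-6, max_iter=100):
    """Frank-Wolfe: alternate an LP over the polyhedron Ax <= b, x >= 0
    with a line search between the current point and the LP solution."""
    for _ in range(max_iter):
        g = grad(x)
        # LP subproblem: minimize grad(x)^T y over the feasible polyhedron
        y = linprog(c=g, A_ub=A, b_ub=b, bounds=[(0, None)] * x.size).x
        # For convex f, g @ (x - y) bounds the remaining improvement
        if g @ (x - y) < tol:
            break
        # Line search on the feasible segment x + t*(y - x), 0 <= t <= 1
        t = minimize_scalar(lambda t: f(x + t * (y - x)),
                            bounds=(0, 1), method="bounded").x
        x = x + t * (y - x)
    return x

# Placeholder problem: min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 3, x >= 0
f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
grad = lambda x: np.array([2.0 * (x[0] - 2), 2.0 * (x[1] - 2)])
print(frank_wolfe(f, grad, np.array([[1.0, 1.0]]), np.array([3.0]), np.zeros(2)))
# approaches (1.5, 1.5), the constrained minimum
```

Each cycle solves one LP and one one-dimensional minimization, which is exactly the alternation the algorithm prescribes.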

15–16 Frank-Wolfe Algorithm Execution: Example 8.1 (page 340)

17 Frank-Wolfe Algorithm
The Frank-Wolfe algorithm converges to a Kuhn-Tucker point from any feasible starting point. No rate-of-convergence analysis is given. However, if f is convex we can estimate how much improvement remains: if f is convex and is linearized at a point x^(t), then f(x) ≥ f̃(x; x^(t)) for all x, so the LP minimum f̃(x̂^(t); x^(t)) is a lower bound on the optimal value. Hence after each cycle the difference f(x^(t)) − f̃(x̂^(t); x^(t)) gives an estimate (an upper bound) of the remaining improvement.

Direct Use of Successive Linear Programs: General NLP case

General Nonlinear Programming Case
Consider the general NLP: minimize f(x) subject to g_j(x) ≥ 0, j = 1, …, J, and h_k(x) = 0, k = 1, …, K. At some linearization point x^(t) the linear approximation is:
minimize f(x^(t)) + ∇f(x^(t))ᵀ(x − x^(t))
subject to g_j(x^(t)) + ∇g_j(x^(t))ᵀ(x − x^(t)) ≥ 0 and h_k(x^(t)) + ∇h_k(x^(t))ᵀ(x − x^(t)) = 0.

20 General Nonlinear Programming Case
If we solve the LP approximation we obtain a new point x^(t+1), but since the constraints were only approximated it is highly unlikely to be feasible. If x^(t+1) is infeasible, there is no guarantee that an improved estimate of the true optimum has been attained. If {x^(t)} is a sequence of points, each of which solves the LP linearized at the previous point, then in order to attain convergence to the true optimum it is sufficient that at each point an improvement be made in both the objective function value and the constraint infeasibility.

21–22 General Nonlinear Programming Case: Example 8.4 (page 349)

General Nonlinear Programming Case
In Example 8.4 we saw a case where the iterates diverge away from the optimum. Within a suitably small neighborhood of any linearization point, the linearization is a good approximation. We therefore need to ensure that the linearization is used only within the immediate vicinity of the base point, by imposing step-size bounds |x_i − x_i^(t)| ≤ δ_i, where δ_i is some step size.

24–25 Example 8.5: Step Size Constraints
Bounds |x_i − x_i^(t)| ≤ δ_i are introduced into the linearized subproblem of Example 8.4, where the δ_i are the chosen step sizes.

26 Penalty Functions
We can remove the step-size constraints and instead do a line search over a penalty function in the direction defined by the vector d^(t) = x̂^(t) − x^(t). A two-step algorithm can be developed (sketched below):
1. Construct the LP at x^(t) and solve it to yield a point x̂^(t).
2. For a suitable penalty parameter μ, solve the line search problem minimize over α of P(x^(t) + α d^(t); μ) to yield the new point x^(t+1).
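A minimal sketch of one such cycle (illustrative only, assuming inequality constraints in the form g(x) ≥ 0; the quadratic exterior penalty and the values of mu and delta are arbitrary choices):

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def slp_penalty_step(f, grad_f, g, jac_g, x, mu=10.0, delta=1.0):
    """One cycle: linearize f and the constraints g(x) >= 0 at x, solve the
    LP within step bounds |x_i' - x_i| <= delta, then line-search a penalty."""
    gx, Jg = g(x), jac_g(x)
    # Linearized constraints g(x) + Jg (x' - x) >= 0  ->  -Jg x' <= g(x) - Jg x
    lp = linprog(c=grad_f(x), A_ub=-Jg, b_ub=gx - Jg @ x,
                 bounds=[(xi - delta, xi + delta) for xi in x])
    d = lp.x - x
    # Exterior quadratic penalty: charge mu for squared constraint violations
    P = lambda t: f(x + t * d) + mu * np.sum(np.minimum(0.0, g(x + t * d)) ** 2)
    t = minimize_scalar(P, bounds=(0.0, 1.0), method="bounded").x
    return x + t * d

# Placeholder problem: min x1^2 + x2^2  s.t.  x1 + x2 - 2 >= 0 (optimum (1, 1))
f = lambda x: x @ x
grad_f = lambda x: 2 * x
g = lambda x: np.array([x[0] + x[1] - 2.0])
jac_g = lambda x: np.array([[1.0, 1.0]])
x = np.array([2.0, 0.0])
for _ in range(10):
    x = slp_penalty_step(f, grad_f, g, jac_g, x)
print(x)  # approaches (1, 1)
```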

Separable Programming

Separable Programming
For a nonlinear function f(x) defined over an interval, partition the interval into subintervals and construct individual linear approximations over each subinterval. Two cases need to be considered:
1. Single-variable functions.
2. Multivariable functions.

Separable Programming: Single-Variable Functions
Consider a single-variable continuous function f(x) defined over the interval [a, b]. Arbitrarily choose K points over the interval, denoted x^(1) < x^(2) < … < x^(K), and obtain the values f(x^(k)). For every pair of adjacent points x^(k), x^(k+1), draw the straight line connecting (x^(k), f(x^(k))) and (x^(k+1), f(x^(k+1))). The equation of each of these lines is
f̃(x) = f(x^(k)) + [(f(x^(k+1)) − f(x^(k))) / (x^(k+1) − x^(k))] (x − x^(k)), for x^(k) ≤ x ≤ x^(k+1).

30 Separable Programming: Single-Variable Functions
These equations can be rewritten by expressing each x in [x^(k), x^(k+1)] as x = λ x^(k+1) + (1 − λ) x^(k) with 0 ≤ λ ≤ 1. Simplifying further gives
f̃(x) = λ f(x^(k+1)) + (1 − λ) f(x^(k)).

31 Separable Programming: Single-Variable Functions
Consequently, the K − 1 sets of line equations can be represented by just two equations:
x = Σ_k λ^(k) x^(k) and f̃(x) = Σ_k λ^(k) f(x^(k)),
with Σ_k λ^(k) = 1 and λ^(k) ≥ 0, under the added condition that at most two λ^(k) are positive and, if two are, they must be adjacent.
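A small Python sketch of this two-weight (λ) representation (illustrative; the test function and grid are arbitrary):

```python
import numpy as np

def pw_linear(f, grid, x):
    """Piecewise linear approximation of f on the grid, evaluated at x:
    only the two lambdas on the bracketing grid points are nonzero."""
    grid = np.asarray(grid, dtype=float)
    k = np.clip(np.searchsorted(grid, x) - 1, 0, grid.size - 2)
    lam = (x - grid[k]) / (grid[k + 1] - grid[k])
    return (1 - lam) * f(grid[k]) + lam * f(grid[k + 1])

f = lambda x: x**2
grid = [0.0, 1.0, 2.0, 3.0]
print(pw_linear(f, grid, 1.5), f(1.5))  # 2.5 vs 2.25: the chord overestimates a convex f
```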

Multivariable Separable Functions
Definition: A function f of N variables is said to be separable if it can be expressed as a sum of single-variable functions that each involve only one of the variables: f(x) = f_1(x_1) + f_2(x_2) + … + f_N(x_N). For example, f(x) = x_1² + e^(x_2) is separable, but f(x) = x_1·x_2 is not. f needs to be separable in order to build piecewise linear approximations one variable at a time.

Multivariable Separable Functions
Subdivide the interval of values of each variable x_k with grid points x_k^(1), …, x_k^(K). Then the approximating piecewise linear function is
f̃(x) = Σ_k Σ_j λ_k^(j) f_k(x_k^(j)), with x_k = Σ_j λ_k^(j) x_k^(j), Σ_j λ_k^(j) = 1, λ_k^(j) ≥ 0,
where for each variable at most two adjacent λ_k^(j) may be positive.
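A short sketch of this term-by-term approximation (illustrative; np.interp performs exactly the piecewise linear interpolation over each variable's grid):

```python
import numpy as np

# Separable test function: f(x) = x1^2 + e^(x2), one grid per variable
terms = [np.square, np.exp]
grids = [np.linspace(0.0, 3.0, 4), np.linspace(0.0, 2.0, 5)]

def f_tilde(x):
    """Sum of per-variable piecewise linear approximations."""
    return sum(np.interp(xk, grid, fk(grid))
               for xk, grid, fk in zip(x, grids, terms))

x = np.array([1.5, 0.3])
print(f_tilde(x), x[0]**2 + np.exp(x[1]))  # approximation vs exact value
```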

34 LP Solutions of Separable Problems
The only feature requiring special attention is the adjacency condition on the λ's. In the ordinary simplex method the basic variables are the only ones that can be nonzero. Before entering one of the λ's into the basis, a check is made to ensure that no more than one other λ associated with the corresponding variable is in the basis (is nonzero) and, if one is, that the two λ's are adjacent. This is known as restricted basis entry.
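A tiny sketch of that entry test (the (variable, grid-index) representation of the basis is an assumed data layout, not the book's implementation):

```python
def may_enter(basis, k, j):
    """Restricted basis entry: lambda_{k,j} may enter only if no other lambda
    for variable k is basic, or exactly one is and its grid point is adjacent."""
    basic_js = [jj for (kk, jj) in basis if kk == k]
    if not basic_js:
        return True
    return len(basic_js) == 1 and abs(basic_js[0] - j) == 1

basis = {(0, 2), (1, 4)}       # lambda_{0,2} and lambda_{1,4} are currently basic
print(may_enter(basis, 0, 3))  # True: adjacent to lambda_{0,2}
print(may_enter(basis, 0, 0))  # False: not adjacent to lambda_{0,2}
print(may_enter(basis, 2, 5))  # True: no lambda for variable 2 is basic yet
```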

35 End