
Searching a Linear Subspace Lecture VI

Deriving Subspaces

There are several ways to derive the nullspace matrix (or kernel matrix).
◦ The methodology developed in our last meeting is referred to as the Variable Reduction Technique.

◦ The nullspace matrix is then defined as
$$Z = \begin{bmatrix} -B^{-1}N \\ I \end{bmatrix}$$
where the constraint matrix is partitioned as A = [ B N ] with B nonsingular, so that AZ = B(-B^{-1}N) + N = 0.
◦ Let's start out with the matrix form of a specific example.

◦ The nullspace matrix for the example then follows directly from this partition.
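To make the technique concrete, here is a minimal numpy sketch; the function name and the example matrix are our own (hypothetical), not the lecture's:

```python
import numpy as np

def variable_reduction_nullspace(A, t):
    """Nullspace matrix via the variable reduction technique.

    Partition A = [B N], with B the first t columns (assumed
    nonsingular); then Z = [-B^{-1}N; I] satisfies A @ Z = 0.
    """
    B, N = A[:, :t], A[:, t:]
    top = -np.linalg.solve(B, N)       # -B^{-1} N block
    bottom = np.eye(A.shape[1] - t)    # identity block
    return np.vstack([top, bottom])

# Hypothetical 1 x 3 constraint matrix:
A = np.array([[1.0, 2.0, 4.0]])
Z = variable_reduction_nullspace(A, t=1)
print(np.allclose(A @ Z, 0.0))         # True: columns of Z span null(A)
```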

An alternative approach is to use the AQ factorization, which is related to the QR factorization. ◦ These approaches are based on orthogonal transformations built from the Householder transformation
$$H = I - \frac{2ww'}{w'w}$$

◦ where H is the Householder transformation and w is a vector used to "annihilate" some terms of the original A matrix. ◦ For any two distinct vectors a and b of equal length (||a|| = ||b||), there exists a Householder matrix, built with w = a - b, that transforms a into b: Ha = b.
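A small numpy check of this property (the vectors below are hypothetical and chosen to have equal norms):

```python
import numpy as np

def householder(a, b):
    """Householder matrix H with H @ a == b, valid when ||a|| == ||b||.

    Built from w = a - b; H is a reflection, so it is both
    symmetric and orthogonal.
    """
    w = a - b
    return np.eye(len(a)) - 2.0 * np.outer(w, w) / (w @ w)

a = np.array([3.0, 4.0])
b = np.array([5.0, 0.0])          # same norm as a
H = householder(a, b)
print(np.allclose(H @ a, b))      # True
```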

◦ The idea is that we can come up with a sequence of Householder transformations, with product Q, that will transform our original A matrix into a lower triangular matrix L next to a zero matrix:
$$AQ = \begin{bmatrix} L & 0 \end{bmatrix}$$

◦ As a starting point, consider the first row of our A matrix; our objective is to annihilate the 2 (or to transform the matrix in such a way as to make the 2 a zero) and the 4.

◦ Thus, we construct w so that the transformation maps the first row into a vector with zeros in those positions.

◦ Now we create a second Householder transformation to annihilate the remaining term. ◦ Multiplying the transformations together gives Q.

◦ Therefore AQ = [ L 0 ] for this example. ◦ The last column of Q, the column multiplying the zero block, is then the nullspace matrix.
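In practice the whole sequence of Householder transformations is delivered by a library QR routine (numpy's qr is built on LAPACK's Householder-based factorization). A sketch, assuming A has full row rank; `qr_nullspace` is our own name:

```python
import numpy as np

def qr_nullspace(A):
    """Orthonormal nullspace basis via a complete QR of A'.

    A' = QR implies AQ = [L 0] with L lower triangular, so the
    trailing columns of Q span null(A). Assumes full row rank.
    """
    t, n = A.shape
    Q, _ = np.linalg.qr(A.T, mode="complete")   # Q is n x n, orthogonal
    return Q[:, t:]                             # columns hit by the zero block

A = np.array([[1.0, 2.0, 4.0]])                 # hypothetical matrix again
Z = qr_nullspace(A)
print(np.allclose(A @ Z, 0.0))                  # True
```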

Linear Equality Constraints

◦ The general optimization problem with linear equality constraints can be stated as:
$$\min_x \; f(x) \quad \text{subject to} \quad Ax = b$$

This time, instead of searching over dimension n, we only have to search over dimension n - t, where t is the number of nonredundant equations in A. ◦ In the vernacular of the problem, we want to decompose the vector x into a range-specific portion, which is required to satisfy the constraints, and a null-space portion, which can be varied freely.

◦ Specifically,
$$x = Yx_Y + Zx_Z$$
where Yx_Y denotes the range-specific portion of x and Zx_Z denotes the null-space portion of x.
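One concrete choice (not necessarily the lecture's) is to take Y and Z from a complete QR factorization of A', so the two portions are orthogonal; a sketch with hypothetical data:

```python
import numpy as np

def range_null_split(A, x):
    """Split x = Y x_Y + Z x_Z into range-specific and null-space parts.

    Y spans range(A') and Z spans null(A), both orthonormal, taken
    from a complete QR of A'. Assumes A (t x n) has full row rank.
    """
    t = A.shape[0]
    Q, _ = np.linalg.qr(A.T, mode="complete")
    Y, Z = Q[:, :t], Q[:, t:]
    return Y @ (Y.T @ x), Z @ (Z.T @ x)

A = np.array([[1.0, 2.0, 4.0]])          # hypothetical constraint matrix
x = np.array([3.0, 0.0, 0.0])            # feasible for b = 3: A @ x = 3
x_range, x_null = range_null_split(A, x)
print(np.allclose(x_range + x_null, x))  # the two portions recombine to x
print(np.allclose(A @ x_null, 0.0))      # varying the null part keeps A x fixed
```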

◦ Algorithm LE (Model algorithm for solving the LEP)  LE1. [Test for convergence] If the conditions for convergence are satisfied, the algorithm terminates with x_k.  LE2. [Compute a feasible search direction] Compute a nonzero vector p_z, the unrestricted direction of the search. The actual direction of the search is then p_k = Zp_z.

 LE3. [Compute a step length] Compute a positive α_k for which f(x_k + α_k p_k) < f(x_k).  LE4. [Update the estimate of the minimum] Set x_{k+1} = x_k + α_k p_k and go back to LE1. ◦ Computation of the Search Direction  As is often the case in this course, the question of the search direction starts with the second-order Taylor series expansion. As in the unconstrained case, we derive the approximation of the objective function around some point x_k as
$$f(x_k + p) \approx f(x_k) + \nabla f(x_k)'p + \tfrac{1}{2}\,p'\nabla^2 f(x_k)\,p$$

 Substituting only feasible steps p = Zp_z for all possible steps, we derive the same expression in terms of the null-space:
$$f(x_k + Zp_z) \approx f(x_k) + \nabla f(x_k)'Zp_z + \tfrac{1}{2}\,p_z'Z'\nabla^2 f(x_k)Z\,p_z$$
 Solving for the projection based on the Newton-Raphson concept, we derive much the same step as in the unconstrained optimization problem:
$$Z'\nabla^2 f(x_k)Z\,p_z = -Z'\nabla f(x_k)$$
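A sketch of one reduced (null-space) Newton step under these equations; the quadratic test objective and all names are our own assumptions:

```python
import numpy as np

def reduced_newton_step(g, G, Z):
    """Solve (Z' G Z) p_z = -Z' g and return the feasible step p = Z p_z."""
    p_z = np.linalg.solve(Z.T @ G @ Z, -Z.T @ g)
    return Z @ p_z

# Hypothetical problem: min 0.5 x'x subject to the constraint row used
# above. For a quadratic objective, the full reduced Newton step solves
# the equality-constrained problem in one iteration.
A = np.array([[1.0, 2.0, 4.0]])
Q, _ = np.linalg.qr(A.T, mode="complete")
Z = Q[:, 1:]                              # nullspace basis of A
x = np.array([3.0, 0.0, 0.0])             # feasible start
g, G = x, np.eye(3)                       # gradient and Hessian of 0.5 x'x
x_new = x + reduced_newton_step(g, G, Z)
print(np.allclose(A @ x_new, A @ x))      # True: the step preserves feasibility
```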

As an example, assume that the maximization problem is

◦ This problem has a relatively simple gradient vector and Hessian matrix

◦ Let us start from an initial feasible solution and compute a feasible step. ◦ In this case, the reduced Newton equation above gives p_z; hence, using the concept, the step is p_k = Zp_z and the updated estimate is x_{k+1} = x_k + α_k p_k.

Linear Inequality Constraints

◦ The general optimization problem with linear inequality constraints can be written as:
$$\min_x \; f(x) \quad \text{subject to} \quad Ax \ge b$$
◦ This problem differs from the equality-constrained problem in that some of the constraints may not be active at a given iteration, or may become active at the next iteration.

◦ Algorithm LI  LI1. [Test for convergence] If the conditions for convergence are met at x_k, terminate.  LI2. [Choose which logic to perform] Decide whether to continue minimizing in the current subspace or whether to delete a constraint from the working set. If a constraint is to be deleted, go to step LI6. If the same working set is to be retained, go on to step LI3.  LI3. [Compute a feasible search direction] Compute a vector p_k by applying the null-space approach for equality constraints to the current working set.

 LI4. [Compute a step length] Compute α_k; in this case, we must determine whether the optimum step length will violate a constraint. Specifically, α_k is the smaller of the traditional optimal step and min(α_i), which is defined as the minimum distance to a constraint. If the optimum step is less than the minimum distance to another constraint, then go to LI7; otherwise, go to LI5.  LI5. [Add a constraint to the working set] If the optimum step is greater than the minimum distance to another constraint, then you have to add, or make active, the constraint associated with α_i. After adding this constraint, go to LI7.

 LI6. [Delete a constraint] If the marginal value of one of the Lagrange multipliers is negative, then the associated constraint is binding the objective function suboptimally and should be eliminated. Delete the constraint from the active set and return to LI1.  LI7. [Update the estimate of the solution] Set x_{k+1} = x_k + α_k p_k and go back to LI1.
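A compact, runnable sketch of algorithm LI, specialized (as an assumption, for brevity) to a quadratic objective 0.5 x'Gx + c'x with constraints Ax >= b; all function names are hypothetical:

```python
import numpy as np

def nullspace(A_w, n):
    """Orthonormal basis of null(A_w); the identity if the set is empty."""
    if A_w.shape[0] == 0:
        return np.eye(n)
    Q, _ = np.linalg.qr(A_w.T, mode="complete")
    return Q[:, A_w.shape[0]:]

def active_set_qp(G, c, A, b, x, max_iter=50):
    """Sketch of algorithm LI for min 0.5 x'Gx + c'x  s.t.  A x >= b.

    x must be feasible. Mirrors steps LI1-LI7: a reduced Newton step
    on the working set, a ratio test for the step length, and
    multiplier signs to decide when to delete a constraint.
    """
    n = len(x)
    W = [i for i in range(len(b)) if np.isclose(A[i] @ x, b[i])]
    for _ in range(max_iter):
        g = G @ x + c
        Z = nullspace(A[W], n)
        p = np.zeros(n)
        if Z.shape[1] > 0:                       # LI3: feasible direction
            p = Z @ np.linalg.solve(Z.T @ G @ Z, -Z.T @ g)
        if np.allclose(p, 0.0):
            if not W:
                return x                         # unconstrained stationary point
            lam, *_ = np.linalg.lstsq(A[W].T, g, rcond=None)
            if lam.min() >= -1e-10:
                return x                         # LI1: all multipliers >= 0
            W.pop(int(lam.argmin()))             # LI6: delete a constraint
            continue
        alpha, blocking = 1.0, None              # LI4: ratio test
        for i in range(len(b)):
            if i not in W and A[i] @ p < -1e-12:
                ratio = (b[i] - A[i] @ x) / (A[i] @ p)
                if ratio < alpha:
                    alpha, blocking = ratio, i
        x = x + alpha * p                        # LI7: update the estimate
        if blocking is not None:
            W.append(blocking)                   # LI5: add a constraint
    return x

# Hypothetical problem: min 0.5 x'x - 2 x_1 - 4 x_2 subject to x_1 <= 1,
# written as -x_1 >= -1. Solution: (1, 4).
G, c = np.eye(2), np.array([-2.0, -4.0])
A, b = np.array([[-1.0, 0.0]]), np.array([-1.0])
print(active_set_qp(G, c, A, b, x=np.zeros(2)))  # approx. [1. 4.]
```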

◦ A significant portion of the discussion of the LI algorithm centered on the addition or elimination of an active constraint.  The concept is identical to the minimum ratio rule in linear programming. Specifically, the minimum ratio rule in linear programming identifies the equation (row) that must leave the solution in order to maintain feasibility. The rule is to select the row with the minimum positive ratio of the current right-hand side to the a_ij coefficient in the matrix.

◦ In the nonlinear problem, we define the analogous ratio as the distance to each inactive constraint along the search direction:
$$\alpha_i = \frac{b_i - a_i'x_k}{a_i'p_k} \quad \text{for each inactive constraint with } a_i'p_k < 0$$
so that min(α_i) is the minimum distance to a constraint used in step LI4.
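The same ratio test, isolated from the sketch above (names and data hypothetical):

```python
import numpy as np

def min_ratio(A, b, x, p, working_set):
    """Distance to the nearest inactive constraint of A x >= b along p,
    i.e. min(alpha_i) in step LI4; returns (alpha, blocking index)."""
    best, blocking = np.inf, None
    for i in range(len(b)):
        if i not in working_set and A[i] @ p < 0:   # heading toward constraint i
            alpha_i = (b[i] - A[i] @ x) / (A[i] @ p)
            if alpha_i < best:
                best, blocking = alpha_i, i
    return best, blocking

# From x = (0, 0) along p = (2, 4), the constraint -x_1 >= -1 (x_1 <= 1)
# blocks at alpha = 0.5:
A, b = np.array([[-1.0, 0.0]]), np.array([-1.0])
print(min_ratio(A, b, np.zeros(2), np.array([2.0, 4.0]), []))  # (0.5, 0)
```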