Unconstrained Optimization Problem


1 Unconstrained Optimization Problem
Problem setting: Given a function $f : \mathbb{R}^n \to \mathbb{R}$, find $\min_{x \in \mathbb{R}^n} f(x)$.
First-order optimality condition: $\nabla f(x^*) = 0$.
Taylor expansion (second order): $f(x+d) = f(x) + \nabla f(x)^\top d + \tfrac{1}{2} d^\top \nabla^2 f(x)\, d + o(\|d\|^2)$.
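
As a quick numerical check (a minimal sketch, not part of the slides; the matrix $Q$ and vector $b$ below are made-up data), the gradient of a simple quadratic vanishes at its minimizer, exactly as the first-order condition requires:

```python
import numpy as np

# f(x) = 1/2 x^T Q x - b^T x has gradient Q x - b, so its unique
# minimizer solves Q x* = b (Q symmetric positive definite).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad_f(x):
    return Q @ x - b

x_star = np.linalg.solve(Q, b)   # closed-form minimizer
print(grad_f(x_star))            # ~ [0, 0]: first-order condition holds
```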

2 Unconstrained Optimization Algorithms
Gradient Descent Algorithm (steepest descent): $x^{i+1} = x^i - \lambda_i \nabla f(x^i)$, with stepsize $\lambda_i > 0$.
Newton Method: $x^{i+1} = x^i - [\nabla^2 f(x^i)]^{-1} \nabla f(x^i)$.
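
A minimal sketch of both update rules on the same toy quadratic (the data and the stepsize 0.1 are illustrative assumptions); on a quadratic, a single Newton step lands exactly on the minimizer:

```python
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # assumed toy problem
b = np.array([1.0, -1.0])
grad_f = lambda x: Q @ x - b
hess_f = lambda x: Q

x = np.zeros(2)
for _ in range(100):
    x = x - 0.1 * grad_f(x)              # gradient descent: x - lambda * grad

x0 = np.zeros(2)
y = x0 - np.linalg.solve(hess_f(x0), grad_f(x0))   # one Newton step
print(x, y)                              # both approach Q^{-1} b; Newton is exact here
```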

3 Newton-Armijo Algorithm
Start with any $x^0 \in \mathbb{R}^n$. Having $x^i$, stop if $\nabla f(x^i) = 0$; else compute $x^{i+1} = x^i + \lambda_i d^i$ where:
(i) Newton direction: $d^i = -[\nabla^2 f(x^i)]^{-1} \nabla f(x^i)$; the iteration converges globally and quadratically to the unique solution in a finite number of steps.
(ii) Armijo stepsize: $\lambda_i = \max\{1, \tfrac{1}{2}, \tfrac{1}{4}, \ldots\}$ such that Armijo's rule is satisfied: $f(x^i + \lambda_i d^i) - f(x^i) \le \delta \lambda_i \nabla f(x^i)^\top d^i$ for a fixed $\delta \in (0, \tfrac{1}{2})$.
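
A hedged sketch of the Newton-Armijo iteration described above; `f`, `grad_f`, and `hess_f` are caller-supplied, and `delta = 1e-4` is a conventional choice rather than one fixed by the slides:

```python
import numpy as np

def newton_armijo(f, grad_f, hess_f, x, delta=1e-4, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:           # stop if gradient is (numerically) zero
            return x
        d = -np.linalg.solve(hess_f(x), g)    # (i) Newton direction
        lam = 1.0                             # (ii) Armijo stepsize: try 1, 1/2, 1/4, ...
        while f(x + lam * d) - f(x) > delta * lam * (g @ d):
            lam *= 0.5                        # halve until sufficient decrease holds
        x = x + lam * d
    return x
```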

4 Why use a stepsize?
Without one (i.e., always taking the full step $\lambda_i = 1$), the iteration can overshoot and cannot be guaranteed to converge to the optimal solution!
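
A tiny made-up illustration of the failure mode: on $f(x) = x^4$, the raw update with a fixed step of 1 overshoots further on every iteration and diverges, which is exactly why a line-search stepsize is needed:

```python
grad = lambda x: 4 * x**3   # gradient of f(x) = x^4

x = 1.0
for _ in range(5):
    x = x - 1.0 * grad(x)   # fixed stepsize 1: 1 -> -3 -> 105 -> ... diverges
    print(x)
```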

5 Optimization Problem Formulation
Problem setting: Given functions $f$, $g_i$ ($i = 1, \ldots, k$) and $h_j$ ($j = 1, \ldots, m$), defined on a domain $\Omega \subseteq \mathbb{R}^n$:
$\min_{x \in \Omega} f(x)$ subject to $g_i(x) \le 0,\ i = 1, \ldots, k$ and $h_j(x) = 0,\ j = 1, \ldots, m$,
where $f$ is called the objective function and $g_i$, $h_j$ are called constraints.

6 Definitions and Notation
Feasible region: $\mathcal{F} = \{x \in \Omega : g(x) \le 0,\ h(x) = 0\}$, where $g = (g_1, \ldots, g_k)$ and $h = (h_1, \ldots, h_m)$. A solution of the optimization problem is a point $x^* \in \mathcal{F}$ such that there is no $x \in \mathcal{F}$ for which $f(x) < f(x^*)$; such an $x^*$ is called a global minimum.

7 Definitions and Notation
A point $x^* \in \mathcal{F}$ is called a local minimum of the optimization problem if $\exists\, \varepsilon > 0$ such that $f(x) \ge f(x^*)$ for all $x \in \mathcal{F}$ with $\|x - x^*\| < \varepsilon$. At the solution $x^*$, an inequality constraint $g_i(x) \le 0$ is said to be active if $g_i(x^*) = 0$; otherwise it is called an inactive constraint. Writing $g_i(x) + s_i = 0$ with $s_i \ge 0$, where $s_i$ is called the slack variable, the constraint is active exactly when $s_i = 0$.

8 Definitions and Notation
Removing an inactive constraint from an optimization problem will NOT affect the optimal solution. This is a very useful feature in SVM. If $\mathcal{F} = \mathbb{R}^n$, the problem is called an unconstrained minimization problem. The least squares problem is in this category. The SSVM formulation is in this category. Without a convexity assumption, it is difficult to find the global minimum.

9 Gradient and Hessian
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a differentiable function. The gradient of $f$ at a point $x$ is defined as $\nabla f(x) = \left[ \frac{\partial f}{\partial x_1}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right]^\top$. If $f$ is a twice differentiable function, the Hessian matrix of $f$ at a point $x$ is defined as the $n \times n$ matrix $\nabla^2 f(x)$ with entries $[\nabla^2 f(x)]_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}(x)$.
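
A small sketch (the test function is an arbitrary assumption) approximating the gradient by central finite differences and checking it against the hand-computed gradient:

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference per coordinate
    return g

f = lambda x: x[0]**2 + 3 * x[0] * x[1]          # gradient: [2*x0 + 3*x1, 3*x0]
print(num_grad(f, np.array([1.0, 2.0])))         # ~ [8, 3]
```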

10 Algebra of the Classification Problem 2-Category Linearly Separable Case
Given $m$ points in the $n$-dimensional real space $\mathbb{R}^n$, represented by an $m \times n$ matrix $A$, each point belonging to class $A^+$ or $A^-$. Membership of each point in the classes is specified by an $m \times m$ diagonal matrix $D$: $D_{ii} = +1$ if $A_i \in A^+$ and $D_{ii} = -1$ if $A_i \in A^-$. Separate $A^+$ and $A^-$ by two bounding planes such that: $A_i w \ge \gamma + 1$ for $D_{ii} = +1$ and $A_i w \le \gamma - 1$ for $D_{ii} = -1$. More succinctly: $D(Aw - e\gamma) \ge e$, where $e = [1, \ldots, 1]^\top$.
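
A quick numpy check (the toy points and the hand-picked plane are assumptions) that the succinct matrix form $D(Aw - e\gamma) \ge e$ reproduces the two bounding-plane conditions:

```python
import numpy as np

A = np.array([[2.0, 2.0], [3.0, 3.0],    # two points in class A+
              [0.0, 0.0], [-1.0, 0.0]])  # two points in class A-
D = np.diag([1.0, 1.0, -1.0, -1.0])      # class membership matrix
w, gamma = np.array([1.0, 1.0]), 2.0     # a plane x.w = gamma separating them
e = np.ones(4)
print(D @ (A @ w - e * gamma) >= e)      # all True: both bounding planes hold
```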

11 Robust Linear Programming (RLP)
Preliminary approach to Support Vector Machines:
$\min_{w, \gamma, y} e^\top y$ s.t. $D(Aw - e\gamma) + y \ge e$, $y \ge 0$ (LP)
where $y$ is a nonnegative slack (error) vector. The term $e^\top y$, the 1-norm measure of the error vector, is called the training error. For the linearly separable case, $y = 0$ at the solution of (LP).
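
A hedged sketch solving (LP) with scipy's `linprog` on the toy data above; stacking the variables as $[w, \gamma, y]$ and flipping the constraint to $\le$ form are implementation choices, not part of the slides:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])     # diagonal of D
m, n = A.shape

c = np.concatenate([np.zeros(n + 1), np.ones(m)])      # objective: e^T y
# D(Aw - e*gamma) + y >= e  <=>  -(d_i * A_i) w + d_i * gamma - y_i <= -1
A_ub = np.hstack([-d[:, None] * A, d[:, None], -np.eye(m)])
b_ub = -np.ones(m)
bounds = [(None, None)] * (n + 1) + [(0, None)] * m    # only y is sign-constrained

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[:n], res.x[n], res.x[n + 1:])  # w, gamma, y (y = 0: separable case)
```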

12 Support Vector Machines Formulation
Solve the quadratic program for some $C > 0$:
min $\tfrac{1}{2}\|w\|_2^2 + C e^\top y$
s.t. $D(Aw - e\gamma) + y \ge e$, $y \ge 0$,
where $D_{ii} = \pm 1$ denotes $A^+$ or $A^-$ membership. The margin between the bounding planes is $\tfrac{2}{\|w\|_2}$; it is maximized by minimizing the reciprocal of the margin, $\tfrac{\|w\|_2}{2}$ (equivalently $\tfrac{1}{2}\|w\|_2^2$). Different error functions and measures of margin will lead to different SVM formulations.
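
A sketch of this quadratic program written directly in the $(w, \gamma, y)$ variables using cvxpy (an assumed dependency; the toy data and $C = 1$ are arbitrary):

```python
import numpy as np
import cvxpy as cp

A = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])     # diagonal of D
m, n = A.shape
C = 1.0

w, gamma, y = cp.Variable(n), cp.Variable(), cp.Variable(m)
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(y))
constraints = [cp.multiply(d, A @ w - gamma) + y >= 1, y >= 0]
cp.Problem(objective, constraints).solve()
print(w.value, gamma.value)              # maximum-margin plane for the toy data
```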

13 Linear Program and Quadratic Program
An optimization problem in which the objective function and all constraints are linear functions is called a linear programming problem. The RLP formulation is in this category. If the objective function is convex quadratic while the constraints are all linear, the problem is called a convex quadratic programming problem. The standard SVM formulation is in this category.

14 The Most Important Concept in Optimization (minimization)
A point is said to be an optimal solution of an unconstrained minimization problem if there exists no descent direction at that point. A point is said to be an optimal solution of a constrained minimization problem if there exists no feasible descent direction at that point. A descent direction might still exist, but moving along it would leave the feasible region.
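
The test behind "descent direction" is just the sign of the directional derivative: $d$ is a descent direction at $x$ iff $\nabla f(x)^\top d < 0$. A minimal sketch with an assumed gradient function:

```python
import numpy as np

def is_descent_direction(grad_f, x, d):
    return grad_f(x) @ d < 0                 # directional derivative is negative

grad_f = lambda x: 2 * x                     # gradient of f(x) = ||x||^2
x = np.array([1.0, 1.0])
print(is_descent_direction(grad_f, x, -grad_f(x)))            # True
print(is_descent_direction(grad_f, x, np.array([1.0, 0.0])))  # False
```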

