# Semidefinite Programming Using Conical Hull Representation of Semidefinite Matrices

by Rianto Adhy Sasongko, Supervisor: Dr. J. C. Allwright

Introduction

Constrained optimization problems occur throughout engineering, especially in design, where trade-offs between many aspects cannot be avoided. Many engineering optimization problems can be approached as convex problems. Optimization problems over matrix variables are often found in the control systems field: Semidefinite Programming.

Outline

- Backgrounds
  - Convex Optimization
  - Semidefinite Programming
  - Duality and Multiplier Theory
- Conical Hull Approach
  - Conical Hull of Positive Semidefinite Matrices
  - Optimization over Positive Semidefinite Matrices
- Common Lyapunov Function
  - Background & problem definition
  - Conical hull approach
  - Optimization procedure
  - Feasibility
  - Preliminary result
- Future Works

Backgrounds: Convex Optimization

A general optimization problem: minimize f(x) subject to x ∈ Ω. The problem is a convex optimization problem when the cost f is a convex function and the feasible set Ω is a convex set.

Backgrounds: Semidefinite Programming

Optimization problems which involve symmetric positive semidefinite matrices as variables. In the standard primal form: minimize tr(CX) subject to tr(AiX) = bi, X ⪰ 0. The definiteness signs ≻/⪰ (positive definite/semidefinite) and ≺/⪯ (negative definite/semidefinite) play the same role for matrices that > and < play for scalars.
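The matrix ordering can be made concrete with a small numerical check. The sketch below (plain NumPy; the function names are my own, not from the slides) tests A ⪰ B by checking that A − B has no negative eigenvalues:

```python
import numpy as np

def is_psd(A, tol=1e-9):
    """Return True if the symmetric matrix A is positive semidefinite."""
    return np.all(np.linalg.eigvalsh(A) >= -tol)

def succeq(A, B):
    """Matrix analogue of A >= B: true iff A - B is positive semidefinite."""
    return is_psd(A - B)

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3 -> PSD
B = np.eye(2)

print(is_psd(A))      # A is positive semidefinite
print(succeq(A, B))   # A - B = [[1,1],[1,1]] has eigenvalues 0 and 2, so A "⪰" B
```

Unlike the scalar case, this ordering is only partial: for many symmetric pairs, neither A ⪰ B nor B ⪰ A holds.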

Backgrounds: Duality and Multiplier Theory

For a primal problem there exists a dual problem which, for a convex problem, has the same optimal value as the primal. q(λ) is called the dual function; it forms a lower bound on the optimal value.

Backgrounds: Duality and Multiplier Theory

The dual function is obtained from q(λ) = min over x of L(x, λ), where L(x, λ): Rn × Rm → R is the Lagrangian function, composed from the cost function and the constraints, and λ is the Lagrange multiplier. Having found the optimal λ* by maximizing the dual q(λ), the optimal x* can be obtained by minimizing L(x, λ*) over x.
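As a minimal illustration (a toy problem of my own, not from the slides): for minimize x² subject to x ≥ 1, the Lagrangian is L(x, λ) = x² + λ(1 − x); minimizing over x gives x = λ/2, so q(λ) = λ − λ²/4, which is maximized at λ* = 2 and recovers x* = λ*/2 = 1:

```python
import numpy as np

# Toy problem: minimize x^2 subject to x >= 1, i.e. the constraint 1 - x <= 0.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x), with multiplier lam >= 0.
def q(lam):
    """Dual function q(lam) = min_x L(x, lam), attained at x = lam / 2."""
    x = lam / 2.0
    return x**2 + lam * (1.0 - x)

# Maximize the dual over a grid of multiplier values.
lams = np.linspace(0.0, 4.0, 4001)
lam_star = lams[np.argmax(q(lams))]
x_star = lam_star / 2.0            # recover the primal minimizer from lam*

print(lam_star, x_star)  # approximately 2.0 and 1.0
```

Note that the dual optimal value q(λ*) = 1 equals the primal optimal value x*² = 1, as the slide's strong-duality statement for convex problems predicts.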

backgrounds Duality and Multiplier Theory
Visualization of Lagrange multiplier

Conical Hull Approach: Conical Hull of Positive Semidefinite Matrices

The set of symmetric positive semidefinite matrices is denoted S≥n. Recall: an n×n symmetric matrix can be represented as a vector in Rr, where r = n(n+1)/2, via a basis matrix, so that each symmetric matrix corresponds to a unique coordinate vector.
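The r = n(n+1)/2 vectorization can be sketched as a standard "svec" construction (the √2 scaling of off-diagonal entries, which makes tr(AB) equal the vector dot product, and the function names are my own choices, not necessarily the basis used on the slides):

```python
import numpy as np

def svec(X):
    """Map an n x n symmetric matrix to R^r, r = n(n+1)/2.
    Off-diagonals are scaled by sqrt(2) so that tr(A @ B) == svec(A) @ svec(B)."""
    n = X.shape[0]
    rows, cols = np.triu_indices(n)
    scale = np.where(rows == cols, 1.0, np.sqrt(2.0))
    return scale * X[rows, cols]

def smat(v, n):
    """Inverse of svec: rebuild the symmetric matrix from its vector."""
    rows, cols = np.triu_indices(n)
    scale = np.where(rows == cols, 1.0, np.sqrt(2.0))
    X = np.zeros((n, n))
    X[rows, cols] = v / scale
    return X + np.triu(X, 1).T

A = np.array([[1.0, 2.0], [2.0, 3.0]])
v = svec(A)                         # length r = 3 for n = 2
print(v.shape)                      # (3,)
print(np.allclose(smat(v, 2), A))   # the round trip recovers A
```

Because the map preserves inner products, the geometry of the PSD cone (angles and distances used later in the algorithm) carries over unchanged to Rr.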

Conical Hull Approach: Optimization over the Cone of PSD Matrices

- S≥n is a convex cone, hence a convex set.
- A linear map of a convex set is also a convex set.
- The set of positive definite matrices is the interior, int(S≥n).

A linear mapping of the cone of PSD matrices can therefore be used to represent the set satisfying a particular constraint. With all the constraints represented as cones, the feasible set of the problem is their intersection, and minimization/maximization can be carried out over this feasible set.

Conical Hull Approach: Optimization over the Cone of PSD Matrices

Theorem: optimization over a set Ω. The theorem can be applied to optimization over a set obtained from a linear mapping of Ω, and it extends to optimization over cone(Ω). Using the minimizer of g to determine the search directions within a cone guarantees that the search stays inside the associated cone.

Common Lyapunov Function: Background & Problem Definition

The problem: find P = PT > 0 such that AiTP + PAi < 0 for every system matrix Ai. Here a PSD cone and a set of cones representing the Lyapunov inequalities are formed; the problem then becomes one of finding a common point in the intersection of all the cones involved (provided that intersection is nonempty).
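What a solution must satisfy is easy to check numerically. In this sketch the matrices A1, A2 and the candidate P = I are my own illustrative choices, not the systems from the slides:

```python
import numpy as np

def is_pos_def(M, tol=1e-9):
    """True if the symmetric matrix M is positive definite."""
    return np.all(np.linalg.eigvalsh(M) > tol)

def is_common_lyapunov(P, A_list):
    """Check P = P^T > 0 and A_i^T P + P A_i < 0 for every system matrix A_i."""
    if not (np.allclose(P, P.T) and is_pos_def(P)):
        return False
    return all(is_pos_def(-(A.T @ P + P @ A)) for A in A_list)

# Two illustrative stable systems (hypothetical, not the ones on the slides).
A1 = np.array([[-1.0, 0.0], [0.0, -2.0]])
A2 = np.array([[-2.0, 1.0], [0.0, -1.0]])
P = np.eye(2)   # candidate common Lyapunov matrix

print(is_common_lyapunov(P, [A1, A2]))  # True: P certifies both systems
```

A common P certifies stability of every system in the family simultaneously, which is why the feasibility problem is posed as an intersection of cones rather than m separate Lyapunov equations.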

Common Lyapunov Function: Conical hull approach

Vectorizing these inequalities gives a linear map for each system. The solution is a point in the cone of PSD matrices which also lies in the set of points that make every Lyapunov inequality negative definite. The problem is then mapped into Rr using the basis W.

Common Lyapunov Function: Optimization procedure

The algorithm starts from a point in each cone and moves these points toward the intersection. Search directions that drive the points into the intersection are needed: a minimum Euclidean-distance cost function generates these directions. The task becomes minimizing the Euclidean distance between each pair of points in different cones, i.e. minimizing the distance between the cones.

Common Lyapunov Function: Optimization procedure

- The cost gradient gj is calculated at the reference point p0j.
- Minimizers w.r.t. gj are obtained over each set Ω (an LP problem).
- The search direction is obtained from the minimizer.
- Minimization of the cost function is then carried out along this search line.
- The cost minimizer pminj updates the reference point, until the iteration converges to an optimal point.
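The shape of this loop can be sketched with a simplified stand-in: alternating Euclidean projections between two convex sets in place of the LP-based search step. The sets and names below are my own illustration, not the cones from the slides:

```python
import numpy as np

# Two convex sets with nonempty intersection (an illustrative stand-in for
# the cones in the algorithm):
#   S1 = half-space {x : x[0] >= 1},   S2 = ball of radius 2 around the origin.
def proj_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 1.0)       # nearest point with x[0] >= 1
    return y

def proj_ball(x, radius=2.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

# Repeatedly move the iterate toward the other set until the gap closes.
p = np.array([5.0, 5.0])
for _ in range(200):
    p = proj_ball(proj_halfspace(p))

print(p)   # a point lying (to tolerance) in both S1 and S2
```

As in the slides' procedure, each step moves the current point toward the other set along a distance-reducing direction; the real algorithm replaces the closed-form projections with line searches along LP-derived directions inside each cone.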

Common Lyapunov Function: Optimization procedure

The search is always directed toward a point in the associated set Ω (not cone(Ω)). The problem of reaching a point in the intersection of the cones can be viewed as minimizing the distance between the cones, but the minimum distance obtained this way is the distance between the sets Ω, M1-1Ω, …, Mm-1Ω, not between their cones. To overcome this, every vector involved is normalized; the distance then carries the notion of an angle deviation between the vectors, and minimizing the angle deviation gives a common point in the intersection of the cones (distance = angle deviation = 0).

Common Lyapunov Function: Feasibility

To ensure that the point minimizing the distance between the cones lies in their intersection, additional constraints must be incorporated. These constraints are based on the fact that if minimization of the linear cost -gTx over a convex set S gives the point x*, then for any point x inside S, -gTx ≥ -gTx*. That is, the cost value at any point inside the set S is at least the cost value of the minimizer, for any minimization vector g.
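This optimality condition is easy to verify numerically for a simple convex set. Below, S is the unit disc (my own example), for which minimizing -gTx over S gives x* = g/‖g‖ in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

g = np.array([3.0, 4.0])
x_star = g / np.linalg.norm(g)   # minimizer of -g.T x over the unit disc

# Sample points inside the unit disc and check -g.T x >= -g.T x* for all of them.
pts = rng.uniform(-1.0, 1.0, size=(1000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]

print(np.all(-pts @ g >= -g @ x_star))  # every feasible cost exceeds the minimum
```

The same inequality, written once per cone, is what the algorithm appends as feasibility constraints.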

Common Lyapunov Function: Feasibility

For a point to be inside the intersection of several sets, it must satisfy the above condition for all minimization vectors gi associated with each cone.

Common Lyapunov Function: Feasibility

The additional constraints can be incorporated into the minimization problem via the multiplier technique: a multiplier vector, the vector of minimizers associated with the gradient gj at each cone, and a matrix composed from the gradient vectors gj at each cone. The multipliers force the solution to move inside the intersection.

Common Lyapunov Function: Preliminary Result

Preliminary results (1): given the problem of finding a common Lyapunov function for the following stable systems, the algorithm produces a solution P which is positive definite and makes the Lyapunov inequality AiTP + PAi < 0 hold for all the systems involved.

Common Lyapunov Function: Preliminary Result

The following pictures show how the algorithm finds the solution by minimizing the distance between points (the cost function) and how it drives the solution into the intersection set.

Common Lyapunov Function: Preliminary Result

Preliminary results (2): given the problem of finding a common Lyapunov function for the following stable systems:

Common Lyapunov Function Preliminary Result
The algorithm produces the solution

Conclusions

- The approach of using the conical hull of PSD matrices provides a framework for solving SDP problems.
- The approach is based on forming sets that satisfy the constraints; optimization over the feasible set can then be carried out using the results of the theorem on minimization over a PSD set.
- The common Lyapunov problem can be solved using the cone approach.

Future Works

- Establish the convergence proof.
- Improve the convergence rate (e.g. by finding a better search-direction method, modifying the cost function, or using a penalty method).
- Explore the approach for other forms of LMI constraints and solve the linear minimization problem.
- Extend the algorithm to problems related to bilinear systems.