
**Semidefinite Programming Using the Conical Hull Representation of Semidefinite Matrices**

by Rianto Adhy Sasongko. Supervisor: Dr. J. C. Allwright

**Introduction**

- Constrained optimization problems occur throughout engineering, especially in design, where trade-offs between many competing aspects cannot be avoided.
- Many engineering optimization problems can be approached as convex problems.
- Optimization problems over matrix variables are often found in the control systems field: semidefinite programming.

**Outline**

- Backgrounds: Convex Optimization; Semidefinite Programming; Duality and Multiplier Theory
- Conical Hull Approach: Conical Hull of Positive Semidefinite Matrices; Optimization over Positive Semidefinite Matrices
- Common Lyapunov Function: Background & Problem Definition; Conical Hull Approach; Optimization Procedure; Feasibility; Preliminary Results
- Future Works

**Backgrounds: Convex Optimization**

Optimization problem: minimize f(x) subject to g_i(x) ≤ 0, i = 1, …, m. Convex optimization problem: the same form in which f and every g_i are convex, so that the feasible set is convex.
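The defining inequality of convexity, f(tx + (1−t)y) ≤ t f(x) + (1−t) f(y) for t ∈ [0, 1], can be sampled numerically; a minimal sketch with the hypothetical cost f(x) = x² (not from the slides):

```python
import random

def f(x):
    """A convex cost: f(x) = x^2."""
    return x * x

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    t = random.random()
    # convexity: f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
print("convexity inequality holds on all samples")
```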

**Backgrounds: Semidefinite Programming**

Optimization problems that involve symmetric positive semidefinite matrices as variables. Formulation: minimize a linear cost over matrix variables subject to linear constraints and a semidefiniteness constraint X ⪰ 0. The matrix ordering operators ⪰/⪯ (positive/negative semidefinite) and ≻/≺ (definite) carry the same notion as > or < do for scalars.
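For intuition, X ⪰ 0 means every eigenvalue of X is nonnegative; for a 2 × 2 symmetric matrix this can be checked in closed form (a small illustrative sketch, not from the slides):

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    dev = math.hypot((a - c) / 2.0, b)
    return mean - dev, mean + dev

def is_psd2(a, b, c, tol=1e-12):
    """X >= 0 (positive semidefinite) iff both eigenvalues are nonnegative."""
    lo, _ = eig_sym2(a, b, c)
    return lo >= -tol

print(is_psd2(2.0, 1.0, 2.0))  # eigenvalues 1 and 3 -> True
print(is_psd2(1.0, 2.0, 1.0))  # eigenvalues -1 and 3 -> False
```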

**Backgrounds: Duality and Multiplier Theory**

For a primal problem there exists a dual problem which, for a convex problem (under mild regularity conditions), has the same optimal value as the primal. q(λ) is called the dual function; it forms a lower bound on the optimal value.


The dual function is obtained from q(λ) = min_x L(x, λ), where L(x, λ): R^n × R^m → R is the Lagrangian function, composed from the cost function and the constraints, and λ is the Lagrange multiplier. After finding the optimal λ* by maximizing the dual function q(λ), the optimal x* can be obtained by solving min_x L(x, λ*).
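The recipe above can be traced on a toy scalar problem (a hypothetical example, not from the slides): minimize x² subject to x ≥ 1. The Lagrangian is L(x, λ) = x² + λ(1 − x), the inner minimization gives x = λ/2 and q(λ) = λ − λ²/4, and maximizing q recovers λ* = 2 and x* = λ*/2 = 1:

```python
def q(lam):
    """Dual function of min x^2 s.t. x >= 1: inner min of L(x,lam)=x^2+lam*(1-x)."""
    x_star = lam / 2.0  # stationary point of the Lagrangian in x
    return x_star**2 + lam * (1.0 - x_star)

# maximize the (concave) dual q over lam >= 0 on a coarse grid
lam_star = max((k / 100.0 for k in range(0, 501)), key=q)
x_star = lam_star / 2.0
print(lam_star, x_star, q(lam_star))  # -> 2.0 1.0 1.0
```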


[Figure: visualization of the Lagrange multiplier]

**Conical Hull Approach: Conical Hull of Positive Semidefinite Matrices**

The set S≥n of symmetric positive semidefinite matrices. Recall: the space of n × n symmetric matrices can be represented in R^r, where r = n(n+1)/2, via a basis of matrices W, so that each symmetric matrix corresponds to a unique coordinate vector in R^r.
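The slide's basis W is not reproduced in this transcript; a common concrete choice is the svec embedding sketched below, where the √2 scaling of off-diagonal entries (an assumption here) makes the dot product in R^r equal the Frobenius inner product of the matrices:

```python
import math

def svec(S):
    """Map an n x n symmetric matrix (nested lists) to R^r, r = n(n+1)/2.
    Off-diagonal entries are scaled by sqrt(2) so that the dot product
    in R^r equals the Frobenius inner product trace(A B)."""
    n = len(S)
    return [S[i][j] if i == j else math.sqrt(2.0) * S[i][j]
            for i in range(n) for j in range(i, n)]

A = [[1.0, 2.0], [2.0, 3.0]]
B = [[4.0, 0.0], [0.0, 5.0]]
dot = sum(a * b for a, b in zip(svec(A), svec(B)))
frob = sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))
print(len(svec(A)), dot, frob)  # r = 3, and both inner products equal 19.0
```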

**Conical Hull Approach: Optimization over the Cone of PSD Matrices**

- The cone over a convex hull is a convex set; in particular, S≥n is a convex set.
- A linear map of a convex set is also a convex set.
- The set of positive definite matrices is int(S≥n).

A linear mapping of the cone of PSD matrices can therefore be used to represent the set satisfying a particular constraint. With all the constraints represented as cones, the feasible set of the problem is obtained, and minimization/maximization can be carried out over this feasible set.


Theorem: optimization over a set Ω. The theorem can be applied to optimization over a set obtained from a linear mapping of Ω, and it extends to optimization over cone(Ω). Using the minimizer of g to determine the search directions in a cone guarantees that the search stays inside the associated cone.

**Common Lyapunov Function: Background & Problem Definition**

The problem: find P = P^T > 0 such that A_i^T P + P A_i < 0 for every system i. In this problem, a PSD cone and a set of cones representing the Lyapunov inequalities are formed. The problem then becomes one of finding a common point in the intersection of all the cones involved (provided the intersection is nonempty).
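The left-hand side A^T P + P A is linear in P, which is what allows each Lyapunov inequality to be represented through a linearly mapped cone; a quick numerical check of this linearity, with hypothetical 2 × 2 matrices:

```python
def lyap_lhs(A, P):
    """A^T P + P A for 2x2 matrices given as nested lists."""
    n = 2
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    AtP, PA = mul(At, P), mul(P, A)
    return [[AtP[i][j] + PA[i][j] for j in range(n)] for i in range(n)]

A  = [[-1.0, 0.5], [0.0, -2.0]]   # hypothetical system matrix
P1 = [[2.0, 0.0], [0.0, 1.0]]
P2 = [[1.0, 0.5], [0.5, 3.0]]
Psum = [[P1[i][j] + P2[i][j] for j in range(2)] for i in range(2)]
lhs_sum = lyap_lhs(A, Psum)
sum_lhs = [[lyap_lhs(A, P1)[i][j] + lyap_lhs(A, P2)[i][j] for j in range(2)]
           for i in range(2)]
ok = all(abs(lhs_sum[i][j] - sum_lhs[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True: the Lyapunov operator is linear in P
```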

**Common Lyapunov Function: Conical Hull Approach**

Taking the vectorized form of these inequalities, the solution will be a point in the cone of PSD matrices which also lies in the set of points that make the inequalities negative definite. The problem is then mapped into R^r using the basis W.

**Common Lyapunov Function: Optimization Procedure**

The algorithm starts from a point in each cone and moves these points toward the intersection. Search directions that drive the points into the intersection are needed; a minimum-Euclidean-distance cost function can generate these directions. The task becomes minimizing the Euclidean distance between each pair of points taken from different cones, i.e., minimizing the distance between the cones.


1. The cost gradient g_j is calculated at the reference point p_j^0.
2. A minimizer w.r.t. g_j is obtained in each set Ω (an LP problem).
3. The search direction is obtained from the minimizer.
4. The cost function is then minimized along this search line.
5. The cost minimizer p_j^min updates the reference point; the iteration repeats until it converges to an optimal point.
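The gradient/LP details above are not fully recoverable from this transcript; as a simplified stand-in for the idea of iteratively moving a reference point in each set toward the others along distance-minimizing directions, alternating Euclidean projections between two convex sets (two hypothetical disks here) converge to the minimum-distance pair:

```python
import math

def proj_disk(p, c, r):
    """Euclidean projection of point p onto the disk of center c, radius r."""
    dx, dy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p
    return (c[0] + r * dx / d, c[1] + r * dy / d)

# start from an arbitrary point in set 1 and alternate projections
p = (0.0, 0.5)
for _ in range(100):
    q = proj_disk(p, (3.0, 0.0), 1.0)  # closest point in set 2 to p
    p = proj_disk(q, (0.0, 0.0), 1.0)  # closest point in set 1 to q
print(p, q)  # converges to (1.0, 0.0) and (2.0, 0.0), the min-distance pair
```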


The search is always directed to a point in the associated set Ω (not cone(Ω)), whereas reaching a point in the intersection of the cones requires minimizing the distance between the cones. The minimum distance obtained therefore represents the distance between the sets Ω, M_1^{-1}Ω, …, M_m^{-1}Ω, not between their cones, as is actually wanted. To overcome this, every vector involved is normalized; the distance then carries the notion of an angle deviation between the vectors, and minimizing the angle deviation can give a common point in the intersection of the cones (distance = angle deviation = 0).
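A tiny numerical illustration of the normalization step (with hypothetical vectors): two vectors on the same ray of a cone are far apart in Euclidean distance, but after normalization their distance, which now measures angle deviation, vanishes:

```python
import math

def normalize(v):
    """Scale v to unit length, so distances measure angle deviation only."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

u = [1.0, 2.0]
w = [3.0, 6.0]          # same ray as u, different magnitude
raw = math.dist(u, w)   # large Euclidean distance
ang = math.dist(normalize(u), normalize(w))
print(raw, ang)         # raw is large, ang is (numerically) zero
```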

**Common Lyapunov Function: Feasibility**

To ensure that the point minimizing the distance between the cones lies in their intersection, additional constraints should be incorporated. The constraints are based on the fact that if minimization over a convex set S w.r.t. a linear cost vector -g gives a point x*, then for any point x inside S the relation -g^T x ≥ -g^T x* holds. In other words, the cost value at any point inside S is no smaller than the cost value at the minimizer, for any minimization vector g.
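This optimality condition is easy to spot-check numerically; with the hypothetical choice S = [0, 1]^2 and g = (1, 2), minimizing -g^T x over S gives x* = (1, 1), and every sampled point of S costs at least as much as the minimizer:

```python
import random

g = (1.0, 2.0)

def cost(x):
    """Linear cost -g . x, minimized over the unit square S."""
    return -(g[0] * x[0] + g[1] * x[1])

x_star = (1.0, 1.0)  # minimizer of -g.x over S = [0,1]^2
random.seed(0)
for _ in range(1000):
    x = (random.random(), random.random())
    assert cost(x) >= cost(x_star)  # every x in S costs at least the minimizer
print("optimality condition verified on all samples")
```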


For a point to lie inside the intersection of several sets, it must satisfy the above condition for all minimization vectors g_i associated with each cone.


The additional constraints can be incorporated into the minimization problem via the multiplier technique, using a multiplier, the vector of minimizers associated with the gradient g_j at each cone, and a matrix composed from the gradient vectors g_j at each cone; together these force the solution to move inside the intersection.

**Common Lyapunov Function: Preliminary Results**

Preliminary results (1): given the problem of finding a common Lyapunov function for a set of stable systems (the system matrices are given on the original slide), the algorithm produces a solution P which is positive definite and makes the Lyapunov inequality A_i^T P + P A_i < 0 hold for all the systems involved.
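The slide's system matrices and solution are not reproduced in this transcript; with hypothetical data, verifying such a claim amounts to checking negative definiteness of A_i^T P + P A_i for each system (for a symmetric 2 × 2 matrix M, the Sylvester test is M[0][0] < 0 and det M > 0):

```python
def lyap_lhs(A, P):
    """A^T P + P A for 2x2 matrices given as nested lists."""
    n = 2
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    AtP, PA = mul(At, P), mul(P, A)
    return [[AtP[i][j] + PA[i][j] for j in range(n)] for i in range(n)]

def is_nd2(M, tol=1e-12):
    """Sylvester test for negative definiteness of a symmetric 2x2 M."""
    return M[0][0] < -tol and M[0][0] * M[1][1] - M[0][1] * M[1][0] > tol

A1 = [[-1.0, 0.0], [0.0, -1.0]]  # hypothetical stable systems,
A2 = [[-2.0, 1.0], [0.0, -1.0]]  # not the slide's data
P  = [[1.0, 0.0], [0.0, 1.0]]    # candidate common Lyapunov matrix P = I
print(all(is_nd2(lyap_lhs(A, P)) for A in (A1, A2)))  # True
```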


The following pictures show how the algorithm finds the solution by minimizing the distance between points (the cost function) and how it drives the solution into the intersection set.


Preliminary results (2): the same problem, finding a common Lyapunov function for another set of stable systems (given on the original slide).


The algorithm again produces a solution (the resulting matrix is given on the original slide).

**Conclusions**

- The conical hull representation of PSD matrices provides a framework for solving SDP problems.
- The approach is based on forming the sets that satisfy the constraints; optimization over the feasible set can then be carried out using the results of the theorem on minimization over a PSD set.
- The common Lyapunov problem can be solved using the cone approach.

**Future Works**

- Establish the convergence proof.
- Improve the convergence rate (e.g., by finding a better search-direction method, modifying the cost function, or using a penalty method).
- Explore the approach for other forms of LMI constraints and solve the linear minimization problem.
- Extend the algorithm to problems related to bilinear systems.
