1 (Convex) Cones
Def: K ⊆ R^n is a (convex) cone provided it is closed under nonnegative linear combinations, i.e. a_1, …, a_p ∈ K, λ_1, …, λ_p ≥ 0 ⟹ ∑_{i=1}^p λ_i a_i ∈ K.
(Note: usually a cone is defined as closed only under nonnegative scalar multiplication, but we consider convex cones here.)
Observations (characteristics):
- Subspaces are cones.
- For any family of cones {K_i : i ∈ I}, the intersection ⋂_{i∈I} K_i is also a cone.
- Any nonempty cone contains 0.
- If K_1, K_2 are cones, then so is K_1 + K_2 = {x + y : x ∈ K_1, y ∈ K_2}.
- Halfspaces are cones: H = {x ∈ R^n : a'x ≤ 0}.
Linear Programming 2012

2 Description of cones
Any subset A ⊆ R^n generates a cone K(A) (define K(∅) = {0}):
K(A) = {λ_1 a_1 + … + λ_p a_p : p ≥ 1, λ_i ∈ R_+ for all i, a_i ∈ A for all i}, called the "conical span of A".
The "conical hull of A" = ⋂ {K_i : K_i ⊇ A, K_i is a cone} (outside description).
They are the same.
The finite basis result is false for cones (e.g. ice cream cones). Hence we restrict our attention to cones with a finite conical basis.
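The "inside" (conical span) and "outside" (intersection) descriptions can be sanity-checked numerically. A minimal Python sketch, where the generating set A = {(1,0), (1,1)} and its wedge description {(x, y) : 0 ≤ y ≤ x} are invented for illustration and are not from the slides:

```python
# Sampled nonnegative combinations of a1, a2 should all land in the
# constrained description of the same cone.
import itertools

a1, a2 = (1.0, 0.0), (1.0, 1.0)

def conical_comb(lam1, lam2):
    """Return lam1*a1 + lam2*a2, a point of the conical span."""
    return (lam1 * a1[0] + lam2 * a2[0], lam1 * a1[1] + lam2 * a2[1])

def in_wedge(x, y, eps=1e-9):
    """Constrained description of the same cone: 0 <= y <= x."""
    return -eps <= y <= x + eps

lams = [0.0, 0.5, 1.0, 2.5, 7.0]
assert all(in_wedge(*conical_comb(l1, l2))
           for l1, l2 in itertools.product(lams, lams))
```

This only checks one containment on sample points, of course; the reverse containment is what the equality of the two descriptions asserts.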

3 For any A ⊆ R^n, define the (conical) dual of A to be
A⁺ = {x ∈ R^n : Ax ≤ 0}, where Ax ≤ 0 means a'x ≤ 0 for all a ∈ A. (Some people use Ax ≥ 0.)
It is called a constrained cone since it is the solution set of some homogeneous inequalities. When A is a cone, A⁺ is called the dual cone (or polar cone) of A.
Note that A⁺ is always a cone, namely a constrained cone. For A : m×n, A⁺ (with the rows of A regarded as the vectors in the set A) is finitely constrained (a polyhedron).
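For a finite A the defining test for A⁺ is directly checkable. A small Python sketch (the generators below are a made-up example, not the slides' figure):

```python
# Membership test for the conical dual A+ = {x : a'x <= 0 for all a in A},
# for a finite set A of generators given as tuples.
def in_dual(A, x, eps=1e-9):
    """True iff a'x <= 0 for every a in A (up to a small tolerance)."""
    return all(sum(ai * xi for ai, xi in zip(a, x)) <= eps for a in A)

A = [(1.0, 0.0), (1.0, 1.0)]          # generators, viewed as rows of a matrix
assert in_dual(A, (-1.0, 0.0))        # a1'x = -1, a2'x = -1: x is in A+
assert not in_dual(A, (1.0, 1.0))     # a2'x = 2 > 0: x is not in A+
```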

4 (Figure) The (conical) dual of A ⊆ R^n: A = {a_1, a_2, a_3}, with A⁺ = {x : Ax ≤ 0} bounded by the hyperplanes a_1'x = 0 and a_3'x = 0.

5 Prop: Suppose A, B ⊆ R^n. Then
(1) B ⊆ A ⟹ A⁺ ⊆ B⁺
(2) A ⊆ A⁺⁺
(3) A⁺ = A⁺⁺⁺
(4) A = A⁺⁺ ⟺ A is a constrained cone
(5) If B ⊆ A and B generates A conically, then A⁺ = B⁺.
Pf) Parallels the cases for subspaces.
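Properties (1) and (2) can be illustrated on a coarse grid. A Python sketch, with toy generators chosen for illustration and each dual approximated by the grid points of [-2, 2]^2 that satisfy its defining inequalities:

```python
# Grid-sampled illustration of (1) B ⊆ A implies A+ ⊆ B+, and of the
# grid version of (2) A ⊆ A++.
def dual_on_grid(S, grid):
    """Grid points satisfying a'x <= 0 for every a in S."""
    return {x for x in grid if all(a[0]*x[0] + a[1]*x[1] <= 1e-9 for a in S)}

grid = [(i * 0.5, j * 0.5) for i in range(-4, 5) for j in range(-4, 5)]
A = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
B = A[:2]                                 # B is a subset of A

A_plus, B_plus = dual_on_grid(A, grid), dual_on_grid(B, grid)
assert A_plus <= B_plus                   # (1): fewer generators, bigger dual

# (2): every a in A satisfies z'a <= 0 for all sampled z in A+, i.e. the
# grid membership test for a ∈ A++ passes.
assert all(all(z[0]*a[0] + z[1]*a[1] <= 1e-9 for z in A_plus) for a in A)
```

The second assertion is exactly the one-line argument behind (2): any z ∈ A⁺ satisfies a'z ≤ 0 for each a ∈ A by definition, which is the condition for a ∈ A⁺⁺.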

6 (Figure) A ⊆ A⁺⁺: A = {a_1, a_2, a_3}, A⁺ = {x : Ax ≤ 0}, and the cone A⁺⁺ contains A.

7 (Figure) A⁺ = A⁺⁺⁺: for A = {a_1, a_2, a_3}, the sets A⁺ = {x : Ax ≤ 0} and A⁺⁺⁺ = {x : A⁺⁺x ≤ 0} coincide.

8 (Figure) A = A⁺⁺ ⟺ A is a constrained cone: here A⁺⁺ = A and A⁺ = {x : Ax ≤ 0}.

9 Thm (Weyl): Any nonempty finitely generated cone is polyhedral (finitely constrained).
Pf) Use Fourier-Motzkin elimination (later). ∎
Cor 1: Among all subsets A ⊆ R^n with finite conical basis, A = A⁺⁺ ⟺ A is a nonempty cone.
Cor 2: Given a matrix A : m×n, consider K = {y'A : y ≥ 0}, L = {x : Ax ≤ 0}. Then K⁺ = L and L⁺ = K.

10 (Figure) K⁺ = L, L⁺ = K: K = L⁺ is the cone generated by A = {a_1, a_2, a_3}, and K⁺ = L = {x : Ax ≤ 0}.

11 Cor 3 (Farkas' lemma): Given A : m×n and c ∈ R^n, exactly one of the following two cases holds:
(I) there exists y ∈ R_+^m such that y'A = c'.
(II) there exists x ∈ R^n such that Ax ≤ 0, c'x > 0.
Pf) Show ¬(I) ⟺ (II):
¬(I) ⟺ c ∉ K ≡ {y'A : y ≥ 0}
⟺ c ∉ K⁺⁺ (by Cor 1)
⟺ ∃ x ∈ K⁺ (i.e. Ax ≤ 0) such that c'x > 0
⟺ (II) holds. ∎
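The two alternatives can be exhibited on a tiny instance by giving explicit certificates. A Python sketch with A the 2×2 identity (a toy instance, not from the slides):

```python
# Farkas certificates on a concrete 2x2 instance: for one c case (I) holds,
# for another c case (II) holds.
A = [[1.0, 0.0], [0.0, 1.0]]

def yTA(y):
    """The row vector y'A."""
    return [sum(y[i] * A[i][j] for i in range(2)) for j in range(2)]

def Ax(x):
    """The column vector Ax."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# Case (I): c = (1, 1) lies in K = {y'A : y >= 0}; certificate y = (1, 1).
c1, y = [1.0, 1.0], [1.0, 1.0]
assert all(yi >= 0 for yi in y) and yTA(y) == c1

# Case (II): c = (-1, 2) is not in K; certificate x = (-1, 0) satisfies
# Ax <= 0 and c'x = 1 > 0.
c2, x = [-1.0, 2.0], [-1.0, 0.0]
assert all(v <= 0 for v in Ax(x))
assert sum(ci * xi for ci, xi in zip(c2, x)) > 0
```

Each certificate is easy to check even though finding one in general requires solving an LP-type problem.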

12 (Figure) Farkas' lemma, A : m×n, c ∈ R^n. Case (1): c ∈ K, i.e. ∃ y ≥ 0 such that y'A = c'; K is generated by a_1, a_2, a_3 and K⁺ = {x : Ax ≤ 0}.

13 (Figure) Farkas' lemma. Case (2): c ∉ K, i.e. ∃ x such that Ax ≤ 0 and c'x > 0; K⁺ = {x : Ax ≤ 0}.

14 Farkas' lemma is the core of LP duality theory (details later). There are many other forms of theorems of the alternative, and they are important and powerful tools in optimization theory.
ex) Verifying that c* (with c'x* = c* for some feasible x*) is the optimal value of an LP (in minimization form) is the same as verifying that the following system has no solution:
c'x < c*, Ax ≥ b.
Truth of the claim can be verified by giving a solution to the alternative system. (Question: is a similar result possible in integer form?)
Other uses:
- finding the projection of a polyhedron to a lower-dimensional space (later)
- the absence-of-arbitrage condition in finance theory
- KKT optimality conditions for nonlinear programs, ...
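The optimality-certificate idea can be made concrete on a made-up LP. In the sketch below (all numbers invented), a dual vector y ≥ 0 with y'A = c' gives c'x = y'Ax ≥ y'b for every feasible x of min{c'x : Ax ≥ b}, so c* = y'b together with a feasible x* attaining it certifies optimality:

```python
# Certifying the optimal value of min{c'x : Ax >= b} via a dual vector.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 2.0]
c = [1.0, 1.0]

y = [1.0, 1.0]                                   # dual certificate, y >= 0
assert [sum(y[i] * A[i][j] for i in range(2)) for j in range(2)] == c

c_star = sum(yi * bi for yi, bi in zip(y, b))    # the certified value y'b
x_star = [1.0, 2.0]                              # feasible, with c'x* = y'b
assert all(sum(A[i][j] * x_star[j] for j in range(2)) >= b[i] for i in range(2))
assert sum(ci * xi for ci, xi in zip(c, x_star)) == c_star

# Spot-check: other feasible points cost at least c*, i.e. the system
# {c'x < c*, Ax >= b} has no solution among them.
for x in ([2.0, 2.0], [1.0, 5.0], [3.0, 4.0]):
    assert sum(ci * xi for ci, xi in zip(c, x)) >= c_star
```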

15 Applications to asset pricing
(text Chapter 4) The text uses the form:
(I) there exists some x ≥ 0 such that Ax = b.
(II) there exists some vector p such that p'A ≥ 0' and p'b < 0.
(Here the columns of A are generators of a cone.) Compare with:
(I) there exists y ∈ R_+^m such that y'A = c'.
(II) there exists x ∈ R^n such that Ax ≤ 0, c'x > 0.

16 Setting: n different assets are traded in a market (single period); there are m possible states at the end of the period.
r_si: return on an investment of 1 dollar in asset i when the state is s at the end of the period.
Payoff matrix R : m×n with entries r_si, s = 1,…,m, i = 1,…,n.
x_i: amount held of asset i (x_i can be negative).
x_i > 0: has bought x_i units of asset i, receiving r_si x_i if state s occurs.
x_i < 0: "short position", selling |x_i| units of asset i at the beginning with the promise to buy them back at the end (the seller's position in a futures contract; payout r_si |x_i|, i.e. receiving a payoff of r_si x_i if state s occurs).

17 Given a portfolio x, the resulting wealth when state s occurs is w_s = ∑_{i=1}^n r_si x_i, i.e. w = Rx.
Let p_i be the price of asset i at the beginning; then the cost of acquiring portfolio x is p'x. What are fair prices for the assets?
Absence-of-arbitrage condition: asset prices should always be such that no investor can get a guaranteed nonnegative payoff out of a negative investment (type-A arbitrage, no free lunch).
Hence, if Rx ≥ 0, then we must have p'x ≥ 0; i.e. there exists no vector x such that x'R' ≥ 0' and x'p < 0. So, by Farkas' lemma, there exists q ≥ 0 such that R'q = p, i.e. p_i = ∑_{s=1}^m q_s r_si. (Here R' = A and p = b in Farkas' lemma.)
(The vector q, normalized, is called the risk-neutral probability.)
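The pricing identity behind the argument is p'x = q'(Rx) whenever p = R'q, so q ≥ 0 immediately rules out type-A arbitrage. A numeric sketch with an invented 2-state, 3-asset payoff matrix:

```python
# Risk-neutral pricing check: with p = R'q and q >= 0, any portfolio with
# Rx >= 0 has cost p'x = q'(Rx) >= 0, so there is no type-A arbitrage.
R = [[1.1, 0.9, 2.0],       # payoff matrix, m=2 states by n=3 assets
     [1.1, 1.2, 0.5]]
q = [0.5, 0.4]              # nonnegative state prices

p = [sum(q[s] * R[s][i] for s in range(2)) for i in range(3)]   # p = R'q

for x in ([1.0, -2.0, 0.5], [0.0, 3.0, -1.0], [1.0, 1.0, 1.0]):
    w = [sum(R[s][i] * x[i] for i in range(3)) for s in range(2)]  # w = Rx
    cost = sum(p[i] * x[i] for i in range(3))
    # the identity p'x = q'(Rx)
    assert abs(cost - sum(q[s] * w[s] for s in range(2))) < 1e-9
    if all(ws >= 0 for ws in w):
        assert cost >= -1e-9    # nonnegative payoff cannot have negative cost
```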

18 Fourier-Motzkin Elimination
Solving a system of inequalities (refer to text section 2.8). The idea is similar to Gaussian elimination: eliminate one variable at a time, with some mechanism reserved to recover the feasible values later.
Given a system of inequalities and equations:
(I) ∑_{j=1}^n a_ij x_j ≥ (or =) b_i, i = 1,…,m
Eliminate x_n in (I) to obtain a system (II) consisting of linear equations and inequalities in the variables x_1, x_2, …, x_{n−1}, such that
(x_1, x_2, …, x_n) satisfies (I) for some x_n ⟺ (x_1, x_2, …, x_{n−1}) satisfies (II).
(i.e. we want a system (II) that misses no feasible solution of (I) and includes no vector that cannot be extended to a solution of (I).)
Then (I) is consistent ⟺ (II) is consistent (has a solution).

19 Related concept: projection of vectors to a lower-dimensional space
Def: If x = (x_1, x_2, …, x_n) ∈ R^n and k ≤ n, the projection mapping π_k : R^n → R^k is defined as π_k(x_1, x_2, …, x_n) = (x_1, …, x_k).
For S ⊆ R^n, Π_k(S) = {π_k(x) : x ∈ S}.
Equivalently, Π_k(S) = {(x_1, …, x_k) : ∃ x_{k+1}, …, x_n s.t. (x_1, …, x_n) ∈ S}.
To determine whether a polyhedron P is nonempty, compute Π_{n−1}(P) → … → Π_1(P) and determine whether the one-dimensional polyhedron is nonempty. (But this is inefficient.)
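The two maps are straightforward to state in code for finite point sets; the sample set S below is invented:

```python
# The projection maps from the definition, applied to a finite point set.
def pi_k(x, k):
    """pi_k(x1, ..., xn) = (x1, ..., xk)."""
    return x[:k]

def Pi_k(S, k):
    """Pi_k(S) = {pi_k(x) : x in S}."""
    return {pi_k(x, k) for x in S}

S = {(1, 2, 3), (1, 2, 9), (4, 5, 6)}
assert Pi_k(S, 2) == {(1, 2), (4, 5)}
# projections compose: Pi_{1}(Pi_{2}(S)) = Pi_{1}(S)
assert Pi_k(Pi_k(S, 2), 1) == Pi_k(S, 1) == {(1,), (4,)}
```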

20 Elimination algorithm:
(0) If all coefficients of x_n in (I) are 0, then take (II) to be the same as (I).
(1) If there exists some relation, say the i-th, with a_in ≠ 0 and this relation is '=', then derive (II) from (I) by Gauss-Jordan elimination:
a_i1 x_1 + … + a_in x_n = b_i ⟹ x_n = (1/a_in)(b_i − a_i1 x_1 − … − a_{i,n−1} x_{n−1}); substitute this into (I). Clearly (x_1, …, x_n) solves (I) ⟺ (x_1, …, x_{n−1}) solves (II).
(continued)

21 (continued)
(2) Rewrite each constraint ∑_{j=1}^n a_ij x_j ≥ b_i as a_in x_n ≥ −∑_{j=1}^{n−1} a_ij x_j + b_i, i = 1,…,m. If a_in ≠ 0, divide both sides by a_in. Letting x̄ = (x_1, …, x_{n−1}), we obtain
x_n ≥ d_i + f_i'x̄, if a_in > 0
d_j + f_j'x̄ ≥ x_n, if a_jn < 0
0 ≥ d_k + f_k'x̄, if a_kn = 0,
where d_i, d_j, d_k ∈ R and f_i, f_j, f_k ∈ R^{n−1}.
Let (II) be the system defined by
d_j + f_j'x̄ ≥ d_i + f_i'x̄, for all i, j with a_in > 0 and a_jn < 0
0 ≥ d_k + f_k'x̄, if a_kn = 0
(and the remaining equations).
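Step (2) can be sketched directly in Python, assuming all relations are inequalities (the equation case of step (1) is omitted); the encoding of rows as `(a, b)` pairs is my own:

```python
def fm_eliminate_last(rows):
    """One Fourier-Motzkin step.  rows: list of (a, b) encoding a'x >= b.
    Returns the reduced system in x_1..x_{n-1} with the same projection."""
    lower, upper, out = [], [], []
    for a, b in rows:
        coef = a[-1]
        if coef == 0:
            out.append((a[:-1], b))                     # keep as-is
        else:
            # normalize so the x_n coefficient is +1 or -1
            d = [ai / abs(coef) for ai in a[:-1]]
            (lower if coef > 0 else upper).append((d, b / abs(coef)))
    # lower row: x_n >= bl - dl'x ; upper row: du'x - bu >= x_n ; pair them
    for dl, bl in lower:
        for du, bu in upper:
            out.append(([u + l for u, l in zip(du, dl)], bl + bu))
    return out

# toy check: x1 + x2 >= 1 and -x1 - x2 >= -2 (i.e. x1 + x2 <= 2) pair to
# the always-true row 0*x1 >= -1 after eliminating x2.
reduced = fm_eliminate_last([([1.0, 1.0], 1.0), ([-1.0, -1.0], -2.0)])
assert reduced == [([0.0], -1.0)]
```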

22 Ex)
x_1 + x_2 ≥ 1
2x_1 + x_2 + 2x_3 ≥ 2
2x_1 + 3x_3 ≥ 3
x_1 − 4x_3 ≥ 4
−2x_1 + x_2 − x_3 ≥ 5

Isolate x_3 (divide by its coefficient where nonzero):
0 ≥ 1 − x_1 − x_2
x_3 ≥ 1 − x_1 − x_2/2
x_3 ≥ 1 − 2x_1/3
−1 + x_1/4 ≥ x_3
−5 − 2x_1 + x_2 ≥ x_3

Eliminate x_3 (pair each lower bound with each upper bound):
0 ≥ 1 − x_1 − x_2
−1 + x_1/4 ≥ 1 − x_1 − x_2/2
−1 + x_1/4 ≥ 1 − 2x_1/3
−5 − 2x_1 + x_2 ≥ 1 − x_1 − x_2/2
−5 − 2x_1 + x_2 ≥ 1 − 2x_1/3
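Back-substitution for this example can be checked numerically. The point (x_1, x_2) = (3, 10) is my own choice of a point satisfying the reduced system; the bounds on x_3 then pin down x_3 = -1, and the full point satisfies all five original inequalities:

```python
# Recover x3 for a point of the reduced system and verify the original one.
x1, x2 = 3.0, 10.0

lower = max(1 - x1 - x2 / 2, 1 - 2 * x1 / 3)   # lower bounds on x3
upper = min(-1 + x1 / 4, -5 - 2 * x1 + x2)     # upper bounds on x3
assert lower <= upper                          # (x1, x2) lies in the projection
x3 = (lower + upper) / 2                       # any value in between works

original = [x1 + x2 >= 1,
            2 * x1 + x2 + 2 * x3 >= 2,
            2 * x1 + 3 * x3 >= 3,
            x1 - 4 * x3 >= 4,
            -2 * x1 + x2 - x3 >= 5]
assert all(original)
```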

23 Thm 2.10: The polyhedron Q (defined by system (II)) constructed by the elimination algorithm is equal to the projection Π_{n−1}(P) of P.
Pf) If x̄ ∈ Π_{n−1}(P), then ∃ x_n such that (x̄, x_n) ∈ P. In particular, x = (x̄, x_n) satisfies system (I), hence also system (II). This shows Π_{n−1}(P) ⊆ Q.
Conversely, let x̄ ∈ Q. Then x̄ satisfies
min_{j : a_jn < 0} (d_j + f_j'x̄) ≥ max_{i : a_in > 0} (d_i + f_i'x̄).
Let x_n be any number between the two sides of this inequality. Then (x̄, x_n) ∈ P, which shows Q ⊆ Π_{n−1}(P). ∎
Observe that for x = (x_1, …, x_n) we have π_{n−2}(π_{n−1}(x)) = π_{n−2}(x); likewise Π_{n−2}(Π_{n−1}(P)) = Π_{n−2}(P). Hence obtain Π_1(P) recursively to determine whether P is empty, or to find a solution. A solution value x_i in P can be recovered recursively, starting from Π_1(P) and choosing each x_i in the interval specified by the constraints for Π_i(P).

24 Cor 2.4: Let P ⊆ R^{n+k} be a polyhedron. Then the set {x ∈ R^n : there exists y ∈ R^k such that (x, y) ∈ P} is also a polyhedron. (Will be used to prove Weyl's Theorem; no other proof technique is apparent.)
Cor 2.5: Let P ⊆ R^n be a polyhedron and A an m×n matrix. Then the set Q = {Ax : x ∈ P} is also a polyhedron.
Pf) Q = {y ∈ R^m : y = Ax, x ∈ P}. Hence Q is the projection of the polyhedron {(x, y) ∈ R^{n+m} : y = Ax, x ∈ P} onto the y coordinates. ∎
Cor 2.6: The convex hull of a finite number of vectors (called a polytope) is a polyhedron.
Pf) The convex hull {∑_{i=1}^k λ_i x_i : ∑_{i=1}^k λ_i = 1, λ_i ≥ 0 for all i} is the image of the polyhedron {(λ_1, …, λ_k) : ∑_{i=1}^k λ_i = 1, λ_i ≥ 0 for all i} under the mapping that maps (λ_1, …, λ_k) to ∑_{i=1}^k λ_i x_i. (The mapping can be expressed as Aλ, where the columns of the matrix A are the vectors x_i. We will see a different proof later.) ∎

25 Remarks
- FM elimination is not efficient as an algorithm: the number of inequalities can grow exponentially as we eliminate variables.
- It can also handle strict inequalities.
- It can solve the LP problem max {c'x : Ax ≤ b}: consider the system Ax ≤ b, z = c'x, eliminate x, and find z as large as possible in the resulting one-dimensional polyhedron. A solution can be recovered by backtracking.
- FM gives an algorithm to find the projection of P = {(x, y) ∈ R^{n+p} : Ax + Gy ≤ b} onto the x space, Pr_x(P) = {x ∈ R^n : (x, y) ∈ P for some y ∈ R^p}. But how can we characterize Pr_x(P) for arbitrary P?
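The "solve an LP by eliminating x" remark can be sketched end to end on a tiny instance. The LP below (max x_1 + x_2 over the unit box) and the row encoding are my own; the equation z = c'x is written as a pair of opposite inequalities so a pure-inequality FM step suffices:

```python
def fm_step(rows):
    """One FM step eliminating the last variable from rows a'v >= b."""
    lower, upper, out = [], [], []
    for a, b in rows:
        coef = a[-1]
        if coef == 0:
            out.append((a[:-1], b))
        else:
            d = [ai / abs(coef) for ai in a[:-1]]
            (lower if coef > 0 else upper).append((d, b / abs(coef)))
    for dl, bl in lower:
        for du, bu in upper:
            out.append(([u + l for u, l in zip(du, dl)], bl + bu))
    return out

# max x1 + x2 s.t. 0 <= x1 <= 1, 0 <= x2 <= 1, in variables v = (z, x1, x2).
rows = [([0.0, -1.0, 0.0], -1.0),   # -x1 >= -1   (x1 <= 1)
        ([0.0, 0.0, -1.0], -1.0),   # x2 <= 1
        ([0.0, 1.0, 0.0], 0.0),     # x1 >= 0
        ([0.0, 0.0, 1.0], 0.0),     # x2 >= 0
        ([1.0, -1.0, -1.0], 0.0),   # z - x1 - x2 >= 0
        ([-1.0, 1.0, 1.0], 0.0)]    # -z + x1 + x2 >= 0

one_dim = fm_step(fm_step(rows))    # eliminate x2, then x1
# remaining rows bound z alone: a*z >= b with a < 0 gives z <= b/a
z_max = min(b / a[0] for a, b in one_dim if a[0] < 0)
assert z_max == 2.0                 # optimal value of the LP
```

The exponential blow-up mentioned above shows even here: two generators of new rows per step, four pairings each time.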

26 The concept of projection has become important in recent optimization theory (especially in integer programming) as new techniques using projections have been developed (e.g. RLT, the reformulation and linearization technique).
Formulating in a higher-dimensional space and using the projection to the lower-dimensional space may give a stronger formulation in integer programming (in terms of the strength of the LP relaxation). (e.g. the node + edge variable formulation is stronger than the edge formulation for the weighted maximal b-clique problem: given a complete undirected graph G = (V, E) with weights c_e, e ∈ E, find a clique (complete subgraph) of size (number of nodes in the clique) at most b whose sum of edge weights is maximum.)

27 Weyl's Theorem: Any nonempty finitely generated cone is polyhedral (i.e. finitely constrained).
Pf) K = {y'A : y ≥ 0} for A : m×n
= {x : x' − y'A = 0', y ≥ 0 is a consistent system in (x, y)}.
Use FM elimination to get rid of the y's:
= {x : some linear homogeneous system in x is consistent}.
Write these relations as Bx ≤ 0. Then K = {x : Bx ≤ 0}, which is polyhedral. ∎
Note that we obtain a homogeneous system Bx ≤ 0 when we apply FM, since the starting system is homogeneous.

28 Minkowski's Theorem: Any polyhedral cone is nonempty and finitely generated.
Pf) Let L be a polyhedral cone. Clearly L ≠ ∅. We know that L = L⁺⁺ from the earlier Prop. Part 2 of Cor. 2 says L⁺ is finitely generated (L⁺ = K). By Weyl's Thm, L⁺ itself is polyhedral. By part 2 of Cor. 2, (L⁺)⁺ is finitely generated. Since L⁺⁺ = L from above, L is finitely generated. ∎
FM elimination thus leads to the Weyl-Minkowski cone representation: a cone is nonempty and finitely generated ⟺ it is finitely constrained (polyhedral).
We extend this result to the affine version.
Def: The set of all convex combinations of a finite point set is called a polytope.

29 Affine Weyl Theorem (i.e. finitely generated sets are finitely constrained):
Suppose P = {x ∈ R^n : x' = y'B + z'C, y ≥ 0, z ≥ 0, ∑_i z_i = 1}, B : p×n, C : q×n. Then ∃ a matrix A : m×n and b ∈ R^m s.t. P = {x ∈ R^n : Ax ≤ b}. (The special case where B is vacuous says that a polytope is a polyhedron.)
Pf) (uses the technique called homogenization) If P = ∅ (i.e. B, C vacuous, i.e. p = q = 0), take A = [0, …, 0], b = −1. If P ≠ ∅, consider P' ⊆ R^{n+1} defined as the cone generated by the rows of [B 0; C 1], i.e. P' = {(x, x_{n+1}) : (x', x_{n+1}) = (y', z')[B 0; C 1], y ≥ 0, z ≥ 0}.
Observe that x ∈ P ⟺ (x, 1) ∈ P'.

30 ... and P' is a nonempty finitely generated cone in R^{n+1}.
Apply Weyl's Thm to P' in R^{n+1} to get P' = {(x, x_{n+1}) : A'(x', x_{n+1})' ≤ 0} for some A' : m×(n+1), i.e. A' = [A : d] with d the last column of A'. Define b = −d; then A' = [A : −b].
Observe that x ∈ P ⟺ (x, 1) ∈ P' ⟺ A'(x', 1)' ≤ 0 ⟺ [A : −b](x', 1)' ≤ 0 ⟺ Ax ≤ b. ∎
Note that we recast the problem as one involving a cone in R^{n+1}, for which we know more properties, and used the results for cones to prove the theorem.

31 Affine Minkowski Theorem (i.e. finitely constrained sets are finitely generated):
Suppose P = {x ∈ R^n : Ax ≤ b}, A : m×n, b ∈ R^m. Then ∃ matrices B : p×n, C : q×n such that P = {x ∈ R^n : x' = y'B + z'C, y, z ≥ 0, ∑_{i=1}^q z_i = 1}.
Pf) For P = ∅, take p = q = 0, i.e. B, C vacuous. Otherwise, again homogenize and consider the polyhedral cone P' = {(x, x_{n+1}) ∈ R^{n+1} : Ax − x_{n+1} b ≤ 0, x_{n+1} ≥ 0}.
Then x ∈ P ⟺ (x, 1) ∈ P'. P' is a polyhedral cone, so Minkowski's Theorem applies. Hence ∃ a matrix B' : l×(n+1) such that P' = {(x, x_{n+1}) ∈ R^{n+1} : (x', x_{n+1}) = y'B', y ∈ R_+^l}.

32 (continued) Break B' into two parts so that all rows of B' with last component 0 come as top rows and rows with nonzero last component come as bottom rows. Note that all nonzero values in the last column of B' must be > 0 (each row of B' must be in P', and A'(x', x_{n+1})' ≤ 0 implies x_{n+1} ≥ 0). Hence B' = [B 0; C (>0)]. Scale the rows of B' to get B' = [B 0; C 1]; this does not change P'. Then we have x ∈ P ⟺ (x, 1) ∈ P', where
P' = {(x, x_{n+1}) ∈ R^{n+1} : (x', x_{n+1}) = (y', z')[B 0; C 1], y, z ≥ 0},
i.e. x ∈ P ⟺ x' = y'B + z'C for some y ≥ 0, z ≥ 0 with ∑_i z_i = 1. ∎

33 (Figure) Geometric view of homogenization in the Affine Minkowski Thm: P = {x : Ax ≤ b} lies in the hyperplane x_{n+1} = 1 of R^{n+1}, and P' ⊆ R^{n+1} is the cone over it.

34 Think about the similar picture for the affine Weyl theorem.
The affine Weyl and Minkowski Theorems together provide the "Double Description Theorem": we can describe a polyhedron as
- a (finite) intersection of halfspaces, or
- a (finite) conical combination of points plus a convex combination of points (i.e. P = C + Q, where C is a cone and Q is a polytope).
The existence of the different representations has been shown. The next question is how to identify one representation from the other.

