Solving Multivariate Nonlinear Polynomial Systems
Massarwi Fady
Computer Aided Geometric Design (236716), Spring 2006

References
- "Computation of the solutions of nonlinear polynomial systems", E. C. Sherbrooke and N. M. Patrikalakis. Computer Aided Geometric Design, Vol. 10, No. 5, 1993.
- "Geometric Constraint Solver using Multivariate Rational Spline Functions", G. Elber and M. S. Kim. The Sixth ACM/IEEE Symposium on Solid Modeling and Applications, Ann Arbor, Michigan, pp. 1-10, June 2001.
- "Subdivision methods for solving polynomial equations", B. Mourrain and J. P. Pavone. Technical Report 5658, INRIA, Sophia-Antipolis, 2005.

Problem definition
- Given a set of n rational polynomial functions of m variables, $f_k(u_1, \dots, u_m)$, $k = 1, \dots, n$,
- and an m-dimensional box $B = [a_1, b_1] \times \dots \times [a_m, b_m]$,
- find the solutions $u \in B$ such that $f_k(u) = 0$ for all $k$.

Definitions
- A multi-index I is an ordered m-tuple of non-negative integers, $I = (i_1, \dots, i_m)$; $I \le M$ means $i_j \le m_j$ for all j.
- Example: if $M = (1,1)$ then $I \in \{(0,0), (0,1), (1,0), (1,1)\}$.
- The i'th Bernstein polynomial of degree m: $B_{i,m}(t) = \binom{m}{i} t^i (1-t)^{m-i}$.
- The Bernstein basis function determined by multi-index I and bounded by multi-index M: $B_{I,M}(u) = \prod_{j=1}^{m} B_{i_j, m_j}(u_j)$.
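A direct transcription of these two definitions as a minimal Python sketch (the function names are mine):

```python
import math

def bernstein(i, m, t):
    """Univariate Bernstein polynomial B_{i,m}(t) = C(m,i) t^i (1-t)^(m-i)."""
    return math.comb(m, i) * t**i * (1 - t)**(m - i)

def bernstein_multi(I, M, u):
    """Multivariate basis B_{I,M}(u): the product of one univariate
    Bernstein polynomial per coordinate, as defined above."""
    return math.prod(bernstein(i, m, t) for i, m, t in zip(I, M, u))
```

For example, bernstein_multi((1, 0), (1, 1), (0.25, 0.5)) evaluates $B_{(1,0),(1,1)}$ at $u = (0.25, 0.5)$, giving $0.25 \cdot 0.5 = 0.125$.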

Problem definition
- Since each function $f_k$ is polynomial, it can be represented by a basis change in the Bernstein basis: $f_k(u) = \sum_{I \le M} w_{k,I} B_{I,M}(u)$.
- Define the graph $F_k$ of the function $f_k$ as $F_k(u) = (u, f_k(u)) \in R^{m+1}$.

Problem definition
- Since $\sum_{I \le M} B_{I,M}(u) = 1$ (partition of unity),
- we can write $u_j = \sum_{I \le M} \frac{i_j}{m_j} B_{I,M}(u)$,
- and write the graph $F_k$ of each function as $F_k(u) = \sum_{I \le M} V_{k,I} B_{I,M}(u)$, where $V_{k,I} = (\frac{i_1}{m_1}, \dots, \frac{i_m}{m_m}, w_{k,I})$.
- The points $V_{k,I}$ are the control points of $F_k$. This representation is much more powerful and lets us exploit properties of the Bernstein basis (like the convex hull property).
- u is a solution of the polynomial system iff the point (u, 0) is contained in all the graphs $F_k$.
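The control points can be assembled directly from the Bernstein coefficients; a small sketch, assuming the coefficients are stored in a NumPy array indexed by the multi-index:

```python
import numpy as np
from itertools import product

def graph_control_points(w, M):
    """Control points V_I = (i_1/m_1, ..., i_m/m_m, w_I) of the graph F,
    transcribed from the formula above. `w` is a NumPy array of Bernstein
    coefficients indexed by I (shape (m_1+1) x ... x (m_m+1))."""
    return np.array([[i / m for i, m in zip(I, M)] + [float(w[I])]
                     for I in product(*(range(m + 1) for m in M))])
```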

Subdivision-based techniques
- Numerical methods that locate boxes in which a single root (solution) may lie.
- In each step, if the current box is larger than some tolerance, split it into two boxes and search each of them recursively.
- Advantages: speed, stability, simplicity.
- Disadvantages: designed for zero-dimensional solution sets, so there is no guarantee that all roots are found, and no information on root multiplicity is supplied.

Subdivision-based techniques
- We will review the following:
  - Subdivision while restricting the search domain: the PP (Projected Polyhedron) and LP (linear programming) methods (Sherbrooke & Patrikalakis)
  - Subdivision with numerical improvements (Elber & Kim), with usage examples
  - Further improvements (Mourrain & Pavone)

Subdivision with domain restriction
- Representing the graph of each function in the Bernstein basis lets us use the convex hull property: the graph is contained in the convex hull of its control points $V_{k,I}$.
- If u is a solution of all the equations, then the point (u, 0) must be contained in all the graphs, and hence in the intersection of the convex hulls of the graphs' control points.

Algorithm
1. Assume the domain box is B = [0,1]^n.
2. Form the convex hulls of all the $F_k$ and intersect them with one another and with the hyperplane $u_{n+1} = 0$; call the intersection set A. If A is smaller than a tolerance, stop.
3. A = A' x {0}; find a box B' = [a_1,b_1] x [a_2,b_2] x ... x [a_n,b_n] that encloses A'.
4. If B' is not significantly smaller than [0,1]^n, split B into equally sized sub-domains and work with each sub-domain separately.
5. For each k, define a new function f'_k as the restriction of f_k to B', reparametrized over [0,1]^n. This can be computed by multivariate de Casteljau subdivision, sketched below.
6. Update B with B' and continue from step 2.
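A sketch of one de Casteljau split along a single axis, assuming the Bernstein coefficients are stored as an n-dimensional NumPy array; applying it axis by axis yields the restriction of step 5:

```python
import numpy as np

def decasteljau_split(c, t, axis):
    """Split a multivariate Bernstein coefficient array at parameter t
    along one axis (a sketch). Returns the coefficient arrays of the
    restrictions to [0, t] and [t, 1] in that axis."""
    c = np.moveaxis(np.asarray(c, dtype=float), axis, 0)
    left, right = [c[0]], [c[-1]]
    while len(c) > 1:
        c = (1 - t) * c[:-1] + t * c[1:]   # one de Casteljau averaging step
        left.append(c[0])
        right.append(c[-1])
    left = np.stack(left)                  # first entries of each level
    right = np.stack(right[::-1])          # last entries, reversed
    return np.moveaxis(left, 0, axis), np.moveaxis(right, 0, axis)
```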

Algorithm (cont'd)
- Step 2 contains two heavy operations:
  - computing convex hulls in high dimension
  - intersecting all the convex hulls
- Actually, all we need is the bounding box of the intersection of the convex hulls, and this can be obtained in simpler ways.

Algorithm (updated)
1. Start with an initial search box.
2. Perform the transformation that maps the search box to [0,1]^n.
3. Find a sub-box of [0,1]^n which contains the roots.
4. Use the inverse of the transformation from step 2 to check whether the sub-box is sufficiently small in R^n; if so, report a root at the midpoint of the box.
5. If some dimension of the sub-box is still close to 1, split the box into two parts along each such dimension.
6. Go back to step 2, once for each box.

A minimal sketch of this loop is given below.
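The sketch below shows only the control flow; pp_bounding_box, restrict_to_box, and bisect are hypothetical helpers standing in for step 3 (the PP step on the next slides), step 2 (the de Casteljau restriction above), and step 5:

```python
def solve(system, box, tol=1e-6, roots=None):
    """Recursive subdivision loop for steps 1-6 (a sketch, not the paper's
    exact code). `system` holds the Bernstein coefficients of f_1..f_n
    restricted to `box`; `box` is a list of (a_j, b_j) intervals in R^n."""
    if roots is None:
        roots = []
    sub = pp_bounding_box(system)              # step 3: sub-box of [0,1]^n
    if sub is None:                            # hulls miss u_{n+1}=0: no root
        return roots
    new_box = [(a + (b - a) * lo, a + (b - a) * hi)   # step 4: back to R^n
               for (a, b), (lo, hi) in zip(box, sub)]
    if all(b - a < tol for a, b in new_box):   # box small enough: report root
        roots.append([(a + b) / 2.0 for a, b in new_box])
        return roots
    system = restrict_to_box(system, sub)      # step 2 for the next round
    if any(hi - lo > 0.8 for lo, hi in sub):   # step 5 (0.8 is a heuristic)
        for half_sys, half_box in bisect(system, new_box):
            solve(half_sys, half_box, tol, roots)
        return roots
    return solve(system, new_box, tol, roots)  # step 6
```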

Finding the bounding box of the intersection of the CHs
- Projected Polyhedron (PP): find the extent of the bounding box in each dimension by projecting onto that dimension. For dimension j:
  - For each graph $F_k$, project each of its control points V to $(v_j, v_{n+1})$ (call this the x-y plane).
  - Form the convex hull of the projected points in 2D.
  - Return as $[a_j, b_j]$ the intersection of the convex hull with the x-axis in the projection plane (clipped to [0,1], intersected over all the graphs).
- Finally the bounding box is B = [a_1,b_1] x [a_2,b_2] x ... x [a_n,b_n]. A sketch of the per-dimension step follows.
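A sketch of the per-dimension PP step using SciPy's convex hull (hull_x_axis_interval is my name; one call per graph, then intersect the resulting intervals over all graphs):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_x_axis_interval(points):
    """Intersect the 2D convex hull of projected control points
    (u_j, u_{n+1}) with the x-axis u_{n+1} = 0. Returns (a_j, b_j)
    clipped to [0, 1], or None if the hull misses the axis. Degenerate
    (collinear) inputs need separate handling; ConvexHull raises on them."""
    pts = np.asarray(points, dtype=float)
    hull = ConvexHull(pts)
    xs = []
    for i, j in hull.simplices:          # hull edges as index pairs
        (x0, y0), (x1, y1) = pts[i], pts[j]
        if y0 == 0.0:
            xs.append(x0)
        if y1 == 0.0:
            xs.append(x1)
        if (y0 < 0) != (y1 < 0):         # edge crosses the axis
            t = y0 / (y0 - y1)
            xs.append(x0 + t * (x1 - x0))
    if not xs:
        return None
    return max(0.0, min(xs)), min(1.0, max(xs))
```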

Finding the bounding box of the intersection of the CHs
- Linear Programming (LP):
  - Minimization function: looking at the i'th coordinate of the points lying in the feasible region, what is the smallest value ($a_i$) and what is the largest ($b_i$)?
  - Constraints: every point x in the feasible region must satisfy
    - $x_{n+1} = 0$
    - for each k between 1 and n, x is contained in the convex hull of $V_k$ (the control points of $F_k$).

Linear Programming
- Formally:
- Minimization functions: for each i = 1, ..., n, minimize (and maximize) the i'th coordinate $x_i$ of the common point.
- Constraints: the point x is in the convex hull of every $V_k$, i.e. $x = \sum_{I \le M} \alpha_{k,I} V_{k,I}$ with $\alpha_{k,I} \ge 0$ and $\sum_{I \le M} \alpha_{k,I} = 1$ for k = 1, ..., n, together with $x_{n+1} = 0$.

Linear Programming
- The unknowns are the convex-combination coefficients $\alpha_{k,I}$.
- The number of unknowns is the total number of control points over all graphs, $\sum_{k=1}^{n} \prod_{j=1}^{n} (m_j + 1)$.
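As a concrete illustration, here is one way to pose a single bound computation with scipy.optimize.linprog, assuming the unknowns are the convex-combination coefficients as above (the paper's exact LP formulation may differ in details):

```python
import numpy as np
from scipy.optimize import linprog

def lp_coordinate_bound(V, coord, maximize=False):
    """Bound one coordinate of a point forced into every convex hull.
    V: list of n arrays, V[k] of shape (N_k, n+1) holding the control
    points of graph F_k. Returns min (or max) of p[coord] over points p
    with p in every hull and p_{n+1} = 0, or None if infeasible."""
    V = [np.asarray(Vk, dtype=float) for Vk in V]
    n = len(V)
    sizes = [len(Vk) for Vk in V]
    offsets = np.cumsum([0] + sizes)
    nvar = int(offsets[-1])
    dim = V[0].shape[1]                       # n + 1 coordinates

    A_eq, b_eq = [], []
    for k in range(n):                        # each alpha_k sums to 1
        row = np.zeros(nvar)
        row[offsets[k]:offsets[k + 1]] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    for k in range(1, n):                     # all hulls give the same point
        for d in range(dim):
            row = np.zeros(nvar)
            row[offsets[0]:offsets[1]] = -V[0][:, d]
            row[offsets[k]:offsets[k + 1]] = V[k][:, d]
            A_eq.append(row); b_eq.append(0.0)
    row = np.zeros(nvar)                      # last coordinate p_{n+1} = 0
    row[offsets[0]:offsets[1]] = V[0][:, dim - 1]
    A_eq.append(row); b_eq.append(0.0)

    c = np.zeros(nvar)                        # objective: p[coord] via alpha_0
    c[offsets[0]:offsets[1]] = V[0][:, coord]
    res = linprog(-c if maximize else c, A_eq=np.array(A_eq),
                  b_eq=np.array(b_eq), bounds=(0, 1), method="highs")
    if not res.success:
        return None
    return -res.fun if maximize else res.fun
```

Calling it with coord = j and maximize set to False / True yields $[a_j, b_j]$ for dimension j.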

Analysis
- The algorithm terminates (the proof involves the subdivision factor p).
- Linear convergence in the PP method.
- Quadratic convergence in the LP method.

Linear convergence in the PP method
- Theorem: Given a box B = [a_1,b_1] x [a_2,b_2] x ... x [a_n,b_n] which contains one and only one simple root $x_0$ of a system of polynomials $f_{0,k} : [0,1]^n \to R$, k = 1, ..., n, let $f_k$ denote the subdivision of $f_{0,k}$ over B, and let B' = [a'_1,b'_1] x [a'_2,b'_2] x ... x [a'_n,b'_n] be the box after executing one step of the PP method. Let j be an integer between 1 and n, and let $h_j = b_j - a_j$ and $h'_j = b'_j - a'_j$ (the lengths of the j'th sides). Then the PP step contracts each side at least linearly: $h'_j \le c \, h_j$ for a constant c determined by the system.

Linear convergence in the PP method: proof
- Let $h = \max_j (b_j - a_j)$, and let g(u) be the linear (first-order Taylor) approximation of f at the root.
- Then the following holds: $|f(u) - g(u)| = O(h^2)$ for all u in the box.

Linear convergence in the PP method: proof
- The projected control points of the graphs of f and g, i.e. of (u, f(u)) and (u, g(u)), are separated by vertical distances of no more than $O(h^2)$.
- The projection of G on the $(u_j, u_{n+1})$ plane is a planar region (because g is a linear approximation). For a fixed $u_j$, the difference between the maximum and the minimum y-coordinate of this region is given by $g(x_1) - g(x_0)$, where $x_0 = (0, \dots, 0, x_j, 0, \dots, 0)$ and $x_1 = (1, \dots, 1, x_j, 1, \dots, 1)$.

Linear convergence in the PP method: proof
- Hence all the projected control points of F are enclosed between two parallel lines with slope $\partial g / \partial u_j$, a bounded vertical distance apart.
- Looking at the intersections of these two lines with the x-axis (between 0 and 1), the distance between the intersections (which later becomes the box dimension) is the vertical distance between the lines divided by the magnitude of the slope.

Linear convergence in the PP method: proof
- Combining these bounds, the width of the intersection interval in the unit domain shrinks by a constant factor.
- Finally, we rescale to transform the box back to R^n by a factor of $b_j - a_j$; this rescaled box is the new box we work with, so the new side length $h'_j$ is at most a constant fraction of $h_j$, which is linear convergence.

Quadratic convergence in the LP method
- Theorem: Given a box B = [a_1,b_1] x [a_2,b_2] x ... x [a_n,b_n] which contains one and only one root $u_0$ of a system of polynomials $f_{0,k} : [0,1]^n \to R$, k = 1, ..., n, let $f_k$ denote the subdivision of $f_{0,k}$ over B, and let B' = [a'_1,b'_1] x [a'_2,b'_2] x ... x [a'_n,b'_n] be the box after executing one step of the LP method. Define $h = \max_j (b_j - a_j)$ and $h' = \max_j (b'_j - a'_j)$. Then there exist a neighborhood U of $u_0$ and a constant c depending only on U such that, if B is contained in U, then $h' \le c \, h^2$.

Quadratic convergence in the LP method: proof
- The linearization of the n functions $f_k$ is $g_k(x) = f_k(u_0) + \nabla f_k(u_0) \cdot (x - u_0)$.
- Denote by $p = (p_1, p_2, \dots, p_n, 0)$ the intersection (in $R^{n+1}$) of all the $g_k$, k = 1..n, with $x_{n+1} = 0$ (i.e. $g_1 = g_2 = \dots = g_n = x_{n+1} = 0$); such a point exists since the Jacobian matrix is nonsingular.
- Pick a point $q = (q_1, \dots, q_n, 0)$ that lies in the intersection of $F_1, F_2, \dots, F_n$ with the hyperplane $x_{n+1} = 0$.
- We know that $|f_k(x) - g_k(x)| = O(h^2)$.
- Substitute $(q_1, \dots, q_n)$ for x; since all the $f_k$ are zero there, $|g_k(q_1, \dots, q_n)| = O(h^2)$.

Quadratic convergence in the LP method: proof
- Letting $r_k = g_k(q_1, q_2, \dots, q_n)$, there exist n points $R_k = (q_1, q_2, \dots, q_n, r_k)$ such that $R_k$ lies on the linear approximation of $F_k$. (*)
- For the point p we know that $g_k(p_1, \dots, p_n) = 0$. (**)
- Subtracting * and **, and defining $d = (q_1 - p_1, \dots, q_n - p_n)$, we get $\nabla g_k \cdot d = r_k$.

Quadratic convergence in the LP method: proof
- By Cramer's rule (assuming the Jacobian matrix is nonsingular), $d_k = \det(J_k) / \det(J)$, where J is the Jacobian matrix and $J_k$ is the Jacobian matrix after exchanging its k'th column with the column vector $(r_1, r_2, \dots, r_n)^T$.
- Since $r_k = O(h^2)$, we have $\det(J_k) = O(h^2)$, and since $\det(J)$ is bounded away from zero we get $d_k = O(h^2)$.
- Then $\|q - p\| = O(h^2)$.

Quadratic convergence in the LP method: proof
- q was chosen arbitrarily, and its distance from the fixed point p is $O(h^2)$; hence the distance between any two points in the bounding box we want is also $O(h^2)$.
- Then the length of each side of the bounding box is also $O(h^2)$.
- The real size (in R^n) is the bounding box multiplied by the scaling factor of the box, and the dimensions of the new box remain $O(h^2)$.
- This holds for every coordinate j, hence $h' \le c \, h^2$.

Subdivision with numerical improvements
- Subdivide the multivariate functions down to a certain resolution.
- Using the convex hull property: if for a constraint $F_i$ all the control points have the same sign, the domain cannot contain a root and is purged (sketched below).
- For each cell with dimensions smaller than the desired resolution, report a root (at the center of the cell).
- Subdivision is terminated early when it is known that there is an isolated root in the domain; in this case Newton-Raphson iterations are applied (with quadratic convergence).
- If the zero set has dimension larger than 0, fit a multivariate surface to the set of discrete points found, by means of least squares (minimizing the L2 norm).
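The purge test of the second bullet is essentially one line in code; a sketch:

```python
import numpy as np

def may_contain_root(coeffs):
    """Convex hull purge test: if all Bernstein coefficients of a constraint
    share one strict sign, its graph cannot cross u_{n+1} = 0, so the cell
    is discarded. `coeffs` is the coefficient array of one constraint."""
    c = np.asarray(coeffs)
    return not (np.all(c > 0) or np.all(c < 0))
```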

Root Finding Algorithm (flowchart figure)

Numerical improvement of the solution
- For a given approximate solution $u_0$, with $F_i(u_0) \approx 0$ for each constraint, use a first-order approximation: each graph $F_i$ constrains the solution to lie on the hyperplane defined at the point $u_0$ as follows:
  - Consider the normal to the surface $F_i(u)$ at the point $u_0$: $n_i = (\nabla f_i(u_0), -1)$.
  - Then the hyperplane at the point $u_0$ is defined as $\langle n_i, (u, u_{m+1}) - (u_0, f_i(u_0)) \rangle = 0$.

Numerical improvement of the solution
- There are n such hyperplanes, and with the additional constraint that the component $u_{m+1} = 0$, we end up with n+1 linear equations (in m+1 variables).
- If n = m the system is fully constrained and there is one solution.
- Otherwise, if n < m, a least-squares solution is found in the L2 norm (the solution closest to $u_0$ among the possible solution set), as sketched below.
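A minimal sketch of this improvement step; f and jac are assumed callables returning the residual vector and the Jacobian:

```python
import numpy as np

def improve_root(f, jac, u0):
    """One first-order improvement step (a sketch). f(u) returns the n
    constraint values, jac(u) the n x m Jacobian. Solving the linearized
    system J (u - u0) = -f(u0) by least squares gives a Newton-Raphson
    step when n = m, and the minimum-norm correction (the solution
    closest to u0) when n < m."""
    J = np.atleast_2d(jac(u0))
    delta, *_ = np.linalg.lstsq(J, -np.asarray(f(u0), dtype=float),
                                rcond=None)
    return np.asarray(u0, dtype=float) + delta
```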

Uniqueness of a solution
- If it is known that there is a single solution in the domain, it is better to stop the subdivision and move to a more efficient numerical root-finding procedure.
- Definitions:
  - Cone: a cone with axis v and opening angle $\alpha$ is the set of directions forming an angle of at most $\alpha$ with v.
  - Normal cone of a hypersurface $F_i$: the set of all possible normal directions of $F_i$ over the domain.
  - Complementary (tangent) cone $\tilde{C}(F_i)$: the set of vectors orthogonal to some vector in the normal cone.

Uniqueness of a solution
- Theorem: Given m implicit hypersurfaces $F_i(u) = 0$, i = 1, ..., m in $R^m$, there is at most one common solution if the complementary cones intersect only at the origin: $\bigcap_{i=1}^{m} \tilde{C}(F_i) = \{0\}$.
- Proof: Suppose $u_0, u_1 \in R^m$ are two distinct common solutions of the m equations $F_i(u) = 0$. Since $F_i(u_0) = F_i(u_1) = 0$, by the mean value theorem the vector $u_1 - u_0$ is tangent to $F_i$ somewhere, i.e. orthogonal to a normal of $F_i$; hence $u_1 - u_0 \in \tilde{C}(F_i)$ for every i, contradicting the assumption.

Examples: ray-traps
- Definition: Given a set of n planar curves, a ray-trap of length n is a set of n points $\{P_i = C_i(u_i)\}$ such that a ray bouncing from $P_i$ toward $P_{(i+1) \bmod n}$ is reflected toward $P_{(i+2) \bmod n}$.
- The incoming ray from $P_{i-1}$ and the outgoing ray toward $P_{i+1}$ must form identical angles with the normal of $C_i$ at $P_i$. This can be written as: $\left\langle \frac{P_{i-1} - P_i}{\|P_{i-1} - P_i\|}, N_i \right\rangle = \left\langle \frac{P_{i+1} - P_i}{\|P_{i+1} - P_i\|}, N_i \right\rangle$.

Examples: ray-traps (figure)

Examples: surface-surface bisectors
- Let $S_1(u, v)$ and $S_2(s, t)$ be two regular rational surfaces in $R^3$.
- A point $B = (x, y, z)$ on the bisector surface of $S_1$ and $S_2$ must satisfy the following constraints: B lies along the normal of each surface, $\langle B - S_1, \partial S_1 / \partial u \rangle = 0$, $\langle B - S_1, \partial S_1 / \partial v \rangle = 0$, $\langle B - S_2, \partial S_2 / \partial s \rangle = 0$, $\langle B - S_2, \partial S_2 / \partial t \rangle = 0$, and B is equidistant from the two surfaces: $\|B - S_1\|^2 = \|B - S_2\|^2$.

Examples: bisectors
- The four tangency constraints are linear in B = (x, y, z).
- Then we can choose three of them to solve for B as a rational function of (u, v, s, t), and the problem is reduced to the two remaining equations in the four parameters u, v, s, t.

Examples: bisectors
- 4 variables with 2 equations: an under-constrained problem.
- We can fit a bivariate surface to discrete points sampled on the solution space.

Examples: bisectors (figure)

Further improvements
- Preconditioning: transformation of the system f = 0 into an equivalent system Mf = 0, where M is an n x n nonsingular matrix.
  - Global transformation: useful when two or more functions have close graphs on the domain; aims to increase the distance between them. Since the distance between two graphs is not trivial to compute, it is replaced by the distance between the vectors of Bernstein-basis coefficients.
  - Local straightening: an interesting situation for reduction is when the zero sets of the functions $f_i$ are orthogonal to the $x_i$ directions.

Preconditioning: global transformation
- If $f_i(u) = \sum_{I \le M} c_{i,I} B_{I,M}(u)$, define the inner product between functions as the dot product of their Bernstein coefficient vectors: $\langle f_i, f_j \rangle = \sum_{I \le M} c_{i,I} \, c_{j,I}$.
- Let $Q = (\langle f_i, f_j \rangle)_{i,j}$ and let E be a matrix of unitary eigenvectors of Q. Then f' = Ef is a system of polynomials which are orthogonal for the inner product above.
- Proof: $\langle f'_i, f'_j \rangle = (E Q E^T)_{ij} = \lambda_i \delta_{ij}$, since E diagonalizes Q.
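A sketch of this global preconditioner on the coefficient vectors, using numpy.linalg.eigh:

```python
import numpy as np

def precondition(coeffs):
    """Global preconditioning sketch. `coeffs` is an (n, N) array whose
    rows are the Bernstein coefficient vectors of f_1..f_n. Q is the Gram
    matrix of the coefficient inner product; transforming by Q's
    eigenvector matrix makes the rows of the result mutually orthogonal."""
    Q = coeffs @ coeffs.T                 # Q_ij = <f_i, f_j>
    _, E = np.linalg.eigh(Q)              # orthonormal eigenvectors (columns)
    return E.T @ coeffs                   # rows of E.T are the eigenvectors
```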

Preconditioning: global transformation (example figure)

Preconditioning: local straightening
- A better domain reduction can be achieved when the zero levels of the functions $f_i$ are orthogonal to the $x_i$ axes.
- This is done by transforming the system f = 0 so that it is close to that case (case (b)): locally transform the system into $J_f^{-1}(u_0) \, f$, where $J_f$ is the Jacobian matrix of f at the point $u_0$.

Questions? Thanks

Backup

Improvements: examples (figure)

Further improvements: reduction
- For a function f and j = 1, ..., n, let $m_j(f; u_j)$ and $M_j(f; u_j)$ be the univariate polynomials in the Bernstein basis whose i'th coefficients are, respectively, the minimum and the maximum of the coefficients of f over all multi-indices whose j'th entry equals i.
- Then $m_j(f; u_j) \le f(u) \le M_j(f; u_j)$ for every u whose j'th coordinate is $u_j$.
- For any root $u = (u_1, u_2, \dots, u_n)$ of the equation f(u) = 0, we have $m_j(f; u_j) \le 0 \le M_j(f; u_j)$, hence $u_j \in [t_1, t_2]$, where $t_1$ (resp. $t_2$) is the first (resp. last) root of $m_j(f; u_j) = 0$ or $M_j(f; u_j) = 0$ in $[a_j, b_j]$, or $a_j$ (resp. $b_j$) if there is no such root. A sketch of extracting these bound polynomials follows.
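Extracting the Bernstein coefficients of $m_j$ and $M_j$ is a min/max over all axes except j; a sketch assuming an n-dimensional coefficient array:

```python
import numpy as np

def reduction_bounds(c, j):
    """Bernstein coefficients of m_j(f; u_j) and M_j(f; u_j) (a sketch).
    `c` is the n-dimensional Bernstein coefficient array of f; for each
    layer i along axis j, take the min / max over all remaining indices."""
    other_axes = tuple(k for k in range(c.ndim) if k != j)
    return c.min(axis=other_axes), c.max(axis=other_axes)
```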

Further improvements: reduction
- Similar to the PP (Projected Polyhedron) method, reduction shrinks the search domain; PP is based on the convex hull property.
- The improvement consists of computing the first/last root of the polynomials $m_j(f_k; u_j)$ / $M_j(f_k; u_j)$ in the interval $[a_j, b_j]$ and keeping the interval $[t_1, t_2]$.
- Analysis shows that with these improvements we get quadratic convergence.