# Computing the Rational Univariate Reduction by Sparse Resultants


Koji Ouchi, John Keyser, J. Maurice Rojas
Departments of Computer Science and Mathematics, Texas A&M University
ACA 2004

## Outline

- What is the Rational Univariate Reduction?
- Computing the RUR by sparse resultants
- Complexity analysis
- Exact implementation

## Rational Univariate Reduction

- Problem: solve a system of n polynomials f_1, …, f_n in n variables X_1, …, X_n with coefficients in a field K.
- Reduce the system to n + 1 univariate polynomials h, h_1, …, h_n with coefficients in K such that if θ is a root of h, then (h_1(θ), …, h_n(θ)) is a solution of the system.
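The shape of such a reduction can be illustrated on a tiny hypothetical system (f_1 = x² + y² − 5, f_2 = xy − 2, not from the talk), using a dense Sylvester resultant in sympy rather than the talk's sparse-resultant machinery; here x itself happens to be a separating form, so h_1(T) = T:

```python
import sympy as sp

x, y, T = sp.symbols('x y T')

# Hypothetical toy system (not from the talk): f1 = x^2 + y^2 - 5, f2 = x*y - 2.
f1 = x**2 + y**2 - 5
f2 = x*y - 2

# h(T): eliminate y with a dense Sylvester resultant (the talk's method
# would instead use the sparse resultant of a perturbed system).
h = sp.resultant(f1, f2, y).subs(x, T)

# Parametrize the y-coordinate: from f2, y = 2/x, so reduce 2*T^(-1)
# modulo h to get a polynomial h2(T).
h1 = T                                  # x-coordinate
inv = sp.invert(T, h)                   # T^(-1) modulo h
h2 = sp.rem(sp.expand(2*inv), h, T)     # y-coordinate
```

Every root θ of h then yields the solution (h_1(θ), h_2(θ)) of the system.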

## RUR via Sparse Resultants

Notation:

- e_i: the i-th standard basis vector; A_0 = {O, e_1, …, e_n}, the support of the generic linear form ∑_{a ∈ A_0} u_a X^a = u_0 + u_1 X_1 + … + u_n X_n
- u_0, u_1, …, u_n: indeterminates
- A_i = Supp(f_i), the support of f_i
- K̄: the algebraic closure of K

## Toric Perturbation

Toric generalized characteristic polynomial:

- Let f_1^*, …, f_n^* be n polynomials in the variables X_1, …, X_n with coefficients in K and Supp(f_i^*) ⊆ A_i = Supp(f_i), i = 1, …, n, having only finitely many solutions in (K̄ \ {0})^n.
- Define TGCP(u, Y) = Res_(A_0, A_1, …, A_n)(∑_{a ∈ A_0} u_a X^a, f_1 − Y f_1^*, …, f_n − Y f_n^*).

Toric perturbation [Rojas 99]:

- Define Pert(u) to be the nonzero coefficient of the lowest-degree term (in Y) of TGCP(u, Y).
- Pert(u) is well defined.
- This is a version of the "projective operator" technique [Rojas 98; D'Andrea and Emiris 03].
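Extracting that lowest-degree coefficient is a small, mechanical step; a sketch with a hypothetical TGCP (invented for illustration, its constant term vanishing the way an unperturbed resultant does on a degenerate input):

```python
import sympy as sp

Y, u0, u1 = sp.symbols('Y u0 u1')

def trailing_coeff(expr, var):
    """Nonzero coefficient of the lowest-degree term of expr in var."""
    p = sp.Poly(expr, var)
    # all_coeffs() runs from the highest-degree term down to the constant
    for c in reversed(p.all_coeffs()):
        if c != 0:
            return sp.expand(c)
    return sp.S.Zero

# Hypothetical TGCP whose constant term (the unperturbed resultant)
# vanishes identically; Pert is read off from the Y^1 coefficient.
tgcp = (u0*u1 + u1**2)*Y**2 + (u0**2 - u1**2)*Y
pert = trailing_coeff(tgcp, Y)
```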

Properties of the toric perturbation:

- If (ζ_1, …, ζ_n) ∈ (K̄ \ {0})^n is an isolated root of the input system f_1, …, f_n, then u_0 + u_1 ζ_1 + … + u_n ζ_n divides Pert(u).
- Pert(u) splits completely into linear factors over K̄. For every irreducible component of the zero set of the input system, there is at least one corresponding factor of Pert(u).
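The splitting can be seen concretely: taking the four toric roots of the hypothetical toy system f_1 = x² + y² − 5, f_2 = xy − 2 (an illustration, not an output of the talk's algorithm), the product of one linear factor per root expands to a polynomial with coefficients back in K:

```python
import sympy as sp

u0, u1, u2 = sp.symbols('u0 u1 u2')

# Hypothetical example: the toric roots of f1 = x^2 + y^2 - 5, f2 = x*y - 2.
roots = [(1, 2), (2, 1), (-1, -2), (-2, -1)]

# Up to a constant, Pert(u) is a product of one linear factor per root;
# although each factor involves the root coordinates, the expanded
# product has coefficients in K (here, the integers).
pert = sp.expand(sp.prod(u0 + u1*a + u2*b for (a, b) in roots))
```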

## Computing the RUR

Step 1: compute Pert(u).

- Use Emiris' sparse resultant algorithm [Canny and Emiris 93, 95, 00] to construct a Newton matrix whose determinant is some multiple of the resultant.
- Evaluate the resultant at distinct values of u and interpolate.
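The evaluate-and-interpolate pattern behind Step 1 can be sketched as follows; the black box below is hypothetical (in the algorithm it would be a Newton-matrix determinant evaluated at a specific u, here it is faked with a known cubic):

```python
import sympy as sp

T = sp.symbols('T')

def interpolate_from_evals(black_box, degree_bound, var):
    """Recover a univariate polynomial from evaluations at distinct
    points, given a bound on its degree."""
    points = [(k, black_box(k)) for k in range(degree_bound + 1)]
    return sp.expand(sp.interpolate(points, var))

# Hypothetical black box standing in for a determinant evaluation.
black_box = lambda k: 2*k**3 - k + 7
recovered = interpolate_from_evals(black_box, 3, T)
```

A degree bound on the target polynomial (here, the total degree R_A of the sparse resultant) dictates how many evaluations are needed.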

Step 2: compute h(T).

- Set h(T) = Pert(T, u_1, …, u_n) for fixed values of u_1, …, u_n.
- Evaluate Pert(u) at distinct values of u_0 and interpolate.
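The specialization in Step 2 can be sketched on a hypothetical split form of Pert(u) (invented toric roots, values 3 and 5 chosen so the linear form separates them):

```python
import sympy as sp

u0, u1, u2, T = sp.symbols('u0 u1 u2 T')

# Hypothetical split form of Pert(u) for a toy system whose toric roots
# are listed below (not an output of the talk's algorithm).
roots = [(1, 2), (2, 1), (-1, -2), (-2, -1)]
pert = sp.prod(u0 + u1*a + u2*b for (a, b) in roots)

# Step 2: fix values for u1, ..., un and let u0 play the role of T.
h = sp.expand(pert.subs({u0: T, u1: 3, u2: 5}))
# Each root of h is the value -(3a + 5b) of the linear form at a solution (a, b).
```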

Step 3: compute h_1(T), …, h_n(T).

Computing each h_i involves:

- evaluating Pert(u),
- interpolating the values, and
- some univariate polynomial operations.

## Complexity Analysis

We count the number of arithmetic operations.

Notation:

- O˜( ): polylogarithmic factors are ignored.
- ω: the exponent of Gaussian elimination; eliminating an m-dimensional matrix requires O(m^ω) operations.

Quantities:

- M_A: the mixed volume MV(A_1, …, A_n) of the convex hulls of A_1, …, A_n.
- R_A: MV(A_1, …, A_n) + ∑_{i = 1, …, n} MV(A_0, A_1, …, A_{i−1}, A_{i+1}, …, A_n), the total degree of the sparse resultant.
- S_A: the dimension of the Newton matrix; possibly exponentially larger than R_A.
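For two polynomials in two variables, the mixed volume can be computed directly from Minkowski-sum areas; a self-contained sketch on the hypothetical supports of f_1 = x² + y² − 5 and f_2 = xy − 2 (an illustrative example, not the talk's mixed-volume algorithm):

```python
from itertools import product

def cross(o, a, b):
    """Z-component of (a - o) x (b - o)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Andrew's monotone-chain convex hull, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(points):
    """Twice the area of the convex hull of the points (shoelace)."""
    h = hull(points)
    return abs(sum(h[i][0]*h[(i+1) % len(h)][1] -
                   h[(i+1) % len(h)][0]*h[i][1] for i in range(len(h))))

def mixed_volume_2d(A1, A2):
    """Normalized mixed volume in 2D: area(A1+A2) - area(A1) - area(A2),
    which equals Bernstein's bound on the number of toric roots."""
    msum = [(p[0]+q[0], p[1]+q[1]) for p, q in product(A1, A2)]
    return (area2(msum) - area2(A1) - area2(A2)) // 2

# Hypothetical example: supports of f1 = x^2 + y^2 - 5 and f2 = x*y - 2.
m = mixed_volume_2d([(0, 0), (2, 0), (0, 2)], [(0, 0), (1, 1)])
```

The value m = MV(A_1, A_2) bounds the number of toric roots, matching Bernstein's theorem.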

[Emiris and Canny 95]: evaluating Res_(A_0, A_1, …, A_n)(∑_{a ∈ A_0} u_a X^a, f_1, …, f_n) requires O˜(n R_A S_A^ω) arithmetic operations, or O˜(S_A^{1+ε}) if char K = 0.

[Rojas 99]: evaluating Pert(u) requires O˜(n R_A^2 S_A^ω) arithmetic operations, or O˜(S_A^{1+ε}) if char K = 0.

Computing h(T) requires O˜(n M_A R_A^2 S_A^ω) arithmetic operations, or O˜(M_A S_A^{1+ε}) if char K = 0.

Computing each h_i(T) requires O˜(n M_A R_A^2 S_A^ω) arithmetic operations, or O˜(M_A S_A^{1+ε}) if char K = 0.

Computing the RUR h(T), h_1(T), …, h_n(T) for fixed u_1, …, u_n requires O˜(n^2 M_A R_A^2 S_A^ω) arithmetic operations, or O˜(n M_A S_A^{1+ε}) if char K = 0.

Derandomizing the choice of u_1, …, u_n: computing the RUR h(T), h_1(T), …, h_n(T) requires O˜(n^4 M_A^3 R_A^2 S_A^ω) arithmetic operations, or O˜(n^3 M_A^3 S_A^{1+ε}) if char K = 0.

Summary (all entries inside O˜(·)):

| | Emiris division | Emiris GCD (char K = 0) | "Small" Newton matrix |
|---|---|---|---|
| Evaluating Res | n R_A S_A^ω | S_A^{1+ε} | R_A |
| Evaluating Pert | n R_A^2 S_A^ω | S_A^{1+ε} | R_A^{1+ε} |
| RUR for fixed u | n^2 M_A R_A^2 S_A^ω | n M_A S_A^{1+ε} | n M_A R_A^{1+ε} |
| RUR | n^4 M_A^3 R_A^2 S_A^ω | n^3 M_A^3 S_A^{1+ε} | n^3 M_A^3 R_A^{1+ε} |

A great speedup would be achieved if one could compute a "small" Newton matrix whose determinant is exactly the resultant, but no such method is known in general.

## Khetan's Method

- Khetan's method gives a Newton matrix whose determinant is exactly the resultant of an unmixed system when n = 2 or 3 [Khetan 03, 04].
- Let B = A_0 ∪ A_1 ∪ … ∪ A_n. Then computing the RUR requires O˜(n^3 M_A^3 R_B^{1+ε}) arithmetic operations.

## ERUR: Implementation

Current implementation:

- The coefficients are rational numbers.
- The sparse resultant algorithm [Emiris and Canny 93, 95, 00] is used to construct the Newton matrix.
- All coefficients of the RUR h, h_1, …, h_n are exact.

- A non-square system is converted to a square system.
- Solutions in K̄^n are computed by adding the origin O to the supports.

Exact sign:

- Given an expression e, decide whether or not e(h_1(θ), …, h_n(θ)) = 0.
- Use an (extended) root-bound approach.
- Use Aberth's method [Aberth 73] to numerically approximate a root of h to any desired precision.
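Aberth's method refines all roots of a univariate polynomial simultaneously; a minimal sketch (an illustrative implementation with naive initial guesses, not ERUR's production code), applied to the hypothetical h(T) = T⁴ − 5T² + 4 whose roots are ±1 and ±2:

```python
import cmath

def aberth(coeffs, tol=1e-14, max_iter=500):
    """Aberth's simultaneous root-finding iteration for a univariate
    polynomial given by coefficients [a_n, ..., a_1, a_0]."""
    n = len(coeffs) - 1
    def p(z):                       # Horner evaluation of p
        r = 0
        for c in coeffs:
            r = r*z + c
        return r
    def dp(z):                      # Horner evaluation of p'
        r = 0
        for i, c in enumerate(coeffs[:-1]):
            r = r*z + c*(n - i)
        return r
    # naive initial guesses spread on a circle, rotated off the real axis
    zs = [cmath.rect(1 + abs(coeffs[-1])**(1/n), 2*cmath.pi*k/n + 0.4)
          for k in range(n)]
    for _ in range(max_iter):
        offsets = []
        for k, zk in enumerate(zs):
            w = p(zk)/dp(zk)        # Newton correction
            s = sum(1/(zk - zj) for j, zj in enumerate(zs) if j != k)
            offsets.append(w/(1 - w*s))   # Aberth correction
        zs = [zk - o for zk, o in zip(zs, offsets)]
        if max(abs(o) for o in offsets) < tol:
            break
    return zs

# Hypothetical example: h(T) = T^4 - 5*T^2 + 4, with roots ±1 and ±2.
approx = aberth([1, 0, -5, 0, 4])
```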

## Applications of ERUR

- Real roots: given a system of polynomial equations, list all real roots of the system.
- Positive-dimensional components: given a system of polynomial equations, decide whether or not the zero set of the system has a positive-dimensional component.
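The real-root application is immediate once an RUR is in hand: the real roots of the system are exactly the images of the real roots of h under (h_1, …, h_n). A sketch using a hypothetical RUR for the toy system f_1 = x² + y² − 5, f_2 = xy − 2 (the shape of the output, not a result quoted from the talk):

```python
import sympy as sp

T, x, y = sp.symbols('T x y')

# Hypothetical RUR for f1 = x^2 + y^2 - 5, f2 = x*y - 2:
h = T**4 - 5*T**2 + 4
h1 = T
h2 = (5*T - T**3) / 2

# Real roots of the system = real roots of h pushed through (h1, h2).
real_solutions = [(h1.subs(T, r), h2.subs(T, r)) for r in sp.real_roots(h)]
```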

These applications are presented in today's last talk in Session 3, "Applying Computer Algebra Techniques for Exact Boundary Evaluation" (4:30–5:00 pm).

## Other RUR Algorithms

- GB+RS [Rouillier 99, 04]: computes the exact RUR for the real solutions of a zero-dimensional system; GB computes the Gröbner basis.
- [Giusti, Lecerf and Salvy 01]: an iterative method.
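The Gröbner-basis route taken by GB+RS can be sketched on a hypothetical system: a lexicographic basis of a zero-dimensional ideal contains a univariate eliminant, the starting point for extracting an RUR of the real solutions:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example system (not from the talk); lex order with y > x
# pushes the basis into "triangular" form.
G = sp.groebner([x**2 + y**2 - 5, x*y - 2], y, x, order='lex')

# The last element of the reduced lex basis involves only x.
eliminant = G.exprs[-1]
```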

## Conclusion

ERUR:

- is strong at handling degeneracies, and
- needs more optimization and faster algorithms.

## Future Work

- Faster sparse resultant algorithms.
- Take advantage of the sparseness of the matrices [Emiris and Pan 97].
- Faster univariate polynomial operations.

## Thank You for Listening!

Contact:

- Koji Ouchi, kouchi@cs.tamu.edu
- John Keyser, keyser@cs.tamu.edu
- Maurice Rojas, rojas@math.tamu.edu

Project web page: http://research.cs.tamu.edu/keyser/geom/erur/
