
1 A Numerical Approach toward Approximate Algebraic Computation. Zhonggang Zeng, Northeastern Illinois University, USA. Oct. 18, 2006, Institute of Mathematics and its Applications

2 What would happen when we try numerical computation on algebraic problems? A numerical analyst got a surprise 50 years ago on a deceptively simple problem.

3 James H. Wilkinson (1919-1986) and Britain's Pilot ACE. Start of project: 1948. Completed: 1950. Add time: 1.8 microseconds. Input/output: cards. Memory size: 352 32-digit words. Memory type: delay lines. Technology: 800 vacuum tubes. Floor space: 12 square feet. Project leader: J. H. Wilkinson.

4 The Wilkinson polynomial: p(x) = (x-1)(x-2)...(x-20) = x^20 - 210 x^19 + 20615 x^18 - ... Wilkinson wrote in 1984: "Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst."
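The sensitivity Wilkinson encountered is easy to reproduce today. The following numpy sketch (an illustration added here, not part of the original slides) perturbs the x^19 coefficient by 2^-23, the size of perturbation in Wilkinson's account, and watches the roots scatter:

```python
import numpy as np

# Coefficients of p(x) = (x - 1)(x - 2)...(x - 20); the larger coefficients
# are already rounded when stored in double precision.
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's experiment: lower the x^19 coefficient (-210) by 2^-23.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

roots = np.roots(perturbed)

# A ~1e-7 change in one coefficient moves several roots by O(1) and
# drives some of them off the real axis entirely.
print(np.max(np.abs(roots.imag)))
```

The imaginary parts that appear are of order 1, even though the data changed only in the eighth digit.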

5

6 Matrix rank problem

7 Factoring a multivariate polynomial: a factorable polynomial may have only irreducible polynomials as its approximations.

8 Solving polynomial systems. Example: a distorted cyclic-four system. Translation: there are two 1-dimensional solution sets.

9 The distorted cyclic-four system in floating point form: the 1-dimensional solution sets degenerate into isolated solutions in the approximation.

10 A tiny perturbation in the data (< 10^-8) causes a huge error in the solution (> 10^5).

11 What could happen in approximate algebraic computation?
- "traumatic" error
- dramatic deformation of the solution structure
- complete loss of solutions
- miserable failure of classical algorithms: polynomial division, the Euclidean algorithm, Gaussian elimination, determinants, ...

12 So, why bother with approximation in algebra? 1. You may have no choice (e.g. Abel's Impossibility Theorem); all subsequent computations then become approximate.

13 So, why bother with approximate solutions? 1. You may have no choice. 2. Approximate solutions are better! Application: image restoration (Pillai & Liang): true image vs. blurred image.

14 Application: image restoration (Pillai & Liang): true image, blurred image, restored image. The approximate solution is better than the exact solution!

15 Perturbed Cyclic 4. Exact solutions by Maple: 16 isolated (codim 4) solutions. Approximate solutions by Bertini (courtesy of Bates, Hauenstein, Sommese, Wampler).

16 Perturbed Cyclic 4. Exact solutions by Maple: 16 isolated (codim 4) solutions. Or, by an experimental approximate elimination combined with approximate GCD. Approximate solutions are, arguably, better than exact ones.

17 So, why bother with approximate solutions? 1. You may have no choice. 2. Approximate solutions are better. 3. Approximate solutions (usually) cost less. Example: JCF computation. In one special case, Maple takes 2 hours; on a similar 8x8 matrix, Maple and Mathematica run out of memory.

18 Pioneering work in numerical algebraic computation (an incomplete list): homotopy methods for solving polynomial systems (Li, Sommese, Wampler, Verschelde, ...); Numerical Polynomial Algebra (Stetter); Numerical Algebraic Geometry (Sommese, Wampler, Verschelde, ...).

19 What is an "approximate solution"? To solve an equation with 8 digits of precision: backward error 0.00000001 (the method is good); forward error 0.0001 (the problem is bad). The approximate solution computed with 8-digit precision is the exact solution of a nearby problem: the backward error measures the distance between the problems, the forward error the distance between the exact and computed solutions.
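To make the backward/forward split concrete, here is a small numpy illustration (a hypothetical double-root example chosen to match the error sizes on the slide, not necessarily the slide's own equation):

```python
import numpy as np

# p(x) = (x - 1)^2 has the exact double root x = 1.  Perturbing its
# constant coefficient by 1e-8 (a backward error in the 8th digit)
# moves the roots to 1 +/- 1e-4 (a forward error in the 4th digit).
p = np.array([1.0, -2.0, 1.0])        # x^2 - 2x + 1 = (x - 1)^2
q = p.copy()
q[2] -= 1e-8                           # backward error: 1e-8

roots = np.roots(q)
forward_error = np.max(np.abs(roots - 1.0))
print(forward_error)                   # ~1e-4: 10,000x the backward error
```

The square-root amplification (10^-8 in the data, 10^-4 in the answer) is characteristic of a double root.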

20 The condition number: [forward error] ≤ [condition number] x [backward error]. The backward error comes from the numerical method; the condition number comes from the problem. A large condition number means the problem is sensitive, or ill-conditioned. An infinite condition number means the problem is ill-posed.

21 Wilkinson's Turing Award contribution: backward error analysis. A numerical algorithm solves a "nearby" problem. A "good" algorithm may still get a "bad" answer if the problem is ill-conditioned (bad).

22 A well-posed problem (Hadamard, 1923): the solution satisfies existence, uniqueness, and continuity w.r.t. the data. An ill-posed problem is infinitely sensitive to perturbation: a tiny perturbation causes a huge error. Ill-posed problems are common in applications: image restoration, deconvolution, the IVP for a stiction-damped oscillator, inverse heat conduction, some optimal control problems, electromagnetic inverse scattering, air-sea heat flux estimation, the Cauchy problem for the Laplace equation, ...

23 Ill-posed problems are common in algebraic computing:
- multiple roots
- polynomial GCD
- factorization of multivariate polynomials
- the Jordan Canonical Form
- multiplicity structure/zeros of polynomial systems
- matrix rank

24 "If the answer is highly sensitive to perturbations, you have probably asked the wrong question." (Maxims about numerical mathematics, computers, science and life, L. N. Trefethen, SIAM News.) Does that mean (most) algebraic problems are the wrong problems? A numerical algorithm seeks the exact solution of a nearby problem, and ill-posed problems are infinitely sensitive to data perturbation. Conclusion: numerical computation is incompatible with ill-posed problems. Solution: formulate the right problem.

25 A problem is a map P: Data -> Solution. Challenge in solving ill-posed problems: can we recover the lost solution when the problem is inexact?

26 Are ill-posed problems really sensitive to perturbations? William Kahan: this is a misconception. Kahan's discovery in 1972: ill-posed problems are sensitive to arbitrary perturbations, but insensitive to structure-preserving perturbations.

27 Why are ill-posed problems infinitely sensitive? W. Kahan's observation (1972): problems with a certain solution structure form a "pejorative manifold". (Plot: pejorative manifolds of degree-3 polynomials with multiple roots.) The solution structure is lost when the problem leaves the manifold due to an arbitrary perturbation. The problem may not be sensitive at all if it stays on the manifold, unless it is near another pejorative manifold.
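Kahan's observation can be checked numerically. The sketch below (an illustration added here, not from the slides) parameterizes the pejorative manifold of monic cubics with a double root as G(a, b) = coefficients of (x - a)^2 (x - b), then compares a structure-preserving perturbation with an arbitrary one:

```python
import numpy as np

# The pejorative manifold of monic cubics with a double root is the image of
#   G(a, b) = coefficients of (x - a)^2 (x - b)
def G(a, b):
    return np.array([-(2 * a + b), a * a + 2 * a * b, -a * a * b])

a0, b0 = 1.0, 3.0
c = G(a0, b0)                          # (x - 1)^2 (x - 3)

# Structure-preserving perturbation: move along the manifold.
c_struct = G(a0 + 1e-8, b0)
# Arbitrary perturbation: step off the manifold.
c_arb = c + np.array([0.0, 0.0, 1e-8])

roots_struct = np.roots(np.concatenate(([1.0], c_struct)))
roots_arb = np.roots(np.concatenate(([1.0], c_arb)))

print(np.sort(roots_struct.real))   # still a double root near 1
print(np.sort(roots_arb.real))      # the double root splits by ~1e-4
```

On the manifold a 10^-8 change stays a 10^-8 change; off the manifold the same-size change splits the double root by about 10^-4.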

28 Geometry of ill-posed algebraic problems: a similar manifold stratification exists for problems like factorization, JCF, multiple roots, ...

29 Manifolds of 4x4 matrices defined by Jordan structures (Edelman, Elmroth and Kagstrom, 1997). E.g. {2,1}{1} is the structure of 2 eigenvalues in 3 Jordan blocks of sizes 2, 1 and 1.

30 Illustration of pejorative manifolds: problems A and B, each under perturbation. The "nearest" manifold may not be the answer; the right manifold is the one of the highest codimension within a certain distance.

31 A "three-strikes" principle for formulating an "approximate solution" to an ill-posed problem:
- Backward nearness: the approximate solution is the exact solution of a nearby problem.
- Maximum codimension: the approximate solution is the exact solution of a problem on the nearby pejorative manifold of the highest codimension.
- Minimum distance: the approximate solution is the exact solution of the nearest problem on the nearby pejorative manifold of the highest codimension.
Finding the approximate solution is (likely) a well-posed problem. The approximate solution is a generalization of the exact solution.

32 Continuity of the approximate solution:

33 Formulation of the approximate rank/kernel. The approximate rank of A within ε:
- Backward nearness: the app-rank of A is the exact rank of a certain matrix B within distance ε of A.
- Maximum codimension: that matrix B is on the pejorative manifold possessing the highest codimension and intersecting the ε-neighborhood of A.
- Minimum distance: that B is the nearest matrix on the pejorative manifold; the approximate kernel of A within ε is the exact kernel of B.
An exact rank is the app-rank within sufficiently small ε. The app-rank is continuous (well-posed).

34 Example: a matrix A of rank 4 and nullity 2; after a perturbation E, the exact rank of A + E is 6 with nullity 0, yet the approximate rank of A + E within ε is still 4, with approximate nullity 2 and a recoverable kernel basis. After reformulating the rank, the ill-posedness is removed. The app-rank/kernel can be computed by the SVD and other rank-revealing algorithms (e.g. Li-Zeng, SIMAX, 2005).
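As a sketch of how the SVD realizes this formulation (a minimal illustration added here, not the Li-Zeng algorithm itself): the approximate rank within ε counts the singular values above ε, and the truncated SVD gives the nearest matrix of that rank:

```python
import numpy as np

def approx_rank(A, eps):
    # App-rank within eps: the number of singular values above eps.  The
    # truncated SVD (Eckart-Young) is the nearest matrix of that rank, and
    # the trailing right singular vectors span the approximate kernel.
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > eps))
    B = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    kernel = Vt[r:, :].T
    return r, B, kernel

rng = np.random.default_rng(0)
exact = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))  # rank 2
noisy = exact + 1e-9 * rng.standard_normal((5, 5))                 # exact rank 5

r, B, K = approx_rank(noisy, 1e-6)
print(r)    # the app-rank within 1e-6 recovers 2 despite the noise
```

The noisy matrix has full exact rank, but its app-rank within 10^-6 is the rank of the underlying data.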

35 Formulation of the approximate GCD. The AGCD within ε: finding the AGCD of (f, g) is well-posed when ε is sufficiently small. The exact GCD is a special case of the AGCD for sufficiently small ε. (Z. Zeng, Approximate GCD of inexact polynomials, parts I & II.)

36 A similar formulation strikes out the ill-posedness in problems such as:
- approximate rank/kernel (Li-Zeng 2005; Lee, Li, Zeng 2006)
- approximate multiple roots/factorization (Zeng 2005)
- approximate GCD (Zeng-Dayton 2004; Gao-Kaltofen-May-Yang-Zhi 2004)
- approximate Jordan Canonical Form (Zeng-Li 2006)
- approximate irreducible factorization (Sommese-Wampler-Verschelde 2003; Gao et al. 2003, 2004, in progress)
- approximate dual basis and multiplicity structure (Dayton-Zeng 2005; Bates-Peterson-Sommese 2006)
- approximate elimination ideal (in progress)

37 The two-staged algorithm, after formulating the approximate solution to problem P within ε. Stage I: find the pejorative manifold of the highest codimension intersecting the ε-neighborhood of P. Stage II: find/solve the problem Q on that manifold nearest to P. The exact solution of Q is the approximate solution of P within ε, which approximates the solution of S when P is perturbed from S.

38 Case study: univariate approximate GCD. Stage I: find the pejorative manifold. Stage II: solve the (overdetermined) quadratic system for a least squares solution (u, v, w) by the Gauss-Newton iteration. (Key theorem: the Jacobian of F(u, v, w) is injective.)
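One way to see Stage I at work (a simplified stand-in using the classical Sylvester matrix, not the algorithm on the slide): the degree of the GCD equals the rank deficiency of the Sylvester matrix, so for inexact f and g the near-zero singular values reveal a candidate AGCD degree:

```python
import numpy as np

def sylvester(f, g):
    # Classical Sylvester matrix of f (degree m) and g (degree n);
    # deg gcd(f, g) = m + n - rank(S).
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

# f = (x - 1)(x - 2) and g = (x - 1)(x - 3), perturbed in the 6th digit:
f = np.array([1.0, -3.0, 2.0]) + 1e-6 * np.array([0.3, -0.1, 0.2])
g = np.array([1.0, -4.0, 3.0]) + 1e-6 * np.array([-0.2, 0.4, 0.1])

s = np.linalg.svd(sylvester(f, g), compute_uv=False)
print(s)
# The smallest singular value is tiny while the rest are O(1): the
# Sylvester matrix is numerically rank-deficient by one, signaling an
# approximate GCD of degree 1 (the nearly common root x = 1).
```

A detected degree would then be refined by the Stage II least squares iteration described on the slide.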

39 Univariate AGCD algorithm. Start with k = n. Is an AGCD of degree k possible? If no, set k := k - 1 and repeat (max-codimension). If probably yes, refine with the Gauss-Newton iteration (min-distance). If the refinement is unsuccessful, set k := k - 1 and repeat; if successful, output the GCD (nearness).

40 Case study: multivariate approximate GCD. Stage I: find the max-codimension pejorative manifold by applying the univariate AGCD algorithm in each variable x_j. Stage II: solve the (overdetermined) quadratic system for a least squares solution (u, v, w) by the Gauss-Newton iteration. (Key theorem: the Jacobian of F(u, v, w) is injective.)

41 Case study: univariate factorization. Stage I: find the max-codimension pejorative manifold by applying the univariate AGCD algorithm to (f, f'). Stage II: solve the (overdetermined) polynomial system F(z_1, ..., z_k) = f, in the form of coefficient vectors, for a least squares solution (z_1, ..., z_k) by the Gauss-Newton iteration. (Key theorem: the Jacobian is injective.)

42 Case study: finding the nearest matrix B with a given Jordan structure. Example: an eigenvalue λ with Segre characteristic [3, 1], equivalently Weyr characteristic [2, 1, 1] (the conjugate partition, read off the Ferrers diagram). Equations determine the manifold of matrices A ~ J, with codim = -1 + 3 + 3(1) = 5; the staircase parameter matrix I + S carries the free entries s13, s14, s23, s24, s34.

43 Case study: finding the nearest matrix with a Jordan structure (continued). For B not on the manifold, we can still solve the equations determining the manifold for a least squares solution: when the residual is minimized, so is the distance from B to the manifold. The crucial requirement: the Jacobian is injective. (Zeng & Li, 2006)

44 Solving G(z) = a. The pejorative manifold is u = G(z); we solve G(z) = a for the nonlinear least squares solution z = z*, i.e. u* = G(z*). From the initial iterate u_0 = G(z_0), project a onto the tangent plane P_0: u = G(z_0) + J(z_0)(z - z_0), that is, solve G(z_0) + J(z_0)(z - z_0) = a for the linear least squares solution z = z_1:
J(z_0)(z - z_0) = -[G(z_0) - a]
z_1 = z_0 - J(z_0)^+ [G(z_0) - a]
This gives the projection ũ_1 = G(z_0) + J(z_0)(z_1 - z_0) on the tangent plane and the new iterate u_1 = G(z_1).
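The update on this slide can be written directly in numpy. The sketch below (an illustration with a hypothetical choice of manifold: monic cubics with a double root, G(z1, z2) = coefficients of (x - z1)^2 (x - z2)) applies z_{k+1} = z_k - J(z_k)^+ [G(z_k) - a] to a perturbed data vector a:

```python
import numpy as np

# Gauss-Newton for an overdetermined G(z) = a:
#   z_{k+1} = z_k - J(z_k)^+ [G(z_k) - a]
# Manifold (hypothetical example): monic cubics with a double root,
#   G(z1, z2) = coefficients of (x - z1)^2 (x - z2).
def G(z):
    z1, z2 = z
    return np.array([-(2 * z1 + z2), z1 * z1 + 2 * z1 * z2, -z1 * z1 * z2])

def J(z):
    # Jacobian of G, differentiated coefficient by coefficient.
    z1, z2 = z
    return np.array([[-2.0, -1.0],
                     [2 * z1 + 2 * z2, 2 * z1],
                     [-2 * z1 * z2, -z1 * z1]])

# Data: coefficients of (x - 1)^2 (x - 3), perturbed off the manifold.
a = G(np.array([1.0, 3.0])) + 1e-6 * np.array([0.5, -0.3, 0.8])

z = np.array([1.1, 2.9])                       # initial iterate z_0
for _ in range(8):
    z = z - np.linalg.pinv(J(z)) @ (G(z) - a)  # pseudo-inverse step

print(z)   # converges to ~(1, 3): the nearest cubic on the manifold
```

The iteration converges to the double-root cubic nearest to the perturbed data, exactly the geometric picture the slide draws.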

45 Stage I: find the nearby max-codim manifold. Stage II: find/solve the nearest problem on the manifold via solving an overdetermined system G(z) = a for a least squares solution z* s.t. ||G(z*) - a|| = min_z ||G(z) - a||, by the Gauss-Newton iteration. Key requirement: the Jacobian J(z*) of G(z) at z* is injective (i.e. the pseudo-inverse exists); its condition number serves as the sensitivity measure.

46 Summary:
- An (ill-posed) algebraic problem can be formulated using the three-strikes principle (backward nearness, maximum codimension, and minimum distance) to remove the ill-posedness.
- The reformulated problem can be solved by numerical computation in two stages (finding the manifold, then solving a least squares problem).
- The combined numerical approach leads to the Matlab/Maple toolbox ApaTools for approximate polynomial algebra, consisting of: univariate/multivariate GCD, matrix rank/kernel, dual basis for a polynomial ideal, univariate factorization, irreducible factorization, elimination ideal, ...
(To be continued in the workshop next week.)

