Karatsuba’s Algorithm for Integer Multiplication

Presentation transcript:

Karatsuba’s Algorithm for Integer Multiplication
September 4, 1997
Jeremy R. Johnson

Introduction
Objective: To derive a family of asymptotically fast integer multiplication algorithms using polynomial interpolation.
- Karatsuba’s algorithm
- Polynomial algebra
- Interpolation
- Vandermonde matrices
- Toom-Cook algorithm
- Polynomial multiplication using interpolation
- Faster algorithms for integer multiplication
References: Lipson; Cormen et al.

Karatsuba’s Algorithm
Using the classical pen-and-paper algorithm, two n-digit integers can be multiplied in O(n^2) operations. Karatsuba came up with a faster algorithm.
Let A and B be two integers with
A = A1·10^k + A0, A0 < 10^k
B = B1·10^k + B0, B0 < 10^k
C = A·B = (A1·10^k + A0)(B1·10^k + B0)
        = A1B1·10^2k + (A1B0 + A0B1)·10^k + A0B0
Instead, this can be computed with 3 multiplications:
T0 = A0B0
T1 = (A1 + A0)(B1 + B0)
T2 = A1B1
C = T2·10^2k + (T1 - T0 - T2)·10^k + T0
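A minimal Python sketch of this recursion (not from the original slides; the decimal split mirrors the slide, and the function name karatsuba is my own):

```python
def karatsuba(a, b):
    """Multiply non-negative integers a and b with Karatsuba's 3-multiplication recursion."""
    if a < 10 or b < 10:              # base case: single-digit operand
        return a * b
    k = max(len(str(a)), len(str(b))) // 2
    A1, A0 = divmod(a, 10**k)         # a = A1*10^k + A0, with A0 < 10^k
    B1, B0 = divmod(b, 10**k)         # b = B1*10^k + B0, with B0 < 10^k
    T0 = karatsuba(A0, B0)
    T1 = karatsuba(A1 + A0, B1 + B0)
    T2 = karatsuba(A1, B1)
    return T2 * 10**(2 * k) + (T1 - T0 - T2) * 10**k + T0

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```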

Complexity of Karatsuba’s Algorithm
Let T(n) be the time to compute the product of two n-digit numbers using Karatsuba’s algorithm. Assume n = 2^k. Then T(n) = Θ(n^lg 3), where lg 3 ≈ 1.58.
T(n) ≤ 3T(n/2) + cn
     ≤ 3(3T(n/4) + c(n/2)) + cn = 3^2 T(n/2^2) + cn(3/2 + 1)
     ≤ 3^2(3T(n/2^3) + c(n/4)) + cn(3/2 + 1) = 3^3 T(n/2^3) + cn(3^2/2^2 + 3/2 + 1)
     …
     ≤ 3^i T(n/2^i) + cn(3^(i-1)/2^(i-1) + … + 3/2 + 1)
     …
     ≤ c·3^k + cn[((3/2)^k − 1)/(3/2 − 1)]    (assuming T(1) ≤ c)
     ≤ c·3^k + 2c(3^k − 2^k)
     ≤ 3c·3^(lg n) = 3c·n^(lg 3)
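A quick empirical check of the 3^k growth (my own sketch, not from the slides): counting the base-case multiplications performed on inputs of n = 2^k digits gives exactly 3^k.

```python
def count_mults(n_digits):
    """Base-case multiplications Karatsuba performs on n-digit inputs (n a power of 2)."""
    if n_digits == 1:
        return 1
    return 3 * count_mults(n_digits // 2)

for k in range(1, 6):
    n = 2**k
    print(n, count_mults(n), 3**k)   # last two columns agree: 3^k multiplications
```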

Divide & Conquer Recurrence
Assume T(n) = aT(n/b) + Θ(n). Then
T(n) = Θ(n)              if a < b
T(n) = Θ(n log n)        if a = b
T(n) = Θ(n^(log_b a))    if a > b
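A small helper (my own, purely illustrative) that reports which case of this recurrence applies and the resulting exponent:

```python
import math

def divide_and_conquer_cost(a, b):
    """Asymptotic solution of T(n) = a*T(n/b) + Theta(n)."""
    if a < b:
        return "Theta(n)"
    if a == b:
        return "Theta(n log n)"
    return f"Theta(n^{math.log(a, b):.3f})"   # a > b: Theta(n^(log_b a))

print(divide_and_conquer_cost(3, 2))   # Karatsuba: Theta(n^1.585)
print(divide_and_conquer_cost(5, 3))   # 3-way split: Theta(n^1.465)
```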

Polynomial Algebra
Let F[x] denote the set of polynomials in the variable x whose coefficients are in the field F. F[x] becomes an algebra where + and * are defined by polynomial addition and multiplication.

Interpolation
A polynomial of degree n is uniquely determined by its values at n+1 distinct points.
Theorem: Let A(x) and B(x) be polynomials of degree m. If A(αi) = B(αi) at m+1 distinct points αi, i = 0, …, m, then A(x) = B(x).
Proof. Recall that a polynomial of degree m has at most m roots: A(x) = Q(x)(x − α) + A(α), so if A(α) = 0 then A(x) = Q(x)(x − α) with deg(Q) = m − 1. Consider the polynomial C(x) = A(x) − B(x), of degree at most m. Since C(αi) = A(αi) − B(αi) = 0 at m+1 distinct points, C(x) = 0, and A(x) must equal B(x).

Lagrange Interpolation Formula
Find a polynomial of degree m given its values at m+1 distinct points α0, …, αm. Assume A(αi) = yi. Observe that
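The formula referred to here was an image on the slide; it is presumably the standard Lagrange form

A(x) = Σ_{i=0}^{m} yi · Π_{j ≠ i} (x − αj)/(αi − αj),

each term of which vanishes at every αj except αi, where it equals yi. A short Python sketch of the same idea (names and structure are my own):

```python
def lagrange_interpolate(points, x):
    """Evaluate at x the unique degree-m polynomial through the m+1 given (alpha, y) points."""
    total = 0.0
    for i, (alpha_i, y_i) in enumerate(points):
        term = y_i
        for j, (alpha_j, _) in enumerate(points):
            if j != i:
                term *= (x - alpha_j) / (alpha_i - alpha_j)
        total += term
    return total

# The parabola through (0, 1), (1, 2), (2, 5) is x^2 + 1; check its value at x = 3:
print(lagrange_interpolate([(0, 1), (1, 2), (2, 5)], 3))   # 10.0
```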

Matrix Version of Polynomial Evaluation
Let A(x) = a3x^3 + a2x^2 + a1x + a0. Evaluation at the points α, β, γ, δ is obtained from the following matrix-vector product.
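The product itself appeared as an image on the slide; it is presumably the 4×4 Vandermonde evaluation

[A(α)]   [1  α  α^2  α^3] [a0]
[A(β)] = [1  β  β^2  β^3] [a1]
[A(γ)]   [1  γ  γ^2  γ^3] [a2]
[A(δ)]   [1  δ  δ^2  δ^3] [a3]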

Matrix Interpretation of Interpolation
Let A(x) = an x^n + … + a1 x + a0 be a polynomial of degree n. The problem of determining the n+1 coefficients an, …, a1, a0 from the n+1 values A(α0), …, A(αn) is equivalent to solving the linear system below.
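The linear system (again an image on the slide) is presumably the Vandermonde system

[1  α0  α0^2 … α0^n] [a0]   [A(α0)]
[1  α1  α1^2 … α1^n] [a1] = [A(α1)]
[        ⋮         ] [⋮ ]   [  ⋮  ]
[1  αn  αn^2 … αn^n] [an]   [A(αn)]

that is, V(α0, …, αn) · a = y.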

Vandermonde Matrix
V(α0, …, αn) is non-singular when α0, …, αn are distinct.
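This follows from the standard determinant formula (not shown on the slide): det V(α0, …, αn) = Π_{0 ≤ i < j ≤ n} (αj − αi), which is nonzero exactly when the evaluation points are pairwise distinct.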

Polynomial Multiplication using Interpolation
Compute C(x) = A(x)B(x), where degree(A(x)) = m and degree(B(x)) = n. Then degree(C(x)) = m+n, and C(x) is uniquely determined by its values at m+n+1 distinct points.
[Evaluation] Compute A(αi) and B(αi) for distinct αi, i = 0, …, m+n.
[Pointwise Product] Compute C(αi) = A(αi)·B(αi) for i = 0, …, m+n.
[Interpolation] Compute the coefficients of C(x) = c_{m+n} x^(m+n) + … + c1 x + c0 from the points C(αi) = A(αi)·B(αi), i = 0, …, m+n.
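A small Python sketch of the three steps (my own illustration, not from the slides): it evaluates at the points 0, 1, …, m+n and solves the resulting Vandermonde system exactly with rationals, rather than using a cleverly chosen point set.

```python
from fractions import Fraction

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [c0, c1, ...] at x (Horner's rule)."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def poly_mult_by_interpolation(a, b):
    """Multiply integer-coefficient polynomials a, b (low degree first) via evaluate/interpolate."""
    n_points = len(a) + len(b) - 1                    # deg(C) + 1 values determine C
    points = [Fraction(i) for i in range(n_points)]   # evaluation points 0, 1, ..., m+n
    values = [poly_eval(a, x) * poly_eval(b, x) for x in points]  # pointwise products C(alpha_i)

    # Interpolation: solve the Vandermonde system V * c = values by Gaussian elimination.
    V = [[x**j for j in range(n_points)] for x in points]
    for col in range(n_points):
        pivot = V[col][col]
        for row in range(col + 1, n_points):
            factor = V[row][col] / pivot
            V[row] = [r - factor * p for r, p in zip(V[row], V[col])]
            values[row] -= factor * values[col]
    coeffs = [Fraction(0)] * n_points
    for row in reversed(range(n_points)):
        s = values[row] - sum(V[row][j] * coeffs[j] for j in range(row + 1, n_points))
        coeffs[row] = s / V[row][row]
    return [int(c) for c in coeffs]                   # exact integers for integer inputs

# (x + 1)(x^2 + 2x + 3) = x^3 + 3x^2 + 5x + 3
print(poly_mult_by_interpolation([1, 1], [3, 2, 1]))  # [3, 5, 3, 1]
```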

Interpolation and Karatsuba’s Algorithm
Let A(x) = A1x + A0, B(x) = B1x + B0, and C(x) = A(x)B(x) = C2x^2 + C1x + C0.
Then A(10^k) = A, B(10^k) = B, and C = C(10^k) = A(10^k)B(10^k) = AB.
Use the interpolation-based algorithm:
Evaluate A(α), A(β), A(γ) and B(α), B(β), B(γ) for α = 0, β = 1, and γ = ∞ (the value at ∞ is the leading coefficient).
Compute C(α) = A(α)B(α), C(β) = A(β)B(β), C(γ) = A(γ)B(γ).
Interpolate the coefficients C2, C1, and C0.
Compute C = C2·10^2k + C1·10^k + C0.
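Written out (my own connecting step, consistent with the earlier slide): A(0) = A0, A(1) = A1 + A0, A(∞) = A1, so C(0) = A0B0 = T0, C(1) = (A1 + A0)(B1 + B0) = T1, and C(∞) = A1B1 = T2. Interpolation then gives C0 = T0, C2 = T2, and C1 = C(1) − C(0) − C(∞) = T1 − T0 − T2, exactly the three products in Karatsuba’s algorithm.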

Matrix Equation for Karatsuba’s Algorithm
Modified Vandermonde matrix; interpolation.
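The matrix equation itself was an image; for the evaluation points 0, 1, ∞ it is presumably

[C(0)]   [1 0 0] [C0]                  [C0]   [ 1  0  0] [C(0)]
[C(1)] = [1 1 1] [C1],   so that       [C1] = [-1  1 -1] [C(1)]
[C(∞)]   [0 0 1] [C2]                  [C2]   [ 0  0  1] [C(∞)]

(the Vandermonde row for the point at infinity is replaced by the leading-coefficient row, hence "modified"). The middle row of the inverse restates C1 = C(1) − C(0) − C(∞).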

Integer Multiplication Splitting the Inputs into 3 Parts
Instead of breaking up the inputs into 2 equal parts as is done for Karatsuba’s algorithm, we can split the inputs into three equal parts. This algorithm is based on an interpolation-based polynomial product of two quadratic polynomials.
Let A(x) = A2x^2 + A1x + A0, B(x) = B2x^2 + B1x + B0, and C(x) = A(x)B(x) = C4x^4 + C3x^3 + C2x^2 + C1x + C0.
Thus there are 5 products, and the divide-and-conquer part still takes time O(n). Therefore the total computing time is T(n) = 5T(n/3) + O(n) = Θ(n^(log_3 5)), where log_3 5 ≈ 1.46.
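The slide does not list the five evaluation points; a common choice (an assumption here, not taken from the slides) is {0, 1, −1, 2, ∞}, which keeps the 5×5 modified Vandermonde system easy to invert exactly, as in the Toom-Cook (Toom-3) algorithm.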

Asymptotically Fast Integer Multiplication
We can obtain a sequence of asymptotically faster multiplication algorithms by splitting the inputs into more and more pieces. If we split A and B into k equal parts, then the corresponding multiplication algorithm is obtained from an interpolation-based polynomial multiplication algorithm on two polynomials of degree k−1. Since the product polynomial has degree 2(k−1), we need to evaluate at 2k−1 points. Thus there are 2k−1 products, and the divide-and-conquer part still takes time O(n). Therefore the total computing time is T(n) = (2k−1)T(n/k) + O(n) = Θ(n^(log_k(2k−1))).
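A quick numerical look at these exponents (my own sketch):

```python
import math

# Exponent log_k(2k - 1) for k-way splitting; it decreases toward 1 as k grows.
for k in range(2, 11):
    exponent = math.log(2 * k - 1, k)
    print(f"k = {k:2d}  cost ~ n^{exponent:.4f}")
# k = 2 gives Karatsuba's n^1.585, k = 3 gives n^1.465, and so on.
```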

Asymptotically Fast Integer Multiplication
Using the previous construction we can find an algorithm to multiply two n-digit integers in time Θ(n^(1+ε)) for any positive ε:
log_k(2k−1) = log_k(k(2 − 1/k)) = 1 + log_k(2 − 1/k), and
log_k(2 − 1/k) ≤ log_k 2 = ln 2 / ln k → 0 as k → ∞.
Can we do better? The answer is yes. There is a faster algorithm, with computing time Θ(n log n log log n), based on the fast Fourier transform (FFT). This algorithm is also based on interpolation and the polynomial version of the CRT.