Princeton University COS 423 Theory of Algorithms Spring 2002 Kevin Wayne Reductions Some of these lecture slides are adapted from CLRS Chapter 31.5 and Kozen Chapter 30.

2 Contents
- "Linear-time reductions."
- Undirected and directed shortest path.
- Matrix inversion and multiplication.
- Integer division and multiplication.
- Sorting and convex hull.

3 Reduction
Intuitively, decision problem X reduces to problem Y if:
- Any instance of X can be "rephrased" as an instance of Y.
- The solution to the instance of Y provides a solution to the instance of X.
Consequences:
- Used to establish relative difficulty between two problems.
- Given an algorithm for Y, we can also solve X. (design algorithms)
- If X is hard, then so is Y. (prove intractability)

4 Reduction
Problem X linearly reduces to problem Y if, given a black box that solves Y in O(f(N)) time, we can devise an O(f(N)) algorithm for X.
Ex 1. X = PRIME linearly reduces to Y = COMPOSITE.
- PRIME(x): Is x prime?
- COMPOSITE(x): Is x composite?
- To compute PRIME(x), call COMPOSITE(x) and return the opposite answer.
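A minimal Python sketch of Ex 1. Here composite() is only a trial-division stand-in for the black box; the reduction itself is the single negated call inside prime().

    def composite(x):
        """Black-box stand-in: is x composite? (naive trial division)"""
        return x > 1 and any(x % d == 0 for d in range(2, int(x ** 0.5) + 1))

    def prime(x):
        # PRIME linearly reduces to COMPOSITE: one call, opposite answer.
        # (x <= 1 is neither prime nor composite, so it is handled up front.)
        return x > 1 and not composite(x)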

5 Reduction: Undirected to Directed Shortest Path
Ex 2. Undirected shortest path (with nonnegative weights) linearly reduces to directed shortest path.
- Replace each undirected edge by two antiparallel directed arcs of the same weight.
- The shortest directed path will use each arc at most once.

6 Reduction: Undirected to Directed Shortest Path
Ex 2. Undirected shortest path (with nonnegative weights) linearly reduces to directed shortest path.
- Replace each undirected edge by two antiparallel directed arcs of the same weight.
- The shortest directed path will use each arc at most once.
- Note: the reduction is invalid in networks with negative-cost arcs, even if there are no negative cycles, since a negative undirected edge becomes a negative directed cycle formed by its two antiparallel arcs.
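A Python sketch of Ex 2 (my own illustration): dijkstra() plays the black box for directed shortest paths with nonnegative costs, and the reduction is the two-antiparallel-arcs construction in undirected_shortest_paths().

    import heapq

    def dijkstra(n, arcs, s):
        """Black-box stand-in: single-source shortest paths on a digraph with nonnegative arc costs."""
        adj = [[] for _ in range(n)]
        for u, v, w in arcs:
            adj[u].append((v, w))
        dist = [float('inf')] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    def undirected_shortest_paths(n, edges, s):
        # The reduction: each undirected edge {u, v} becomes arcs (u, v) and (v, u).
        arcs = [(u, v, w) for u, v, w in edges] + [(v, u, w) for u, v, w in edges]
        return dijkstra(n, arcs, s)

    print(undirected_shortest_paths(3, [(0, 1, 2), (1, 2, 7), (0, 2, 10)], 0))  # [0, 2, 9]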

7 Network Flow Running Times and Linear-Time Reductions
    Problem                                           Running time
    undirected shortest path (nonnegative weights)    O(m)
    shortest path (nonnegative weights)               O(m + n log n)
    undirected shortest path (no negative cycles)     O(mn + n² log n)
    shortest path (no negative cycles)                O(mn)
    assignment (weighted bipartite matching)          O(mn + n² log n)
    weighted non-bipartite matching                   O(mn + n² log n)
    directed MST                                      O(m + n log n)
    undirected MST                                    O(m α(m,n) log α(m,n))
    non-bipartite matching                            O(m n^{1/2})
    bipartite matching                                O(m n^{1/2})
    max flow (bipartite DAG)                          O(mn log(n²/m))
    max flow                                          O(mn log(n²/m))
    min cut                                           O(mn log(n²/m))
    undirected max flow
    undirected min cut
    min cost flow                                     O(m² log n + mn log² n)
    transportation                                    O(m² log n + mn log² n)
    min vertex cover (bipartite)                      O(m n^{1/2})

8 Matrix Inversion
Fundamental problem in numerical analysis.
- Intimately tied to solving systems of linear equations.
- Note: avoid explicitly taking inverses in practice.
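A small illustration of that note (my own example, assuming NumPy): to solve Ax = b, a factorization-based solve is preferred over forming the inverse explicitly.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    x = np.linalg.solve(A, b)       # preferred: solve the system directly
    x_bad = np.linalg.inv(A) @ b    # forms A^{-1} explicitly: slower, less accurate
    print(x, x_bad)                 # both ~ [0.09090909, 0.63636364]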

9 Matrix Multiplication vs. Matrix Inversion (CLRS 31.5)
Matrix multiplication and inversion have the same asymptotic complexity.
- M(N) = time to multiply two N × N matrices.
- I(N) = time to invert an N × N matrix.
- Note: we don't know the asymptotic complexity of either!
Proof (matrix multiplication linearly reduces to inversion).
- Regularity assumption: I(3N) = O(I(N)).
  - Holds if I(N) = N^α, since then I(3N) = (3N)^α = 3^α I(N).
  - Holds if I(N) = Θ(N^α log^β N).
- To compute C = AB, define a 3N × 3N matrix D.
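The matrix D itself is an image in the original slides and did not survive the transcript; the standard CLRS construction is the block unit upper-triangular matrix below (written in LaTeX):

    D =
    \begin{pmatrix}
      I_N & A   & 0   \\
      0   & I_N & B   \\
      0   & 0   & I_N
    \end{pmatrix},
    \qquad
    D^{-1} =
    \begin{pmatrix}
      I_N & -A  & AB  \\
      0   & I_N & -B  \\
      0   & 0   & I_N
    \end{pmatrix}

D is always invertible, and the product C = AB can be read off the upper-right N × N block of D^{-1}, so M(N) ≤ I(3N) + O(N²) = O(I(N)).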

10 Matrix Multiplication vs. Matrix Inversion
Proof (matrix inversion linearly reduces to multiplication).
- Regularity assumption: M(N + k) = O(M(N)) for 0 ≤ k < N.
  - Holds if M(N) = Θ(N^α log^β N) for some α ≥ 2, β ≥ 0.
- WLOG: assume N is a power of 2.
  - Pad with 0s.
- WLOG: assume A is symmetric positive definite.
  - If A is invertible, then A^T A is symmetric positive definite.
  - A^{-1} = (A^T A)^{-1} A^T.
  - Only two extra matrix multiplications.

11 Matrix Multiplication vs. Matrix Inversion
Proof (matrix inversion linearly reduces to multiplication).
- To invert an N × N symmetric positive definite matrix A, partition it into four N/2 × N/2 submatrices.
  - Note: B and S (the Schur complement) are symmetric positive definite since A is.
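The partition and the inverse formula are images in the original slides; the standard symmetric block (Schur-complement) construction is:

    A =
    \begin{pmatrix} B & C^{T} \\ C & D \end{pmatrix},
    \qquad
    S = D - C B^{-1} C^{T},
    \qquad
    A^{-1} =
    \begin{pmatrix}
      B^{-1} + B^{-1} C^{T} S^{-1} C B^{-1} & -B^{-1} C^{T} S^{-1} \\
      -S^{-1} C B^{-1}                      & S^{-1}
    \end{pmatrix}

So A^{-1} is obtained from two half-size inversions (of B and S) plus a constant number of half-size multiplications and additions.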

12 Matrix Multiplication vs. Matrix Inversion
Proof (matrix inversion linearly reduces to multiplication).
- Running time.
  - 4 half-size matrix multiplications.
  - 2 half-size matrix inversions.
  - 2 half-size matrix additions/subtractions.
  - Hence I(N) ≤ 2 I(N/2) + 4 M(N/2) + O(N²) = O(M(N)).

13 Integer Arithmetic
Fundamental questions.
- Is integer addition easier than integer multiplication?
- Is integer multiplication easier than integer division?
- Is integer division easier than integer multiplication?
    Operation        Upper bound              Lower bound
    Addition         O(N)                     Ω(N)
    Multiplication   O(N log N log log N)     Ω(N)
    Division         O(N log N log log N)     Ω(N)

14 Warmup: Squaring vs. Multiplication
Integer multiplication: given two N-digit integers s and t, compute st.
Integer squaring: given an N-digit integer s, compute s².
Theorem. Integer squaring and integer multiplication have the same asymptotic complexity.
Proof.
- Squaring linearly reduces to multiplication.
  - Trivial: multiply s by s.
- Multiplication linearly reduces to squaring.
  - For example, via the identity st = ((s + t)² - s² - t²) / 2: three squarings plus linear-time additions, subtractions, and a halving.
  - Regularity assumption: S(N+1) = O(S(N)).
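A tiny Python sketch of the second reduction. square() is a stand-in for the squaring black box, and the particular identity below is my choice (the slide's own identity is not in the transcript):

    def square(x):
        """Black-box stand-in for the integer-squaring routine."""
        return x * x

    def multiply_via_squaring(s, t):
        # st = ((s + t)^2 - s^2 - t^2) / 2: three calls to the squaring box.
        return (square(s + t) - square(s) - square(t)) // 2

    print(multiply_via_squaring(1234, 5678))   # 7006652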

15 Integer Division (See Kozen, Chapter 30)
Integer division: given two integers s and t of at most N digits each, compute the quotient q and remainder r:
- q = ⌊s / t⌋, r = s mod t.
- Alternatively, s = qt + r, 0 ≤ r < t.
Example.
- s = 1000, t = 110 ⇒ q = 9, r = 10.
- s = …, t = 100 ⇒ r = 85.
We show that integer division linearly reduces to integer multiplication.

16 Integer Division: "Grade-School"
Divide two integers, each N bits or less.
- q = ⌊s / t⌋
- r = s mod t.

    (q, r) = IntegerDivision(s, t)
       IF (s < t) RETURN (0, s)
       (q', r') ← IntegerDivision(s, 2t)
       IF (r' < t) RETURN (2q', r')
       ELSE RETURN (2q' + 1, r' - t)

Running time: O(N²).
- O(N) per iteration + recursive calls.
- The denominator doubles in each recursive call.
  - s < 2^N and does not change.
  - 1 ≤ t ≤ s throughout ⇒ O(N) recursive calls.
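A direct Python transcription of the pseudocode above (a sketch; Python's recursion limit bounds N in practice):

    def integer_division(s, t):
        """Return (q, r) with s = q*t + r and 0 <= r < t, for integers s >= 0, t >= 1."""
        if s < t:
            return 0, s
        q, r = integer_division(s, 2 * t)      # divide by the doubled denominator
        if r < t:
            return 2 * q, r
        return 2 * q + 1, r - t

    print(integer_division(1000, 110))         # (9, 10)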

17 Integer Division: "Grade-School"
The algorithm correctly computes q = ⌊s / t⌋, r = s mod t.
Proof by reverse induction.
- Base case: t > s, so (q, r) = (0, s) is correct.
- Inductive step: the recursive call computes q', r' such that
  - q' = ⌊s / 2t⌋, r' = s mod 2t,
  - i.e., s = q'(2t) + r', 0 ≤ r' < 2t.
- Goal: show the returned pair satisfies s = qt + r with 0 ≤ r < t.
  - If r' < t, then s = (2q')t + r' with 0 ≤ r' < t.
  - Otherwise t ≤ r' < 2t, and s = (2q' + 1)t + (r' - t) with 0 ≤ r' - t < t.

18 Newton's Method
Given a differentiable function f(x), find a value x* such that f(x*) = 0.
Newton's method.
- Start with an initial guess x₀.
- Compute a sequence of approximations: x_{i+1} = x_i - f(x_i) / f'(x_i).
- Equivalent to taking the tangent line to the curve y = f(x) at x_i and letting x_{i+1} be the point where that line crosses the x-axis.

19 Newton's Method
Convergence of Newton's method.
- Not guaranteed to converge to a root x*.
- If the function is well-behaved and x₀ is sufficiently close to x*, then Newton's method converges quadratically.
  - The number of bits of accuracy doubles at each iteration.
Applications.
- Computing square roots: apply the iteration to f(x) = x² - a.
- Finding the min / max of a function.
  - Extends to the multivariate case.
- Cornerstone problem in continuous optimization.
- Interior point methods for linear programming.
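A minimal Newton's-method sketch for the square-root application mentioned above (my own example, with f(x) = x² - a, so the update is x ← (x + a/x)/2):

    def newton_sqrt(a, tol=1e-12):
        """Approximate sqrt(a) for a > 0 via x_{i+1} = x_i - f(x_i)/f'(x_i) = (x_i + a/x_i)/2."""
        assert a > 0
        x = a if a >= 1 else 1.0                 # any positive starting guess works here
        while abs(x * x - a) > tol * a:          # digits of accuracy roughly double per step
            x = 0.5 * (x + a / x)
        return x

    print(newton_sqrt(2.0))                      # ~1.4142135623730951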

20 Integer Division: Newton's Method
Our application of Newton's method.
- We will use exact binary arithmetic and obtain an exact solution.
- Approximately compute x = 1 / t using Newton's method.
- We'll show the exact answer is either ⌊s x⌋ or ⌈s x⌉.
Theorem: given an O(M(N)) algorithm for multiplying two N-digit integers, there exists an O(M(N)) algorithm for dividing two integers, each of which has at most N digits.

21 Integer Division: Newton's Method Example
Compute 1 / 7, iterating x ← 2x - 7x².
- x₀ = 0.1
- x₁ = 0.13
- x₂ = 0.1417
- x₃ = 0.14284777
- x₄, x₅, x₆, x₇ continue to double the number of correct digits of 1/7 = 0.142857142857…
Compute ⌊ … / 7 ⌋.
- … · x₅ = …
- The correct answer is either … or ….

22 Integer Division: Newton's Method
Use an arbitrary-precision rational x.

    (q, r) = NewtonIntegerDivision(s, t)
       Choose x to be the unique fractional power of 2 in the interval (1/(2t), 1/t].
       WHILE (s - ⌊s x⌋ t ≥ t)
          x ← 2x - t x²
       IF (s - ⌊s x⌋ t < t) q = ⌊s x⌋
       ELSE q = ⌈s x⌉
       r = s - q t
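A runnable sketch of the same idea in Python, using exact rationals. Instead of the slide's loop test it iterates until the (exactly computable) error 1 - t·x drops below 1/(2s) and then rounds, which is one way to realize the "floor or ceiling" step; unlike the slide's algorithm it keeps x to full precision rather than truncating to 2^i significant digits, so it is simpler but not O(M(N)).

    from fractions import Fraction

    def newton_integer_division(s, t):
        """Return (q, r) with s = q*t + r, 0 <= r < t, via Newton iteration for 1/t."""
        assert s >= 1 and t >= 1
        k = (t - 1).bit_length()                 # smallest k with 2^k >= t
        x = Fraction(1, 2 ** k)                  # unique power of 2 in (1/(2t), 1/t]
        while 1 - t * x > Fraction(1, 2 * s):    # error squares each step: O(log N) iterations
            x = 2 * x - t * x * x                # Newton step for f(x) = 1/x - t
        q = int(s * x)                           # floor(s*x); true quotient is this or this + 1
        if s - (q + 1) * t >= 0:
            q += 1
        return q, s - q * t

    print(newton_integer_division(1000, 110))    # (9, 10)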

23 Analysis
L1: every iterate satisfies 0 < x_i ≤ 1/t.
Proof by induction on i.
- Base case: x₀ was chosen in (1/(2t), 1/t].
- Inductive step: if 0 < x_i ≤ 1/t, then 1 - t x_{i+1} = 1 - t(2x_i - t x_i²) = (1 - t x_i)² ∈ [0, 1), so 0 < x_{i+1} ≤ 1/t.

24 Analysis
L2: the sequence of Newton iterates converges quadratically to 1 / t; iterate x_i approximates 1 / t to 2^i significant bits of accuracy, i.e., 1 - t x_i ≤ (1/2)^(2^i).
Proof by induction on i.
- Base case: x₀ > 1/(2t), so 1 - t x₀ < 1/2.
- Inductive step: 1 - t x_{i+1} = (1 - t x_i)² ≤ ((1/2)^(2^i))² = (1/2)^(2^(i+1)).

25 Analysis
L3: The algorithm terminates after O(log N) steps.
- By L2, after k = ⌈log₂ log₂ (s/t)⌉ steps we have
  s/t - s x_k = (s/t)(1 - t x_k) ≤ (s/t)(1/2)^(2^k) ≤ (s/t)(t/s) = 1.
- Note: 2^k = O(N), so k = O(log N).
L4: The algorithm returns the correct answer.
- By L1, x_k ≤ 1/t.
- Combining with the proof of L3: s/t - 1 ≤ s x_k ≤ s/t.
- This implies ⌊s/t⌋ is either ⌊s x_k⌋ or ⌈s x_k⌉; the remainder can then be found by subtraction.

26 Analysis
Theorem: Newton's method does integer division in O(M(N)) time, where M(N) is the time to multiply two N-digit integers.
- By L3, 2^k = O(N), and the number of iterations is O(log N).
- Each Newton iteration involves two multiplications, one addition, and one subtraction.
- Technical fact (not proved here): the algorithm still works if we only keep track of 2^i significant digits in iteration i.
  - Bottleneck operation = the multiplications.
  - 2M(1) + 2M(2) + 2M(4) + … + 2M(2^k) = O(M(N)).

27 Integer Arithmetic
Theorem: The following integer operations have the same asymptotic bit complexity.
- Multiplication.
- Squaring.
- Division.
- Reciprocal: N-significant-bit approximation of 1/s.

28 Sorting and Convex Hull
Sorting.
- Given N distinct integers, rearrange them in increasing order.
Convex hull.
- Given N points in the plane, find their convex hull in counterclockwise order.
  - Find the shortest fence enclosing the N points.

29 Sorting and Convex Hull
Sorting.
- Given N distinct integers, rearrange them in increasing order.
Convex hull.
- Given N points in the plane, find their convex hull in counterclockwise order.
Lower bounds.
- Recall: under the comparison-based model of computation, sorting N items requires Ω(N log N) comparisons.
- We show sorting linearly reduces to convex hull.
- Hence, finding the convex hull of N points requires Ω(N log N) comparisons.

30 Sorting Reduces to Convex Hull
Sorting instance: x₁, x₂, …, x_N.
Convex hull instance: (x₁, x₁²), (x₂, x₂²), …, (x_N, x_N²), i.e., the points lifted onto the parabola f(x) = x².
Key observation.
- The region {(x, y) : y ≥ x²} is convex ⇒ every lifted point is on the hull.
- The counterclockwise order of the convex hull (starting at the point with the most negative x-coordinate) yields the items in sorted order.
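A Python sketch of this reduction (my own example), using SciPy's convex hull routine as the black box; for 2-D input, ConvexHull.vertices lists the hull vertices in counterclockwise order.

    import numpy as np
    from scipy.spatial import ConvexHull

    def sort_via_convex_hull(xs):
        pts = np.array([(x, x * x) for x in xs], dtype=float)   # lift onto y = x^2
        hull = ConvexHull(pts)
        order = list(hull.vertices)                             # CCW hull vertex indices
        start = order.index(int(np.argmin(pts[:, 0])))          # begin at leftmost point
        order = order[start:] + order[:start]
        return [pts[i, 0] for i in order]                       # x-coordinates, ascending

    print(sort_via_convex_hull([3, -1, 4, 1, 5, -9, 2, 6]))     # ascending order (as floats)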