
1 Recurrences

2 Execution of a recursive program
Deriving & solving recurrence equations

3 Recursion
What is the recursive definition of n! ? Program:
int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);   // Note: '*' is done after returning from fact(n-1)
}

4 Recursive algorithms
A recursive algorithm typically contains recursive calls to the same algorithm. For the recursion to terminate, the algorithm must contain code that directly solves some "base case(s)"; a direct solution contains no recursive calls. We use the following notation: DirectSolutionSize is the "size" of the base case, and DirectSolutionCount is the count of the number of operations done by the "direct solution".

5 A Call Tree for fact(3) / The Run Time Environment
The following two slides show the program stack after the third call from fact(3). When a function is called, an activation record ('AR') is created and pushed onto the program stack. The activation record stores copies of local variables, pointers to other ARs on the stack, and the return address. When a function returns, the stack is popped.
[Call-tree diagram: fact(3) calls fact(2), which calls fact(1); fact(1) returns 1, fact(2) returns 2*1 = 2, fact(3) returns 3*2 = 6.]
int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);
}

6 AR = Activation Record, ep = environment pointer
[Program-stack diagrams: snapshots of the environment at the end of the second and third calls to fact. AR(Driver) sits at the bottom (ep0) and binds fact to <ipF, ep0>. Above it there is one activation record per active call, AR(fact 3), AR(fact 2), and, in the third snapshot, AR(fact 1); each holds its copy of N, a slot for the function value (still '?' until the call returns), the return address, a static link to ep0, and a dynamic link to the caller's record. EP points to the record of the currently executing call; above it is free memory.]

7 [Program-stack diagrams, continued: at the end of the first (outermost) call, AR(fact 3) holds the function value [=6] on top of AR(Driver); after fact finally returns, only AR(Driver) remains on the program stack.]

8 Goal: Analyzing recursive algorithms
Until now we have only analyzed (derived a count for) non-recursive algorithms. In order to analyze recursive algorithms we must learn to:
1. Derive the recurrence equation from the code.
2. Solve recurrence equations.

9 Deriving a Recurrence Equation for a Recursive Algorithm
Our goal is to compute the count (time) T(n) as a function of n, where n is the size of the problem. We first write a recurrence equation for T(n), for example T(n) = T(n-1) + 1 with T(1) = 0. Then we solve the recurrence equation. Solving the recurrence above gives T(n) = n - 1, so any such recursive algorithm is linear in n.

10 Deriving a Recurrence Equation for a Recursive Algorithm
1. Determine the "size of the problem"; the count T is a function of this size.
2. Determine DirectSolSize, such that for size DirectSolSize the algorithm computes a direct solution, and the corresponding DirectSolCount(s).

11 Deriving a Recurrence Equation for a Recursive Algorithm
To determine GeneralCount:
3. Identify all the recursive calls done by a single call of the algorithm and their counts T(n1), ..., T(nk), and compute RecursiveCallSum = T(n1) + T(n2) + ... + T(nk).
4. Determine the "non-recursive count" t(size) done by a single call of the algorithm, i.e., the amount of work, excluding the recursive calls, done by the algorithm.

12 Deriving DirectSolutionCount for Factorial
int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);
}
1. Size = n
2. DirectSolSize is (n <= 1)
3. DirectSolCount is Θ(1): the algorithm does a small constant number of operations (comparing n to 1, and returning).

13 Deriving a GeneralCount for Factorial
int fact(int n) {
    if (n <= 1) return 1;
    // Note: '*' is done after returning from fact(n-1)
    else return n * fact(n - 1);   // the only recursive call, requiring T(n-1) operations
}
3. RecursiveCallSum = T(n - 1)
4. t(n) = Θ(1): the if, '*', '-', and return are the operations counted in t(n).
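As a rough check of the resulting recurrence T(n) = T(n-1) + Θ(1), here is a small instrumented version of fact (a sketch, not from the original slides; the counter name is an invention) that counts multiplications. For n >= 1 the count comes out to n - 1, consistent with a linear solution:

#include <stdio.h>

static long mults = 0;   /* number of '*' operations performed */

long fact(long n) {
    if (n <= 1) return 1;          /* direct solution: no multiplication */
    mults++;                       /* one '*' per general (recursive) case */
    return n * fact(n - 1);
}

int main(void) {
    for (long n = 1; n <= 10; n++) {
        mults = 0;
        fact(n);
        printf("n=%ld  multiplications=%ld  (n-1=%ld)\n", n, mults, n - 1);
    }
    return 0;
}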

14 Deriving a Recurrence Equation for Mystery
In this example we are not concerned with what mystery computes; our goal is simply to write the recurrence equation.
mystery(n)
1. if n ≤ 1
2.     then return n
3.     else return (5 * mystery(n - 1) * mystery(n - 2))

15 Deriving a DirectSolutionCount for Mystery
1. Size = n
2. DirectSolSize is (n <= 1)
3. DirectSolCount is Θ(1): the algorithm does a small constant number of operations (comparing n to 1, and returning).

16 Deriving a GeneralCount for Mystery
mystery(n)
1. if n ≤ 1
2.     then return n
3.     else return (5 * mystery(n - 1) * mystery(n - 2))
Line 3 makes two recursive calls to mystery, requiring T(n - 1) and T(n - 2) operations; everything else is counted in t(n).

17 Deriving a Recurrence Equation for Mystery
3. RecursiveCallSum = T(n-1) + T(n-2)
4. t(n) = Θ(1): the non-recursive count includes the comparison n ≤ 1, the two multiplications, the subtractions, and the return, so Θ(1).
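To see this recurrence in action, here is a sketch in C (an assumed translation of the pseudocode, not part of the original slides) that counts multiplications; the count M(n) satisfies M(n) = M(n-1) + M(n-2) + 2, matching RecursiveCallSum plus the Θ(1) non-recursive work:

#include <stdio.h>

static long mults = 0;

long mystery(long n) {
    if (n <= 1) return n;                 /* direct solution */
    mults += 2;                           /* the two '*' in the general case */
    return 5 * mystery(n - 1) * mystery(n - 2);
}

int main(void) {
    for (long n = 0; n <= 15; n++) {
        mults = 0;
        mystery(n);
        printf("n=%ld  multiplications=%ld\n", n, mults);
    }
    return 0;
}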

18 Deriving a Recurrence Equation for Multiply
Multiply(y, z)   // binary multiplication
1. if (z == 0) return 0
2. else if z is odd
3.     then return (multiply(2y, ⌊z/2⌋) + y)
4. else return (multiply(2y, ⌊z/2⌋))
To derive the recurrence equation we need to determine the size of the problem. The variable that is decreased and provides the direct solution is z, so we can use z as the size. Another possibility is to use n, the number of bits in z (n = ⌊lg z⌋ + 1). Let addition be our barometer operation. We do 0 additions for the direct solution, so DirectSolutionSize = 0 and DirectSolutionCount = 0.
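A C sketch of this binary multiplication (assuming nonnegative integer arguments; the counter is an invention, not from the slides), instrumented to count additions, the barometer operation:

#include <stdio.h>

static long adds = 0;   /* barometer: number of additions */

long multiply(long y, long z) {
    if (z == 0) return 0;                      /* direct solution, 0 additions */
    if (z % 2 == 1) {                          /* z is odd */
        adds++;
        return multiply(2 * y, z / 2) + y;     /* one addition */
    }
    return multiply(2 * y, z / 2);             /* no addition */
}

int main(void) {
    adds = 0;
    printf("6*7=%ld   additions=%ld\n", multiply(6, 7), adds);
    adds = 0;
    printf("9*100=%ld additions=%ld\n", multiply(9, 100), adds);
    return 0;
}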

19 Deriving a Recurrence Equation for Multiply
1. With z as the size: RecursiveCallSum = T(⌊z/2⌋), and t(z) = 1 addition in the worst case. So T(0) = 0 and T(z) = T(⌊z/2⌋) + 1 for z > 0. Solving this equation we get T(z) = Θ(lg z) = Θ(n).
2. With n (the number of bits of z) as the size: RecursiveCallSum = T(n-1), since ⌊z/2⌋ has n-1 bits, so T(n) = T(n-1) + 1. Solving this equation we get T(n) = Θ(n).

20 Deriving a recurrence equation for minimum
Minimum(A, lt, rt)   // initial call: Minimum(A, 1, length(A))
1. if rt - lt ≤ 1
2.     then return (min(A[lt], A[rt]))
3. else m1 = Minimum(A, lt, ⌊(lt+rt)/2⌋)
4.     m2 = Minimum(A, ⌊(lt+rt)/2⌋ + 1, rt)
5.     return (min(m1, m2))
This procedure computes the minimum of the left half of the array, the minimum of the right half of the array, and then the minimum of the two.
Size = rt - lt + 1 = n. DirectSolutionSize is n = (rt - lt) + 1 ≤ 1 + 1 = 2. DirectSolutionCount = Θ(1).

21 Deriving a recurrence equation for minimum
Minimum(A, lt, rt)   // initial call: Minimum(A, 1, length(A))
1. if rt - lt ≤ 1
2.     then return (min(A[lt], A[rt]))
3. else m1 = Minimum(A, lt, ⌊(lt+rt)/2⌋)
4.     m2 = Minimum(A, ⌊(lt+rt)/2⌋ + 1, rt)
5.     return (min(m1, m2))
Lines 3 and 4 make two recursive calls, each on about n/2 elements. So RecursiveCallSum = T(⌈n/2⌉) + T(⌊n/2⌋).
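A possible C rendering of this procedure (a sketch; zero-based indexing and the helper name min2 are assumptions, not from the slides):

#include <stdio.h>

int min2(int a, int b) { return a < b ? a : b; }

/* minimum of A[lt..rt], inclusive */
int minimum(int A[], int lt, int rt) {
    if (rt - lt <= 1)                      /* direct solution: 1 or 2 elements */
        return min2(A[lt], A[rt]);
    int mid = (lt + rt) / 2;               /* floor((lt+rt)/2) */
    int m1 = minimum(A, lt, mid);          /* left half  */
    int m2 = minimum(A, mid + 1, rt);      /* right half */
    return min2(m1, m2);
}

int main(void) {
    int A[] = {7, 3, 9, 1, 5, 8, 2, 6};
    printf("min = %d\n", minimum(A, 0, 7));   /* prints 1 */
    return 0;
}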

22 Computing x^n
We can decrease the number of times that x is multiplied by itself. Example: to compute x^8 we need only 3 multiplications (x^2 = x*x, x^4 = x^2*x^2, and x^8 = x^4*x^4); to compute x^9 we need 4 multiplications (x^2 = x*x, x^4 = x^2*x^2, x^8 = x^4*x^4, and x^9 = x^8*x). A recursive procedure for computing x^n should be Θ(lg n). For the following algorithm we will derive the worst-case recurrence equation. Note: we are ignoring the fact that the numbers quickly become too large to store in a long integer.

23 Recursion - Power
long power(long x, long n) {
    if (n == 0) { return 1; }
    else {
        long p = power(x, n/2);
        if ((n % 2) == 0) { return p*p; }
        else { return x * p*p; }
    }
}
We use the '*' as our barometer operation.

24 Deriving a Recurrence Equation for Power
Size = n. DirectSolutionSize is n = 0; DirectSolutionCount is 0. RecursiveCallSum is T(n/2); t(n) = 2 multiplications in the worst case.
T(n) = 0 for n = 0
T(n) = T(n/2) + 2 for n > 0

25 Solving recurrence equations
Five techniques for solving recurrence equations:
1. Recursion tree
2. Iteration method
3. Induction (guess and test)
4. Master theorem (master method)
5. Characteristic equations
We discuss these methods with examples.

26 Deriving the count using the recursion tree method
Recursion trees provide a convenient way to represent the unrolling of a recursive algorithm. This is not a formal proof, but it is a good technique for computing the count. Once the tree is generated, each node contains its "non-recursive number of operations" t(n), or its DirectSolutionCount. The count is derived by summing the "non-recursive number of operations" of all the nodes in the tree. For convenience we usually compute the sum for all nodes at each given depth, and then sum these sums over all depths.

27 Building the recursion tree
The initial recursion tree has a single node containing two fields: the recursive call (for example Factorial(n)) and the corresponding count T(n). The tree is generated by unrolling the recursion of the node at depth 0, then unrolling the recursion for the nodes at depth 1, etc. The recursion is unrolled as long as the size of the recursive call is greater than DirectSolutionSize.

28 Building the recursion tree
When the "recursion is unrolled", each current leaf node is replaced by a subtree containing a root and a child for each recursive call done by the algorithm. The root of the subtree contains the recursive call and the corresponding "non-recursive count". Each child node contains a recursive call and its corresponding count. The unrolling continues until the "size" in the recursive call is DirectSolutionSize. Nodes with a call of DirectSolutionSize are not "unrolled", and their count is replaced by DirectSolutionCount.

29 Example: recursive factorial
[Initial tree: a single node Factorial(n) with count T(n).]
Initially, the recursion tree is a node containing the call to Factorial(n) and the count T(n). When we unroll the computation, this node is replaced with a subtree containing a root and one child: the root of the subtree contains the call to Factorial(n) and the "non-recursive count" for this call, t(n) = Θ(1); the child node contains the recursive call to Factorial(n-1) and the count for this call, T(n-1).

30 After the first unrolling
depth | nd | count
0     | 1  | Factorial(n), t(n) = Θ(1)
1     | 1  | Factorial(n-1), T(n-1)
(nd denotes the number of nodes at that depth.)

31 After the second unrolling
depth | nd | count
0     | 1  | Factorial(n), t(n) = Θ(1)
1     | 1  | Factorial(n-1), t(n-1) = Θ(1)
2     | 1  | Factorial(n-2), T(n-2)

32 After the third unrolling
depth | nd | count
0     | 1  | Factorial(n), t(n) = Θ(1)
1     | 1  | Factorial(n-1), t(n-1) = Θ(1)
2     | 1  | Factorial(n-2), t(n-2) = Θ(1)
3     | 1  | Factorial(n-3), T(n-3)
For Factorial, DirectSolutionSize = 1 and DirectSolutionCount = Θ(1).

33 The recursion tree
depth | nd | count
0     | 1  | Factorial(n), t(n) = Θ(1)
1     | 1  | Factorial(n-1), t(n-1) = Θ(1)
2     | 1  | Factorial(n-2), t(n-2) = Θ(1)
3     | 1  | Factorial(n-3), t(n-3) = Θ(1)
...   | ...| ...
n-1   | 1  | Factorial(1), T(1) = Θ(1)
The sum over all n nodes: T(n) = n * Θ(1) = Θ(n).

34 Divide and Conquer
Basic idea: divide a problem into smaller portions, solve the smaller portions, and combine the results. Name some algorithms you already know that employ this technique. D&C is a top-down approach; usually, we use recursion to implement D&C algorithms. The following is an "outline" of a divide and conquer algorithm.

35 Divide and Conquer
procedure Solution(I);
begin
    if size(I) <= smallSize then
        return(DirectSolution(I))           {use direct solution}
    else                                    {decompose, solve each, and combine}
        Decompose(I, I1, ..., Ik);
        for i = 1 to k do
            Si = Solution(Ii);              {solve a smaller problem}
        return(Combine(S1, ..., Sk));       {combine solutions}
end {Solution}

36 Divide and Conquer
Let size(I) = n and DirectSolutionCount = DS(n). Then t(n) = D(n) + C(n), where:
D(n) = count for dividing the problem into subproblems
C(n) = count for combining the solutions

37 Divide and Conquer: Main advantages
Code: simple
Algorithm: efficient
Implementation: parallel computation

38 Binary search
Assumption: the list S[low...high] is sorted, and x is the search key. If the search key x is in the list, x == S[i] for some i, and the index i is returned. If x is not in the list, NoSuchKey is returned.

39 Binary search
The problem is divided into 3 subproblems: x = S[mid], x ∈ S[low..mid-1], or x ∈ S[mid+1..high]. The first case (x = S[mid]) is solved directly. The other cases, x ∈ S[low..mid-1] or x ∈ S[mid+1..high], require a recursive call. When the array is empty the search terminates with a "non-index" value.

40 BinarySearch(S, k, low, high)
if low > high then return NoSuchKey
else
    mid ← ⌊(low + high)/2⌋
    if (k == S[mid]) return mid
    else if (k < S[mid]) then return BinarySearch(S, k, low, mid-1)
    else return BinarySearch(S, k, mid+1, high)
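A C sketch of this pseudocode (an assumed translation, not from the slides; NoSuchKey is represented by -1), instrumented to count one key-vs-element comparison per call, the quantity analyzed on the next slide:

#include <stdio.h>

static int comps = 0;   /* one element-vs-key comparison counted per call */

int binarySearch(int S[], int k, int low, int high) {
    if (low > high) return -1;              /* NoSuchKey */
    int mid = (low + high) / 2;
    comps++;
    if (k == S[mid]) return mid;
    if (k < S[mid]) return binarySearch(S, k, low, mid - 1);
    return binarySearch(S, k, mid + 1, high);
}

int main(void) {
    int S[] = {1, 3, 5, 7, 9, 11, 13, 15};      /* n = 8, floor(lg n) = 3 */
    int i = binarySearch(S, 16, 0, 7);          /* a worst-case (missing) key */
    printf("index=%d comparisons=%d\n", i, comps);  /* 4 = floor(lg 8) + 1 */
    return 0;
}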

41 Worst case analysis
A worst-case input (what is it?) causes the algorithm to keep searching until low > high. We count the number of comparisons of a list element with x per recursive call. Assume 2^k ≤ n < 2^(k+1), i.e., k = ⌊lg n⌋. T(n) is the worst-case number of comparisons for the call BS(n).

42 Recursion tree for BinarySearch (BS)
[Initial tree: a single node BS(n) with count T(n).]
Initially, the recursion tree is a node containing the call to BS(n) and the total amount of work in the worst case, T(n). When we unroll the computation, this node is replaced with a subtree containing a root and one child: the root of the subtree contains the call to BS(n) and the "non-recursive work" for this call, t(n); the child node contains the recursive call to BS(n/2) and the total amount of work in the worst case for this call, T(n/2).

43 After first unrolling
depth | nd | count
0     | 1  | BS(n), t(n) = 1
1     | 1  | BS(n/2), T(n/2)

44 After second unrolling
depth | nd | count
0     | 1  | BS(n), t(n) = 1
1     | 1  | BS(n/2), t(n/2) = 1
2     | 1  | BS(n/4), T(n/4)

45 After third unrolling
depth | nd | count
0     | 1  | BS(n), t(n) = 1
1     | 1  | BS(n/2), t(n/2) = 1
2     | 1  | BS(n/4), t(n/4) = 1
3     | 1  | BS(n/8), T(n/8)
For BinarySearch, DirectSolutionSize is 0 or 1, and DirectSolutionCount is 0 for size 0 and 1 for size 1.

46 Terminating the unrolling
Let 2^k ≤ n < 2^(k+1), so k = ⌊lg n⌋. When a node has a call to BS(n/2^k) (or to BS(n/2^(k+1))): the size of the list is DirectSolutionSize, since ⌊n/2^k⌋ = 1 (or ⌊n/2^(k+1)⌋ = 0). In this case the unrolling terminates, and the node is a leaf containing DirectSolutionCount (DSC) = 1 (or 0).

47 The recursion tree
depth | nd | count
0     | 1  | BS(n), t(n) = 1
1     | 1  | BS(n/2), t(n/2) = 1
2     | 1  | BS(n/4), t(n/4) = 1
...   | ...| ...
k     | 1  | BS(n/2^k), DSC = 1
T(n) = k + 1 = ⌊lg n⌋ + 1

48 Merge Sort Input: S of size n.
Output: a permutation of S such that if i > j then S[i] ≥ S[j].
Divide: If S has at least 2 elements, divide it into S1 and S2; S1 contains the first ⌈n/2⌉ elements of S, and S2 the last ⌊n/2⌋ elements.
Recur: Recursively sort S1 and S2.
Conquer: Merge the sorted S1 and S2 into S.

49 Merge Sort: Input: S
1  if (n ≥ 2)
2      moveFirst(S, S1, (n+1)/2)   // divide
3      moveSecond(S, S2, n/2)      // divide
4      Sort(S1)                    // recur
5      Sort(S2)                    // recur
6      Merge(S1, S2, S)            // conquer

50 Merge( S1 , S2, S ): Pseudocode
Input: S1, S2, sorted in nondecreasing order. Output: S, a sorted sequence.
1. while (S1 is not empty && S2 is not empty)
2.     if (S1.first() ≤ S2.first())
3.         remove the first element of S1 and move it to the end of S
4.     else remove the first element of S2 and move it to the end of S
5. move the remaining elements of the non-empty sequence (S1 or S2) to S
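A compact C sketch combining the Sort and Merge pseudocode above (an assumed translation, not from the slides; it sorts an int array in place using an auxiliary buffer rather than the sequence operations moveFirst/moveSecond):

#include <stdio.h>
#include <string.h>

/* merge the sorted halves A[lo..mid] and A[mid+1..hi] using buffer B */
static void merge(int A[], int B[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        B[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= mid) B[k++] = A[i++];
    while (j <= hi)  B[k++] = A[j++];
    memcpy(A + lo, B + lo, (hi - lo + 1) * sizeof(int));
}

static void mergeSort(int A[], int B[], int lo, int hi) {
    if (hi - lo + 1 < 2) return;            /* direct solution: 0 or 1 element */
    int mid = (lo + hi) / 2;                /* divide */
    mergeSort(A, B, lo, mid);               /* recur on the first ceil(n/2) elements */
    mergeSort(A, B, mid + 1, hi);           /* recur on the last floor(n/2) elements */
    merge(A, B, lo, mid, hi);               /* conquer */
}

int main(void) {
    int A[] = {5, 2, 9, 1, 7, 3, 8, 4};
    int B[8];
    mergeSort(A, B, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}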

51 Deriving a recurrence equation for Merge Sort:
1  if (n ≥ 2)
2      moveFirst(S, S1, (n+1)/2)   // divide
3      moveSecond(S, S2, n/2)      // divide
4      Sort(S1)                    // recur
5      Sort(S2)                    // recur
6      Merge(S1, S2, S)            // conquer
DirectSolutionSize is n < 2. DirectSolutionCount is Θ(1). RecursiveCallSum is T(⌈n/2⌉) + T(⌊n/2⌋). The non-recursive count t(n) = Θ(n).

52 Recurrence Equation (cont’d)
The implementation of the moves costs Θ(n) and the merge costs Θ(n), so the total cost for dividing and merging is Θ(n). The recurrence relation for the run time of Sort is
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n), with T(0) = T(1) = Θ(1).
The solution is T(n) = Θ(n lg n).

53 Recursion tree for MergeSort (MS)
[Initial tree: a single node MS(n) with count T(n).]
Initially, the recursion tree is a node containing the call to MS(n) and the total amount of work in the worst case, T(n). When we unroll the computation, this node is replaced with a subtree: the root of the subtree contains the call to MS(n) and the "non-recursive work" for this call, t(n); the two children each contain a recursive call to MS(n/2) and the total amount of work in the worst case for that call, T(n/2).

54 After first unrolling of mergeSort
depth | nd | count
0     | 1  | MS(n), t(n) = Θ(n)
1     | 2  | MS(n/2), T(n/2) each; total 2T(n/2)

55 After second unrolling
depth | nd | count
0     | 1  | MS(n), Θ(n)
1     | 2  | MS(n/2), Θ(n/2) each; total Θ(n)
2     | 4  | MS(n/4), T(n/4) each; total 4T(n/4)

56 After third unrolling
depth | nd | count
0     | 1  | MS(n), Θ(n)
1     | 2  | MS(n/2), Θ(n/2) each; total Θ(n)
2     | 4  | MS(n/4), Θ(n/4) each; total Θ(n)
3     | 8  | MS(n/8), T(n/8) each; total 8T(n/8)

57 Terminating the unrolling
For simplicity let n = 2^k, so lg n = k. When a node has a call to MS(n/2^k): the size of the list to merge sort is DirectSolutionSize, since n/2^k = 1. In this case the unrolling terminates, and the node is a leaf containing DirectSolutionCount = Θ(1).

58 The recursion tree
depth d | nd  | per-depth total
0       | 1   | Θ(n/2^0) = Θ(n)
1       | 2   | 2 * Θ(n/2^1) = Θ(n)
...     | ... | ...
k       | 2^k | 2^k * Θ(n/2^k) = Θ(n)
T(n) = (k+1) * Θ(n) = Θ(n lg n)

59 The sum for each depth
At depth d there are nd = 2^d nodes (nd = 1 at depth 0, nd = 2^k at depth k), each contributing Θ(n/2^d), so each depth contributes 2^d * Θ(n/2^d) = Θ(n).

60 Induction method
Estimate the form of the solution, then prove it by mathematical induction.
For example, computing Factorial: Fact(n) = Fact(n-1) * n. The number of '*' operations satisfies T(n) = T(n-1) + 1 (the recurrence equation).
Proof:
Induction base: for n = 0, T(0) = 0.
Induction hypothesis: for an arbitrary positive integer n, T(n) = n (the guess).
Induction step: show T(n+1) = n + 1. Replacing n by n+1 in the recurrence equation, we get T(n+1) = T(n) + 1 = n + 1, as required.
This completes the proof that the guess is correct, so T(n) = n = Θ(n).

61 Induction (Guess and Test) for Binary Search: W(n) = ⌊lg n⌋ + 1
Proof by induction:
Induction base: n = 1: W(1) = ⌊lg 1⌋ + 1 = 0 + 1 = 1.
Induction hypothesis: assume for 1 ≤ k < n that W(k) = ⌊lg k⌋ + 1 (in particular, this holds for k = ⌊n/2⌋).
Proof for n: W(n) = 1 + W(⌊n/2⌋) = 1 + ⌊lg ⌊n/2⌋⌋ + 1 (see next slide).

62 Proof Continued
For n ≥ 2, ⌊lg ⌊n/2⌋⌋ = ⌊lg n⌋ - 1, so W(n) = 1 + (⌊lg n⌋ - 1) + 1 = ⌊lg n⌋ + 1, which is what we wanted to show.
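A quick numeric check of this guess (a sketch, not from the slides): evaluate W directly from the recurrence W(1) = 1, W(n) = 1 + W(⌊n/2⌋) and compare with ⌊lg n⌋ + 1:

#include <stdio.h>

long W(long n) {                       /* W(1) = 1, W(n) = 1 + W(n/2) */
    return n == 1 ? 1 : 1 + W(n / 2);
}

int main(void) {
    for (long n = 1; n <= 20; n++) {
        long lg = 0;
        for (long m = n; m > 1; m /= 2) lg++;      /* floor(lg n) */
        printf("n=%2ld  W(n)=%ld  floor(lg n)+1=%ld\n", n, W(n), lg + 1);
    }
    return 0;
}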

63 Iteration for binary search
Iterating the recurrence: W(n) = 1 + W(⌊n/2⌋) = 2 + W(⌊n/4⌋) = ... = i + W(⌊n/2^i⌋). After i = ⌊lg n⌋ halvings the argument reaches 1, so W(n) = ⌊lg n⌋ + W(1) = ⌊lg n⌋ + 1.

64 Characteristic Equation (example)
Solve the recurrence:

65 Characteristic Equation (one case)
The homogeneous linear recurrence equation with constant coefficients has the form
a0*t(n) + a1*t(n-1) + ... + ak*t(n-k) = 0.
Its characteristic equation is
a0*r^k + a1*r^(k-1) + ... + ak = 0.
If there are k distinct roots r1, r2, ..., rk, the only solutions to the recurrence are
t(n) = c1*r1^n + c2*r2^n + ... + ck*rk^n,
where the constants c1, ..., ck are determined by the initial conditions.
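As an illustration (an example chosen here, not necessarily the one from the original slides): solve t(n) = 5t(n-1) - 6t(n-2) with t(0) = 0 and t(1) = 1.
The characteristic equation is r^2 - 5r + 6 = 0, with roots r1 = 2 and r2 = 3, so t(n) = c1*2^n + c2*3^n.
From t(0) = 0 we get c1 + c2 = 0; from t(1) = 1 we get 2*c1 + 3*c2 = 1. Hence c2 = 1 and c1 = -1, so t(n) = 3^n - 2^n = Θ(3^n).
Check: t(2) = 5*1 - 6*0 = 5, and 3^2 - 2^2 = 5.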

66 The Master Method
Solving recurrences of the form T(n) = a*T(n/b) + f(n), where a >= 1, b > 1, and f(n) is an asymptotically positive function. The time complexity T(n) is bounded in the following three cases:
Case 1: if f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) * lg n).
Case 3: if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a*f(n/b) <= c*f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Example: T(n) = 4T(n/2) + n. Here a = 4, b = 2, n^(log_2 4) = n^2, and f(n) = n = O(n^(2-1)), so Case 1 applies and T(n) = Θ(n^2).
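A small numeric check of the example (a sketch, not from the slides): evaluate T(n) = 4T(n/2) + n with an assumed base case T(1) = 1 for powers of two, and watch T(n)/n^2 approach a constant, consistent with Θ(n^2):

#include <stdio.h>

double T(long n) {                 /* T(1) = 1, T(n) = 4*T(n/2) + n */
    return n <= 1 ? 1.0 : 4.0 * T(n / 2) + n;
}

int main(void) {
    for (long n = 1; n <= (1L << 20); n *= 2)
        printf("n=%8ld  T(n)/n^2 = %.4f\n", n, T(n) / ((double)n * n));
    return 0;
}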

67 Master theorem (alternative form)
For T(n) = a*T(n/b) + f(n), Case 2 can be generalized: if f(n) = Θ(n^(log_b a) * lg^k n) for some constant k >= 0, then T(n) = Θ(n^(log_b a) * lg^(k+1) n). This is the form used in the Case 2 examples below.

68 Master method examples Case 1:
T(n) = 9T(n/3) + Θ(n). Here a = 9, b = 3, so n^(log_b a) = n^(log_3 9) = n^2. Since Θ(n) = Θ(n^(log_3 9 - 1)) = O(n^(log_3 9 - ε)), we are dealing with Case 1. Therefore T(n) = Θ(n^(log_b a)) = Θ(n^(log_3 9)) = Θ(n^2).
T(n) = 7T(n/2) + Θ(n^2). Here a = 7, b = 2, so n^(log_b a) = n^(log_2 7). Since Θ(n^2)/n^(log_2 7) = Θ(n^(2 - lg 7)) = Θ(n^(-0.8)), we are dealing with Case 1. Therefore T(n) = Θ(n^(log_b a)) = Θ(n^(log_2 7)).

69 Master method examples Case 2
T(n) = T(n/2) + Θ(1). Here a = 1, b = 2, so n^(log_b a) = n^(log_2 1) = n^0 = 1. Since Θ(1)/1 = Θ(lg^0 n), we are dealing with Case 2 (with k = 0): T(n) = Θ(n^(log_b a) * lg^(k+1) n) = Θ(lg n).
T(n) = 2T(n/2) + Θ(n). Here a = 2, b = 2, so n^(log_b a) = n^(log_2 2) = n. Since Θ(n)/n = Θ(1) = Θ(lg^0 n), we are dealing with Case 2 (with k = 0): T(n) = Θ(n^(log_b a) * lg^(k+1) n) = Θ(n lg n).

70 Master method examples: Case 3
T(n) = 4T(n/2) + n^3. Here a = 4, b = 2, so n^(log_b a) = n^(log_2 4) = n^2. Since n^3/n^(lg 4) = n, we are dealing with Case 3. We must find a c < 1 such that a*f(n/b) <= c*f(n) for all n > N: 4(n/2)^3 = n^3/2 <= (5/8)n^3 for all n ≥ 1. Therefore T(n) = Θ(n^3).
The master method does not solve all recurrences of the given form.

71 Master method fails
T(n) = 4T(n/2) + n^2/lg n. Here a = 4, b = 2, so n^(log_b a) = n^(log_2 4) = n^2. Now (n^2/lg n)/n^2 = 1/lg n, and no case applies. The solution is T(n) = Θ(n^2 lg lg n), which can be verified by the substitution method.

72 A More General Recursion Tree: T(n) = a T(n/b) + f(n)
[Recursion tree for T(n) = a*T(n/b) + f(n): the root contributes f(n); depth 1 has a nodes, each contributing f(n/b), for a total of a*f(n/b); depth 2 has a^2 nodes contributing a^2*f(n/b^2); depth k has a^k nodes contributing a^k*f(n/b^k); the leaves each contribute Θ(1). With n = b^k, k = log_b n, f(n/b^k) = f(1) = Θ(1), and the number of leaves is a^(log_b n) = n^(log_b a).]

73 Deriving a Recurrence Equation from Recursive Algorithms
Decide what "n", the problem size, is. See what value of n is used as the base of the recursion; it will usually be a single value (e.g., n = 1), but it may be multiple values. Let the base value be n0. Figure out what T(n0) is; you can usually use "some constant c", though sometimes a specific number is needed. The general T(n) is usually a sum of various choices of T(m) (the recursive calls) plus the other work done, for example
T(n) = a*T(f(n)) + b*T(g(n)) + c(n),
where a and b are the numbers of subproblems of each kind, f(n) and g(n) are the size functions, and c(n) is the other (non-recursive) work.

74 Recursion - Power
long power(long x, long n) {
    if (n == 0) { return 1; }
    else if ((n % 2) == 0) { return power(x, n/2) * power(x, n/2); }
    else { return x * power(x, n/2) * power(x, n/2); }
}
Worst case: count 2 multiplications per call.

75 T(n) = 2T(n/2) + 2, n = 2^k, T(0) = 0 (worst case)
Problem size | number of nodes | number of ops
n            | 1               | 2
n/2          | 2               | 2*2 = 4
n/4          | 4               | 4*T(n/4) (still to be unrolled)

76 T(n) = 2T(n/2) + 2, n = 2^k
Problem size | number of nodes | number of ops
n            | 1               | 2
n/2          | 2               | 2*2 = 4
n/4          | 4               | 4*2 = 8
n/8          | 8               | 8*T(n/8) (still to be unrolled)

77 T(n) = 2T(n/2) + 2, n = 2^k
Problem size  | number of nodes | number of ops
n             | 1               | 1*2 = 2
n/2           | 2               | 2*2 = 4
n/4           | 4               | 4*2 = 8
n/8           | 8               | 8*2 = 16
...           | ...             | ...
n/2^k = 1     | 2^k             | 2^k * 2
n/2^(k+1) = 0 | 2^(k+1)         | 0
SUMMING the operations: 2(1 + 2 + 4 + ... + 2^k) = 2(2^(k+1) - 1) = 2(2n - 1)
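A quick check of this sum (a sketch, not from the slides): evaluate T(n) = 2T(n/2) + 2 with T(0) = 0 for n = 2^k and compare with the closed form 2(2n - 1):

#include <stdio.h>

long T(long n) {                   /* T(0) = 0, T(n) = 2*T(n/2) + 2 */
    return n == 0 ? 0 : 2 * T(n / 2) + 2;
}

int main(void) {
    for (long n = 1; n <= 1024; n *= 2)
        printf("n=%5ld  T(n)=%6ld  2*(2n-1)=%6ld\n", n, T(n), 2 * (2 * n - 1));
    return 0;
}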

