CE 221 Data Structures and Algorithms


CE 221 Data Structures and Algorithms Chapter 2: Algorithm Analysis - II Text: Read Weiss, §2.4.3 – 2.4.6 Izmir University of Economics

Solutions for the Maximum Subsequence Sum Problem: Algorithm 1

Algorithm 1 exhaustively tries all possibilities: for every combination of starting and ending points (i and j, respectively), the partial sum (ThisSum) is calculated and compared with the maximum sum (MaxSum) computed so far. The running time is O(N³) and is entirely due to lines 5 and 6. A more precise analysis: line 6 is executed exactly
Σ_{i=0}^{N-1} Σ_{j=i}^{N-1} (j − i + 1) = N(N+1)(N+2)/6 times, which is O(N³).

public static int MaxSubSum1( int [ ] A )
{
    int ThisSum, MaxSum, i, j, k;
    int N = A.length;
/* 1*/  MaxSum = 0;
/* 2*/  for( i = 0; i < N; i++ )
/* 3*/      for( j = i; j < N; j++ )
        {
/* 4*/          ThisSum = 0;
/* 5*/          for( k = i; k <= j; k++ )
/* 6*/              ThisSum += A[ k ];
/* 7*/          if( ThisSum > MaxSum )
/* 8*/              MaxSum = ThisSum;
        }
/* 9*/  return MaxSum;
}

Solutions for the Maximum Subsequence Sum Problem: Algorithm 2

public static int MaxSubSum2( int [ ] A )
{
    int ThisSum, MaxSum, i, j;
    int N = A.length;
/* 1*/  MaxSum = 0;
/* 2*/  for( i = 0; i < N; i++ )
    {
/* 3*/      ThisSum = 0;
/* 4*/      for( j = i; j < N; j++ )
        {
/* 5*/          ThisSum += A[ j ];
/* 6*/          if( ThisSum > MaxSum )
/* 7*/              MaxSum = ThisSum;
        }
    }
/* 8*/  return MaxSum;
}

We can improve upon Algorithm 1 and avoid the cubic running time by removing a for loop. Obviously, this is not always possible, but in this case there are an awful lot of unnecessary computations in Algorithm 1. Notice that Σ_{k=i}^{j} A_k = A_j + Σ_{k=i}^{j-1} A_k, so the computation at lines 5 and 6 in Algorithm 1 is unduly expensive. Algorithm 2 is clearly O(N²); the analysis is even simpler than before. Izmir University of Economics

Solutions for the Maximum Subsequence Sum Problem: Algorithm 3

It is a recursive O(N log N) algorithm using a divide-and-conquer strategy.
Divide part: Split the problem into two roughly equal subproblems, each half the size of the original. The subproblems are then solved recursively.
Conquer part: Patch together the two solutions of the subproblems, possibly doing a small amount of additional work, to arrive at a solution for the whole problem.
The maximum subsequence sum can (1) occur entirely in the left half of the input, or (2) entirely in the right half, or (3) cross the middle and lie in both halves. Solve (1) and (2) recursively. For (3), add the largest sum in the first half that includes the last element of the first half to the largest sum in the second half that includes the first element of the second half.

Example:
First half: 4, -3, 5, -2    Second half: -1, 2, 6, -2
(1) Best sum entirely in the first half: 6 (A0 - A2).
(2) Best sum entirely in the second half: 8 (A5 - A6).
(3) Max sum in the first half ending at its last element: 4 (A0 - A3); max sum in the second half starting at its first element: 7 (A4 - A6). Thus, the max sum crossing the middle is 4 + 7 = 11 (A0 - A6). Answer!
Izmir University of Economics

Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Implementation I

/* Initial call */
public static int MaxSubSum3( int [ ] a )
{
    return maxSumRec( a, 0, a.length - 1 );
}

Izmir University of Economics

Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Implementation II
Izmir University of Economics

Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Implementation III
Izmir University of Economics
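
The recursive routine shown on the Implementation II and III slides is not reproduced in this transcript. Below is a minimal sketch of maxSumRec in the spirit of the textbook (Weiss) version the slides describe; the helper max3 is defined here for self-containment, and the line numbers quoted in the analysis on the next slide refer to the original listing, not to this sketch.

private static int max3( int a, int b, int c )
{
    return Math.max( a, Math.max( b, c ) );
}

private static int maxSumRec( int [ ] a, int left, int right )
{
    // Base case: a one-element subarray; a negative element contributes 0.
    if( left == right )
        return a[ left ] > 0 ? a[ left ] : 0;

    int center = ( left + right ) / 2;

    // Cases (1) and (2): best sums entirely in the left and right halves.
    int maxLeftSum  = maxSumRec( a, left, center );
    int maxRightSum = maxSumRec( a, center + 1, right );

    // Case (3): best sum ending at the last element of the left half ...
    int maxLeftBorderSum = 0, leftBorderSum = 0;
    for( int i = center; i >= left; i-- )
    {
        leftBorderSum += a[ i ];
        if( leftBorderSum > maxLeftBorderSum )
            maxLeftBorderSum = leftBorderSum;
    }

    // ... plus the best sum starting at the first element of the right half.
    int maxRightBorderSum = 0, rightBorderSum = 0;
    for( int i = center + 1; i <= right; i++ )
    {
        rightBorderSum += a[ i ];
        if( rightBorderSum > maxRightBorderSum )
            maxRightBorderSum = rightBorderSum;
    }

    return max3( maxLeftSum, maxRightSum, maxLeftBorderSum + maxRightBorderSum );
}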

Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Analysis

T(N): time to solve a maximum subsequence sum problem of size N.
T(1) = 1: a constant amount of time to execute lines 6 to 12.
Otherwise, the program must perform two recursive calls, the two for loops between lines 18 and 32, and some small amount of bookkeeping, such as lines 14 and 34. The two for loops combine to touch every element from A0 to AN-1, and there is constant work inside the loops, so the time spent in lines 18 to 32 is O(N). The remainder of the work is performed in lines 15 and 16 to solve two subsequence problems of size N/2 (assuming N is even). The total time for the algorithm then obeys:
T(1) = 1
T(N) = 2T(N/2) + O(N)
We can replace the O(N) term in the equation above with N; since T(N) will be expressed in Big-Oh notation anyway, this does not affect the answer.
T(N) = 2(2T(N/4) + N/2) + N = 4T(N/4) + 2N
     = 4(2T(N/8) + N/4) + 2N = 8T(N/8) + 3N
     = ...
     = 2^k T(N/2^k) + kN
If N = 2^k, then T(N) = N·T(1) + kN = N + N log N = O(N log N).
Izmir University of Economics
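
As a quick check of the closed form, unrolling the recurrence for N = 8 (so k = 3) gives T(2) = 2·1 + 2 = 4, T(4) = 2·4 + 4 = 12, and T(8) = 2·12 + 8 = 32, which indeed matches N log N + N = 24 + 8 = 32.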

Solutions for the Maximum Subsequence Sum Problem: Algorithm 4

Algorithm 4 is O(N). Why does the algorithm actually work? It is an improvement over Algorithm 2 based on the following:
Observation 1: If A[i] < 0, it cannot start an optimal subsequence; more generally, no subsequence with a negative sum can be a prefix of an optimal subsequence.
Observation 2: If the subsequence A[i..j] is the first one starting at i whose sum is negative (j is the lowest such index), then i can advance to j + 1.
Proof: Let p be any index with i < p ≤ j. Since j is the first index at which the running sum from i turns negative, the prefix A[i..p-1] has a non-negative sum. Hence any subsequence A[p..q] is not larger than the corresponding subsequence A[i..q], which merely adds that non-negative prefix, so nothing optimal is lost by advancing i to j + 1.
Izmir University of Economics
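
The code for Algorithm 4 is not included in this transcript; the following is a minimal sketch of the linear-time scan that the two observations describe (the method name MaxSubSum4 follows the naming pattern of the earlier algorithms and is an assumption).

public static int MaxSubSum4( int [ ] A )
{
    int ThisSum = 0, MaxSum = 0;

    for( int j = 0; j < A.length; j++ )
    {
        ThisSum += A[ j ];

        if( ThisSum > MaxSum )
            MaxSum = ThisSum;   // best sum seen so far
        else if( ThisSum < 0 )
            ThisSum = 0;        // Observation 2: a negative running sum cannot
                                // be a prefix of an optimal subsequence; restart at j + 1
    }
    return MaxSum;
}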

Logarithms in the Running Time

The most frequent appearance of logarithms centers around the following general rule: an algorithm is O(log N) if it takes constant (O(1)) time to cut the problem size by a fraction (usually 1/2). On the other hand, if constant time is required merely to reduce the problem by a constant amount (such as making the problem smaller by 1), then the algorithm is O(N).
We usually presume that the input is preread; otherwise, simply reading the input already takes Ω(N) time. Izmir University of Economics

Binary Search - I

Definition: Given an integer x and integers A0, A1, . . . , AN-1, which are presorted and already in memory, find i such that Ai = x, or return i = -1 if x is not in the input.
The loop is O(1) per iteration. It starts with high - low = N - 1 and ends with high - low ≤ -1. Every time through the loop, the value high - low must be at least halved from its previous value; thus, the loop is repeated at most about log(N - 1) + 2 times (the exact count is derived on the next slide), which is O(log N).
Izmir University of Economics
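
The binary search code itself does not appear in this transcript; a minimal sketch consistent with the description above (a loop driven by the low and high indices) is:

public static int binarySearch( int [ ] A, int x )
{
    int low = 0, high = A.length - 1;

    while( low <= high )            // i.e., until high - low <= -1
    {
        int mid = ( low + high ) / 2;

        if( A[ mid ] < x )
            low = mid + 1;          // discard the left half
        else if( A[ mid ] > x )
            high = mid - 1;         // discard the right half
        else
            return mid;             // found: A[mid] == x
    }
    return -1;                      // x is not in the input
}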

Binary Search - II

Initially, high - low = N - 1 = d.
Assume 2^k ≤ d < 2^(k+1).
After each iteration, the new value of d is either high - mid - 1 or mid - 1 - low, both of which are bounded from above by ⌊(d - 1)/2⌋ < d/2.
Hence, after k iterations, d is at most 1. The loop then iterates at most 2 more times, while d takes on the values 0 and -1 in this order. Thus, it is repeated at most k + 2 times.
Izmir University of Economics
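
As a concrete instance, for N = 9 elements we start with d = N - 1 = 8, and 2^3 ≤ 8 < 2^4 gives k = 3, so an unsuccessful search makes at most 3 + 2 = 5 passes through the loop.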

Euclid's Algorithm

It computes the greatest common divisor. The greatest common divisor gcd(m, n) of two integers is the largest integer that divides both. Thus, gcd(50, 15) = 5.
The algorithm computes gcd(m, n), assuming m ≥ n (if n > m, the first iteration of the loop swaps them).
Fact: If m > n, then m mod n < m/2.
Proof: There are two cases. If n ≤ m/2, then since the remainder is always smaller than n, the fact holds. If n > m/2, then n goes into m exactly once, leaving a remainder of m - n < m/2, which proves the fact.

iteration     m              n
(start)       M              N
after 1st     N              rem1 = M mod N   (rem1 < M/2, rem1 < N)
after 2nd     rem1 (< M/2)   rem2 = N mod rem1   (rem2 < N/2)

So after every two iterations the remainder is less than half of its value two iterations earlier. Thus, the algorithm takes O(log N) iterations.
Izmir University of Economics
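
The loop referred to above is not reproduced in the transcript; a minimal sketch of the usual remainder loop is:

public static long gcd( long m, long n )
{
    while( n != 0 )
    {
        long rem = m % n;   // if n > m, this first iteration simply swaps m and n
        m = n;
        n = rem;
    }
    return m;
}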

Exponentiation - I

Algorithm pow(X, N) raises an integer to an integer power. Count the number of multiplications as the measure of running time.
Computing X^N by straightforward repeated multiplication takes N - 1 multiplications.
Lines 1 to 4 handle the base cases of the recursion.
X^N = X^(N/2) · X^(N/2) if N is even
X^N = X^((N-1)/2) · X^((N-1)/2) · X if N is odd
The number of multiplications required is clearly at most 2 log N, because at most two multiplications are needed to halve the problem.

public static boolean isEven( int n )
{
    return n % 2 == 0;
}

public static long pow( long X, int N )
{
/* 1*/  if( N == 0 )
/* 2*/      return 1;
/* 3*/  if( N == 1 )
/* 4*/      return X;
/* 5*/  if( isEven( N ) )
/* 6*/      return pow( X * X, N / 2 );
        else
/* 7*/      return pow( X * X, N / 2 ) * X;
}

Izmir University of Economics

Exponentiation - II

It is interesting to note how much the code can be tweaked.
Lines 3 and 4 are unnecessary (line 7 already does the right thing when N = 1).
Line 7
/* 7*/ return pow( X * X, N / 2 ) * X;
can be rewritten as
/* 7*/ return pow( X, N - 1 ) * X;
Line 6, on the other hand,
/* 6*/ return pow( X * X, N / 2 );
cannot be substituted by any of the following:
/*6a*/ return( pow( pow( X, 2 ), N / 2 ) );
/*6b*/ return( pow( pow( X, N / 2 ), 2 ) );
/*6c*/ return( pow( X, N / 2 ) * pow( X, N / 2 ) );
Lines 6a and 6b are both incorrect because, when the exponent is 2, one of the recursive calls to pow again has 2 as its second argument; no progress is made and an infinite recursion results. Using line 6c affects efficiency: there are now two recursive calls of size N/2 instead of only one, so the number of multiplications roughly satisfies M(N) = 2M(N/2) + O(1), and an analysis shows that the running time is no longer O(log N).
Izmir University of Economics

Checking Your Analysis - I

Once an analysis has been performed, it is desirable to see whether the answer is correct and as good as possible. One way to do this is to code up the program and see if the empirically observed running time matches the running time predicted by the analysis.
When N doubles, the running time goes up by a factor of 2 for linear programs, 4 for quadratic programs, and 8 for cubic programs. Programs that run in logarithmic time take only an additive constant longer when N doubles, and programs that run in O(N log N) take slightly more than twice as long under the same circumstances.
Another commonly used trick to verify that a program is O(f(N)) is to compute the values T(N)/f(N) for a range of N, where T(N) is the empirically observed running time. If f(N) is a tight answer for the running time, the computed values converge to a positive constant. If f(N) is an over-estimate, the values converge to zero. If f(N) is an under-estimate (and hence wrong), the values diverge.
Izmir University of Economics
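
As an illustration of the T(N)/f(N) trick, here is a small, self-contained sketch (the seed, input sizes, and value range are arbitrary choices, not from the slides) that times Algorithm 2 for doubling N and prints the ratio against f(N) = N²:

import java.util.Random;

public class GrowthCheck
{
    public static void main( String [ ] args )
    {
        Random rnd = new Random( 42 );
        for( int n = 1000; n <= 32000; n *= 2 )
        {
            int [ ] a = new int[ n ];
            for( int i = 0; i < n; i++ )
                a[ i ] = rnd.nextInt( 201 ) - 100;        // values in [-100, 100]

            long start = System.nanoTime();
            maxSubSum2( a );                               // quadratic algorithm under test
            long elapsed = System.nanoTime() - start;

            double ratio = elapsed / ( (double) n * n );   // T(N) / N^2
            System.out.printf( "N = %6d   time = %12d ns   T(N)/N^2 = %.6f%n",
                               n, elapsed, ratio );
        }
    }

    // Algorithm 2 from the earlier slide, restated so the sketch is self-contained.
    public static int maxSubSum2( int [ ] a )
    {
        int maxSum = 0;
        for( int i = 0; i < a.length; i++ )
        {
            int thisSum = 0;
            for( int j = i; j < a.length; j++ )
            {
                thisSum += a[ j ];
                if( thisSum > maxSum )
                    maxSum = thisSum;
            }
        }
        return maxSum;
    }
}

If the printed ratios settle to a positive constant, the O(N²) estimate is tight; if they drift toward zero it is an over-estimate, and if they grow it is an under-estimate.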

Checking Your Analysis - II

This program segment computes the probability that two distinct positive integers, less than or equal to N and chosen randomly, are relatively prime (as N gets large, the answer approaches 6/π²).
What is the running time complexity? O(N² log N). Are you sure?
Izmir University of Economics
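
The program segment itself does not appear in this transcript; a minimal sketch of the kind of code being described, using a gcd routine such as the one sketched on the Euclid's Algorithm slide, might look like this:

public static void relativelyPrimeTest( int N )
{
    int rel = 0, tot = 0;

    for( int i = 1; i <= N; i++ )
        for( int j = i + 1; j <= N; j++ )
        {
            tot++;                       // one more distinct pair (i, j)
            if( gcd( i, j ) == 1 )       // relatively prime?
                rel++;
        }

    System.out.println( "Fraction of relatively prime pairs: " + (double) rel / tot );
}

With roughly N²/2 pairs and an O(log N) gcd computation per pair, O(N² log N) is the natural guess; the table on the next slide checks that guess against measurements.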

Checking Your Analysis - III As the table dictates, last column is most likely to be the correct one. Izmir University of Economics

Homework Assignments

2.13, 2.14, 2.15, 2.22, 2.25, 2.26, 2.28
You are requested to study and solve the exercises. Note that these are for you to practice only; you are not to deliver the results to me.
Izmir University of Economics