The Design & Analysis of Algorithms. Lecture 1, 2010. M. Sakalli.


1 The Design & Analysis of Algorithms. Lecture 1, 2010. M. Sakalli.

2 M, Sakalli, CS246 Design & Analysis of Algorithms, Lecture Notes 1-1
What you should have learnt by the end of today:
- Why analysis?
- What parameters to observe (slide 6).
- Methodological differences to achieve better outcomes: Euclid's algorithm, solved with three methods.
- Asymptotic order of growth.
- Examples with insertion and selection sorting.

3 What is an algorithm? A sequence of unambiguous, well-defined instructions for solving a problem: for any input from a specified range of values, it produces the desired output, and execution must complete in a finite number of steps, i.e., in a finite amount of time. The problem relates input parameters to certain state parameters (an analytic rendering). Diagram: input → algorithm ("computer") → output.

4 Important points to keep in mind:
- Reduce ambiguity: keep the code simple and clean, with well-defined and correct steps; specify the range of applicable inputs.
- Investigate other approaches to the problem, to determine whether more efficient algorithms are possible.
- Theoretically: prove its correctness; analyze its efficiency (theoretical and empirical); establish its optimality.

5 Efficiency.
- Complexity:
  - Time efficiency: estimated in the asymptotic sense with big O, Ω, Θ; note that such comparisons also hide a constant factor.
  - Space efficiency.
  - The methods applied: recursive, parallel.
  - Desired scalability: over various ranges of inputs and the size and dimension of the problem under consideration. Examples: video sequences, ROI.
- Computational model in terms of an abstract computer: a Turing machine.
- http://en.wikipedia.org/wiki/Analysis_of_algorithms

6 Historical perspective: Muhammad ibn Musa al-Khwarizmi, 9th-century mathematician. www.lib.virginia.edu/science/parshall/khwariz.html

7 Analysis means: evaluating the costs (time and space costs) and managing the resources and methods.
- A generic RAM model of computation, in which instructions are executed consecutively, not concurrently or in parallel. Running-time analysis: count the primitive operations executed for each line of code, with a cost c_i per line (defined as machine-independently as possible).
- Asymptotic analysis: ignore machine-dependent constants and look at the computational growth of T(n) as n → ∞, where n is the input size that determines the number of iterations. Relative speed (on the same machine); absolute speed (between computers).

8 Euclid's Algorithm. Problem definition: compute gcd(m,n) of two nonnegative integers m and n, not both zero, with m > n. Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = undefined. Euclid's algorithm is based on repeated application of the equality gcd(m,n) = gcd(n, m mod n) until the second number reaches 0. Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12. Formally: r_0 = m, r_1 = n, r_{i-1} = r_i q_i + r_{i+1} with 0 < r_{i+1} < r_i for 1 ≤ i < t, ..., r_{t-1} = r_t q_t + 0.

9 Step 1: If n == 0 (or m == n), return m and stop; otherwise go to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

Upper bound on the number of iterations i: i ≤ log_φ(n) + 1, where φ = (1+√5)/2. (Not sure on this; needs checking.) i = O(log(max(m,n))), since r_{i+1} ≤ r_{i-1}/2. Lower bound: Ω(log(max(m,n))).
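The while-loop version above translates directly into Python; a minimal sketch (the iteration counter is an added illustration for observing the O(log(max(m,n))) bound, not part of the slide's pseudocode):

```python
def gcd(m, n):
    """Euclid's algorithm: apply gcd(m, n) = gcd(n, m mod n) until n is 0."""
    while n != 0:
        m, n = n, m % n
    return m

def gcd_with_count(m, n):
    """Same loop, also returning the number of iterations performed."""
    i = 0
    while n != 0:
        m, n = n, m % n
        i += 1
    return m, i
```

For example, gcd(60, 24) runs two iterations: (60, 24) → (24, 12) → (12, 0), returning 12.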

10 Proof of correctness of Euclid's algorithm.
- Step 1: If n divides m, then gcd(m,n) = n. Since gcd(m,n) divides n, gcd(m,n) ≤ n; and since n divides both m and n, n ≤ gcd(m,n). Hence gcd(m,n) = n.
- Step 2: gcd(m,n) = gcd(n, m mod n). If m = nb + r for integers b and r, then gcd(m,n) = gcd(n,r): every common divisor of m and n also divides r. Proof: for a common divisor c write m = cp, n = cq; then r = m - nb = c(p - qb), so c divides r. In particular gcd(m,n) divides both n and r, which yields gcd(m,n) ≤ gcd(n,r); the symmetric argument gives the reverse inequality.

11 Other methods for computing gcd(m,n). The consecutive-integer-checking algorithm; not a good way, since it checks every candidate:
Step 1: Assign the value of min{m,n} to t.
Step 2: Divide m by t. If the remainder is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder is 0, return t and stop; otherwise, go to Step 4.
Step 4: Decrease t by 1 and go to Step 2.
Exhaustive, and very slow; zero inputs would also need to be checked for separately. O(min(m,n)) iterations, and Θ(min(m,n)) when gcd(m,n) = 1. With Θ(1) per operation, the overall complexity is Θ(min(m,n)).
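A direct Python rendering of the four steps (assuming m, n ≥ 1, since the slide notes that zero inputs are not handled):

```python
def gcd_consecutive_check(m, n):
    """Consecutive-integer-checking algorithm from the slide.
    Assumes m, n >= 1; counts down from min(m, n) until a common
    divisor is found, so it runs Theta(min(m, n)) when gcd = 1."""
    t = min(m, n)                       # Step 1
    while t >= 1:
        if m % t == 0 and n % t == 0:   # Steps 2-3
            return t
        t -= 1                          # Step 4
```

Since t = 1 always divides both inputs, the loop terminates for any m, n ≥ 1.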

12 Other methods for gcd(m,n) [cont.]: the middle-school procedure.
Step 1: Find the prime factorization of m.
Step 2: Find the prime factorization of n.
Step 3: Find all the common prime factors.
Step 4: Compute the product of all the common prime factors and return it as gcd(m,n).
Is this an algorithm?
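A sketch of the procedure in Python. Trial-division factorization is used as a stand-in for Steps 1 and 2, which the slide leaves unspecified (that gap is the point of the question above):

```python
from collections import Counter

def prime_factors(x):
    """Trial-division factorization; stands in for Steps 1-2.
    Returns a multiset of prime factors, e.g. 60 -> {2: 2, 3: 1, 5: 1}."""
    factors = Counter()
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors[d] += 1
            x //= d
        d += 1
    if x > 1:
        factors[x] += 1
    return factors

def gcd_middle_school(m, n):
    """Steps 3-4: common prime factors (with multiplicity), multiplied."""
    common = prime_factors(m) & prime_factors(n)  # multiset intersection
    result = 1
    for p, k in common.items():
        result *= p ** k
    return result
```

For gcd(60, 24): 60 = 2·2·3·5, 24 = 2·2·2·3, common factors 2·2·3 = 12.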

13 Sieve of Eratosthenes. Input: integer n ≥ 2. Output: the list of primes less than or equal to n; sift out the numbers that are not prime.
for p ← 2 to n do A[p] ← p
for p ← 2 to n do
    if A[p] ≠ 0        // p hasn't been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0   // mark element as eliminated
            j ← j + p
Example (n = 25):
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
2 3   5   7   9    11    13    15    17    19    21    23    25
2 3   5   7        11    13          17    19          23    25
2 3   5   7        11    13          17    19          23
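The pseudocode above, kept close to the slide (A[p] ← p initially, elimination starting from p·p), as runnable Python:

```python
def sieve(n):
    """Sieve of Eratosthenes: return the list of primes <= n."""
    A = list(range(n + 1))      # A[p] = p for p = 0..n
    for p in range(2, n + 1):
        if A[p] != 0:           # p hasn't been previously eliminated
            j = p * p
            while j <= n:
                A[j] = 0        # mark element as eliminated
                j += p
    return [p for p in range(2, n + 1) if A[p] != 0]
```

This reproduces the worked example: sieve(25) yields 2 3 5 7 11 13 17 19 23.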

14 Asymptotic order of growth: a way of comparing functions that ignores constant factors and small input sizes.
- O(g(n)): class of functions f(n) that grow no faster than g(n).
- Θ(g(n)): class of functions f(n) that grow at the same rate as g(n).
- Ω(g(n)): class of functions f(n) that grow at least as fast as g(n).

15 Establishing order of growth using the definition.
Definition: f(n) is in O(g(n)) if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n_0 such that
    f(n) ≤ c g(n) for every n ≥ n_0.
f(n) is o(g(n)) if for every positive constant c there is an n_0 such that f(n) ≤ c g(n) for every n ≥ n_0; that is, f grows strictly more slowly than any arbitrarily small constant multiple of g.
f(n) is Ω(g(n)) if there exists a positive constant c such that f(n) ≥ c g(n) for every n ≥ n_0.
f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)).
Examples:
- 10n is O(n^2): 10n ≤ 10n^2 for n ≥ 1 (c = 10), or 10n ≤ n^2 for n ≥ 10 (c = 1).
- 5n+20 is O(n): 5n+20 ≤ 5n+20n = 25n for all n ≥ 1 (c = 25), or 5n+20 ≤ 10n for n ≥ 4 (c = 10).
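The constants claimed in the two examples can be checked mechanically; a small sketch (the tested ranges are illustrative, since the inequalities must hold for all n ≥ n_0):

```python
# 10n <= c*n^2: c = 10 works from n_0 = 1, and c = 1 from n_0 = 10
assert all(10 * n <= 10 * n ** 2 for n in range(1, 1000))
assert all(10 * n <= n ** 2 for n in range(10, 1000))

# 5n + 20 <= c*n: c = 25 works from n_0 = 1, and c = 10 from n_0 = 4
assert all(5 * n + 20 <= 25 * n for n in range(1, 1000))
assert all(5 * n + 20 <= 10 * n for n in range(4, 1000))
```

Note the trade-off visible here: a larger constant c buys a smaller threshold n_0, and vice versa.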

16 M, Sakalli, CS246 Design & Analysis of Algorithms, Lecture Notes 1-15 O, Ω, Θ

17 Example of a computational problem: sorting.
- Statement of the problem: Input: a sequence of n numbers. Problem: reorder in ascending or descending order. Desired output: a′_i ≤ a′_j for i < j.
- Instance: the given sequence.
- Algorithms: selection sort, insertion sort, merge sort, (many others).

18 Selection Sort. Input: array a[1], ..., a[n]. Output: array a sorted in non-decreasing order. Scan the elements of the unsorted part (n-1, then n-2, ... of them), find the smallest, and swap: (n-1)n/2 = Θ(n^2) comparisons, independent of the input.
Algorithm (in place):
for i ← 1 to n-1
    swap a[i] with the smallest of a[i], ..., a[n]
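The two-line pseudocode expands into the following Python sketch (0-based indices):

```python
def selection_sort(a):
    """Selection sort, in place: for each position i, find the smallest
    element of a[i..n-1] and swap it into place. The inner scan makes
    (n-1) + (n-2) + ... + 1 = (n-1)n/2 comparisons regardless of input."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for k in range(i + 1, n):
            if a[k] < a[smallest]:
                smallest = k
        a[i], a[smallest] = a[smallest], a[i]
    return a
```

Because the comparison count does not depend on the data, best, average, and worst case are all Θ(n^2).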

19 Insertion-Sort and an idea of runtime analysis. Input size n = length[A]; line 4 is executed t_j times for each j.

Line                                          Cost    Times
1: for j ← 2 to n do                          c_1     n
2:     key ← A[j]                             c_2     n-1
       // Insert A[j] into the sorted sequence A[1..j-1]
3:     i ← j - 1                              c_3     n-1
4:     while i > 0 and A[i] > key do          c_4     Σ_{j=2..n} t_j
5:         A[i+1] ← A[i]                      c_5     Σ_{j=2..n} (t_j - 1)
6:         i ← i - 1                          c_6     Σ_{j=2..n} (t_j - 1)
       end while
7:     A[i+1] ← key                           c_7     n-1
   end for

Total runtime:
T(n) = c_1 n + c_2 (n-1) + c_3 (n-1) + c_4 Σ_{j=2..n} t_j + c_5 Σ_{j=2..n} (t_j - 1) + c_6 Σ_{j=2..n} (t_j - 1) + c_7 (n-1).
Best case (array already sorted, t_j = 1): T(n) = c_1 n + c_2 (n-1) + c_3 (n-1) + c_4 (n-1) + c_7 (n-1) = an + b.
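The numbered pseudocode in Python (0-based indices), with an added counter for the total number of line-4 tests, i.e. Σ t_j from the analysis; the counter is an illustration, not part of the original pseudocode:

```python
def insertion_sort(a):
    """Insertion sort as in the numbered pseudocode; also returns the
    total number of times the while-loop condition is evaluated."""
    t_total = 0
    for j in range(1, len(a)):        # line 1
        key = a[j]                    # line 2
        i = j - 1                     # line 3
        while i >= 0 and a[i] > key:  # line 4
            a[i + 1] = a[i]           # line 5
            i -= 1                    # line 6
            t_total += 1              # a passing test of line 4
        t_total += 1                  # the final, failing test of line 4
        a[i + 1] = key                # line 7
    return a, t_total
```

On an already-sorted array each t_j = 1, so the counter is n-1 (the linear best case); on a reverse-sorted array t_j = j, giving the quadratic worst case.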

20 Insertion-Sort analysis continued. Worst-case running time: if the array A is sorted in reverse order, the while loop runs fully, so t_j = j:
Σ_{j=2..n} t_j = Σ_{j=2..n} j = n(n+1)/2 - 1
Σ_{j=2..n} (t_j - 1) = Σ_{j=2..n} (j - 1) = n(n-1)/2
T(n) = (c_4 + c_5 + c_6) n^2/2 + (c_1 + c_2 + c_3 + (c_4 - c_5 - c_6)/2 + c_7) n - (c_2 + c_3 + c_4 + c_7)
     = An^2 + Bn + C.
Average-case runtime: assume about half of the subarray is out of order; then t_j ≈ j/2, which leads to a similar quadratic function.

21 The methods:
- Brute force
- Divide and conquer
- Decrease and conquer
- Transform and conquer
- Greedy approach
- Dynamic programming
- Iterative improvement
- Backtracking
- Branch and bound
- Randomized algorithms

