
2-1  2 Algorithms: Principles. Efficiency. Complexity. O-notation. Recursion. © 2001, D.A. Watt and D.F. Brown

2-2 Principles (1)
An algorithm is a step-by-step procedure for solving a stated problem. The algorithm will be performed by a processor (which may be human, mechanical, or electronic). The algorithm must be expressed in steps that the processor is capable of performing. The algorithm must eventually terminate.

2-3 Principles (2)
The algorithm must be expressed in some language that the processor "understands". (But the underlying procedure is independent of the particular language chosen.) The stated problem must be solvable, i.e., capable of solution by a step-by-step procedure.

2-4 Efficiency
Given several algorithms to solve the same problem, which algorithm is "best"? Given an algorithm, is it feasible to use it at all? In other words, is it efficient enough to be usable in practice?
How much time does the algorithm require? How much space (memory) does the algorithm require? In general, both time and space requirements depend on the algorithm's input (typically the "size" of the input).

2-5 Example 1: efficiency
Hypothetical profile of two sorting algorithms: Algorithm B's time grows more slowly than A's. [Graph: time (ms) against items to be sorted, n; curves for Algorithm A and Algorithm B.]

2-6 Efficiency: measuring time
Measure time in seconds?
+ useful in practice
– depends on language, compiler, and processor.
Count algorithm steps?
+ does not depend on compiler or processor
– depends on granularity of steps.
Count characteristic operations? (e.g., arithmetic operations in mathematical algorithms, comparisons in searching algorithms)
+ depends only on the algorithm itself
+ measures the algorithm's intrinsic efficiency.

2-7 Remedial Mathematics: Powers
Consider a number b and a non-negative integer n. Then b to the power of n (written b^n) is the multiplication of n copies of b: b^n = b × … × b.
E.g.: b^3 = b × b × b;  b^2 = b × b;  b^1 = b;  b^0 = 1.

2-8 Remedial Mathematics: Logarithms
Consider a positive number y. Then the logarithm of y to the base 2 (written log2 y) is the number of copies of 2 that must be multiplied together to equal y.
If y is a power of 2, log2 y is an integer:
E.g.: log2 1 = 0 since 2^0 = 1;  log2 2 = 1 since 2^1 = 2;  log2 4 = 2 since 2^2 = 4;  log2 8 = 3 since 2^3 = 8.
If y is not a power of 2, log2 y is fractional:
E.g.: log2 5 ≈ 2.32;  log2 7 ≈ 2.81.
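Since base-2 logarithms recur throughout this chapter, here is a quick way to compute them in Java (a sketch; the class and method names are mine). Java's Math.log is the natural logarithm, so log2 y = ln y / ln 2:

```java
public class Log2Demo {
    // log2 y = ln y / ln 2 (Math.log is the natural logarithm).
    static double log2(double y) {
        return Math.log(y) / Math.log(2.0);
    }

    public static void main(String[] args) {
        System.out.println(log2(8));  // 3 (since 2^3 = 8)
        System.out.println(log2(5));  // approximately 2.32
    }
}
```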

2-9 Remedial Mathematics: Logarithm laws
log2 (2^n) = n
log2 (x × y) = log2 x + log2 y, since the number of 2s multiplied to make x × y is the sum of the number of 2s multiplied to make x and the number of 2s multiplied to make y.
log2 (x/y) = log2 x – log2 y

2-10 Remedial Mathematics: Logarithms example (1)
How many times must we halve the value of n (discarding any remainders) to reach 1?
Suppose that n is a power of 2:
E.g.: 8 → 4 → 2 → 1 (8 must be halved 3 times); 16 → 8 → 4 → 2 → 1 (16 must be halved 4 times).
If n = 2^m, n must be halved m times.
Suppose that n is not a power of 2:
E.g.: 9 → 4 → 2 → 1 (9 must be halved 3 times); 15 → 7 → 3 → 1 (15 must be halved 3 times).
If 2^m < n < 2^(m+1), n must be halved m times.

2-11 Remedial Mathematics: Logarithms example (2)
In general, n must be halved m times if:
2^m ≤ n < 2^(m+1)
i.e., log2 (2^m) ≤ log2 n < log2 (2^(m+1))
i.e., m ≤ log2 n < m+1
i.e., m = floor(log2 n).
The floor of x (written floor(x) or ⌊x⌋) is the largest integer not greater than x.
Conclusion: n must be halved floor(log2 n) times to reach 1. Also: n must be halved floor(log2 n) + 1 times to reach 0.
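The conclusion above is easy to check mechanically (a sketch; the class and method names are mine). Integer division by 2 in Java discards the remainder, exactly as required:

```java
public class Halving {
    // Count how many times n must be halved (discarding remainders)
    // before it reaches 1.
    static int halvingsTo1(int n) {
        int count = 0;
        while (n > 1) {
            n /= 2;  // integer division discards the remainder
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvingsTo1(16));  // 4, i.e. floor(log2 16)
        System.out.println(halvingsTo1(15));  // 3, i.e. floor(log2 15)
        System.out.println(halvingsTo1(9));   // 3, i.e. floor(log2 9)
    }
}
```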

2-12 Example 2: power algorithms (1)
Simple power algorithm. To compute b^n:
1. Set p to 1.
2. For i = 1, …, n, repeat:
   2.1. Multiply p by b.
3. Terminate with answer p.

2-13 Example 2 (2)
Analysis (counting multiplications): Step 2.1 performs a multiplication, and this step is repeated n times.
No. of multiplications = n

2-14 Example 2 (3)
Implementation in Java:

static int power (int b, int n) {
// Return b^n (where n is non-negative).
    int p = 1;
    for (int i = 1; i <= n; i++)
        p *= b;
    return p;
}

2-15 Example 2 (4)
Idea: b^1000 = b^500 × b^500. If we know b^500, we can compute b^1000 with only 1 more multiplication!
Smart power algorithm. To compute b^n:
1. Set p to 1, set q to b, and set m to n.
2. While m > 0, repeat:
   2.1. If m is odd, multiply p by q.
   2.2. Halve m (discarding any remainder).
   2.3. Multiply q by itself.
3. Terminate with answer p.

2-16 Example 2 (5)
Analysis (counting multiplications): Steps 2.1–2.3 together perform at most 2 multiplications. They are repeated as often as we must halve the value of n (discarding any remainder) until it reaches 0, i.e., floor(log2 n) + 1 times.
Max. no. of multiplications = 2(floor(log2 n) + 1) = 2 floor(log2 n) + 2

2-17 Example 2 (6)
Implementation in Java:

static int power (int b, int n) {
// Return b^n (where n is non-negative).
    int p = 1, q = b, m = n;
    while (m > 0) {
        if (m%2 != 0) p *= q;
        m /= 2;
        q *= q;
    }
    return p;
}

2-18 Example 2 (7)
Comparison: [Graph: number of multiplications against n, showing the simple power algorithm growing linearly and the smart power algorithm growing logarithmically.]

2-19 Complexity
For many interesting algorithms, the exact number of operations is too difficult to analyse mathematically. To simplify the analysis:
- identify the fastest-growing term
- neglect slower-growing terms
- neglect the constant factor in the fastest-growing term.
The resulting formula is the algorithm's time complexity. It focuses on the growth rate of the algorithm's time requirement. Similarly for space complexity.
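To see why slower-growing terms and constant factors can safely be neglected, consider the formula f(n) = 3n^2 + 5n + 7 (my own illustrative example, not from the slides). As n grows, f(n)/n^2 approaches the constant 3, so only the n^2 term matters for the growth rate:

```java
public class DominantTerm {
    // Ratio of f(n) = 3n^2 + 5n + 7 to its fastest-growing term n^2.
    static double ratio(long n) {
        double f = 3.0 * n * n + 5.0 * n + 7.0;
        return f / ((double) n * n);
    }

    public static void main(String[] args) {
        for (long n = 10; n <= 1000000; n *= 100) {
            System.out.println("n = " + n + ", f(n)/n^2 = " + ratio(n));
        }
        // The ratio tends to 3: f(n) is O(n^2).
    }
}
```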

2-20 Example 3: analysis of power algorithms (1)
Analysis of simple power algorithm (counting multiplications):
No. of multiplications = n
Time taken is approximately proportional to n. Time complexity is of order n. This is written O(n).

2-21 Example 3 (2)
Analysis of smart power algorithm (counting multiplications):
Max. no. of multiplications = 2 floor(log2 n) + 2.
Neglect the slow-growing term, +2: simplify to 2 floor(log2 n).
Neglect the constant factor, 2: then to floor(log2 n).
Neglect floor(), which on average subtracts 0.5, a constant term: then to log2 n.
Time complexity is of order log n. This is written O(log n).

2-22 Example 3 (3)
Comparison: [Graph: growth of n vs. log n, plotted against n.]

2-23 O-notation (1)
We have seen that an O(log n) algorithm is inherently better than an O(n) algorithm for large values of n: O(log n) signifies a slower growth rate than O(n).
Complexity O(X) means "of order X", i.e., growing proportionally to X. Here X signifies the growth rate, neglecting slower-growing terms and constant factors.

2-24 O-notation (2)
Common time complexities:
O(1)        constant time      (feasible)
O(log n)    logarithmic time   (feasible)
O(n)        linear time        (feasible)
O(n log n)  log linear time    (feasible)
O(n^2)      quadratic time     (sometimes feasible)
O(n^3)      cubic time         (sometimes feasible)
O(2^n)      exponential time   (rarely feasible)

2-25 Growth rates (1)
Comparison of growth rates:
n        10      20           30           40
1        1       1            1            1
log n    3.3     4.3          4.9          5.3
n log n  33      86           147          213
n^2      100     400          900          1,600
n^3      1,000   8,000        27,000       64,000
2^n      1,024   1.0 million  1.1 billion  1.1 trillion

2-26 Growth rates (2)
Graphically: [Graph: curves for log n, n, n log n, n^2, and 2^n plotted against n.]

2-27 Example 4: growth rates (1)
Consider a problem that requires n data items to be processed, and several competing algorithms to solve this problem. Suppose that their time requirements on a particular processor are as follows:
Algorithm Log:     0.3 log2 n     seconds
Algorithm Lin:     0.1 n          seconds
Algorithm LogLin:  0.03 n log2 n  seconds
Algorithm Quad:    0.01 n^2       seconds
Algorithm Cub:     0.001 n^3      seconds
Algorithm Exp:     0.0001 × 2^n   seconds
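Plugging a concrete n into these hypothetical formulas shows how dramatically the growth rates diverge (a sketch; the class and method names are mine):

```java
public class TimeRequirements {
    static double log2(double n) { return Math.log(n) / Math.log(2.0); }

    // Hypothetical running times, in seconds, from the slide.
    static double timeLog(int n)  { return 0.3 * log2(n); }
    static double timeLin(int n)  { return 0.1 * n; }
    static double timeQuad(int n) { return 0.01 * n * n; }
    static double timeCub(int n)  { return 0.001 * (double) n * n * n; }
    static double timeExp(int n)  { return 0.0001 * Math.pow(2.0, n); }

    public static void main(String[] args) {
        int n = 100;
        System.out.println("Log:  " + timeLog(n)  + " s");  // about 2 s
        System.out.println("Lin:  " + timeLin(n)  + " s");  // 10 s
        System.out.println("Quad: " + timeQuad(n) + " s");  // 100 s
        System.out.println("Cub:  " + timeCub(n)  + " s");  // 1000 s
        System.out.println("Exp:  " + timeExp(n)  + " s");  // about 1.3e26 s
    }
}
```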

2-28 Example 4 (2)
Compare how many data items (n) each algorithm can process in 1, 2, …, 10 seconds: [Graph: n against time from 0:01 to 0:10, one curve each for algorithms Log, Lin, LogLin, Quad, Cub, and Exp.]

2-29 Recursion
A recursive algorithm is one expressed in terms of itself. In other words, at least one step of a recursive algorithm is a "call" to itself. In Java, a recursive method is one that calls itself.

2-30 When should recursion be used?
Sometimes an algorithm can be expressed using either iteration or recursion. The recursive version tends to be:
+ more elegant and easier to understand
– less efficient (extra calls consume time and space).
Sometimes an algorithm can be expressed only using recursion.

2-31 When does recursion work?
Given a recursive algorithm, how can we be sure that it terminates? The algorithm must have:
- one or more "easy" cases
- one or more "hard" cases.
In an "easy" case, the algorithm must give a direct answer without calling itself. In a "hard" case, the algorithm may call itself, but only to deal with an "easier" case.

2-32 Example 5: recursive power algorithms (1)
Recursive definition of b^n:
b^n = 1            if n = 0
b^n = b × b^(n–1)  if n > 0
Simple recursive power algorithm. To compute b^n:
1. If n = 0:
   1.1. Terminate with answer 1.
2. If n > 0:
   2.1. Terminate with answer b × b^(n–1).
Easy case (step 1): solved directly. Hard case (step 2): solved by computing b^(n–1), which is easier since n–1 is closer than n to 0.

2-33 Example 5 (2)
Implementation in Java:

static int power (int b, int n) {
// Return b^n (where n is non-negative).
    if (n == 0)
        return 1;
    else
        return b * power(b, n-1);
}

2-34 Example 5 (3)
Idea: b^1000 = b^500 × b^500, and b^1001 = b × b^500 × b^500.
Alternative recursive definition of b^n:
b^n = 1                      if n = 0
b^n = b^(n/2) × b^(n/2)      if n > 0 and n is even
b^n = b × b^(n/2) × b^(n/2)  if n > 0 and n is odd
(Recall: n/2 discards the remainder if n is odd.)

2-35 Example 5 (4)
Smart recursive power algorithm. To compute b^n:
1. If n = 0:
   1.1. Terminate with answer 1.
2. If n > 0:
   2.1. Let p be b^(n/2).
   2.2. If n is even:
      2.2.1. Terminate with answer p × p.
   2.3. If n is odd:
      2.3.1. Terminate with answer b × p × p.
Easy case (step 1): solved directly. Hard case (step 2): solved by computing b^(n/2), which is easier since n/2 is closer than n to 0.

2-36 Example 5 (5)
Implementation in Java:

static int power (int b, int n) {
// Return b^n (where n is non-negative).
    if (n == 0)
        return 1;
    else {
        int p = power(b, n/2);
        if (n % 2 == 0)
            return p * p;
        else
            return b * p * p;
    }
}

2-37 Example 5 (6)
Analysis (counting multiplications): Each recursive power algorithm performs the same number of multiplications as the corresponding non-recursive algorithm, so their time complexities are the same:
                          non-recursive   recursive
Simple power algorithm    O(n)            O(n)
Smart power algorithm     O(log n)        O(log n)
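This claim can be confirmed by instrumenting the two recursive algorithms with a multiplication counter (a sketch; the counter field and method names are my own scaffolding):

```java
public class PowerCount {
    static int mults;  // number of multiplications performed so far

    // Simple recursive power algorithm, instrumented.
    static int simplePower(int b, int n) {
        if (n == 0) return 1;
        mults++;
        return b * simplePower(b, n - 1);
    }

    // Smart recursive power algorithm, instrumented.
    static int smartPower(int b, int n) {
        if (n == 0) return 1;
        int p = smartPower(b, n / 2);
        if (n % 2 == 0) { mults += 1; return p * p; }          // 1 multiplication
        else            { mults += 2; return b * p * p; }      // 2 multiplications
    }

    public static void main(String[] args) {
        mults = 0;
        simplePower(2, 20);
        System.out.println("simple: " + mults);  // 20, i.e. n
        mults = 0;
        smartPower(2, 20);
        System.out.println("smart: " + mults);   // 7, within the bound 2 floor(log2 20) + 2 = 10
    }
}
```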

2-38 Example 5 (7)
Analysis (space): The non-recursive power algorithms use constant space, i.e., O(1). A recursive algorithm uses extra space for each recursive call: the simple recursive power algorithm calls itself n times before returning, whereas the smart recursive power algorithm calls itself floor(log2 n) times.
                          non-recursive   recursive
Simple power algorithm    O(1)            O(n)
Smart power algorithm     O(1)            O(log n)

2-39 Example 6: rendering integers (1)
Rendering a value means computing a string representation of it (suitable for printing or display).
Problem: Render a given integer i to the base r (where r is in the range 2…10). E.g.:
  i     r    rendering
 +29    2    "11101"
 +29    8    "35"
 –29    8    "–35"
 +29   10    "29"

2-40 Example 6 (2)
Recursive integer rendering algorithm. To render the integer i to the base r:
1. If i < 0:
   1.1. Render the character '–'.
   1.2. Render (–i) to the base r.
2. If 0 ≤ i < r:
   2.1. Let d be the digit corresponding to i.
   2.2. Render the character d.
3. If i ≥ r:
   3.1. Let d be the digit corresponding to (i modulo r).
   3.2. Render (i/r) to the base r.
   3.3. Render the character d.
4. Terminate.
Step 1 is a hard case, solved by rendering (–i), which is unsigned. Step 2 is the easy case, solved directly. Step 3 is a hard case, solved by rendering (i/r), which has one fewer digit than i.

2-41 Example 6 (3)
Possible implementation in Java:

static String render (int i, int r) {
// Render i to the base r, where r is in the range 2…10.
    String s = "";
    if (i < 0) {
        s += '-';
        s += render(-i, r);
    } else if (i < r) {
        char d = (char)('0' + i);
        s += d;
    } else {
        char d = (char)('0' + i%r);
        s += render(i/r, r);
        s += d;
    }
    return s;
}

2-42 Example 7: Towers of Hanoi (1)
Three vertical poles (1, 2, 3) are mounted on a platform. A number of differently-sized disks are threaded on to pole 1, forming a tower with the largest disk at the bottom and the smallest disk at the top.
We may move one disk at a time, from any pole to any other pole, but we must never place a larger disk on top of a smaller disk.
Problem: Move the tower of disks from pole 1 to pole 2.

2-43 Example 7 (2)
Animation (with 2 disks): [Animation frames showing poles 1, 2, 3 at each step.]

2-44 Example 7 (3)
Towers of Hanoi algorithm. To move a tower of n disks from pole source to pole dest:
1. If n = 1:
   1.1. Move a single disk from source to dest.
2. If n > 1:
   2.1. Let spare be the remaining pole, other than source and dest.
   2.2. Move a tower of (n–1) disks from source to spare.
   2.3. Move a single disk from source to dest.
   2.4. Move a tower of (n–1) disks from spare to dest.
3. Terminate.
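The slides give no Java for this algorithm; a direct transcription might look as follows (a sketch: the method name, the pole-numbering trick, and printing each move via System.out are my choices, and returning the move count makes the algorithm easy to test):

```java
public class Hanoi {
    // Move a tower of n disks from pole source to pole dest
    // (poles are numbered 1, 2, 3); returns the number of moves made.
    static int moveTower(int n, int source, int dest) {
        if (n == 1) {
            System.out.println("Move a disk from " + source + " to " + dest);
            return 1;
        } else {
            int spare = 6 - source - dest;  // the remaining pole, since 1+2+3 = 6
            int moves = moveTower(n - 1, source, spare);
            System.out.println("Move a disk from " + source + " to " + dest);
            moves += 1 + moveTower(n - 1, spare, dest);
            return moves;
        }
    }

    public static void main(String[] args) {
        System.out.println("Total moves: " + moveTower(3, 1, 2));  // 7 moves for 3 disks
    }
}
```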

2-45 Example 7 (4)
Animation (with 6 disks): [Animation frames showing repeated application of the algorithm, with the source, dest, and spare poles labelled at each stage.]

2-46 Example 7 (5)
Analysis (counting moves): Let the total no. of moves required to move a tower of n disks be moves(n). Then:
moves(n) = 1                  if n = 1
moves(n) = 1 + 2 moves(n–1)   if n > 1
Solution: moves(n) = 2^n – 1.
Time complexity is O(2^n).
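The recurrence and its closed-form solution can be checked against each other mechanically (a sketch; the class and method names are mine):

```java
public class HanoiMoves {
    // moves(n), computed directly from the recurrence.
    static long moves(int n) {
        return (n == 1) ? 1 : 1 + 2 * moves(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            long closedForm = (1L << n) - 1;  // 2^n - 1
            System.out.println("moves(" + n + ") = " + moves(n)
                    + ", 2^n - 1 = " + closedForm);
        }
    }
}
```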

