Algorithms Classification – Part 2

1 Algorithms Classification – Part 2
Algorithm Course, Dr. Aref Rashad

2 Classification of Algorithms
By Function: Sorting, Searching, Selection, Arithmetic, Text
By Implementation: Recursive vs Iterative, Deterministic vs Non-deterministic, Logical vs Procedural, Serial vs Parallel
By Design Paradigm: Divide & Conquer, Dynamic Programming, Greedy method, Heuristic

3 Classification of Algorithms
By Function By Implementation By Design Paradigm

4 Classification by Function
Sorting Algorithms Searching Algorithms Selection Algorithms Arithmetic Algorithms Text Algorithms

5 Classification by implementation
Recursive or iterative
A recursive algorithm calls itself repeatedly until a base case is reached. Iterative algorithms use repetitive constructs such as loops.
Deterministic or non-deterministic
A deterministic algorithm solves the problem with a predefined process. A non-deterministic algorithm must guess the best solution at each step, typically through the use of heuristics.
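The recursive-vs-iterative distinction can be illustrated with a small Python sketch (an illustrative example, not from the original slides; factorial is just a stand-in problem and the function names are my own):

```python
def factorial_recursive(n):
    # Recursive: the function calls itself until the base case stops it.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iterative: the same computation expressed with a loop.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```

Both compute the same value; the recursive version mirrors the mathematical definition, while the iterative one avoids call-stack growth.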

6 Classification by implementation
Logical or procedural
A procedural algorithm follows a fixed procedure. A logical algorithm uses controlled deduction, applying rules to known facts.
Serial or parallel
Serial algorithms execute one instruction of an algorithm at a time. Parallel algorithms take advantage of computer architectures that can process several instructions at once.

7 Classification by design paradigm
Divide and conquer
Repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively), until the instances are small enough to solve easily. Sub-problems are independent.
Dynamic programming
Builds the optimal solution to a problem from optimal solutions to subproblems, avoiding recomputing solutions that have already been computed. Sub-problems are overlapping.

8 Divide-&-conquer works best when all subproblems are independent
Divide-&-conquer works best when all subproblems are independent. So, pick the partition that makes the algorithm most efficient and simply combine the solutions to solve the entire problem. Divide-&-conquer is best suited for the case when no “overlapping subproblems” are encountered. Dynamic programming is needed when subproblems are dependent and we don’t know where to partition the problem. In dynamic programming algorithms, we typically solve each subproblem only once and store its solution, but this comes at the cost of space.
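The overlapping-subproblems point can be made concrete with Fibonacci numbers (a hypothetical example, not in the slides): plain recursion recomputes the same subproblems exponentially often, while memoization, a dynamic-programming technique, stores each solution once, trading space for time:

```python
from functools import lru_cache

def fib_divide_conquer(n):
    # Plain recursion: the subproblems overlap, so the same values
    # are recomputed many times and the running time is exponential.
    if n < 2:
        return n
    return fib_divide_conquer(n - 1) + fib_divide_conquer(n - 2)

@lru_cache(maxsize=None)
def fib_dynamic(n):
    # Memoization stores each subproblem's answer once:
    # O(n) time, O(n) space.
    if n < 2:
        return n
    return fib_dynamic(n - 1) + fib_dynamic(n - 2)

print(fib_dynamic(30))  # 832040
```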

9 Classification by design paradigm
The greedy method
Similar to dynamic programming, but the solutions to the sub-problems do not have to be known at each stage.
Using graphs
Many problems can be modeled as problems on graphs, and graph exploration algorithms are used. This category also includes search algorithms and backtracking.

10 Greedy Algorithm solves the sub-problems from top down
Greedy Algorithm solves the sub-problems from top down. We first need to find the greedy choice for a problem, then reduce the problem to a smaller one. The solution is obtained when the whole problem disappears. Dynamic Programming solves the sub-problems bottom up; the problem can’t be solved until we find all solutions of the sub-problems. The solution comes up when the whole problem appears. Dynamic Programming has to try every possibility before solving the problem, so it is much more expensive than greedy. However, there are some problems that greedy cannot solve while dynamic programming can. Therefore, we first try a greedy algorithm; if it fails, we try dynamic programming.
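A classic illustration of a problem greedy cannot solve but dynamic programming can is coin change with denominations {1, 3, 4} (an assumed example, not from the slides; function names are my own):

```python
def coins_greedy(amount, denoms):
    # Greedy, top down: repeatedly take the largest coin that fits.
    count = 0
    for d in sorted(denoms, reverse=True):
        count += amount // d
        amount %= d
    return count if amount == 0 else None

def coins_dp(amount, denoms):
    # Dynamic programming, bottom up: solve every smaller amount first.
    INF = float('inf')
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for d in denoms:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return best[amount] if best[amount] < INF else None

# Greedy makes 6 as 4+1+1 (3 coins); DP finds 3+3 (2 coins).
print(coins_greedy(6, [1, 3, 4]), coins_dp(6, [1, 3, 4]))  # 3 2
```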

11 Classification by design paradigm
Linear programming
The problem is expressed as a set of linear inequalities, and an attempt is made to maximize or minimize a linear objective function subject to them.
Probabilistic
Algorithms that make some choices randomly.
Heuristic
Algorithms whose purpose is not to find an optimal solution, but an approximate solution, used when the time or resources needed to find a perfect solution are not practical.

12 Sorting Algorithms – Part 3
Algorithm Course, Dr. Aref Rashad

13 Sorting Algorithms classification
Comparison based vs Counting based In-Place vs Not-In-Place algorithm Internal structure of the algorithm Data Structure of the algorithm

14 Comparison based vs Counting based
Comparison-based sorting techniques: Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Quick Sort, Merge Sort
Counting-based sorting techniques: Radix Sort, Bucket Sort

15 In-Place vs Not-In-Place algorithm
In-place: In an in-place algorithm, no additional data structure or array is required for sorting: Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Quick Sort.
Not-in-place: In a not-in-place algorithm, an additional data structure or array is required for sorting: Radix Sort, Bucket Sort, Merge Sort.

16 Internal structure of the algorithm
Swap-based sorts begin conceptually with the entire list, and exchange particular pairs of elements, moving toward a more sorted list. Merge-based sorts create initial "naturally" or "unnaturally" sorted sequences, and then either add one element (insertion sort) or merge two already sorted sequences. Tree-based sorts store the data, at least conceptually, in a binary tree, either based on heaps or based on search trees. Other sorts use additional key-value information, such as radix or bucket sort.

17 Techniques of the Algorithm
Sorting by Insertion: insertion sort, shellsort
Sorting by Exchange: bubble sort, quicksort
Sorting by Selection: selection sort, heapsort
Sorting by Merging: merge sort
Sorting by Distribution: radix sort

18 Importance of Sorting
1. Computers spend more time sorting than anything else; historically, 25% of mainframe time. 2. Sorting is the best-studied problem in computer science, with a variety of different algorithms known. 3. Many of the interesting ideas can be taught in the context of sorting, such as divide-and-conquer, randomized algorithms, and lower bounds.

19 Applications of Sorting
Searching: Binary search lets you test whether an item is in a dictionary.
Closest pair: Given n numbers, find the pair which are closest to each other. Once the numbers are sorted, the closest pair will be next to each other in sorted order.
Element uniqueness: Given a set of n items, are they all unique or are there any duplicates? Sort them and do a linear scan to check adjacent pairs.
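Two of these applications can be sketched in a few lines of Python (illustrative code; the function names are my own):

```python
def has_duplicates(items):
    # Element uniqueness: sort, then scan adjacent pairs.
    # O(n log n) overall, versus O(n^2) for comparing every pair.
    s = sorted(items)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))

def closest_pair(nums):
    # Closest pair: after sorting, the closest pair must be adjacent.
    s = sorted(nums)
    return min((s[i + 1] - s[i], (s[i], s[i + 1]))
               for i in range(len(s) - 1))[1]

print(has_duplicates([3, 1, 4, 1, 5]))           # True
print(closest_pair([38, 27, 43, 3, 9, 82, 10]))  # (9, 10)
```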

20 Applications of Sorting
Mode: Given a set of n items, which element occurs the largest number of times? Sort them and do a linear scan to measure the length of all adjacent runs.
Median and selection: What is the kth largest item in the set? Once the keys are placed in sorted order in an array, the kth largest can be found by looking in the kth position of the array.

21 Applications of Sorting
Convex hulls: Given n points in two dimensions, find the smallest-area polygon which contains them all. Convex hulls are the most important building block for more sophisticated geometric algorithms.
Comparison functions: Alphabetizing is the sorting of text strings; a built-in sort routine is available as a library function.

22 The Problem of Sorting

23 Elementary Sorting Methods
(Selection, Insertion, Bubble) Easier to understand the basic mechanisms of sorting. Good for small files. Good for well-structured files that are relatively easy to sort, such as those almost sorted. Can be used to improve efficiency of more powerful methods.

24 Example of Insertion Sort

25 Insertion Sort


27 Insertion Sort: Given a list, take the current element and insert it at its proper place among the already-sorted elements on the left-hand side of the current element.

28 Insertion Sort Complexity Analysis
BEST CASE is when the array is already sorted. Best-case complexity: O(n).
WORST CASE is when the array is already sorted in reverse (decreasing) order. Worst-case complexity: O(n^2).
AVERAGE CASE assumes a random distribution of data. On average, half of each inner loop is performed, since half of the elements A[i] will be greater. Average-case complexity: O(n^2).

29 Example; Insertion Sort
{38, 27, 43,  3,  9, 82, 10}
{27, 38, 43,  3,  9, 82, 10},  i: 1
{27, 38, 43,  3,  9, 82, 10},  i: 2
{ 3, 27, 38, 43,  9, 82, 10},  i: 3
{ 3,  9, 27, 38, 43, 82, 10},  i: 4
{ 3,  9, 27, 38, 43, 82, 10},  i: 5
{ 3,  9, 10, 27, 38, 43, 82},  i: 6
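A minimal Python implementation of insertion sort, shown here to reproduce the trace above (the function name is my own):

```python
def insertion_sort(a):
    # Shift larger elements of the sorted left-hand side to the right,
    # then drop the current key into its proper place.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```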

30 Selection Sort: Given a list, take the current element and exchange it with the smallest element on the right hand side of the current element.

31 Selection Sort Analysis
The number of comparisons is Θ(n^2) in all cases. For each i from 1 to n-1, there is one exchange and n-i comparisons, for a total of n-1 exchanges and (n-1) + (n-2) + … + 1 = n(n-1)/2 comparisons.

32 Example; Selection Sort
{38, 27, 43,  3,  9, 82, 10}
{38, 27, 43,  3,  9, 82, 10},  i: 0, min: 3, minValue: 3
{ 3, 27, 43, 38,  9, 82, 10},  i: 1, min: 4, minValue: 9
{ 3,  9, 43, 38, 27, 82, 10},  i: 2, min: 6, minValue: 10
{ 3,  9, 10, 38, 27, 82, 43},  i: 3, min: 4, minValue: 27
{ 3,  9, 10, 27, 38, 82, 43},  i: 4, min: 4, minValue: 38
{ 3,  9, 10, 27, 38, 82, 43},  i: 5, min: 6, minValue: 43
{ 3,  9, 10, 27, 38, 43, 82},  i: 6
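A minimal Python implementation of selection sort, matching the trace above (the function name is my own):

```python
def selection_sort(a):
    # For each position i, find the minimum of a[i:] and swap it in:
    # n-1 exchanges and n(n-1)/2 comparisons in every case.
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```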

33 Exchange (Bubble) Sort
Based on repeatedly exchanging adjacent elements, if necessary. When no exchanges are required, the file is sorted. Search for adjacent pairs that are out of order; switch the out-of-order keys; repeat this n-1 times. After the first iteration, the last key is guaranteed to be the largest.
Example array: 77 42 35 12 101 5

34 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
77 42 35 12 101 5 → swap 77, 42 → 42 77 35 12 101 5

35 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
42 77 35 12 101 5 → swap 77, 35 → 42 35 77 12 101 5

36 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
42 35 77 12 101 5 → swap 77, 12 → 42 35 12 77 101 5

37 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
42 35 12 77 101 5 → compare 77, 101: no need to swap

38 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
42 35 12 77 101 5 → swap 101, 5 → 42 35 12 77 5 101

39 "Bubbling Up" the Largest Element
Traverse a collection of elements, moving from the front to the end, and “bubble” the largest value to the end using pair-wise comparisons and swapping.
42 35 12 77 5 101 → largest value (101) correctly placed

40 Items of Interest
Notice that only the largest value is correctly placed; all other values are still out of order, so we need to repeat this process. 42 35 12 77 5 101

41 Repeat “Bubble Up” How Many Times?
If we have N elements… And if each time we bubble an element, we place it in its correct location… Then we repeat the “bubble up” process N – 1 times. This guarantees we’ll correctly place all N elements.

42 “Bubbling” All the Elements
Repeating the “bubble up” pass N - 1 times eventually places every element, ending with the sorted array 5 12 35 42 77 101.

43 Bubble Sort Pseudo Code
for i = 1 to n do
    for j = n downto i+1 do
        if A[j] < A[j-1] then
            exchange A[j] ↔ A[j-1]
The inner for loop is executed O(n^2) times; best- and worst-case times are the same, O(n^2).
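The pseudocode above, translated to runnable Python (a sketch; the early-exit check follows the earlier remark that the file is sorted when no exchanges are required):

```python
def bubble_sort(a):
    # After pass i, the i largest elements are in their final places,
    # so the inner loop can stop earlier each time.
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:      # no exchanges: already sorted, stop early
            break
    return a

print(bubble_sort([77, 42, 35, 12, 101, 5]))  # [5, 12, 35, 42, 77, 101]
```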

44 Divide and Conquer paradigm
A common approach to solving a problem is to partition the problem into smaller parts, find solutions for the parts, and then combine the solutions for the parts into a solution for the whole. This approach, especially when used recursively, often yields efficient solutions to problems in which the sub-problems are smaller versions of the original problem.

45 Divide and Conquer Recursive in structure
Divide the problem into sub-problems that are similar to the original but smaller in size Conquer the sub-problems by solving them recursively. If they are small enough, just solve them in a straightforward manner. Combine the solutions to create a solution to the original problem In most algorithms, either the divide or the combine step is trivial

46 An Example: Merge Sort
Sorting problem: sort a sequence of n elements into non-decreasing order.
Divide: divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
Conquer: sort the two subsequences recursively using merge sort.
Combine: merge the two sorted subsequences to produce the sorted answer.
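The divide/conquer/combine steps above can be sketched in Python (an illustrative, not-in-place implementation; the function name is my own):

```python
def merge_sort(a):
    # Divide: split in half. Conquer: sort halves recursively.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves in O(n) comparisons.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([18, 26, 32, 6, 43, 15, 9, 1]))  # [1, 6, 9, 15, 18, 26, 32, 43]
```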

47 Merge Sort – Example
Original sequence: 18 26 32 6 43 15 9 1
Sorted sequence: 1 6 9 15 18 26 32 43

48 Merging Two sorted Arrays

49 Merge Sort
The least number of comparisons occurs if each entry in the left subarray is less than all the entries in the right subarray; the number of comparisons may be as high as n-1. To merge two sorted arrays of sizes n1 and n2 (n = n1 + n2), the number of comparisons is at least n1 and at most n-1.

50 Merge Sort


52 Analysis of Merge Sort
Running time T(n) of Merge Sort:
Divide: computing the middle takes O(1)
Conquer: solving 2 subproblems takes 2T(n/2)
Combine: merging n elements takes O(n)
Total:
T(n) = O(1) if n = 1
T(n) = 2T(n/2) + O(n) if n > 1
⇒ T(n) = O(n lg n)

53 Recursion Tree for Merge Sort
For the original problem, we have a cost of cn, plus two subproblems each of size n/2 and running time T(n/2). Each of the size-n/2 problems has a cost of cn/2 plus two subproblems, each costing T(n/4). At each node, the cn, cn/2, … terms are the cost of divide and merge, and the T(n/2), T(n/4), … terms are the cost of sorting the subproblems.

54 Recursion Tree for Merge Sort
The tree has lg n + 1 levels (its height is lg n), and each level costs cn.
Total cost = sum of costs at each level = (lg n + 1)cn = cn lg n + cn = O(n lg n).


60 Running time analysis: Worst Case (data is already sorted)
When the pivot is the smallest (or largest) element at partitioning of a block of size n, the result yields one empty sub-block, one element (the pivot) in the “correct” place, and one sub-block of size n-1.
Recurrence equation: T(n) = T(n-1) + O(n). Solution: O(n^2), worse than Mergesort!

61 Running time analysis: Best case
The pivot is the middle (median) element at each partition step; i.e., after each partitioning of a block of size n, the result yields two sub-blocks of approximately equal size, with the pivot element in the “middle” position.
The recurrence equation becomes T(n) = 2T(n/2) + O(n). Solution: O(n log n) by the Master Theorem, comparable to Mergesort!
Average case: it turns out the average-case running time is also O(n log n).
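The best/worst-case behaviour discussed above can be reproduced with a minimal quicksort sketch (Lomuto partition, last element as pivot; an illustrative implementation, not necessarily the one shown on the original slides):

```python
def quicksort(a, lo=0, hi=None):
    # Lomuto partition: last element as pivot.
    # Average case O(n log n); worst case O(n^2) when the pivot is
    # always the smallest or largest element (e.g. sorted input).
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]   # pivot into its final place
        quicksort(a, lo, i - 1)     # left sub-block
        quicksort(a, i + 1, hi)     # right sub-block
    return a

print(quicksort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```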


66 Recursion Tree
Continue expanding until the problem size reduces to 1. The tree has lg n + 1 levels (its height is lg n), and each level costs cn.
Total cost = sum of costs at each level = (lg n + 1)cn = cn lg n + cn = O(n lg n).


68 Loop 1: initialize C[0..k] to 0 (k = range of elements)
Loop 2: count A into C (n = number of elements): C[A[j]] = C[A[j]] + 1
Loop 3: accumulate C over 0..k, so C[v] holds the number of elements ≤ v
Loop 4: place each element of A into B using C (n elements)

69 N = 5 K = 4 Loop 1

70 Loop 2: A C

71 Loop 3: Accumulate C

72 Loop 4: A, C → B. For J = 4: A(4) = 4, C(A(4)) = C(4) = 5, B(5) = 4
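Loops 1 through 4 above describe counting sort; here is a minimal Python sketch (variable names follow the slides' A, B, C, k; 0-based indexing is an adaptation):

```python
def counting_sort(a, k):
    # Loop 1: zero the count array C[0..k].
    c = [0] * (k + 1)
    # Loop 2: count occurrences of each value in A.
    for x in a:
        c[x] += 1
    # Loop 3: accumulate, so C[v] = number of elements <= v.
    for v in range(1, k + 1):
        c[v] += c[v - 1]
    # Loop 4: place each element into its final position in B
    # (scanning right to left keeps the sort stable).
    b = [0] * len(a)
    for x in reversed(a):
        c[x] -= 1
        b[c[x]] = x
    return b

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```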

73 Analysis


75 Solving Recurrences – Part 4
Algorithm Course, Dr. Aref Rashad

76 Methods to Solve Recurrences
Iteration: iterate the expansion of the relation.
Substitution: guess what the right answer is (intuition, experience, black magic), then use induction to prove that the guess is right.
Recursion trees: visualize how the recurrence unfolds; use the tree to obtain a guess, which is then verified using e.g. the substitution method.
Master theorem: a cookbook recipe for solving common recurrences.

77 Iteration Method – Example
T(n) = n + 2T(n/2)
     = n + 2(n/2 + 2T(n/4)) = n + n + 4T(n/4)
     = n + n + 4(n/4 + 2T(n/8)) = n + n + n + 8T(n/8) = 3n + 2^3 T(n/2^3)
     …
     = i·n + 2^i T(n/2^i)
Stop when n/2^i = 1, i.e. n = 2^i, so i = lg n:
     = n lg n + n·T(1) = O(n lg n)

78 Iterating to Solve the Recurrence
T(n) = 2T(n-1) + 1
     = 2[2T(n-2) + 1] + 1, because T(n-1) = 2T(n-2) + 1
     = 4T(n-2) + 3
     = 4[2T(n-3) + 1] + 3, because T(n-2) = 2T(n-3) + 1
     = 8T(n-3) + 7 = 2^3 T(n-3) + (2^3 - 1)
     …
     = 2^k T(n-k) + (2^k - 1)
Stop when n - k = 1, equivalently k = n - 1:
     = 2^(n-1) T(1) + 2^(n-1) - 1 = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1
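The closed form 2^n - 1 can be checked numerically by evaluating the recurrence directly (a small verification sketch, not part of the original slides):

```python
def t(n):
    # Direct evaluation of the recurrence T(1) = 1, T(n) = 2T(n-1) + 1.
    return 1 if n == 1 else 2 * t(n - 1) + 1

# Closed form derived above: T(n) = 2^n - 1.
for n in range(1, 11):
    assert t(n) == 2**n - 1
print(t(10))  # 1023
```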

79

80 Substitution method
1. Guess the form of the solution.
2. Verify by induction.
3. Solve for constants.

81 Substitution method: T(n) = T(n-1) + n. Guess: T(n) = O(n^2)
Induction goal: T(n) ≤ c·n^2, for some c and all n ≥ n0
Induction hypothesis: T(k) ≤ c·k^2 for all k < n; in particular T(n-1) ≤ c(n-1)^2
Proof of induction goal:
T(n) = T(n-1) + n ≤ c(n-1)^2 + n = cn^2 - (2cn - c - n) ≤ cn^2
provided 2cn - c - n ≥ 0, i.e. c ≥ n/(2n-1) = 1/(2 - 1/n).
For n ≥ 1, 2 - 1/n ≥ 1, so any c ≥ 1 will work.

82 Substitution method: T(n) = 3T(n/4) + cn^2. Guess: T(n) = O(n^2)
Induction goal: T(n) ≤ d·n^2, for some d and n ≥ n0
Induction hypothesis: T(n/4) ≤ d(n/4)^2
Proof of induction goal:
T(n) = 3T(n/4) + cn^2 ≤ 3d(n/4)^2 + cn^2 = (3/16)d·n^2 + cn^2 ≤ d·n^2
provided d ≥ (16/13)c. Therefore T(n) = O(n^2).

83

84 Recursion-tree method
A recursion tree models the costs (time) of a recursive execution of an algorithm.
The recursion-tree method can be unreliable, but it promotes intuition.
The recursion-tree method is good for generating guesses for the substitution method.

85 Recursion-tree Method
Recursion Trees Show successive expansions of recurrences using trees. Keep track of the time spent on the subproblems of a divide and conquer algorithm. Help organize the algebraic bookkeeping necessary to solve a recurrence.

86 Recursion Tree – Example
Running time of Merge Sort:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
Rewrite the recurrence as
T(n) = c if n = 1
T(n) = 2T(n/2) + cn if n > 1
where c > 0 covers the running time for the base case and the time per array element for the divide and combine steps.

87 Recursion Tree for Merge Sort
For the original problem, we have a cost of cn, plus two subproblems each of size n/2 and running time T(n/2). Each of the size-n/2 problems has a cost of cn/2 plus two subproblems, each costing T(n/4). At each node, the cn, cn/2, … terms are the cost of divide and merge, and the T(n/2), T(n/4), … terms are the cost of sorting the subproblems.

88 Recursion Tree for Merge Sort
Continue expanding until the problem size reduces to 1. The tree has lg n + 1 levels (its height is lg n), and each level costs cn.
Total cost = sum of costs at each level = (lg n + 1)cn = cn lg n + cn = Θ(n lg n).


92 The master method Three common cases


94 Examples


96 Idea of master theorem


101 End Part: 4

102 Step-by-step example
Let us take the array of numbers "5 1 4 2 8" and sort it from lowest number to greatest number using bubble sort. In each step, elements written in bold are being compared. Three passes will be required.
First Pass:
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ): the algorithm compares the first two elements and swaps, since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ): swap, since 5 > 4.
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ): swap, since 5 > 2.
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ): these elements are already in order (8 > 5), so the algorithm does not swap them.
Second Pass:
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ): swap, since 4 > 2.
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now the array is already sorted, but the algorithm does not know whether it is finished; it needs one whole pass without any swap to know the array is sorted.
Third Pass:
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

