1 Algorithm Course: Algorithms Lecture 3, Sorting Algorithms (Part 1)
Dr. Aref Rashad

2 Classification of Algorithms
By function
By implementation
By design paradigm

3 Classification by Function
Sorting algorithms
Searching algorithms
Selection algorithms
Arithmetic algorithms
Text algorithms

4 Classification by implementation
Recursive or iterative
A recursive algorithm calls itself repeatedly until a base case stops the recursion; an iterative algorithm uses repetitive constructs such as loops (see the sketch below).
Deterministic or non-deterministic
A deterministic algorithm solves the problem by a predefined process; a non-deterministic algorithm must guess the best solution at each step, typically through the use of heuristics.
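As a minimal illustration of the recursive/iterative distinction, here is a hypothetical Python sketch (factorial is chosen only as an example; it is not from the slides):

    def factorial_recursive(n):
        # Recursive: the function calls itself until the base case n <= 1 stops the recursion.
        if n <= 1:
            return 1
        return n * factorial_recursive(n - 1)

    def factorial_iterative(n):
        # Iterative: a loop (a repetitive construct) accumulates the same result without self-calls.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    assert factorial_recursive(5) == factorial_iterative(5) == 120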

5 Classification by implementation
Logical or procedural
A procedural algorithm follows a fixed procedure step by step; a logical algorithm uses controlled deduction, applying rules to derive the result.
Serial or parallel
A serial algorithm executes one instruction at a time; a parallel algorithm takes advantage of computer architectures that can process several instructions at once.

6 Classification by design paradigm
Divide and conquer
Repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively), until the instances are small enough to solve easily. The sub-problems are independent.
Dynamic programming
Builds the optimal solution to a problem from optimal solutions to sub-problems, avoiding recomputing solutions that have already been computed. The sub-problems overlap.

7 Divide-&-conquer works best when all subproblems are independent
Divide-&-conquer works best when all sub-problems are independent: pick the partition that makes the algorithm most efficient and simply combine the sub-solutions to solve the entire problem. It is best suited to the case where no "overlapping sub-problems" are encountered. Dynamic programming is needed when sub-problems are dependent and we do not know in advance where to partition the problem. A dynamic programming algorithm typically solves each sub-problem only once and stores its solution, but this comes at the cost of extra space (see the sketch below).
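As a minimal sketch of why overlapping sub-problems call for dynamic programming, consider the Fibonacci numbers (an illustrative example, not taken from the slides): the plain recursion recomputes the same sub-problems many times, while the memoized version stores each sub-problem's solution once, at the cost of extra space.

    def fib_recursive(n):
        # Plain recursion: fib(n-1) and fib(n-2) share most of their sub-problems,
        # so this divide-and-conquer style solution does exponential recomputation.
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_dynamic(n, memo=None):
        # Dynamic programming (memoization): each sub-problem is solved once and
        # stored, trading O(n) extra space for linear running time.
        if memo is None:
            memo = {}
        if n < 2:
            return n
        if n not in memo:
            memo[n] = fib_dynamic(n - 1, memo) + fib_dynamic(n - 2, memo)
        return memo[n]

    assert fib_recursive(10) == fib_dynamic(10) == 55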

8 Classification by design paradigm
The greedy method
Similar to dynamic programming, but the solutions to the sub-problems do not have to be known at each stage.
Using graphs
Many problems can be modeled as problems on graphs, and graph exploration algorithms are used to solve them. This category also includes search algorithms and backtracking.

9 Greedy Algorithm solves the sub-problems from top down
A greedy algorithm solves the sub-problems top down: first find the greedy choice for the problem, then reduce the problem to a smaller one; the solution is complete once the whole problem has been reduced away. Dynamic programming solves the sub-problems bottom up: the problem cannot be solved until all solutions of its sub-problems are known, and the answer emerges once the whole problem is assembled. Dynamic programming has to try every possibility before solving the problem, so it is much more expensive than greedy; however, there are problems that greedy cannot solve while dynamic programming can. Therefore, try a greedy algorithm first, and if it fails, try dynamic programming (see the coin-change sketch below).
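A small sketch of the "try greedy first, fall back to dynamic programming" advice, using coin change as an illustrative example (not from the slides): with coin values {1, 3, 4} and amount 6, the greedy choice gives 4 + 1 + 1 (three coins), while dynamic programming finds the optimal 3 + 3 (two coins).

    def greedy_coin_count(coins, amount):
        # Greedy (top down): repeatedly take the largest coin that still fits.
        count = 0
        for c in sorted(coins, reverse=True):
            count += amount // c
            amount %= c
        return count if amount == 0 else None

    def dp_coin_count(coins, amount):
        # Dynamic programming (bottom up): compute the optimal coin count for
        # every smaller amount first, then read off the answer.
        INF = float("inf")
        best = [0] + [INF] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and best[a - c] + 1 < best[a]:
                    best[a] = best[a - c] + 1
        return best[amount] if best[amount] != INF else None

    print(greedy_coin_count([1, 3, 4], 6))  # 3 coins (4 + 1 + 1), not optimal
    print(dp_coin_count([1, 3, 4], 6))      # 2 coins (3 + 3), optimal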

10 Classification by design paradigm
Linear programming
The problem is expressed as a set of linear inequalities, and an attempt is made to maximize or minimize a linear objective subject to those constraints.
Probabilistic
Algorithms that make some choices randomly.
Heuristic
Algorithms whose general purpose is not to find an optimal solution but an approximate one, for cases where the time or resources needed to find a perfect solution are not practical.

11 Algorithm Course, Dr. Aref Rashad: Sorting Algorithms, Part 3

12 Sorting Algorithms classification
Comparison based vs counting based
In-place vs not-in-place
Internal structure of the algorithm
Data structure of the algorithm

13 Comparison based vs Counting based
Comparison based sorting techniques: Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Quick Sort, Merge Sort
Counting based sorting techniques: Radix Sort, Bucket Sort

14 In-Place vs Not-In-Place algorithm
In-place: an in-place algorithm requires no additional data structure or array for sorting. Examples: Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Quick Sort.
Not-in-place: a not-in-place algorithm requires an additional data structure or array for sorting. Examples: Radix Sort, Bucket Sort, Merge Sort.

15 Internal structure of the algorithm
Swap-based sorts begin, conceptually, with the entire list and exchange particular pairs of elements, moving toward a more sorted list.
Merge-based sorts create initial "naturally" or "unnaturally" sorted sequences and then either add one element at a time (insertion sort) or merge two already-sorted sequences.
Tree-based sorts store the data, at least conceptually, in a binary tree, based either on heaps or on search trees.
Other sorts use additional key-value information, such as radix sort or bucket sort.

16 Techniques of the Algorithm
Sorting by insertion: insertion sort, shellsort
Sorting by exchange: bubble sort, quicksort
Sorting by selection: selection sort, heapsort
Sorting by merging: merge sort
Sorting by distribution: radix sort

17 Importance of Sorting
1. Computers spend more time sorting than on anything else; historically, about 25% of mainframe computing time was spent sorting.
2. Sorting is the best-studied problem in computer science, with a variety of different algorithms known.
3. Many interesting ideas can be taught in the context of sorting, such as divide-and-conquer, randomized algorithms, and lower bounds.

18 Applications of Sorting
Searching
Binary search lets you test whether an item is in a dictionary.
Closest pair
Given n numbers, find the pair that are closest to each other. Once the numbers are sorted, the closest pair will be next to each other in sorted order.
Element uniqueness
Given a set of n items, are they all unique, or are there duplicates? Sort them, then do a linear scan checking adjacent pairs (see the sketch below).
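A minimal Python sketch of these three applications (the function names are illustrative, not from the slides):

    from bisect import bisect_left

    def contains(sorted_items, x):
        # Binary search on a sorted list: membership test in O(log n) comparisons.
        i = bisect_left(sorted_items, x)
        return i < len(sorted_items) and sorted_items[i] == x

    def closest_pair(nums):
        # After sorting, the closest pair must be adjacent, so one linear scan suffices.
        s = sorted(nums)
        return min(zip(s, s[1:]), key=lambda p: p[1] - p[0])

    def all_unique(items):
        # After sorting, any duplicate sits next to its twin.
        s = sorted(items)
        return all(a != b for a, b in zip(s, s[1:]))

    data = [9, 3, 27, 38, 43, 10, 82]
    print(contains(sorted(data), 27))  # True
    print(closest_pair(data))          # (9, 10)
    print(all_unique(data))            # True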

19 Applications of Sorting
Mode
Given a set of n items, which element occurs the largest number of times? Sort them and do a linear scan, measuring the length of every run of adjacent equal elements.
Median and selection
What is the kth largest item in the set? Once the keys are placed in sorted order in an array, the kth largest can be found by looking at the kth position of the array (see the sketch below).
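A sketch of both ideas in Python (illustrative code, not from the slides):

    def mode(items):
        # Sort, then scan adjacent runs; the value of the longest run is the mode.
        s = sorted(items)
        best, best_len, run_start = s[0], 0, 0
        for i in range(1, len(s) + 1):
            if i == len(s) or s[i] != s[run_start]:
                if i - run_start > best_len:
                    best, best_len = s[run_start], i - run_start
                run_start = i
        return best

    def kth_largest(items, k):
        # Once sorted in decreasing order, the kth largest sits at index k-1.
        return sorted(items, reverse=True)[k - 1]

    data = [5, 1, 4, 1, 2, 1, 8]
    print(mode(data))            # 1 (occurs three times)
    print(kth_largest(data, 2))  # 5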

20 Applications of Sorting
Convex hulls
Given n points in two dimensions, find the smallest-area polygon that contains them all. Convex hulls are the most important building block for more sophisticated geometric algorithms.
Comparison functions
Alphabetizing is the sorting of text strings; a built-in sort routine is available as a library function and only needs to be given the right comparison function (see the example below).
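For example, Python's built-in sorted routine accepts a key function, so alphabetic (case-insensitive) ordering only requires supplying the right comparison key (an illustrative example, not from the slides):

    words = ["banana", "Cherry", "apple"]
    print(sorted(words))                 # ['Cherry', 'apple', 'banana']  (raw character-code order)
    print(sorted(words, key=str.lower))  # ['apple', 'banana', 'Cherry']  (alphabetic, ignoring case)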

21 The Problem of Sorting

22 Elementary Sorting Methods
(Selection, Insertion, Bubble)
Easier to understand the basic mechanisms of sorting.
Good for small files.
Good for well-structured files that are relatively easy to sort, such as those that are almost sorted.
Can be used to improve the efficiency of more powerful methods.

23 Example of Insertion Sort

24 Insertion Sort
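A minimal Python sketch of insertion sort (illustrative code, not the slide's original pseudocode):

    def insertion_sort(a):
        # Grow a sorted prefix a[0..i-1]; insert a[i] into it by shifting
        # larger elements one position to the right.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    print(insertion_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]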

26 Insertion Sort Complexity Analysis
BEST CASE: the array is already sorted. Best-case complexity: O(n).
WORST CASE: the array is already sorted in reverse (decreasing) order. Worst-case complexity: O(n²).
AVERAGE CASE: assumes a random distribution of the data. On average, the inner loop runs over about half of the already-sorted prefix, since about half of the elements A[i] will be greater than the key. Average-case complexity: O(n²).

27 Example: Insertion Sort
Input: {38, 27, 43,  3,  9, 82, 10}    (positions 1 2 3 4 5 6 7)
{38, 27, 43,  3,  9, 82, 10}
{27, 38, 43,  3,  9, 82, 10},   i: 1
{27, 38, 43,  3,  9, 82, 10},   i: 2
{ 3, 27, 38, 43,  9, 82, 10},   i: 3
{ 3,  9, 27, 38, 43, 82, 10},   i: 4
{ 3,  9, 27, 38, 43, 82, 10},   i: 5
{ 3,  9, 10, 27, 38, 43, 82},   i: 6

28 Selection Sort: Given a list, take the current element and exchange it with the smallest element on the right-hand side of the current element (a code sketch follows below).
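A minimal Python sketch of selection sort as described above (illustrative code, not the slides' own):

    def selection_sort(a):
        # For each position i, find the smallest element at or to the right of i
        # and swap it into place: exactly one exchange per pass.
        for i in range(len(a) - 1):
            min_index = i
            for j in range(i + 1, len(a)):
                if a[j] < a[min_index]:
                    min_index = j
            a[i], a[min_index] = a[min_index], a[i]
        return a

    print(selection_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]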

29 Selection Sort Analysis
The number of comparisons is Θ(n²) in all cases. For each i from 1 to n-1, there is one exchange and n-i comparisons, giving a total of n-1 exchanges and (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.

30 Example: Selection Sort
Input: {38, 27, 43,  3,  9, 82, 10}    (positions 1 2 3 4 5 6 7)
{38, 27, 43,  3,  9, 82, 10},   i: 0, min: 3,  minValue:  3
{ 3, 27, 43, 38,  9, 82, 10},   i: 1, min: 4,  minValue:  9
{ 3,  9, 43, 38, 27, 82, 10},   i: 2, min: 6,  minValue: 10
{ 3,  9, 10, 38, 27, 82, 43},   i: 3, min: 4,  minValue: 27
{ 3,  9, 10, 27, 38, 82, 43},   i: 4, min: 4,  minValue: 38
{ 3,  9, 10, 27, 38, 82, 43},   i: 5, min: 6,  minValue: 43
{ 3,  9, 10, 27, 38, 43, 82},   i: 6

31 Step-by-step example
Let us take the array of numbers (5 1 4 2 8) and sort it from lowest to greatest using bubble sort. At each step the algorithm compares an adjacent pair of elements. Three passes will be required (a code sketch follows below).
First Pass:
( 5 1 4 2 8 )  →  ( 1 5 4 2 8 ),  the algorithm compares the first two elements and swaps them since 5 > 1
( 1 5 4 2 8 )  →  ( 1 4 5 2 8 ),  swap since 5 > 4
( 1 4 5 2 8 )  →  ( 1 4 2 5 8 ),  swap since 5 > 2
( 1 4 2 5 8 )  →  ( 1 4 2 5 8 ),  these elements are already in order (8 > 5), so no swap
Second Pass:
( 1 4 2 5 8 )  →  ( 1 4 2 5 8 )
( 1 4 2 5 8 )  →  ( 1 2 4 5 8 ),  swap since 4 > 2
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
The array is now sorted, but the algorithm does not yet know that it is finished; it needs one whole pass without any swap to know the array is sorted.
Third Pass:
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
( 1 2 4 5 8 )  →  ( 1 2 4 5 8 )
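A minimal Python sketch of bubble sort with the "stop after a pass with no swaps" rule described above (illustrative code, not the slides' own; it also shrinks the scanned range by one each pass, a common small optimization):

    def bubble_sort(a):
        # Repeatedly compare adjacent pairs and swap the out-of-order ones.
        # A full pass with no swap means the list is sorted, so we stop.
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            n -= 1  # the largest remaining element has bubbled to the end
        return a

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]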

