Algorithm Design Methods (II) Fall 2003 CSE, POSTECH.


Quick Sort
Quicksort can be seen as a variation of mergesort in which the two parts are defined by comparing elements against a pivot value rather than by position.

Quicksort Algorithm
Partition anArray into two non-empty parts:
– Pick any value in the array as the pivot.
– small = the elements in anArray < pivot
– large = the elements in anArray > pivot
– Place the pivot in either part, so as to make sure neither part is empty.
Sort small and large by recursively calling QuickSort.
You could use merge to combine them, but because every element in small is smaller than every element in large, simply concatenate small and large and put the result back into anArray.
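
As a concrete illustration of the scheme above, here is a minimal in-place sketch in Java, assuming the pivot is simply the first element of the range; the class and method names (QuickSortSketch, partition, swap) are illustrative and not from the slides.

import java.util.Arrays;

public class QuickSortSketch {

    // Sort anArray[lo..hi] in place.
    static void quickSort(int[] anArray, int lo, int hi) {
        if (lo >= hi) return;                        // 0 or 1 element: already sorted
        int pivotIndex = partition(anArray, lo, hi); // split into small | pivot | large
        quickSort(anArray, lo, pivotIndex - 1);      // sort the "small" part
        quickSort(anArray, pivotIndex + 1, hi);      // sort the "large" part
        // No merge step is needed: small, pivot, large are already in order.
    }

    // Partition around anArray[lo] and return the pivot's final position.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo];
        int i = lo;                                  // a[lo+1..i] holds elements < pivot
        for (int j = lo + 1; j <= hi; j++) {
            if (a[j] < pivot) swap(a, ++i, j);
        }
        swap(a, lo, i);                              // place the pivot between the parts
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] data = {9, 3, 7, 1, 8, 2};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data));   // prints [1, 2, 3, 7, 8, 9]
    }
}

Choosing the first element as the pivot matches the worst-case discussion that follows: on an already sorted array every partition is maximally unbalanced.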

Quicksort: Complexity Analysis
Like mergesort, a single invocation of quicksort on an array of size p does O(p) work:
– p comparisons = 2*p accesses
– 2*p moves (copying) = 4*p accesses
Best case: every pivot chosen by quicksort partitions the array into equal-sized parts. In this case quicksort has the same big-O complexity as mergesort: O(n log n).

Quicksort: Complexity Analysis
Worst case: the pivot chosen is the largest or smallest value in the array. Partition then creates one part of size 1 (containing only the pivot) and another of size p-1.
(Figure: an array of size n splits into parts of size 1 and n-1.)

Quicksort: Complexity Analysis
Worst case: there are n-1 invocations of quicksort (not counting base cases), on arrays of size p = n, n-1, n-2, …, 2. Since each invocation does O(p) work, the total number of accesses is O(n) + O(n-1) + … + O(2) = O(n^2). Ironically, this worst case occurs when the list is already sorted (or nearly sorted)!

Quicksort: Complexity Analysis
The average case must lie between the best case, O(n log n), and the worst case, O(n^2). The analysis yields a complex recurrence relation; the average number of comparisons turns out to be approximately 1.386*n*log n - 2.846*n. Therefore the average-case time complexity is O(n log n).

Quicksort: Complexity Analysis
– Best case: O(n log n)
– Worst case: O(n^2)
– Average case: O(n log n)
Note that quicksort is inferior to insertion sort and merge sort if the list is sorted, nearly sorted, or reverse sorted.

Dynamic Programming
– Sequence of decisions
– Problem state
– Principle of optimality

Sequence of Decisions
As in the greedy method, the solution to a problem is viewed as the result of a sequence of decisions. Unlike the greedy method, the decisions are not made greedily. Instead, examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.

Example: Matrix Chain Product
For M1 X M2, where M1 is n x m and M2 is m x q, the total number of scalar multiplications is n x m x q.
Given a matrix chain product MCP(n) = M1 X M2 X M3 X ... X Mn, find an order of matrix products (a parenthesization) that results in the least number of scalar multiplications.
Example: M1 X M2 X M3 X M4
– Possible orders: (M1 X (M2 X (M3 X M4))), ((M1 X M2) X (M3 X M4)), (((M1 X M2) X M3) X M4), ((M1 X (M2 X M3)) X M4), (M1 X ((M2 X M3) X M4))

Matrix Chain Product
Decision-based decomposition of the solution space:
– Either compute M1 X M2 first and then combine it with the rest, or compute M2 X ... X Mn first and then combine it with M1.
– Subproblem after choosing M1 X M2: compute M3 X ... X Mn.
– Subproblem for M2 ... Mn: either compute M3 X ... X Mn first and then combine with M2, or compute M2 X M3 first and then combine with the rest.
Recursively break the space down, one decision at a time. The final choice is based on the number of multiplications required (in a recursive implementation). But the subproblems repeat, so use a bottom-up approach to avoid the repeated computation.

Matrix Chain Product
Non-recursive (bottom-up) computation of MCP:
– Compute (M1:M2), (M2:M3), ..., (Mn-1:Mn)
– Compute (M1:M3), (M2:M4), ..., (Mn-2:Mn) using the results from the previous level. Ex: (M1:M3) = min( (M1 X (M2:M3)), ((M1:M2) X M3) )
– Compute (M1:M4), (M2:M5), ..., (Mn-3:Mn)
– ...
– Finally compute (M1:Mn) = min( (M1 X (M2:Mn)), ((M1:M2) X (M3:Mn)), ..., ((M1:Mn-1) X Mn) )
Time complexity
– 1st level: O((n-1) x 1), 2nd level: O((n-2) x 2), 3rd level: O((n-3) x 3), ..., (n-1)th level: O(1 x (n-1))
– Summing these level costs gives a total time complexity of O(n^3)
Space complexity: O(n^2)
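
A minimal bottom-up sketch of the table-filling scheme just described, assuming matrix Mi has dimensions dims[i-1] x dims[i]; the method name matrixChain and the example dimensions are illustrative, not from the slides.

public class MatrixChainSketch {

    // cost[i][j] = minimum number of scalar multiplications for Mi X ... X Mj,
    // where Mi has dimensions dims[i-1] x dims[i] (1-based matrix indices).
    static long matrixChain(int[] dims) {
        int n = dims.length - 1;                 // number of matrices in the chain
        long[][] cost = new long[n + 1][n + 1];  // cost[i][i] = 0 by default
        for (int len = 2; len <= n; len++) {     // chain length, one level at a time
            for (int i = 1; i + len - 1 <= n; i++) {
                int j = i + len - 1;
                cost[i][j] = Long.MAX_VALUE;
                for (int k = i; k < j; k++) {    // last split: (Mi..Mk) X (Mk+1..Mj)
                    long c = cost[i][k] + cost[k + 1][j]
                           + (long) dims[i - 1] * dims[k] * dims[j];
                    cost[i][j] = Math.min(cost[i][j], c);
                }
            }
        }
        return cost[1][n];
    }

    public static void main(String[] args) {
        // Example: M1 is 10x20, M2 is 20x5, M3 is 5x30.
        System.out.println(matrixChain(new int[]{10, 20, 5, 30}));  // prints 2500
    }
}

The three nested loops make the O(n^3) running time visible, and the cost table accounts for the O(n^2) space.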

Back Tracking
A systematic way to search for the solution to a problem.
Begin by defining a solution space for the problem.
– Example: Rat in a maze => solution space: all possible paths to the destination
– 0/1 Knapsack => solution space: all possible 0/1 combinations
Organize the solution space so that it can be searched easily.
– Usually a graph or tree
Search the space in depth-first manner, beginning at the start node. When there is nothing left to explore from the current node, backtrack to the most recent live node and expand from there. The search terminates when the destination is reached or there are no more live nodes.

BackTracking: Example
0/1 Knapsack
– n = 3, c = 15, w = [8, 6, 9], p = [5, 4, 6]
Search space
– (0,-,-), ..., (0,0,0), (0,0,1), (0,1,0), ..., (1,1,1)
The search runs depth-first from (1,-,-) toward (0,0,0).
(1,1,1) violates the capacity, so backtrack to (1,1,-) and choose (1,1,0), which satisfies the constraints => a feasible solution with profit 9.
Continuing the search from other live nodes, (0,1,1) is another feasible solution with more profit => the optimal solution, with profit 10.

Time Complexity
Exhaustive search is O(2^n).
The search for an optimal solution can be sped up by introducing a bounding function: "can a newly reached node lead to a solution better than the best found so far?"
The space needed for the search is just the path information from the start node to the current expansion node: O(length of the longest path).
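
A minimal sketch of the depth-first backtracking search for the 0/1 knapsack instance above, pruning only infeasible x_i = 1 moves; a bounding function of the kind just described would prune further. The class and field names are illustrative.

public class KnapsackBacktrack {
    static int n = 3, c = 15;
    static int[] w = {8, 6, 9};    // weights from the example above
    static int[] p = {5, 4, 6};    // profits from the example above
    static int bestProfit = 0;

    // Depth-first search over the decisions x_i; weight and profit describe
    // the choices already made for items 0..i-1.
    static void dfs(int i, int weight, int profit) {
        // A bounding function would prune here: if profit plus the profit of all
        // remaining items cannot beat bestProfit, there is no point in expanding.
        if (i == n) {                                 // all decisions made: a leaf
            bestProfit = Math.max(bestProfit, profit);
            return;
        }
        if (weight + w[i] <= c) {                     // try x_i = 1 only if feasible
            dfs(i + 1, weight + w[i], profit + p[i]);
        }
        dfs(i + 1, weight, profit);                   // try x_i = 0
    }

    public static void main(String[] args) {
        dfs(0, 0, 0);
        System.out.println(bestProfit);               // prints 10, from (0,1,1)
    }
}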

Branch and Bound
Another systematic way to search for the solution to a problem.
Each live node becomes the E-node (expansion node) exactly once. When a node becomes the E-node, all new nodes that can be reached from it using a single move are generated.
Generated nodes that cannot possibly lead to an (optimal) feasible solution are discarded. The remaining nodes are added to the list of live nodes, and then one node from the list is selected to become the next E-node.
The expansion process continues until either the answer is found or the list of live nodes becomes empty.
Selection methods from the list of live nodes:
– First-in First-out (BFS)
– Least Cost or Maximum Profit

Branch and Bound: Example
0/1 Knapsack
– n = 3, c = 15, w = [8, 9, 6], p = [5, 6, 4]
Search space
– (1,-,-), ..., (0,0,0), (0,0,1), (0,1,0), ..., (1,1,1)
The search starts from (-,-,-) and expands it:
– (1,-,-), (0,-,-): both are live nodes.
Select (1,-,-) by FIFO selection and expand:
– (1,1,-), (1,0,-): (1,1,-) is infeasible (bounded), but (1,0,-) is feasible.
Select (0,-,-) by FIFO selection and expand:
– (0,1,-), (0,0,-): both feasible.
Select (1,0,-) by FIFO selection and expand:
– (1,0,1), (1,0,0): (1,0,0) is feasible but has less profit.
– (1,0,1) is a feasible solution with profit 9.

Branch and Bound: Example
0/1 Knapsack
– n = 3, c = 15, w = [8, 9, 6], p = [5, 6, 4]
Select (0,1,-) by FIFO selection and expand:
– (0,1,1), (0,1,0): (0,1,1) is feasible with more profit (10).
Select (0,0,-) by FIFO selection and expand:
– (0,0,1): feasible but with less profit.
=> The optimal solution is (0,1,1), with profit 10.
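
A minimal sketch of the FIFO branch-and-bound search just traced, discarding only infeasible children (a max-profit selection rule or a stronger bound would discard more); the class and record names are illustrative.

import java.util.ArrayDeque;
import java.util.Queue;

public class KnapsackFifoBranchAndBound {
    static int n = 3, c = 15;
    static int[] w = {8, 9, 6};    // weights from the example above
    static int[] p = {5, 6, 4};    // profits from the example above

    // A live node records the decisions already made for items 0..level-1.
    record Node(int level, int weight, int profit) {}

    static int solve() {
        int best = 0;
        Queue<Node> live = new ArrayDeque<>();
        live.add(new Node(0, 0, 0));                       // the root node (-,-,-)
        while (!live.isEmpty()) {
            Node e = live.remove();                        // next E-node (FIFO = BFS)
            if (e.level() == n) {                          // all decisions made
                best = Math.max(best, e.profit());
                continue;
            }
            int i = e.level();
            if (e.weight() + w[i] <= c) {                  // child with x_i = 1, if feasible
                live.add(new Node(i + 1, e.weight() + w[i], e.profit() + p[i]));
            }
            live.add(new Node(i + 1, e.weight(), e.profit()));  // child with x_i = 0
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(solve());                       // prints 10, for (0,1,1)
    }
}

With FIFO selection the nodes are expanded in the same order as the trace above.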

Time Complexity
Exhaustive search is O(2^n); the actual running time depends on the bounding function.

Supplementary Slides for Dynamic Programming

Principle of Optimality
An optimal solution satisfies the following property: no matter what the first decision is, the remaining decisions are optimal with respect to the state that results from this decision.
Dynamic programming may be used only when the principle of optimality holds.

0/1 Knapsack Problem
Suppose that decisions are made in the order x_1, x_2, x_3, ..., x_n.
Let x_1 = a_1, x_2 = a_2, ..., x_n = a_n be an optimal solution.
If a_1 = 0, then following the first decision the state is (2,c).
a_2, a_3, ..., a_n must be an optimal solution to the knapsack instance given by the state (2,c).

x_1 = a_1 = 0
That instance is: maximize Sigma(i=2..n) p_i x_i, subject to Sigma(i=2..n) w_i x_i <= c and x_i = 0 or 1 for all i.
If a_2, ..., a_n is not optimal for it, this instance has a better solution b_2, b_3, ..., b_n with Sigma(i=2..n) p_i b_i > Sigma(i=2..n) p_i a_i.

x_1 = a_1 = 0
Then x_1 = a_1, x_2 = b_2, x_3 = b_3, ..., x_n = b_n is a better solution to the original instance than x_1 = a_1, x_2 = a_2, x_3 = a_3, ..., x_n = a_n.
So x_1 = a_1, x_2 = a_2, ..., x_n = a_n cannot be an optimal solution, contradicting the assumption that it is optimal.

x_1 = a_1 = 1
Next, consider the case a_1 = 1. Following the first decision, the state is (2, c - w_1).
a_2, a_3, ..., a_n must be an optimal solution to the knapsack instance given by the state (2, c - w_1).

x_1 = a_1 = 1
That instance is: maximize Sigma(i=2..n) p_i x_i, subject to Sigma(i=2..n) w_i x_i <= c - w_1 and x_i = 0 or 1 for all i.
If a_2, ..., a_n is not optimal for it, this instance has a better solution b_2, b_3, ..., b_n with Sigma(i=2..n) p_i b_i > Sigma(i=2..n) p_i a_i.

x_1 = a_1 = 1
Then x_1 = a_1, x_2 = b_2, x_3 = b_3, ..., x_n = b_n is a better solution to the original instance than x_1 = a_1, x_2 = a_2, x_3 = a_3, ..., x_n = a_n.
So x_1 = a_1, x_2 = a_2, ..., x_n = a_n cannot be an optimal solution, contradicting the assumption that it is optimal.

0/1 Knapsack Problem
Therefore, no matter what the first decision is, the remaining decisions are optimal with respect to the state that results from this decision. The principle of optimality holds and dynamic programming may be applied.

Dynamic Programming Recurrence
f(n,y) is the value of the optimal solution to the knapsack instance defined by the state (n,y):
– Only item n is available.
– The available capacity is y.
If w_n <= y, f(n,y) = p_n.
If w_n > y, f(n,y) = 0.

Dynamic Programming Recurrence
Suppose that i < n. f(i,y) is the value of the optimal solution to the knapsack instance defined by the state (i,y):
– Items i through n are available.
– The available capacity is y.
Suppose that in the optimal solution for the state (i,y), the first decision is to set x_i = 0. From the principle of optimality, it follows that f(i,y) = f(i+1,y).

Dynamic Programming Recurrence
The only other possibility for the first decision is x_i = 1. The case x_i = 1 can arise only when y >= w_i. From the principle of optimality, it follows that f(i,y) = f(i+1, y - w_i) + p_i.
Combining the two cases, we get:
– f(i,y) = f(i+1,y) if y < w_i
– f(i,y) = max{ f(i+1,y), f(i+1, y - w_i) + p_i } if y >= w_i

Recursive Code
/** Recursive computation of f(i,y); n, w, and p are static fields with 1-based item indexing. */
private static int f(int i, int y) {
    if (i == n) return (y < w[n]) ? 0 : p[n];
    if (y < w[i]) return f(i + 1, y);
    return Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
}

Recursion Tree

Time Complexity
Let t(n) be the time required when n items are available.
t(0) = t(1) = a, where a is a constant.
When n > 1, t(n) <= 2*t(n-1) + b, where b is a constant.
Unrolling the recurrence gives t(n) = O(2^n).
Solving dynamic programming recurrences by plain recursion can be hazardous to the run time.

Reducing Run Time

Time Complexity
Level i of the recursion tree has up to 2^(i-1) nodes. At each such node an f(i,y) is computed, and several nodes may compute the same f(i,y).
We can save time by not recomputing already-computed f(i,y) values. Save computed f(i,y) values in a dictionary:
– The key is the (i,y) pair.
– f(i,y) is computed recursively only when (i,y) is not in the dictionary.
– Otherwise, the stored dictionary value is used.
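
A sketch of the dictionary idea applied to the recursive f above, assuming a HashMap keyed by the (i,y) pair; the key encoding and class name are illustrative, not from the slides.

import java.util.HashMap;
import java.util.Map;

public class KnapsackMemo {
    static int n = 3, c = 15;
    static int[] w = {0, 8, 6, 9};   // 1-based item indexing, matching the recurrence
    static int[] p = {0, 5, 4, 6};
    static Map<Long, Integer> memo = new HashMap<>();  // dictionary: (i,y) -> f(i,y)

    static int f(int i, int y) {
        if (i == n) return (y < w[n]) ? 0 : p[n];
        long key = (long) i * (c + 1) + y;       // encode the pair (i,y) as one key
        Integer cached = memo.get(key);
        if (cached != null) return cached;       // (i,y) already computed: reuse it
        int value = (y < w[i])
                  ? f(i + 1, y)
                  : Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
        memo.put(key, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(f(1, c));             // prints 10 for the example instance
    }
}

With the dictionary, each distinct (i,y) pair is evaluated at most once, so for integer capacities the running time drops from O(2^n) to O(n*c).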