COSC 3100 Decrease and Conquer

COSC 3100 Decrease and Conquer Instructor: Tanvir

Decrease and Conquer
Exploit the relationship between a solution to a given instance of a problem and a solution to its smaller instance.
Top-down: recursive. Bottom-up: iterative.
3 major types:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable-size decrease

Decrease and Conquer (contd.)
Decrease by a constant:
Compute a^n, where a ≠ 0 and n is a nonnegative integer, via a^n = a^(n-1) × a:

f(n) = f(n-1) × a   if n > 0
       1            if n = 0

Top down: recursive. Bottom up: iterative (multiply 1 by a, n times).
Can you improve the a^n computation? We shall see more interesting examples in this chapter!
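As an illustrative sketch (not part of the slides), the decrease-by-one recurrence f(n) = f(n-1) × a translates directly into Python:

```python
def power(a, n):
    # decrease by one: a^n = a^(n-1) * a, with a^0 = 1
    if n == 0:
        return 1
    return power(a, n - 1) * a
```

Each call reduces the instance size (the exponent) by exactly 1, so the recursion makes n multiplications.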

Decrease and Conquer (contd.)
Decrease by a constant factor (usually 2):

a^n = (a^(n/2))^2           if n is even and positive
      (a^((n-1)/2))^2 · a   if n is odd
      1                     if n = 0

5^29 = ?

Decrease and Conquer (contd.), decrease by a constant factor

ALGORITHM Exponentiate(a, n)
    if n = 0
        return 1
    tmp <- Exponentiate(a, n >> 1)
    if (n & 1) = 0   // n is even
        return tmp * tmp
    else
        return tmp * tmp * a

This computes a^n = (a^(n/2))^2 if n is even, (a^((n-1)/2))^2 · a if n is odd, and 1 if n = 0.
What's the time complexity? Θ(lg n)
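The pseudocode above is a direct transcription of the halving recurrence; a runnable Python version (an illustrative sketch, mirroring the slide) looks like this:

```python
def exponentiate(a, n):
    # decrease by a constant factor: halve the exponent at each step
    # (n >> 1 is floor(n/2); n & 1 tests whether n is odd)
    if n == 0:
        return 1
    tmp = exponentiate(a, n >> 1)
    if (n & 1) == 0:      # n even: a^n = (a^(n/2))^2
        return tmp * tmp
    else:                 # n odd: a^n = (a^((n-1)/2))^2 * a
        return tmp * tmp * a
```

Because the exponent is halved on every call, only Θ(lg n) multiplications are needed, versus n for the decrease-by-one version.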

Decrease and Conquer (contd.)
Variable-size decrease
We have already seen one example. Which one? Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)

gcd(31415, 14142) = gcd(14142, 3131)   decrease by 11011
                  = gcd(3131, 1618)    decrease by 1513
                  = gcd(1618, 1513)    decrease by 105
                  = gcd(1513, 105)     decrease by 1408
                  = gcd(105, 43)       decrease by 62
                  = gcd(43, 19)        decrease by 24
                  = gcd(19, 5)         decrease by 14
                  = gcd(5, 4)          decrease by 1
                  = gcd(4, 1)          decrease by 3
                  = gcd(1, 0)          decrease by 1
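Euclid's algorithm can be sketched in a few lines of Python (illustrative, not from the slides); note how the amount by which the instance shrinks varies from step to step:

```python
def gcd(m, n):
    # Euclid's algorithm: gcd(m, n) = gcd(n, m mod n);
    # the problem size decreases by a variable amount each iteration
    while n != 0:
        m, n = n, m % n
    return m
```

Running it on the slide's example, gcd(31415, 14142) works down through exactly the pairs shown above and returns 1.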

Decrease and Conquer (contd.)
Apply the decrease-by-1 technique to sorting an array A[0..n-1].
We assume that the smaller instance A[0..n-2] has already been sorted. How can we solve for A[0..n-1] now? Find the appropriate position for A[n-1] among the sorted elements and insert it there.
(Straight) INSERTION SORT: scan the sorted subarray A[0..n-2] from right to left until an element smaller than or equal to A[n-1] is found, then insert A[n-1] right after that element.

Insertion Sort

ALGORITHM InsertionSort(A[0..n-1])
    for i <- 1 to n-1 do
        v <- A[i]
        j <- i-1
        while j ≥ 0 and A[j] > v do
            A[j+1] <- A[j]
            j <- j-1
        A[j+1] <- v

Trace (| separates the sorted part from the rest):
89 | 45 68 90 29 34 17
45 89 | 68 90 29 34 17
45 68 89 | 90 29 34 17
45 68 89 90 | 29 34 17
29 45 68 89 90 | 34 17
29 34 45 68 89 90 | 17
17 29 34 45 68 89 90

Input size: n
Basic operation: the comparison A[j] > v
Why not j ≥ 0? C(n) depends on the input type.
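The pseudocode maps one-to-one onto Python (a sketch for illustration, not from the slides):

```python
def insertion_sort(a):
    # straight insertion sort, in place: grow the sorted prefix a[0..i-1],
    # scanning it right-to-left and shifting elements larger than v
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift a larger element one slot right
            j -= 1
        a[j + 1] = v          # insert v right after the first element <= v
    return a
```

Running it on the slide's input 89 45 68 90 29 34 17 reproduces the trace above.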

Insertion Sort (contd.)
What is the worst case? The basic operation A[j] > v executes the highest number of times. When does that happen? When A[j] > A[i] for j = i-1, i-2, …, 0.
Worst-case input: an array of strictly decreasing values.
Cworst(n) = Σ_{i=1}^{n-1} Σ_{j=0}^{i-1} 1 = Σ_{i=1}^{n-1} i = (n-1)n/2 ∈ Θ(n²)
What is the best case? A[i-1] ≤ A[i] for i = 1, 2, …, n-1, i.e., an already sorted array:
Cbest(n) = Σ_{i=1}^{n-1} 1 = n-1 ∈ Θ(n)
On average, Cavg(n) ≈ n²/4 ∈ Θ(n²).
For almost-sorted files, insertion sort's performance is excellent!
Can you improve it? Is it in place? Is it stable?

Topological Sorting
An ordering of the vertices of a directed graph such that for every edge (u, v), u comes before v in the ordering.
First studied in the 1960s in the context of PERT (Program Evaluation and Review Technique) for scheduling in project management: jobs are vertices, and there is an edge from x to y if job x must be completed before job y can be started. Topological sorting then gives an order in which to perform the jobs.

Directed Graph or Digraph
A graph with directions on all edges.
[Figure: a digraph on vertices a, b, c, d, e shown with its adjacency matrix and adjacency lists]
Unlike an undirected graph's, the adjacency matrix of a digraph is not symmetric in general, and in the adjacency lists each edge is represented by exactly one node.

Digraph (contd.)
[Figure: DFS forest of the digraph on a, b, c, d, e with edges classified as tree edges, a back edge, a forward edge, and a cross edge]
A back edge (here, from b to a) closes a directed cycle: a, b, a.
If a digraph has no directed cycles, it is called a "directed acyclic graph", or "DAG".

Topological Sorting Scenario
Consider a robot professor, Dr. Botstead, getting dressed in the morning. The professor must put on certain garments before others (e.g., socks before shoes); other items may be put on in any order (e.g., socks and pants).
Garments (vertices): undershorts, pants, belt, socks, shoes, shirt, tie, jacket, watch.
Topological sorting won't work if a directed cycle exists!
Let us help him decide on a correct order using topological sorting.

Method 1: DFS
Perform a DFS traversal and note the order in which vertices are popped off the traversal stack (i.e., become dead ends). Reversing that pop-off order yields a topological order.
Pop-off order: sho, j, b, p, u, t, shi, so, w
Reversed (a valid topological order): w, so, shi, t, u, p, b, j, sho

Method 2: Decrease (by 1) and Conquer
Repeatedly identify a source (a vertex with no incoming edges), output it, and delete it together with all edges outgoing from it. Each step solves a smaller instance with one fewer vertex.
Resulting order: u, p, shi, b, t, j, so, sho, w
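The source-removal method can be sketched in Python (an illustrative helper, not from the slides; the function name and input format are assumptions):

```python
from collections import deque

def topological_sort(vertices, edges):
    # decrease-by-one: repeatedly output a source (in-degree 0) and delete it
    indeg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    sources = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while sources:
        u = sources.popleft()
        order.append(u)
        for w in adj[u]:          # deleting u decrements successors' in-degree
            indeg[w] -= 1
            if indeg[w] == 0:
                sources.append(w)
    if len(order) != len(vertices):
        raise ValueError("digraph has a directed cycle: no topological order")
    return order
```

For example, on the five-course digraph from the next scenario (C3 after C1 and C2, C4 after C3, C5 after C3 and C4) it produces C1, C2, C3, C4, C5.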

Topological Sorting Scenario 2
Set of 5 courses: {C1, C2, C3, C4, C5}
C1 and C2 have no prerequisites
C3 requires C1 and C2
C4 requires C3
C5 requires C3 and C4
A valid order: C1, C2, C3, C4, C5!

Topological Sorting: Usage A large project – e.g., in construction, research, or software development – that involves a multitude of interrelated tasks with known prerequisites Schedule to minimize the total completion time Instruction scheduling in program compilation, resolving symbol dependencies in linkers, etc.

Decrease and Conquer: Generating Permutations
We have to generate all n! permutations of the set of integers from 1 to n. (They can serve as indices of an n-element set {a1, a2, …, an}.)
Assume that we already have all (n-1)! permutations of the integers 1, 2, …, n-1.
Insert n in each of the n possible positions among the elements of every permutation of n-1 elements.

Generating Permutations (contd.)
We shall generate all permutations of {1, 2, 3, 4}; 4! = 24 permutations in total.
Start: 1
Insert 2 in 2 possible positions: 21, 12
Insert 3 in 3 possible positions of each previous permutation: 321, 231, 213, 312, 132, 123
Insert 4 in 4 possible positions of each previous permutation:
4321, 3421, 3241, 3214, 4231, 2431, 2341, 2314, 4213, 2413, 2143, 2134, 4312, 3412, 3142, 3124, 4132, 1432, 1342, 1324, 4123, 1423, 1243, 1234

Generating Permutations (contd.)
[Figure: complete weighted graph on vertices a, b, c, d; the six Hamiltonian tours starting at a and their costs:]
a -> b -> c -> d -> a : 2+3+1+5 = 11
a -> b -> d -> c -> a : 2+8+1+7 = 18
a -> c -> b -> d -> a : 7+3+8+5 = 23
a -> c -> d -> b -> a : 7+1+8+2 = 18
a -> d -> b -> c -> a : 5+8+3+7 = 23
a -> d -> c -> b -> a : 5+1+3+2 = 11
If we can produce a permutation from the previous one just by exchanging two adjacent elements, cost calculation takes constant time! This is possible and is called the minimal-change requirement.
Every time we process one of the old permutations to get a bunch of new ones, we reverse the direction of insertion:
Insert 2 right to left: 12, 21
Insert 3: R-to-L into 12: 123, 132, 312; L-to-R into 21: 321, 231, 213
Insert 4: R-to-L: 1234, 1243, 1423, 4123; L-to-R: 4132, 1432, 1342, 1324; R-to-L: 3124, 3142, 3412, 4312; L-to-R: 4321, 3421, 3241, 3214; R-to-L: 2314, 2341, 2431, 4231; L-to-R: 4213, 2413, 2143, 2134

Generating Permutations (contd.)
It is possible to get the same ordering of permutations of n elements without explicitly generating permutations for smaller values of n.
Associate a direction (an arrow pointing left or right) with each element k in a permutation, e.g., 3 2 4 1 with all arrows pointing left.
Element k is mobile if its arrow points to a smaller number adjacent to it. (In 3 2 4 1 with all arrows pointing left, 4 is mobile, since its arrow points to the smaller 2.)

Generating Permutations (contd.)
ALGORITHM JohnsonTrotter(n)
//Output: A list of all permutations of {1, …, n}
initialize the first permutation with 1 2 … n (all arrows pointing left)
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k with the adjacent element k's arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list

Trace for n = 3: 123, 132, 312, 321, 231, 213
In lexicographic order, 321 should have been the last one!
This version is by Shimon Even. It can be implemented in Θ(n!) time.
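A runnable sketch of the Johnson-Trotter algorithm (illustrative; the direction encoding -1/+1 is an implementation choice, not from the slides):

```python
def johnson_trotter(n):
    # Johnson-Trotter: each element carries a direction; repeatedly swap the
    # largest mobile element, then flip directions of all larger elements
    perm = list(range(1, n + 1))
    direction = [-1] * n                 # -1: arrow points left, +1: right
    result = [perm[:]]
    while True:
        # find the index of the largest mobile element, i.e., one whose
        # arrow points at a smaller adjacent element
        mobile = -1
        for i in range(n):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < perm[i]:
                if mobile == -1 or perm[i] > perm[mobile]:
                    mobile = i
        if mobile == -1:                 # no mobile element: we are done
            break
        k = perm[mobile]
        j = mobile + direction[mobile]
        perm[mobile], perm[j] = perm[j], perm[mobile]
        direction[mobile], direction[j] = direction[j], direction[mobile]
        for i in range(n):               # reverse arrows of elements > k
            if perm[i] > k:
                direction[i] = -direction[i]
        result.append(perm[:])
    return result
```

For n = 3 it emits exactly the slide's sequence 123, 132, 312, 321, 231, 213, and each permutation differs from the previous one by a single adjacent swap.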

Generating Permutations: in Lexicographic Order
ALGORITHM LexicographicPermute(n)
initialize the first permutation with 12…n
while the last permutation has two consecutive elements in increasing order do
    let i be its largest index such that a_i < a_(i+1)
    find the largest index j such that a_i < a_j
    swap a_i with a_j
    reverse the order from a_(i+1) to a_n inclusive
    add the new permutation to the list

Trace for n = 3: 123, 132, 213, 231, 312, 321
This rule goes back to Narayana Pandita in 14th-century India.
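The "next permutation" step above can be sketched in Python (illustrative, not from the slides):

```python
def lexicographic_permute(n):
    # generate permutations of 1..n in lexicographic order by repeatedly
    # applying the next-permutation step from the pseudocode above
    a = list(range(1, n + 1))
    result = [a[:]]
    while True:
        # largest i with a[i] < a[i+1]; if none, a is the last permutation
        i = n - 2
        while i >= 0 and a[i] >= a[i + 1]:
            i -= 1
        if i < 0:
            break
        # largest j with a[i] < a[j] (such j exists, since a[i] < a[i+1])
        j = n - 1
        while a[i] >= a[j]:
            j -= 1
        a[i], a[j] = a[j], a[i]
        a[i + 1:] = reversed(a[i + 1:])  # make the suffix as small as possible
        result.append(a[:])
    return result
```

Reversing the suffix after the swap is what guarantees the successor is the very next permutation in lexicographic order, not just a larger one.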

Generating Subsets
Generate all 2^n subsets of the set A = {a1, a2, …, an}.
Can divide the subsets into two groups:
Those that do not contain an: these are exactly the subsets of {a1, a2, …, an-1}
Those that do contain an: take each subset of {a1, a2, …, an-1} and add an to it

Generating Subsets (contd.)
Let us generate the 2^3 = 8 subsets of {a1, a2, a3}:
n = 0: Ø
n = 1: Ø {a1}
n = 2: Ø {a1} {a2} {a1, a2}
n = 3: Ø {a1} {a2} {a1, a2} {a3} {a1, a3} {a2, a3} {a1, a2, a3}

Generating Subsets (contd.)
There is a one-to-one correspondence between all 2^n subsets of an n-element set A = {a1, a2, …, an} and all 2^n bit strings b1b2…bn of length n:

Bit string: 000  001   010   011       100   101       110       111
Subset:     Ø    {a3}  {a2}  {a2,a3}   {a1}  {a1,a3}   {a1,a2}   {a1,a2,a3}

How do we generate subsets in "squashed order" from bit strings?
Ø {a1} {a2} {a1, a2} {a3} {a1, a3} {a2, a3} {a1, a2, a3}
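One way to get the squashed order directly (an illustrative sketch, not from the slides) is to map the lowest bit of an ordinary binary counter to a1 instead of an:

```python
def subsets_squashed(items):
    # bit j of the counter selects items[j]; counting 0, 1, 2, ... with the
    # LOWEST bit mapped to a1 yields the squashed order shown on the slide
    n = len(items)
    return [[items[j] for j in range(n) if (mask >> j) & 1]
            for mask in range(1 << n)]
```

With items ["a1", "a2", "a3"] the masks 0 through 7 give Ø, {a1}, {a2}, {a1, a2}, {a3}, {a1, a3}, {a2, a3}, {a1, a2, a3}.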

Generating Subsets (contd.)
Is there a minimal-change algorithm for generating bit strings so that every two consecutive strings differ by only a single bit? In terms of subsets, every two consecutive subsets would then differ by either an addition or a deletion, but not both, of a single element.
Yes, there is! For n = 3: 000 001 011 010 110 111 101 100
It is called the "binary reflected Gray code".

Generating Subsets (contd.)
ALGORITHM BRGC(n)
//Generates recursively the binary reflected Gray code of order n
//Output: A list L of all bit strings of length n composing the Gray code
if n = 1 make list L containing bit strings 0 and 1 in this order
else
    generate list L1 of bit strings of size n-1 by calling BRGC(n-1)
    copy list L1 to list L2 in reverse order
    add 0 in front of each bit string in list L1
    add 1 in front of each bit string in list L2
    append L2 to L1 to get list L
return L

Trace for n = 2: L1: 0 1; L2: 1 0; after prefixing, L1: 00 01 and L2: 11 10; so L: 00 01 11 10
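The recursion translates almost verbatim into Python (an illustrative sketch):

```python
def brgc(n):
    # binary reflected Gray code of order n, following the slide's recursion
    if n == 1:
        return ["0", "1"]
    l1 = brgc(n - 1)          # Gray code of order n-1
    l2 = list(reversed(l1))   # the same list, reflected
    return ["0" + s for s in l1] + ["1" + s for s in l2]
```

The reflection is what makes the seam work: the last string of the 0-prefixed half and the first string of the 1-prefixed half are identical except for the leading bit.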

Binary Reflected Gray Codes (contd.)
Order 1: 0 1
Order 2: 00 01 11 10
Order 3: 000 001 011 010 110 111 101 100

Decrease-by-a-Constant-Factor Algorithms Binary Search Highly efficient way to search for a key K in a sorted array A[0..n-1] Compare K with A’s middle element A[m] If they match, stop. Else if K < A[m] do the same for the first half of A Else if K > A[m] do the same for the second half of A

Binary Search (contd.)
The key K is compared with the middle element A[m]:
A[0] … A[m-1]   A[m]   A[m+1] … A[n-1]
search the left half if K < A[m]; search the right half if K > A[m]
Let us apply binary search for K = 70 on the following array:
index: 0  1   2   3   4   5   6   7   8   9   10  11  12
value: 3  14  27  31  39  42  55  70  74  81  85  93  98

Binary Search (contd.)
ALGORITHM BinarySearch(A[0..n-1], K)
//Input: A[0..n-1] sorted in ascending order and a search key K
//Output: An index of A's element equal to K or -1 if no such element
l <- 0
r <- n-1
while l ≤ r do
    m <- ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r <- m-1
    else l <- m+1
return -1

Input size: n
Basic operation: the comparison of K with A[m]
Does C(n) depend on the input type? Yes! Let us look at the worst case.
Worst-case input: K is absent, or K sits at certain special positions.
To simplify, assume n = 2^k. Then
Cworst(n) = Cworst(⌊n/2⌋) + 1,  Cworst(1) = 1
so Cworst(n) ∈ Θ(lg n).
For any integer n > 0, Cworst(n) = ⌊log₂ n⌋ + 1 = ⌈log₂(n+1)⌉.
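A direct Python rendering of the pseudocode (illustrative sketch, not from the slides):

```python
def binary_search(a, key):
    # returns an index of key in the sorted list a, or -1 if absent;
    # each iteration halves the remaining search range
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1          # continue in the left half
        else:
            l = m + 1          # continue in the right half
    return -1
```

On the slide's array, searching for K = 70 probes indices 6, 9, then 7 and succeeds: three comparisons for n = 13, consistent with ⌊log₂ 13⌋ + 1 = 4 in the worst case.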

Variable-Size-Decrease Problem size decreases at each iteration in variable amount Euclid’s algorithm for computing the greatest common divisor of two integers is one example

Computing a Median and the Selection Problem
Find the k-th smallest element in a list of n numbers.
For k = 1 or k = n, we could just scan the list.
The more interesting case is k = ⌈n/2⌉: find an element that is not larger than one half of the list's elements and not smaller than the other half. This middle element is called the "median", an important notion in statistics.

Median Selection Problem
Any ideas? Sort the list and select the k-th element. Time efficiency? O(n lg n).
That is overkill: the problem asks only for the k-th smallest element, not for the entire list in order!
We shall take advantage of the idea of "partitioning" a given list around some value p, say that of the list's first element. This element is called the "pivot":
[all elements < p] p [all elements ≥ p]
Two standard schemes: Lomuto partitioning and Hoare's partitioning.

Lomuto Partitioning
Trace on 61 93 56 90 11 with pivot p = 61 (s marks the end of the "< p" segment, i the element being examined):
61 | 93 56 90 11    A[i] = 93 not less than p: only i advances
61 | 56 93 90 11    A[i] = 56 less than p: s advances, then A[s] and A[i] are swapped
61 56 | 93 90 11    A[i] = 90 not less than p
61 56 | 11 90 93    A[i] = 11 less than p: s advances, then A[s] and A[i] are swapped
11 56 61 90 93      finally the pivot A[l] is swapped with A[s]

Lomuto Partitioning (contd.)
Invariant during the scan:  p | < p | ≥ p | ?
(the pivot, then the elements known to be less than p, up to position s; then those known to be ≥ p, up to position i-1; then the unprocessed part)
If A[i] < p, increment s and swap A[s] with A[i]. After the scan, swap the pivot A[l] with A[s] to obtain: [< p] p [≥ p]

Lomuto Partitioning (contd.)
ALGORITHM LomutoPartition(A[l..r])
//Partitions a subarray by Lomuto's algorithm using the first element as pivot
//Input: A subarray A[l..r] of array A[0..n-1], defined by its
//left and right indices l and r (l ≤ r)
//Output: Partition of A[l..r] and the new position of the pivot
p <- A[l]
s <- l
for i <- l+1 to r do
    if A[i] < p
        s <- s+1
        swap(A[s], A[i])
swap(A[l], A[s])
return s

Time complexity is Θ(r-l), linear in the length of the subarray A[l..r].
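In Python the scheme looks like this (an illustrative sketch, mirroring the pseudocode):

```python
def lomuto_partition(a, l, r):
    # partitions a[l..r] (inclusive) around pivot a[l]; returns the pivot's
    # final index s, with a[l..s-1] < pivot and a[s+1..r] >= pivot
    p = a[l]
    s = l
    for i in range(l + 1, r + 1):
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]
    a[l], a[s] = a[s], a[l]      # put the pivot between the two segments
    return s
```

On the slide's example 61 93 56 90 11 it returns s = 2 and leaves the array as 11 56 61 90 93, matching the trace.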

Median Selection Problem (contd.)
We want to use Lomuto partitioning to efficiently find the k-th smallest element of A[0..n-1].
Let s be the partition's split position:
If s = k-1, the pivot p is the k-th smallest.
If s > k-1, the k-th smallest of the entire array is the k-th smallest of the left part of the partitioned array.
If s < k-1, the k-th smallest of the entire array is the [(k-1)-(s+1)+1]-th, i.e., the (k-s-1)-th, smallest of the right part of the partitioned array.

Median Selection Problem (contd.)
ALGORITHM QuickSelect(A[l..r], k)
//Input: Subarray A[l..r] of array A[0..n-1] of orderable elements and
//integer k (1 ≤ k ≤ n), with the k-th smallest element known to lie in A[l..r]
//Output: The value of the k-th smallest element of A[0..n-1]
s <- LomutoPartition(A[l..r])
if s = k-1 return A[s]
else if s > k-1 return QuickSelect(A[l..s-1], k)
else return QuickSelect(A[s+1..r], k)
//Here s and k are both counted from the start of the full array,
//matching the trace on the following slides.
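A self-contained, iterative sketch in Python (illustrative; it inlines the Lomuto partition and keeps k counted from the start of the whole array, as in the trace):

```python
def quick_select(a, k):
    # k-th smallest (1-based) of a, via repeated Lomuto partitioning;
    # each round narrows the window [l..r] that must contain the answer
    a = list(a)                  # work on a copy; selection reorders elements
    l, r = 0, len(a) - 1
    while True:
        p, s = a[l], l           # Lomuto partition of a[l..r] with pivot a[l]
        for i in range(l + 1, r + 1):
            if a[i] < p:
                s += 1
                a[s], a[i] = a[i], a[s]
        a[l], a[s] = a[s], a[l]
        if s == k - 1:
            return a[s]          # pivot landed exactly at position k-1
        elif s > k - 1:
            r = s - 1            # k-th smallest is in the left part
        else:
            l = s + 1            # k-th smallest is in the right part
```

On the slide's example, quick_select([4, 1, 10, 8, 7, 12, 9, 2, 15], 5) finds the median 8 after two partitioning rounds.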

Median Selection: QuickSelect
Find the median of 4 1 10 8 7 12 9 2 15 (n = 9, so k = ⌈9/2⌉ = 5).
Partition around the pivot 4:
4 1 10 8 7 12 9 2 15
4 1 2 8 7 12 9 10 15    (2 swapped into the "< 4" segment)
2 1 4 8 7 12 9 10 15    (pivot swapped into place, s = 2)
s = 2 < k-1 = 4, so we proceed with the right part…

Median Selection: QuickSelect (contd.)
Partition the right part 8 7 12 9 10 15 around the pivot 8:
2 1 4 8 7 12 9 10 15
2 1 4 7 8 12 9 10 15    (pivot swapped into place, s = 4)
Now s = 4 = k-1 = 5-1 = 4: we have found the median, 8!

Median Selection (contd.)
What is the time efficiency of QuickSelect? Does it depend on the input type?
Partitioning A[0..n-1] takes n-1 comparisons. If s = k-1 right away, this is the best case: Cbest(n) = n-1 ∈ Θ(n).
What's the worst case? Say k = n and the array is strictly increasing; then each partition shrinks the problem by only one element:
Cworst(n) = (n-1)+(n-2)+…+1 = (n-1)n/2 ∈ Θ(n²)
Cworst(n) is worse than the sorting-based solution! Fortunately, careful mathematical analysis has shown that the average case is linear.
Why is QuickSelect a variable-size-decrease algorithm? Because the size of the remaining instance depends on where the split position s happens to land.

Thank you!