Greedy Methods and Backtracking. Dr. Marina Gavrilova, Computer Science, University of Calgary, Canada

Outline
» Greedy Strategy
» Backtracking Technique
» Dynamic Programming

In this lecture we will learn about backtracking and greedy algorithms, as well as the concepts of dynamic programming and optimization.

Dynamic programming
» Optimal substructure: an optimal solution to a problem consists of optimal solutions to its sub-problems.
» Overlapping sub-problems: there are few distinct sub-problems in total, but many recurring instances of each.
» Solve through recursion, using a table that records the state of the problem (the sub-problems already solved) at any given moment, so nothing is solved twice.
» This reduces computation time from exponential to polynomial (in some cases near-linear).
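As a small illustration (not from the slides), here is a minimal Python sketch of the idea: the naive Fibonacci recursion solves the same sub-problems over and over, and a table of solved sub-problems makes it fast.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the "table" recording already-solved sub-problems
def fib(n):
    # Each value is computed once and re-used; without the cache the
    # recursion makes exponentially many calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed in linear time
```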

Greedy methods
A greedy algorithm always makes the choice that is locally optimal (best) at the moment. It can reduce a complex problem to one solvable in near-linear time, and it is often much easier to code.

Greedy methods
The shortest-path problem and the fractional knapsack problem can be solved using a greedy approach. Activity selection is another example.

Greedy methods
Problem: the Stampede midway problem.
» Buy a wristband that lets you onto any ride.
» There are lots of rides, each starting and ending at different times.
» Your goal: ride as many rides as possible.
» An alternative goal, which we don't solve here: maximize the time spent on rides.

Greedy methods
Let A be an optimal solution for activity set S, and let k be the activity in A with the earliest finish time. Then A - {k} is an optimal solution to S' = {i ∈ S : s_i ≥ f_k}. In words: once activity k is selected, the problem reduces to finding an optimal selection over the activities in S compatible with k.

Greedy methods
The algorithm is as follows (a runnable sketch appears below):
» Sort the activities by finish time.
» Schedule the first activity.
» Then schedule the next activity in the sorted list that starts after the previous activity finishes.
» Repeat until no activities remain.
Greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. Here that choice is the compatible activity with the earliest finish time; the tempting idea of always picking the shortest available ride does not, in general, give an optimal solution.
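Here is a minimal Python sketch of this earliest-finish-time greedy; the function name and the start/finish pairs are made-up example data.

```python
def select_activities(activities):
    """Given (start, finish) pairs, return a maximum-size set of
    mutually compatible activities."""
    schedule = []
    last_finish = float("-inf")
    # Greedy choice: consider activities in order of finish time.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything kept so far
            schedule.append((start, finish))
            last_finish = finish
    return schedule

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
                         (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]))
# -> [(1, 4), (5, 7), (8, 11), (12, 16)]
```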

Backtracking
Backtracking can drastically prune the search for an NP-complete problem by exploring only selected branches of the tree of all candidate solutions, in some cases bringing the average running time down to near-linear.

Backtracking
The graph-coloring problem, the 8-queens problem, the 4-knights problem, and the construction of perfect hash functions are examples of problems solved by the backtracking method.

Backtracking
The goal is to color the vertices of a graph G = (V, E) so that no two adjacent vertices have the same color. The partial 3-coloring problem means only 3 colors are considered. The direct approach builds the tree of ALL possibilities in exponential time.

Backtracking
Partial 3-coloring (3 colors) is solved by the following method (see the sketch below):
» Color the first vertex with the 1st color, then tentatively color the next vertex. If the two vertices are not adjacent, or are adjacent but colored differently, the coloring is legal: proceed to the next vertex. If they are adjacent and share a color, the coloring is illegal: try the next color for the current vertex.
» If all colors have been tried and every coloring is illegal, backtrack and try the next color for the previous vertex, and so on.
» Note: sometimes no solution exists.
» The exponential O(3^n) complexity is reduced to O(n) on average.
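Below is a minimal Python sketch of this backtracking procedure; the adjacency-list encoding and the function name are assumptions made for illustration.

```python
def three_color(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Returns {vertex: 0|1|2} or None if no legal 3-coloring exists."""
    vertices = list(adj)
    colours = {}

    def extend(i):
        if i == len(vertices):  # every vertex legally colored
            return True
        v = vertices[i]
        for c in range(3):  # try each of the 3 colors in turn
            if all(colours.get(u) != c for u in adj[v]):  # legal so far?
                colours[v] = c
                if extend(i + 1):
                    return True
                del colours[v]  # backtrack: undo and try the next color
        return False  # all 3 colors failed: backtrack further

    return colours if extend(0) else None

# A 4-cycle: vertices 0-1-2-3-0.
print(three_color({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}))
# -> {0: 0, 1: 1, 2: 0, 3: 1}
```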

Job scheduling problem
Finally, the job scheduling problem can be solved by converting each algebraic constraint x_i - x_j ≥ c into a directed graph: vertices x_i and x_j, an edge directed from x_j to x_i, and edge cost c (the time required to complete job x_j). The problem of finding the minimum time at which the last activity can commence (i.e., when all preceding activities have been completed) is then converted to a longest-path problem on the graph (see the sketch below).
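The sketch below illustrates this reduction under simplifying assumptions: the constraint graph is acyclic, and the function name and example jobs are made up. Earliest start times are the longest-path distances, computed by relaxing edges in topological order (graphlib is in the Python standard library from 3.9).

```python
from graphlib import TopologicalSorter

def earliest_starts(n, constraints):
    """constraints: (j, i, c) triples meaning x_i - x_j >= c, i.e. job i
    starts at least c time units after job j starts."""
    succ = {v: [] for v in range(n)}
    preds = {v: set() for v in range(n)}
    for j, i, c in constraints:
        succ[j].append((i, c))  # edge from x_j to x_i with cost c
        preds[i].add(j)
    start = {v: 0 for v in range(n)}  # jobs with no predecessors start at 0
    # Relaxing in topological order yields longest-path distances.
    for v in TopologicalSorter(preds).static_order():
        for w, c in succ[v]:
            start[w] = max(start[w], start[v] + c)
    return start

# Job 1 starts >= 3 after job 0; job 2 starts >= 2 after job 0 and >= 1 after job 1.
print(earliest_starts(3, [(0, 1, 3), (0, 2, 2), (1, 2, 1)]))
# -> {0: 0, 1: 3, 2: 4}
```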

Dynamic programming
Longest common subsequence (LCS). Problem: given two sequences A = (a_1, ..., a_n) and B = (b_1, ..., b_m), find a common subsequence of maximum length: characters that appear in both sequences, in the same order, but not necessarily adjacent.

Dynamic programming
The straightforward solution enumerates all 2^n subsequences of string A in exponential time, then checks each against every beginning position in string B, another factor of m, giving O(m·2^n) in total.

Dynamic programming
Application: comparison of two DNA strings of lengths m and n. Example: X = ABCBDAB, Y = BDCABA. One longest common subsequence is BCBA, of length 4.

Dynamic programming
Dynamic programming is an algorithmic technique for solving this class of problems by recursion on small sub-problems, exploiting the patterns that recur among them.

Dynamic programming
We determine the length of the longest common subsequence and, at the same time, record the subsequence itself (a runnable sketch appears below).
» A has length n, B has length m.
» Define A_i and B_j to be the prefixes of A and B of lengths i and j respectively.
» Define L[i, j] to be the length of the LCS of A_i and B_j. Then the length of the LCS of A and B is L[n, m].
» We start with i = j = 0 (empty prefixes). The LCS of the empty string and any other string is empty, so for every i and j: L[0, j] = L[i, 0] = 0.
» First case, A[i] = B[j]: one more symbol in the strings matches, so L[i, j] = L[i-1, j-1] + 1.
» Second case, A[i] ≠ B[j]: the pair does not improve the solution, so L[i, j] is the maximum of L[i, j-1] and L[i-1, j].
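Here is a minimal Python sketch of this table fill, which also recovers one LCS by tracing back through the table; the function name is illustrative.

```python
def lcs(A, B):
    n, m = len(A), len(B)
    # L[i][j] = length of the LCS of A[:i] and B[:j]; row/col 0 stay 0.
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:  # symbols match: extend the diagonal
                L[i][j] = L[i - 1][j - 1] + 1
            else:  # no match: best of dropping a symbol from A or from B
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    # Trace back from L[n][m] to recover one longest common subsequence.
    out, i, j = [], n, m
    while i and j:
        if A[i - 1] == B[j - 1]:
            out.append(A[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return L[n][m], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # -> (4, 'BCBA')
```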

Dynamic programming
The LCS algorithm calculates the value of each entry of the array L[n, m] in O(m·n) total, since each L[i, j] is calculated in constant time and there are m·n elements in the array.

Knapsack problem
Given some items, pack the knapsack to get the maximum total value. Each item has a size and a value, and the total size packed must be no more than some constant C. We must therefore consider the sizes of items as well as their values. (The slide included a table of example items with columns Item #, Size, and Value.)

Knapsack problem
Given a knapsack with maximum capacity C and a set U of n items {U_1, U_2, ..., U_n}, where each item j has size s_j and value v_j. Problem: how do we pack the knapsack to achieve the maximum total value of packed items?

Knapsack problem
There are two versions of the problem:
» (1) The 0-1 knapsack problem: items are indivisible; you either take an item or you don't. Solved with dynamic programming.
» (2) The fractional knapsack problem: any fraction of an item may be taken. Solved with a greedy algorithm.

Knapsack problem
Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with the greatest total value whose total size is at most C. The running time is O(2^n).

Knapsack problem
The fractional knapsack problem allows fractions of items to be taken, and the greedy approach is as follows (see the sketch below):
» Divide each item's value by its size to get its worth, w (value per unit of size).
» Sort the items by worth in decreasing order.
» Fill the knapsack with as much of the highest-worth item as the capacity allows.
» Move on to the next-highest-worth item when the previous one no longer fits.
» An O(n log n) solution (dominated by the sort)!
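A minimal Python sketch of this greedy; the (size, value) item data is made up for illustration.

```python
def fractional_knapsack(items, capacity):
    """items: list of (size, value) pairs; fractions of items are allowed."""
    total = 0.0
    # Consider items in decreasing order of worth = value / size.
    for size, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if capacity <= 0:
            break
        take = min(size, capacity)  # the whole item if it fits, else a fraction
        total += value * (take / size)
        capacity -= take
    return total

print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # -> 240.0
```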

Knapsack problem
If the items are labeled 1..n, a natural subproblem is to find an optimal solution for U_i = {items labeled 1, 2, ..., i}. However, the solution for U_4 might not be part of the solution for U_5, so this definition of a subproblem is flawed. The fix is to add another parameter j, representing the exact size allowed for each subset of items, with the subproblem value denoted V[i, j].

Knapsack problem
Now, the best subset of U_i that has total size j is one of the two:
» (1) the best subset of U_{i-1} that has total size j (item i is not taken), or
» (2) the best subset of U_{i-1} that has total size (j - s_i), plus item i.
That is, V[i, j] = max(V[i-1, j], V[i-1, j - s_i] + v_i).

Knapsack problem
Intuitively, the best subset of U_i that has total size j either contains item i or it does not (a runnable sketch appears below):
» First case: s_i > j. Item i can't be part of the solution, since if it were, the total size would exceed j, which is unacceptable.
» Second case: s_i ≤ j. Item i can be in the solution, and we choose the case with the greater value (with item i or without it).
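A minimal Python sketch of the V[i, j] table fill described above; the item data is made up for illustration.

```python
def knapsack_01(sizes, values, C):
    """V[i][j] = best value using the first i items with capacity j."""
    n = len(sizes)
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(C + 1):
            if sizes[i - 1] > j:  # first case: item i can't fit in capacity j
                V[i][j] = V[i - 1][j]
            else:  # second case: take the better of skipping or taking item i
                V[i][j] = max(V[i - 1][j],
                              V[i - 1][j - sizes[i - 1]] + values[i - 1])
    return V[n][C]

print(knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5))  # -> 7 (take sizes 2 and 3)
```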

Review questions
» What are greedy methods, and how can the fractional knapsack problem be solved using them?
» How does backtracking work for graph coloring?
» How can the job scheduling problem be solved through directed graphs?
» What are the two types of knapsack problems?

Conclusions
» Backtracking and dynamic programming are useful techniques for solving optimization problems.
» When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them to save time.
» The running time of dynamic programming is much better than that of the straightforward approach.
» Source: A. Drozdek's textbook.