Dynamic Programming Solving Optimization Problems.


Last Class “Those who cannot …”

Dynamic Programming Optimization problems have many feasible solutions, and we usually want the 'best' one by some measure: the shortest path, the easiest way, the minimal cost, the maximum gain, etc.

Dynamic Programming Applications –Finding potential partners (Optimal Stopping Problems) –Matching protein sequences (Bioinformatics) –Scheduling jobs on a processor (Parallel Computing) –Shortest-path computations (MapQuest)

Main Features Replaces exponential-time with polynomial-time algorithms. Solves optimization problems. Similar to D & C, but the sub-problems overlap (hence, it does not re-solve sub-problems).

Basic Elements Dictionary or table. Polynomial number of sub-problems. DP principle –for the global problem to be solved optimally, each sub-problem should be solved optimally. Top-down (a.k.a. memoization) vs. bottom-up.

Optimal Substructure A problem exhibits optimal substructure if an optimal solution contains within it optimal solutions to subproblems –Build an optimal solution from optimal solutions to subproblems

Dynamic Programming The sub-problem graph is a DAG. Doing DFS on the sub-problem graph tells us in which order to solve the sub-problems (a.k.a. reverse topological sorting).

Fibonacci-DP A sub-problem depends on only two predecessor sub-problems. Time complexity: O(n). Space complexity: O(1).
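Since each sub-problem depends on only the two before it, the bottom-up version can keep just two variables (a minimal sketch, not from the slides, assuming the convention fib(0) = 0 and fib(1) = 1):

```c
/* Bottom-up Fibonacci: O(n) time, O(1) space.
   Only the two predecessor sub-problems are kept around. */
long fib(int n) {
    if (n <= 1) return n;            /* fib(0) = 0, fib(1) = 1 */
    long prev = 0, curr = 1;
    for (int i = 2; i <= n; ++i) {
        long next = prev + curr;     /* depends on two predecessors */
        prev = curr;
        curr = next;
    }
    return curr;
}
```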

Dynamic Programming Recipe
1. Write the naïve top-down algorithm (recursive D&C).
2. Use a dictionary on (1).
3. Analyze the sub-problem graph and simplify the dynamic program if possible (e.g. Fibonacci).
4. Optional: decide how to get the solution of the problem using the data in the dictionary (only applicable to some problems).

Matrix Multiplication Recall that multiplying a p x q matrix by a q x r matrix takes p·q·r element-wise multiplications. Matrix multiplication is associative, i.e. –(A x B) x C = A x (B x C)

Matrix Multiplication Order Problem. What is the best way to multiply M_1, M_2, M_3, …, M_n (when n > 2), where M_i has dimensions d_(i-1) x d_i? We saw an example in the last class. What is the minimum number of multiplications required to compute the product? What is the running time?

How many parenthesizations? For n matrices: O(n)? O(n^3)? O(2^n)? Even worse? What is the recursive relation that gives us the count? (Splitting the outermost product after position i leaves i matrices on the left and n-i matrices on the right.)

Recursive relation P(n) = 1 for n = 1, and P(n) = sum over i = 1..n-1 of P(i)·P(n-i) for n >= 2. These are the Catalan numbers, which grow as Omega(4^n / n^(3/2)); this shows that there is an exponential number of parenthesizations.
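Evaluating the recurrence directly makes the growth concrete (an illustrative helper, not part of the original slides):

```c
/* P(n): number of ways to fully parenthesize a chain of n matrices.
   P(1) = 1; P(n) = sum_{i=1..n-1} P(i) * P(n-i).
   These are the Catalan numbers C(n-1). */
long P(int n) {
    if (n == 1) return 1;
    long total = 0;
    for (int i = 1; i < n; ++i)   /* split: i matrices | n-i matrices */
        total += P(i) * P(n - i);
    return total;
}
```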

The sub-problems What are the sub-problems when we want to find the optimal order of multiplication?

Naïve Top-Down Approach
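The code on this slide is not preserved in the transcript; a sketch of what the naïve recursion might look like (following the m[l,h] convention of the cost-function slide, where matrix k has dimensions d[k-1] x d[k]):

```c
#define INF 1000000000

/* Naive top-down: best cost of multiplying matrices l+1..h, trying
   every split point k. Overlapping sub-problems are re-solved, so
   the running time is exponential. */
long bestCostNaive(int l, int h, const int *d) {
    if (l + 1 >= h) return 0;                 /* zero or one matrix */
    long best = INF;
    for (int k = l + 1; k < h; ++k) {
        long cost = bestCostNaive(l, k, d) + bestCostNaive(k, h, d)
                  + (long)d[l] * d[k] * d[h];
        if (cost < best) best = cost;
    }
    return best;
}
```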

Running Time? Exponential or Polynomial? Any easy lower bounds?

Top-Down Dynamic Program
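The code for this slide is also missing from the transcript; a sketch of a memoized top-down version, caching each m[l,h] in a dictionary table (an assumption of what the slide contained, where matrix k has dimensions d[k-1] x d[k]):

```c
#include <string.h>

#define INF  1000000000
#define MAXN 64

/* memo[l][h] caches the best cost of multiplying matrices l+1..h;
   -1 marks an unsolved sub-problem. */
static long memo[MAXN][MAXN];

long mcm(int l, int h, const int *d) {
    if (l + 1 >= h) return 0;                 /* zero or one matrix */
    if (memo[l][h] != -1) return memo[l][h];  /* dictionary lookup  */
    long best = INF;
    for (int k = l + 1; k < h; ++k) {
        long cost = mcm(l, k, d) + mcm(k, h, d)
                  + (long)d[l] * d[k] * d[h];
        if (cost < best) best = cost;
    }
    return memo[l][h] = best;
}

long matrixChainCost(int n, const int *d) {   /* n matrices, dims d[0..n] */
    memset(memo, -1, sizeof memo);            /* all-ones bytes == -1 */
    return mcm(0, n, d);
}
```

Each of the O(n^2) sub-problems is solved once, at O(n) cost per sub-problem, giving O(n^3) time overall.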

The cost function
If l + 1 < h: m[l,h] = min over l < k < h of ( m[l,k] + m[k,h] + d_l · d_k · d_h ).
If l + 1 = h: m[l,h] = 0.

Bottom-Up Dynamic Programming Is depth-first search unique? How should we get rid of the recursion in this case? (Remember how we did it for Fibonacci?)

The Bottom Up code
#define INF 1000000000
#define MAXN 64

int bcMatrix[MAXN][MAXN];   /* global cost table, zero-initialized */

int bestCost(int left, int right, int *darray) {
    for (int i = right; i >= left; --i) {          /* rows */
        for (int j = left; j <= right; ++j) {      /* columns */
            if (i >= j || j - i == 1) continue;    /* base cases stay 0 */
            int bc = INF;
            for (int k = i + 1; k < j; ++k) {
                int cost = bcMatrix[i][k] + bcMatrix[k][j]
                         + darray[i] * darray[k] * darray[j];
                if (cost < bc) bc = cost;
            }
            bcMatrix[i][j] = bc;
        }
    }
    return bcMatrix[left][right];
}

Knapsack Subproblems: –B[n,k] = true iff there exists a subset of (s[1], s[2], …, s[n]) that exactly fills up a knapsack of size k (it is a boolean variable). –B[n,k] = B[n-1,k] or B[n-1, k-s[n]] –How many total such subproblems? O(nk).

Knapsack: Recursive
bool B(int n, int k) {      /* s[] is the global array of sizes */
    if (n == 0 && k == 0) return true;
    if (n == 0 && k > 0) return false;
    if (B(n - 1, k) || (k - s[n] >= 0 && B(n - 1, k - s[n])))
        return true;
    return false;
}
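Following the recipe, the recursion can also be tabulated bottom-up (a sketch, assuming the sizes s[1..n] are passed in as a 1-indexed array rather than being global as in the slide):

```c
#include <stdbool.h>

/* Bottom-up table: B[i][k] is true iff some subset of s[1..i] sums
   exactly to k. O(n*k) entries, each filled in O(1) from the row above. */
bool knapsackFill(int n, int cap, const int *s) {  /* s[1..n], 1-indexed */
    bool B[n + 1][cap + 1];
    for (int k = 0; k <= cap; ++k)
        B[0][k] = (k == 0);                        /* base cases */
    for (int i = 1; i <= n; ++i)
        for (int k = 0; k <= cap; ++k)
            B[i][k] = B[i - 1][k] ||                    /* skip item i */
                      (k >= s[i] && B[i - 1][k - s[i]]); /* take item i */
    return B[n][cap];
}
```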

Edit Distance Given: strings s[1..n] and t[1..m]. Find the fewest number of edits needed to convert s to t, where an 'edit' is: –deleting a character –inserting a character –changing a character a.k.a. Levenshtein distance

Edit Distance Applications –Spell Checking –Genomics –Morphing/Vision

Edit Distance Example: algorithm to logarithm
–algorithm
–lgorithm (delete)
–logorithm (insert)
–logarithm (replace)
Edit distance = 3

Edit Distance Subproblems? –What do we want? A small (polynomial?) number of subproblems, and we should be able to recover each solution easily from smaller subproblems. Edit distance d(i,j) = edit distance between s[1..i] and t[1..j]? Does it satisfy what we want?

Edit Distance How can we solve the subproblem (i,j) by looking at smaller subproblems? –Solve s[1..i] to t[1..j] –given the edit distances between s[1..i-1] and t[1..j-1] and … –What if we look at s[i] and t[j]

Edit Distance Recursion
d(i,j) = min of:
–d(i-1,j-1) + (0 if s[i] = t[j], else 1) (replace)
–d(i-1,j) + 1 (delete s[i])
–d(i,j-1) + 1 (insert t[j])

Base Cases –To turn the empty string into t[1..j], do j inserts. –To turn s[1..i] into the empty string, do i deletes.
d(i,0) = i for i = 1..n (deletes)
d(0,j) = j for j = 1..m (inserts)
Can we fill up d(i,j) using these base cases and the recursion given? In what order? What is the space requirement? Running time?

Loop Assignment
for i = 1 to n
    for j = 1 to m
        d(i,j) = min( d(i-1,j) + 1,
                      d(i,j-1) + 1,
                      d(i-1,j-1) + ((s[i] == t[j]) ? 0 : 1) );
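Putting the base cases and the loop together (a self-contained sketch; the strings here are 0-indexed C strings, so s[i-1] plays the role of the slides' s[i]):

```c
#include <string.h>

/* Levenshtein distance between s (length n) and t (length m).
   d[i][j] = edit distance between s[1..i] and t[1..j]. */
int editDistance(const char *s, const char *t) {
    int n = strlen(s), m = strlen(t);
    int d[n + 1][m + 1];
    for (int i = 0; i <= n; ++i) d[i][0] = i;     /* i deletes */
    for (int j = 0; j <= m; ++j) d[0][j] = j;     /* j inserts */
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= m; ++j) {
            int repl = d[i - 1][j - 1] + (s[i - 1] == t[j - 1] ? 0 : 1);
            int del  = d[i - 1][j] + 1;           /* delete s[i] */
            int ins  = d[i][j - 1] + 1;           /* insert t[j] */
            int best = repl < del ? repl : del;
            d[i][j] = best < ins ? best : ins;
        }
    }
    return d[n][m];   /* O(nm) time and, as written, O(nm) space */
}
```

Since each row depends only on the previous one, the space can be reduced to O(min(n, m)) by keeping two rows.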