Presentation transcript:


Heap tree (properties)
A heap is a binary tree (the properties below) plus the max-heap or min-heap property.
- Property 1: each node can have up to two successor nodes (children). The predecessor node of a node is called its parent; the "beginning" node is called the root (it has no parent); a node without children is called a leaf.
- Property 2: a unique path exists from the root to every other node.

Binary search tree (properties)
A binary search tree is a binary tree (the properties below) plus the binary-search-tree ordering property: for every node, the keys in its left subtree are no larger than the node's key, and the keys in its right subtree are no smaller.
- Property 1: each node can have up to two successor nodes (children). The predecessor node of a node is called its parent; the "beginning" node is called the root (it has no parent); a node without children is called a leaf.
- Property 2: a unique path exists from the root to every other node.

Advanced Design and Analysis Techniques (Sections 15.1, 15.2, 15.3, 15.4 and 15.5)

Objectives
- Problem formulation
- Examples
- The basic problem
- Principle of optimality
- Important techniques:
  - dynamic programming (Chapter 15)
  - greedy algorithms (Chapter 16)

Techniques - 1
This part covers two important techniques for the design and analysis of efficient algorithms:
- dynamic programming (Chapter 15)
- greedy algorithms (Chapter 16)

Techniques - 2
Earlier parts have presented other widely applicable techniques, such as:
- divide-and-conquer,
- randomization, and
- the solution of recurrences.

Dynamic programming
- Dynamic programming is an optimization approach that transforms a complex problem into a sequence of simpler problems.
- It is used in many areas of computer science (theory, graphics, AI, systems, etc.).
- Dynamic programming typically applies to optimization problems in which a set of choices must be made in order to arrive at an optimal solution.
- Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices; the key technique is to store the solution to each such subproblem in case it should reappear.
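A minimal sketch of this key technique (Fibonacci is my illustration, not an example from these slides): store each subproblem's answer the first time it is computed, so that when the same subproblem reappears it is looked up instead of re-solved.

from functools import lru_cache

@lru_cache(maxsize=None)          # the table of already-solved subproblems
def fib(n: int) -> int:
    # fib(n) arises from more than one caller (fib(n+1) and fib(n+2)),
    # so caching means each value is computed exactly once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025; naive recursion would make exponentially many calls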

Greedy algorithms
Like dynamic-programming algorithms, greedy algorithms typically apply to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. The idea of a greedy algorithm is to make each choice in a locally optimal manner.

Dynamic programming - 1
- Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems.
- Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.

Dynamic programming - 2
- Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
- A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming - 3
Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

The development of a dynamic-programming algorithm
The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Assembly-line scheduling (self study, 15.1)
Note: the lecturer should explain the idea of Section 15.1 in general (there are many ways to reach a solution; one of them is optimal).

Step 1: The structure of the fastest way through the factory (self study)

Step 2: A recursive solution (self study)
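For the self-study of Step 2, the recursive formulation from the textbook (notation as in Chapter 15: entry times e_i, station times a_{i,j}, transfer times t_{i,j}, exit times x_i) is:

\[
f_1[j] =
\begin{cases}
e_1 + a_{1,1} & \text{if } j = 1,\\
\min\bigl(f_1[j-1] + a_{1,j},\; f_2[j-1] + t_{2,j-1} + a_{1,j}\bigr) & \text{if } j \ge 2,
\end{cases}
\]

with \(f_2[j]\) defined symmetrically, and the fastest overall time \(f^* = \min\bigl(f_1[n] + x_1,\; f_2[n] + x_2\bigr)\).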

Step 3: Computing the fastest times (self study)
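A sketch of the bottom-up computation for Step 3 in Python (function and variable names are mine, and indexing is 0-based rather than the textbook's 1-based):

def fastest_way(a, t, e, x):
    """Bottom-up assembly-line DP.
    a[i][j]: processing time at station j on line i (i in {0, 1});
    t[i][j]: transfer time after station j when leaving line i;
    e[i], x[i]: entry and exit times for line i."""
    n = len(a[0])
    f1, f2 = [0] * n, [0] * n
    f1[0] = e[0] + a[0][0]
    f2[0] = e[1] + a[1][0]
    for j in range(1, n):
        # either stay on the same line, or pay a transfer cost to switch
        f1[j] = min(f1[j-1] + a[0][j], f2[j-1] + t[1][j-1] + a[0][j])
        f2[j] = min(f2[j-1] + a[1][j], f1[j-1] + t[0][j-1] + a[1][j])
    return min(f1[n-1] + x[0], f2[n-1] + x[1])

# The textbook's Figure 15.2 instance, to check the sketch:
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
print(fastest_way(a, t, e=[2, 4], x=[3, 2]))  # 38

Each of the Θ(n) subproblems examines two choices, which gives the Θ(n) running time discussed later in these slides.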

Step 4: Constructing the fastest way through the factory (self study)

Matrix-chain multiplication
We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix.
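With the standard algorithm, such a product costs p · q · r scalar multiplications, which is why the order of multiplication matters. A small check in Python (the dimensions are the usual illustrative ones, not taken from these slides):

def mult_cost(p, q, r):
    # scalar multiplications to compute a (p x q) times (q x r) product
    return p * q * r

# A1: 10 x 100, A2: 100 x 5, A3: 5 x 50
print(mult_cost(10, 100, 5) + mult_cost(10, 5, 50))    # ((A1 A2) A3) = 7500
print(mult_cost(100, 5, 50) + mult_cost(10, 100, 50))  # (A1 (A2 A3)) = 75000

The two parenthesizations compute the same matrix, but one costs ten times as much.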

Counting the number of parenthesizations
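The count this slide refers to, as given in the textbook: the number of parenthesizations P(n) of a chain of n matrices satisfies

\[
P(n) =
\begin{cases}
1 & \text{if } n = 1,\\
\sum_{k=1}^{n-1} P(k)\, P(n-k) & \text{if } n \ge 2,
\end{cases}
\]

whose solution grows as \(\Omega(4^n / n^{3/2})\) (the Catalan numbers), so checking all parenthesizations by brute force is infeasible.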

- Step 1: The structure of an optimal parenthesization
- Step 2: A recursive solution
- Step 3: Computing the optimal costs

Step 3: Computing the optimal costs
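A Python sketch of the bottom-up table computation (the textbook's MATRIX-CHAIN-ORDER; indexing follows the book, with p[0..n] the dimension array so that A_i is p[i-1] × p[i]):

def matrix_chain_order(p):
    """m[i][j]: minimum scalar multiplications to compute A_i..A_j;
    s[i][j]: the split position k that achieves that minimum."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # length of the chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split point
                cost = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])  # the textbook instance
print(m[1][6])  # 15125

The three nested loops give O(n^3) running time over the Θ(n^2) subproblems.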


Step 4: Constructing an optimal solution

Step 4 (explanation): Constructing an optimal solution (figure slides; not recoverable in this transcript)

Example: Recursive construction of an optimal solution
return A6; Optimal solution = ((A1(A2A3))((A4A5)A6))
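A sketch of the reconstruction procedure behind this example (the textbook's PRINT-OPTIMAL-PARENS, written here to return a string rather than print), using matrix_chain_order from the Step 3 sketch:

def print_optimal_parens(s, i, j):
    # s is the split table produced by matrix_chain_order
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return ("(" + print_optimal_parens(s, i, k)
            + print_optimal_parens(s, k + 1, j) + ")")

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(print_optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))

This reproduces the slide's optimal solution for the six-matrix textbook instance.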

Elements of dynamic programming (self study, 15.3)

Optimal substructure
The first step in solving an optimization problem by dynamic programming is to characterize the structure of an optimal solution. Recall that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.

Characterize the space of subproblems
A good rule of thumb is to try to keep the space as simple as possible, and then to expand it as necessary. For example, the space of subproblems that we considered for assembly-line scheduling was the fastest way from entry into the factory through stations S1,j and S2,j. This subproblem space worked well, and there was no need to try a more general space of subproblems.

Optimal substructure
Optimal substructure varies across problem domains in two ways:
1. how many subproblems are used in an optimal solution to the original problem, and
2. how many choices we have in determining which subproblem(s) to use in an optimal solution.

Assembly-line scheduling
In assembly-line scheduling, an optimal solution uses just one subproblem, but we must consider two choices in order to determine an optimal solution. To find the fastest way through station Si,j, we use either the fastest way through S1,j−1 or the fastest way through S2,j−1; whichever we use represents the one subproblem that we must optimally solve.

Running time of a dynamic-programming algorithm
Informally, the running time of a dynamic-programming algorithm depends on the product of two factors: the number of subproblems overall and how many choices we look at for each subproblem. In assembly-line scheduling, we had Θ(n) subproblems overall and only two choices to examine for each, yielding a Θ(n) running time.

Be careful
One should be careful not to assume that optimal substructure applies when it does not.
- Unweighted shortest path: find a path from u to v consisting of the fewest edges. Such a path must be simple, since removing a cycle from a path produces a path with fewer edges.
- Unweighted longest simple path: find a simple path from u to v consisting of the most edges. We need to include the requirement of simplicity because otherwise we can traverse a cycle as many times as we like to create paths with an arbitrarily large number of edges.
- A path is called simple if it does not have any repeated vertices.

Not simple (figure slide; not recoverable in this transcript)

- Overlapping subproblems
- Reconstructing an optimal solution
- Memoization
- 15.4 Longest common subsequence (read only)
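For the 15.4 reading, the central recurrence: with c[i, j] the length of an LCS of the prefixes X_i and Y_j,

\[
c[i,j] =
\begin{cases}
0 & \text{if } i = 0 \text{ or } j = 0,\\
c[i-1, j-1] + 1 & \text{if } i, j > 0 \text{ and } x_i = y_j,\\
\max\bigl(c[i, j-1],\; c[i-1, j]\bigr) & \text{if } i, j > 0 \text{ and } x_i \ne y_j,
\end{cases}
\]

filled bottom-up in Θ(mn) time; memoization achieves the same bound top-down.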

Optimal binary search trees (15.5)
Suppose that we are designing a program to translate text from English to French. For each occurrence of each English word in the text, we need to look up its French equivalent. One way to perform these lookup operations is to build a binary search tree with n English words as keys and French equivalents as satellite data.
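The quantity minimized in 15.5, as given in the textbook: for keys k_1, ..., k_n searched with probabilities p_i (and dummy keys d_0, ..., d_n with probabilities q_i for unsuccessful searches),

\[
\mathrm{E}[\text{search cost in } T]
= \sum_{i=1}^{n} \bigl(\mathrm{depth}_T(k_i) + 1\bigr)\, p_i
+ \sum_{i=0}^{n} \bigl(\mathrm{depth}_T(d_i) + 1\bigr)\, q_i,
\]

and an optimal binary search tree is one minimizing this expectation; it is not necessarily a tree of minimum height.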


Summary
- Dynamic programming is an optimization approach that transforms a complex problem into a sequence of simpler problems.
- Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices.
- Steps of dynamic programming:
  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in a bottom-up fashion.
  4. Construct an optimal solution from computed information.
- Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
- Principle of optimality: an optimal solution to a problem contains within it optimal solutions to its subproblems (optimal substructure).