Instructor: Ms. Neelima Gupta

Interval/Job Scheduling – a greedy approach

Weighted Interval Scheduling – Each job i has (s_i, f_i, p_i): starting time s_i, finishing time f_i, and profit p_i. Aim: find a schedule of jobs that makes the maximum profit. Greedy approach: choose the job which finishes first ... does not work. So, DP. Thanks to Neha (16)

[Table: jobs i with start times S_i, finish times F_i, and profits P_i.] Thanks to Neha (16)

Weighted Interval Scheduling. [Figure: five jobs on a timeline with profits P(1) = 10, P(2) = 1, P(3) = 4, P(4) = 20, P(5) = 2.] Thanks to Neha (16)

Greedy Approach. [Figure: the schedule chosen by the greedy (earliest-finish-time) approach.] Thanks to Neha (16)

Greedy does not work. [Figure: the optimal schedule vs. the schedule chosen by the greedy approach.] The greedy approach takes jobs 2, 3 and 5 as the best schedule and makes a profit of 7, while the optimal schedule is job 1 and job 4, making a profit of 30 (10 + 20). Hence greedy does not work. Thanks to Neha (16)

DP Solution for WIS. Let m[j] = value of an optimal schedule using the first j jobs (jobs sorted in increasing order of their finishing times), p_j = profit of the j-th job, and p[j] = largest index i < j such that intervals i and j are disjoint, i.e. i is the rightmost interval that ends before j begins, in other words the last interval compatible with j. Thanks to: Neha Mishra (roll no. 18)

DP Solution for WIS. Either j is in the optimal solution or it is not. If it is, then m[j] = p_j + the profit obtainable from the jobs up to and including the last job compatible with j, i.e. m[j] = p_j + m[p[j]]. If it is not, then m[j] = m[j-1]. Thus, m[j] = max(p_j + m[p[j]], m[j-1]).

Recursive Paradigm. Write a recursive function to compute m[n] ... after all, we are only interested in that. Some of the subproblems are solved several times, leading to exponential time.

Example (jobs sorted by finish time), filled in one entry at a time:

INDEX   1   2   3   4   5   6
v_i     2   4   4   7   2   1
p[i]    0   0   1   0   3   3

Thanks to: Nikita Khanna (19)

Recursive Paradigm: [Figure: the tree of recursion for WIS.]

Recursive Paradigm. Sometimes some of the subproblems are solved several times, leading to exponential time. For example: Fibonacci numbers. DP solution: solve each subproblem only once and reuse the answer as and when required. Top-down DP ... memoization ... generally the storage cost is large. Bottom-up DP ... iterative.
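A minimal Python sketch (ours, not from the slides) contrasting the three approaches on Fibonacci numbers:

```python
from functools import lru_cache

def fib_naive(n):
    # Naive recursion: the same subproblems are recomputed many times,
    # so the running time is exponential in n.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down DP (memoization): each subproblem is solved once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    # Bottom-up DP: iterate from the smallest subproblems; O(n) time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_bottom_up(30) == 832040
```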

Principles of DP: a recursive solution (the optimal substructure property); overlapping subproblems; and a total number of subproblems that is polynomial.

Filling the table bottom-up with m[j] = max{m[j-1], m[p[j]] + p_j} (here p_j is the value v_j from the table above):

j      1             2             3             4             5             6
m[j]   max{0, 0+2}   max{2, 0+4}   max{4, 2+4}   max{6, 0+7}   max{7, 6+2}   max{8, 6+1}
       = 2           = 4           = 6           = 7           = 8           = 8

Thanks to: Nikita Khanna (19)
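A bottom-up Python sketch of this computation (ours, not from the slides); with the values and p[] indices from the example above it reproduces the row m = 2, 4, 6, 7, 8, 8.

```python
def weighted_interval_scheduling(values, p):
    """values[j] and p[j] are 1-indexed via a dummy 0th entry.
    p[j] = last job (by finish time) compatible with job j, 0 if none."""
    n = len(values) - 1
    m = [0] * (n + 1)                # m[0] = 0: no jobs, no profit
    for j in range(1, n + 1):
        # Either skip job j, or take it plus the best among jobs 1..p[j].
        m[j] = max(m[j - 1], m[p[j]] + values[j])
    return m

# Six-job example from the slides (index 0 is a dummy).
values = [0, 2, 4, 4, 7, 2, 1]
p      = [0, 0, 0, 1, 0, 3, 3]
assert weighted_interval_scheduling(values, p)[1:] == [2, 4, 6, 7, 8, 8]
```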

Matrix Chain Multiplication. Definition: let A1, A2, A3 be matrices of order (2 x 3), (3 x 4), (4 x 5) respectively. Evaluating (A1·A2)·A3 costs (2 x 3 x 4) + (2 x 4 x 5) = 24 + 40 = 64 multiplications in total, while A1·(A2·A3) costs (3 x 4 x 5) + (2 x 3 x 5) = 60 + 30 = 90. So we have seen that changing the evaluation sequence also changes the cost of the operation. THANKS TO: OM JI (26)

Recursive Solution. In general, with A_i of dimensions p_{i-1} x p_i: m[i, j] = 0 if i = j, and m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + p_{i-1}·p_k·p_j } otherwise.

Overlapping Subproblems: m[1...4] splits into (m[1], m[2...4]), (m[1...2], m[3...4]), (m[1...3], m[4]); in turn m[2...4] splits into (m[2], m[3...4]) and (m[2...3], m[4]), and m[1...3] splits into (m[1], m[2...3]) and (m[1...2], m[3]). Subproblems such as m[3...4] appear repeatedly. THANKS TO: OM JI (26)

Number of sub-problems: one sub-problem M[i......j] for each pair i <= j; n choices for i and n choices for j, so the number of subproblems is at most n^2 ... polynomial. THANKS TO: OM JI (26)

Example: matrix dimensions A1: 3 × 5, A2: 5 × 4, A3: 4 × 2, A4: 2 × 7, A5: 7 × 3, A6: 3 × 8. THANKS TO: OM JI (26)

Problem of parenthesization: length-2 chains.
A1·A2 = 3·5·4 = 60
A2·A3 = 5·4·2 = 40
A3·A4 = 4·2·7 = 56
A4·A5 = 2·7·3 = 42
A5·A6 = 7·3·8 = 168
THANKS TO: OM JI (26)

Problem of parenthesization: length-3 chains.
A1..A3 = A1·A2·A3 = min{ (A1·A2)·A3 = 60 + 3·4·2 = 84, A1·(A2·A3) = 40 + 3·5·2 = 70 } = 70
A2..A4 = A2·A3·A4 = min{ (A2·A3)·A4 = 40 + 5·2·7 = 110, A2·(A3·A4) = 56 + 5·4·7 = 196 } = 110
A3..A5 = A3·A4·A5 = min{ (A3·A4)·A5 = 56 + 4·7·3 = 140, A3·(A4·A5) = 42 + 4·2·3 = 66 } = 66
A4..A6 = A4·A5·A6 = min{ (A4·A5)·A6 = 42 + 2·3·8 = 90, A4·(A5·A6) = 168 + 2·7·8 = 312 } = 90
THANKS TO: OM JI (26)

Problem of parenthesization: the completed cost table m[i, j], with the best split point k in parentheses:

      A1    A2    A3         A4          A5          A6
A1    0     60    70 (k=1)   112 (k=3)   130 (k=3)   202 (k=5)
A2          0     40         110 (k=3)   112 (k=3)   210 (k=3)
A3                0          56          66 (k=3)    154 (k=3)
A4                           0           42          90 (k=5)
A5                                       0           168
A6                                                   0

THANKS TO: OM JI (26)

Running Time: O(n^2) entries, each of which takes O(n) time to compute. The running time of this procedure is O(n^3). Thanks to: OM JI (26)
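A bottom-up Python sketch of the recurrence (ours, not from the slides); on the dimension list 3, 5, 4, 2, 7, 3, 8 of the example it returns 202, the top-right entry of the completed table.

```python
def matrix_chain(p):
    """p[i-1] x p[i] is the dimension of matrix A_i; returns the minimum
    number of scalar multiplications to compute A_1 * ... * A_n."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j], 1-indexed
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between i and j.
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# Dimensions from the example: A1: 3x5, A2: 5x4, ..., A6: 3x8.
assert matrix_chain([3, 5, 4, 2, 7, 3, 8]) == 202
```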

Segmented Least Squares: Multi-way Choices

A line of 'best fit'. Given a set P of n points in a plane, (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), and a line L defined by the equation y = ax + b, the error of line L with respect to P is the sum of the squares of the vertical distances of these points from L: Error(L, P) = Σ_{i=1..n} (y_i - a x_i - b)^2. Thanks to: Parila

Line of Best Fit. The line of best fit is the one that minimizes this error. It can be computed in O(n) time using the following formulas, obtained by differentiating the error with respect to a and equating to zero, and likewise with respect to b: a = ( n Σ x_i y_i - (Σ x_i)(Σ y_i) ) / ( n Σ x_i^2 - (Σ x_i)^2 ), b = ( Σ y_i - a Σ x_i ) / n. Thanks to: Parila

Consider the following scenario: [Figure: a set of points that lie approximately on two lines.] Clearly, approximating these points with a single line is not a good idea; using two lines gives a better approximation.

(contd.) [Figure: a set of points that lie approximately on three lines.] In this case, three lines give us a better approximation.

Segmented Least Square Error. In all these cases, we can tell by looking at the points how many lines we must use. In general we don't know this number, i.e. we don't know the minimum number of lines needed for a good approximation. Thus our aim becomes to minimize the total least-square error together with the number of lines used, charging a fixed cost for each line.

DESIGNING THE ALGORITHM. We know that the last point p_n belongs to a single segment in the optimal partition, and this segment begins at some point, say p_i. Thus, if we know the identity of the last segment, we can recursively solve the problem on the remaining points p_1, ..., p_{i-1}, as illustrated below:

WRITING THE RECURRENCE. If the last segment in the optimal partition is p_i, ..., p_n, then the value of the optimal is SLS(n) = SLS(i-1) + c + e_{i,n}, where c is the cost of using a segment and e_{i,n} is the least-square error of this segment. But since we don't know i, we compute it as follows: SLS(n) = min_i { SLS(i-1) + c + e_{i,n} }. General recurrence: for the sub-problem p_1, ..., p_j, SLS(j) = min_{1 <= i <= j} { SLS(i-1) + c + e_{i,j} }.

Time Analysis: the e_{i,j} values can be pre-computed in O(n^3) time. Additional time: n entries, each of which takes the minimum of at most n values, each computable in constant time. Thus a total of O(n^2).
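A Python sketch of the SLS recurrence (ours, not from the slides). For brevity the helper err recomputes e_{i,j} from the best-fit formulas above in O(n) per call, giving O(n^3) overall; pre-computing all e_{i,j} values, as in the analysis above, leaves O(n^2) of additional work.

```python
def segmented_least_squares(xs, ys, c):
    """Minimum cost of partitioning points (sorted by x) into contiguous
    segments, paying c per segment plus each segment's least-square error."""
    n = len(xs)

    def err(i, j):
        # Least-square error of the best-fit line through points i..j
        # (0-indexed), using the closed-form a, b from the slide above.
        k = j - i + 1
        sx = sum(xs[i:j + 1]); sy = sum(ys[i:j + 1])
        sxx = sum(x * x for x in xs[i:j + 1])
        sxy = sum(x * y for x, y in zip(xs[i:j + 1], ys[i:j + 1]))
        denom = k * sxx - sx * sx
        a = (k * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / k
        return sum((y - a * x - b) ** 2
                   for x, y in zip(xs[i:j + 1], ys[i:j + 1]))

    SLS = [0.0] * (n + 1)          # SLS[j]: best cost for the first j points
    for j in range(1, n + 1):
        # Last segment is p_i..p_j; try every i.
        SLS[j] = min(SLS[i - 1] + c + err(i - 1, j - 1)
                     for i in range(1, j + 1))
    return SLS[n]
```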

FRACTIONAL KNAPSACK PROBLEM. Given a set S of n items, each with value v_i and weight w_i, and a knapsack with capacity W. Aim: pick items with maximum total value but with weight at most W. You may choose fractions of items.

GREEDY APPROACH: pick the items in decreasing order of value per unit weight, i.e. highest ratio first.

Example: knapsack capacity 50. Item 1: v_1 = 60, w_1 = 10, v_1/w_1 = 6. Item 2: v_2 = 100, w_2 = 20, v_2/w_2 = 5. Item 3: v_3 = 120, w_3 = 30, v_3/w_3 = 4. Greedy takes all of item 1, all of item 2, and 20/30 of item 3, for a total value of $60 + $100 + (20/30) × $120 = $240. Thanks to: Neha Katyal
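A small Python sketch of this greedy rule (our own illustration, not code from the slides); on the three items above it returns 240.0.

```python
def fractional_knapsack(items, W):
    """items: list of (value, weight). Take items greedily by value per
    unit weight, splitting the last item if needed."""
    total = 0.0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(w, W)              # whole item, or whatever still fits
        total += v * take / w
        W -= take
        if W == 0:
            break
    return total

# Items 1..3 from the example: (60, 10), (100, 20), (120, 30); capacity 50.
assert fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) == 240.0
```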

0-1 Knapsack: an example to show that the above greedy approach will not work. So, DP.

GREEDY APPROACH DOESN'T WORK FOR 0-1 KNAPSACK. Counterexample: a knapsack of capacity 50 with the same three items (v_1 = $60, v_2 = $100, v_3 = $120; v_i/w_i = 6, 5, 4). Greedy picks item 1 and then item 2, for $60 + $100 = $160, and item 3 no longer fits: suboptimal, since taking items 2 and 3 fills the knapsack exactly and gives $100 + $120 = $220. Thanks to: Neha Katyal

DP Solution for 0-1 KS. Let m[i, w] be the optimal value obtained when considering objects up to i and filling a knapsack of capacity w. m[0, w] = 0; m[i, 0] = 0; m[i, w] = m[i-1, w] if w_i > w; m[i, w] = max{m[i-1, w - w_i] + v_i, m[i-1, w]} if w_i <= w. Running time: O(nW) ... pseudo-polynomial.

Example: n = 4, W = 5. Elements (weight, value): (2,3), (3,4), (4,5), (5,6). Thanks to: Neha Katyal (17, MCS '11), Shikha Walia (27, MCS '11)

Filling the table m[i, w] for items i = 0..4 and capacities w = 0..5:

w        0   1   2   3   4   5
i = 0:   0   0   0   0   0   0
i = 1:   0   0   3   3   3   3
i = 2:   0   0   3   4   4   7
i = 3:   0   0   3   4   5   7
i = 4:   0   0   3   4   5   7

Whenever w < w_i the item cannot fit, so m[i, w] = m[i-1, w] (e.g. m[1,1] = m[0,1] = 0 and m[4,4] = m[3,4] = 5). Otherwise m[i, w] = max{m[i-1, w], m[i-1, w - w_i] + v_i}, for example:
m[1,2] = max{m[0,2], m[0,0] + 3} = max{0, 3} = 3
m[2,3] = max{m[1,3], m[1,0] + 4} = max{3, 4} = 4
m[2,5] = max{m[1,5], m[1,2] + 4} = max{3, 7} = 7
m[3,4] = max{m[2,4], m[2,0] + 5} = max{4, 5} = 5
m[3,5] = max{m[2,5], m[2,1] + 5} = max{7, 5} = 7
m[4,5] = max{m[3,5], m[3,0] + 6} = max{7, 6} = 7
The answer is m[4,5] = 7.

Thanks to: Neha Katyal (17, MCS '11), Shikha Walia (27, MCS '11)
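A bottom-up Python sketch of the m[i, w] recurrence (ours, not from the slides); it reproduces the final row of the table above.

```python
def knapsack_01(items, W):
    """items: list of (weight, value). Returns the full DP table m, where
    m[i][w] = best value using the first i items with capacity w."""
    n = len(items)
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(1, W + 1):
            if wi > w:
                m[i][w] = m[i - 1][w]                 # item i cannot fit
            else:
                m[i][w] = max(m[i - 1][w],            # skip item i
                              m[i - 1][w - wi] + vi)  # take item i
    return m

m = knapsack_01([(2, 3), (3, 4), (4, 5), (5, 6)], 5)
assert m[4] == [0, 0, 3, 4, 5, 7]
```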

Pseudo-polynomial algorithm: an algorithm that is polynomial in the numeric value of the input (which is actually exponential in the size of the input, i.e. the number of digits). Thus an O(n)-time algorithm to test whether n is prime is pseudo-polynomial. The DP solution to 0-1 Knapsack is pseudo-polynomial, as it is polynomial in W, the capacity (one of the inputs) of the knapsack.

Weakly/Strongly NPC problems. An NP-complete problem with a known pseudo-polynomial solution is said to be weakly NPC. An NPC problem for which it has been proved that it cannot admit a pseudo-polynomial solution unless P = NP is said to be strongly NPC.

SUBMITTED BY: PULKIT OHRI (29)

STRING SIMILARITY: ocurrance vs. occurrence. [Figure: three possible alignments of 'ocurrance' and 'occurrence': one with 6 mismatches and 1 gap, one with 1 mismatch and 1 gap, and one with 0 mismatches and 3 gaps.] THANKS TO PULKIT

PROBLEM: Given two strings X = x_1 x_2 ... x_n and Y = y_1 y_2 ... y_m. GOAL: find an alignment of minimum cost, where δ is the cost of a gap and α is the cost of a mismatch. Let OPT(i, j) = min cost of aligning the strings x_1 x_2 ... x_i and y_1 y_2 ... y_j. THANKS TO PULKIT

Three cases: Case 1: x_i is aligned with y_j in OPT. Pay mismatch cost α for (x_i, y_j) (zero if they match) + min cost of aligning x_1 x_2 ... x_{i-1} and y_1 y_2 ... y_{j-1}. Case 2: OPT does not match x_i. Pay gap cost for x_i + min cost of aligning x_1 x_2 ... x_{i-1} and y_1 y_2 ... y_j. Case 3: OPT does not match y_j. Pay gap cost for y_j + min cost of aligning x_1 x_2 ... x_i and y_1 y_2 ... y_{j-1}. THANKS TO PULKIT

Combining the three cases gives the recurrence OPT(i, j) = min{ α(x_i, y_j) + OPT(i-1, j-1), δ + OPT(i-1, j), δ + OPT(i, j-1) }, with base cases OPT(i, 0) = iδ and OPT(0, j) = jδ.

EG: let δ = 2 and mismatch costs α range over 0...3 depending on the pair of letters. Align 'mean' against 'name'. Border of the table: aligning a prefix against the empty string costs one gap per letter (advance name but not mean, so pay gap cost δ = 2 at each step), giving 0, 2, 2+2 = 4, 4+2 = 6, 6+2 = 8 along the first row, and symmetrically down the first column. For the first interior cell ('n' of name against 'm' of mean) there are three choices: advance both name and mean and pay mismatch cost α = 1, giving 0 + 1 = 1; advance mean only and pay gap cost, giving 2 + 2 = 4; advance name only and pay gap cost, giving 2 + 2 = 4. So take the min: 1. THANKS TO PULKIT

Similarly, the remaining cells are filled, yielding the optimal alignment of 'name' and 'mean'. RUNNING TIME: Θ(mn). THANKS TO PULKIT
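A Python sketch of the OPT(i, j) recurrence (ours, not from the slides). The cost function below, 0 for a match and 1 for any mismatch with gap cost δ = 2, is a simplifying assumption; the slides' α depends on the particular pair of letters.

```python
def alignment_cost(X, Y, delta, alpha):
    """Minimum alignment cost: delta per gap, alpha(a, b) per aligned pair.
    Runs in Theta(len(X) * len(Y)) time."""
    n, m = len(X), len(Y)
    OPT = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        OPT[i][0] = i * delta        # x_1..x_i aligned against nothing
    for j in range(1, m + 1):
        OPT[0][j] = j * delta
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            OPT[i][j] = min(alpha(X[i - 1], Y[j - 1]) + OPT[i - 1][j - 1],
                            delta + OPT[i - 1][j],   # gap against x_i
                            delta + OPT[i][j - 1])   # gap against y_j
    return OPT[n][m]

# Toy cost: matches are free, any mismatch costs 1, gaps cost 2.
cost = alignment_cost("name", "mean", 2, lambda a, b: 0 if a == b else 1)
assert cost == 4   # four unit-cost mismatches, aligned position by position
```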