Algorithm Design Techniques: Greedy Approach vs Dynamic Programming
Text books: 1. CLRS (Cormen, Leiserson, Rivest and Stein)  2. Goodrich
Greedy Algorithms
A greedy algorithm always makes the choice that looks best at the moment: it makes a locally optimal choice in the hope that it leads to a globally optimal solution. Greedy algorithms do not always yield optimal solutions. Examples: minimum spanning trees (Kruskal and Prim) and shortest paths (Dijkstra).
Activity Selection
Problem: schedule several competing activities that require exclusive use of a common resource.
Goal: select a maximum-size set of mutually compatible activities.
Activity Selection
We have a set S = {a1, a2, …, an} of n proposed activities that wish to use a resource. Each activity ai has a start time si and a finish time fi, where 0 ≤ si < fi < ∞. Activities ai and aj are compatible if si ≥ fj or sj ≥ fi.
Activity Selection
Let us consider the following set of activities.
Some mutually compatible sets of activities are:
Activity Selection – Solution
We sort the activities in monotonically increasing order of finish time.
Activity Selection – Iterative solution
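A minimal iterative sketch in Python, assuming activities are given as (start, finish) pairs (the function and variable names are our own, not the textbook's pseudocode):

```python
def greedy_activity_selector(activities):
    """Iterative greedy activity selection.

    activities: list of (start, finish) pairs.
    Returns a maximum-size set of mutually compatible activities.
    """
    # Sort by monotonically increasing finish time (the greedy key).
    activities = sorted(activities, key=lambda a: a[1])
    selected = []
    last_finish = 0  # finish time of the most recently selected activity
    for start, finish in activities:
        if start >= last_finish:   # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected
```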
Activity Selection – Recursive solution
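A recursive sketch in the style of CLRS's RECURSIVE-ACTIVITY-SELECTOR, assuming the activities are already sorted by finish time and stored 1-indexed with a sentinel f[0] = 0:

```python
def recursive_activity_selector(s, f, k, n):
    """Return indices of a maximum-size set of compatible activities.

    s, f: start and finish times, sorted by finish time, 1-indexed
          (s[0] is unused, f[0] = 0 is a sentinel).
    k: index of the most recently selected activity.
    n: total number of activities.
    """
    m = k + 1
    # Skip activities that start before activity k finishes.
    while m <= n and s[m] < f[k]:
        m += 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []
```

The initial call is recursive_activity_selector(s, f, 0, n), which selects the first activity to finish and then recurses on the remaining compatible activities.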
Activity Selection – Example 2
Schedule the activities (0,35), (10,20), (25,35), (5,15), (15,25), (13,16), (24,27)
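Working through the greedy rule: sorted by finish time the activities are (5,15), (13,16), (10,20), (15,25), (24,27), (0,35), (25,35). The algorithm selects (5,15), then (15,25), then (25,35), so three mutually compatible activities can be scheduled, which is optimal here.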
Steps in Greedy Algorithm Design
1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate optimal substructure by showing that, having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we have made, we arrive at an optimal solution to the original problem.
Task Scheduling
Given: a set T of n tasks, each having a start time si and a finish time fi (where si < fi).
Goal: perform all the tasks using a minimum number of "machines".
[Figure: an example schedule placing tasks on Machines 1, 2 and 3 over time slots 1 through 9.]
Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why? Sorting the tasks by start time dominates; with a priority queue of machines, each task is placed in O(log n) time.
Correctness: suppose there is a better schedule that uses k − 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k − 1 other tasks, one on each of the other machines, so no non-conflicting schedule can use only k − 1 machines.

Algorithm taskSchedule(T)
  Input: set T of tasks with start time si and finish time fi
  Output: a non-conflicting schedule using a minimum number of machines
  m ← 0   {number of machines}
  while T is not empty do
    remove task i with smallest si
    if there is a machine j with no conflict for i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
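A runnable Python sketch of the same greedy rule; the min-heap of machine finish times is our implementation choice, and it is what gives the O(n log n) bound:

```python
import heapq

def task_schedule(tasks):
    """Greedy task scheduling: minimize the number of machines.

    tasks: list of (start, finish) pairs.
    Returns the number of machines needed.
    """
    free_at = []  # min-heap of finish times, one entry per machine in use
    for start, finish in sorted(tasks):  # consider tasks by start time
        if free_at and free_at[0] <= start:
            heapq.heapreplace(free_at, finish)  # reuse the earliest-free machine
        else:
            heapq.heappush(free_at, finish)     # open a new machine
    return len(free_at)
```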
The Fractional Knapsack Problem
Given: a set S of n items, with each item i having:
bi, a positive benefit
wi, a positive weight
Goal: choose items with maximum total benefit but with weight at most W. If we are allowed to take fractional amounts, then this is the fractional knapsack problem. In this case, we let xi denote the amount we take of item i, with 0 ≤ xi ≤ wi.
Objective: maximize Σ(i∈S) bi·(xi / wi)
Constraint: Σ(i∈S) xi ≤ W
Example
Given: a set S of n items, with each item i having a positive benefit bi and a positive weight wi.
Goal: choose items with maximum total benefit but with weight at most W (a 10 ml "knapsack").

Items:           1     2     3     4     5
Weight:        4 ml  8 ml  2 ml  6 ml  1 ml
Benefit:        $12   $32   $40   $30   $50
Value ($/ml):     3     4    20     5    50

Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2.
The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio vi = bi / wi).
Run time: O(n log n). Why? Sorting the items by value (or managing them in a heap keyed on value) dominates.
Correctness: suppose there is a better solution. Then there is an item i with higher value than some chosen item j (i.e., vi > vj) but xi < wi and xj > 0. If we substitute an amount min{wi − xi, xj} of item j with item i, we get a better solution, a contradiction. Thus there is no better solution than the greedy one.

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; maximum weight W
  Output: amount xi of each item i to maximize benefit with weight at most W
  for each item i in S do
    xi ← 0
    vi ← bi / wi   {value}
  w ← 0   {total weight}
  while w < W do
    remove item i with highest vi
    xi ← min{wi, W − w}
    w ← w + min{wi, W − w}
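A Python sketch of the algorithm above, assuming items are given as (benefit, weight) pairs:

```python
def fractional_knapsack(items, W):
    """Greedy fractional knapsack.

    items: list of (benefit, weight) pairs.
    W: knapsack capacity.
    Returns (total_benefit, amounts), where amounts[i] is how much of item i to take.
    """
    # Process items by decreasing value (benefit-to-weight ratio).
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    amounts = [0.0] * len(items)
    total, w = 0.0, 0.0
    for i in order:
        if w >= W:
            break
        b, wt = items[i]
        take = min(wt, W - w)       # take as much of this item as fits
        amounts[i] = take
        total += b * (take / wt)    # fractional benefit
        w += take
    return total, amounts

items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]  # (benefit, weight)
print(fractional_knapsack(items, 10)[0])  # 124.0, matching the example above
```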
Huffman Coding
Tree for fixed length coding
Variable length coding
Develop codes in such a way that no codeword is also a prefix of some other codeword. Such codes are called prefix codes.
Tree for variable length coding
Properties of optimal code tree
An optimal code for a file is always represented by a full binary tree, in which every nonleaf node has two children.
The tree for an optimal prefix code has exactly |C| leaves and exactly |C| − 1 internal nodes.
Properties of optimal code tree
For each character c in the alphabet C, let the attribute c.freq denote the frequency of c in the file and let dT(c) denote the depth of c's leaf in the tree. Note that dT(c) is also the length of the codeword for character c. The number of bits required to encode a file is thus
B(T) = Σ(c∈C) c.freq · dT(c)
which we define as the cost of the tree T.
Huffman coding
Creating Huffman tree
[Step-by-step figures: repeatedly merge the two lowest-frequency nodes until a single tree remains.]
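The merging process lends itself to a priority queue. A minimal heap-based sketch in Python (the tree representation and function name are our own):

```python
import heapq

def huffman_codes(freq):
    """Build an optimal prefix code from a {symbol: frequency} map.

    Returns a {symbol: codeword} map.
    """
    # Heap entries: (frequency, tiebreaker, tree). A tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Greedy choice: merge the two lowest-frequency trees.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # internal node: recurse on children
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: assign the codeword
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

For the CLRS example frequencies {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}, the resulting codeword lengths come out to 1, 3, 3, 3, 4 and 4 bits respectively.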
Making Change
Problem: a dollar amount to reach and a collection of coin denominations to use to get there.
Configuration: a dollar amount yet to return to a customer plus the coins already returned.
Objective function: minimize the number of coins returned.
Greedy solution: always return the largest coin you can.
Example 1: coins are valued $.32, $.08, $.01. This system has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08 but under $.32).
Example 2: coins are valued $.30, $.20, $.05, $.01. This system does not have the greedy-choice property, since $.40 is best made with two $.20 coins, but the greedy solution picks three: $.30, $.05 and $.05.
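A few lines of Python make the failure concrete (a sketch; amounts are in cents):

```python
def greedy_change(amount_cents, coins):
    """Return the coins chosen by the largest-coin-first greedy rule."""
    result = []
    for c in sorted(coins, reverse=True):
        while amount_cents >= c:
            amount_cents -= c
            result.append(c)
    return result

# Example 2 from the slide: greedy is suboptimal for 40 cents.
print(greedy_change(40, [30, 20, 5, 1]))  # [30, 5, 5] -- three coins
# The optimal answer uses two coins: [20, 20].
```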
Dynamic Programming
When developing a dynamic-programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Matrix Multiplication
Matrix-chain multiplication
Given a sequence (chain) <A1, A2, …, An> of n matrices, compute the product A1·A2···An while minimizing the number of scalar multiplications. Matrix multiplication is associative, so every parenthesization gives the same product, but not the same cost. Example: A1 is 100×100, A2 is 100×5, A3 is 5×50.
A1·(A2·A3) needs 100·5·50 + 100·100·50 = 525,000 multiplications.
(A1·A2)·A3 needs 100·100·5 + 100·5·50 = 75,000 multiplications.
Matrix Chain Multiplication
Assume that matrix Ai has dimensions pi−1 × pi for i = 1, 2, …, n. Let m[i,j] be the minimum number of scalar multiplications needed to compute the matrix Ai·Ai+1···Aj. The recursive formulation of the optimum solution is:
m[i,j] = 0 if i = j
m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + pi−1·pk·pj } if i < j
Matrix Chain Multiplication
Worked example with dimension sequence p = (30, 35, 15, 5, 10, 20, 25), so A1 is 30×35, A2 is 35×15, A3 is 15×5, A4 is 5×10, A5 is 10×20 and A6 is 20×25.
Sample table entries:
m[1,2] = 30·35·15 = 15750
m[2,4] = min{ m[2,3] + 35·5·10 = 2625 + 1750 = 4375, m[3,4] + 35·15·10 = 750 + 5250 = 6000 } = 4375
The final entry m[1,6] is the minimum over the five split points k:
k = 1: m[2,6] + 30·35·25 = 10500 + 26250 = 36750
k = 2: m[1,2] + m[3,6] + 30·15·25 = 15750 + 5375 + 11250 = 32375
k = 3: m[1,3] + m[4,6] + 30·5·25 = 7875 + 3500 + 3750 = 15125 (minimum)
k = 4: m[1,4] + m[5,6] + 30·10·25 = 9375 + 5000 + 7500 = 21875
k = 5: m[1,5] + 30·20·25 = 11875 + 15000 = 26875
So m[1,6] = 15125, achieved by the parenthesization ((A1(A2A3))((A4A5)A6)).
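The computation above is what a bottom-up DP tabulates. A Python sketch in the spirit of CLRS's MATRIX-CHAIN-ORDER:

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain DP.

    p: dimension sequence; matrix Ai is p[i-1] x p[i].
    Returns (m, s): m[i][j] is the minimum scalar-multiplication count
    for Ai..Aj, and s[i][j] records the optimal split point k.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # try every split Ai..Ak | Ak+1..Aj
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])  # 15125
```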
Memoized Matrix Chain
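A sketch of the top-down variant, here using Python's functools.lru_cache as the memo table (CLRS's MEMOIZED-MATRIX-CHAIN maintains the table explicitly):

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    """Top-down matrix-chain DP with memoization.

    p: dimension sequence; matrix Ai is p[i-1] x p[i].
    """
    @lru_cache(maxsize=None)
    def lookup(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplications
        # Try every split point; memoization makes each (i, j) pair
        # be solved only once.
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return lookup(1, len(p) - 1)

print(memoized_matrix_chain((30, 35, 15, 5, 10, 20, 25)))  # 15125
```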
Longest Common Subsequence
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCCGAA
S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
Longest Common Subsequence
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCCGAA
S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
S3 = GTCGTCGGAAGCCGGCCGAA (a longest common subsequence of S1 and S2)
LC Subsequence
A subsequence of a given sequence is just the given sequence with zero or more elements left out. Given two sequences X and Y, a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.
Theorem – Optimal substructure of an LCS
Let X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> be sequences, and let Z = <z1, z2, …, zk> be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk−1 is an LCS of Xm−1 and Yn−1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm−1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn−1.
LCS – A recursive solution
Let c[i,j] be the length of an LCS of the prefixes Xi and Yj. Then
c[i,j] = 0 if i = 0 or j = 0
c[i,j] = c[i−1, j−1] + 1 if i, j > 0 and xi = yj
c[i,j] = max( c[i, j−1], c[i−1, j] ) if i, j > 0 and xi ≠ yj
LCS Algorithm – Page 1
LCS Algorithm – Page 2
Print LCS
Example X = <A, B, C, B, D, A, B>, Y = <B, D, C, A, B, A>. An LCS is <B, C, B, A>, of length 4.
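Combining the recurrence with a table traceback (cf. the Print LCS slide), a Python sketch run on this example (a different tie-break may produce a different, equally long LCS):

```python
def lcs(X, Y):
    """Bottom-up LCS: returns one longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Walk back through the table to reconstruct an LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # BCBA (length 4)
```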
Longest Common Substring
Let M[i,j] be the length of the longest common suffix of a1 … ai and b1 … bj, i.e., the number of consecutive matching letters ending at ai and bj. Then M[i,j] = M[i−1,j−1] + 1 if ai = bj, and 0 otherwise; the length of the longest common substring is the largest entry in M.
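A Python sketch of this recurrence (the example strings are our own):

```python
def longest_common_substring(a, b):
    """DP over M[i][j] = length of the longest common suffix of a[:i] and b[:j]."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best, end = 0, 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                m[i][j] = m[i - 1][j - 1] + 1  # extend the common suffix
                if m[i][j] > best:
                    best, end = m[i][j], i     # remember where the best ends in a
            # else m[i][j] stays 0: the common suffix is broken
    return a[end - best:end]

print(longest_common_substring("ABABC", "BABCA"))  # BABC
```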
0-1 Knapsack
Example: take W = 10 and

Item  Weight  Value
1     6       $30
2     3       $14
3     4       $16
4     2       $9
0-1 Knapsack There are two versions of this problem.
If there are unlimited quantities of each item available, the optimal choice is to pick item 1 and two of item 4 (total: $48). On the other hand, if there is one of each item, then the optimal knapsack contains items 1 and 3 (total: $46).
Knapsack with repetition
We can shrink the original problem in two ways: we can either look at smaller knapsack capacities w ≤ W, or we can look at fewer items (for instance, items 1, 2, …, j, for j ≤ n). Define K(w) = maximum value achievable with a knapsack of capacity w.
Knapsack with repetition
Can we express this in terms of smaller subproblems? If the optimal solution to K(w) includes item i, then removing this item from the knapsack leaves an optimal solution to K(w − wi). In other words, K(w) is simply K(w − wi) + vi for some i. We don't know which i, so we need to try all possibilities:
K(w) = max over i with wi ≤ w of { K(w − wi) + vi }, with K(0) = 0.
Knapsack with repetition
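A Python sketch of the resulting algorithm, run on the W = 10 example from the 0-1 knapsack slide:

```python
def knapsack_with_repetition(W, items):
    """Unbounded knapsack DP: K(w) = max over i of K(w - w_i) + v_i.

    items: list of (weight, value) pairs; W: capacity.
    """
    K = [0] * (W + 1)
    for w in range(1, W + 1):
        K[w] = max([K[w - wi] + vi for wi, vi in items if wi <= w],
                   default=0)
    return K[W]

# Items 1-4 from the example: one of item 1 plus two of item 4 gives $48.
print(knapsack_with_repetition(10, [(6, 30), (3, 14), (4, 16), (2, 9)]))  # 48
```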
Knapsack without repetition
K(w, j) = maximum value achievable using a knapsack of capacity w and items 1, …, j. Either item j is needed to achieve this value, or it isn't:
K(w, j) = max{ K(w − wj, j − 1) + vj, K(w, j − 1) }, with K(w, 0) = 0.
Knapsack without repetition
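A Python sketch of the corresponding two-dimensional DP, on the same example:

```python
def knapsack_without_repetition(W, items):
    """0-1 knapsack DP: K(w, j) = max(K(w - w_j, j-1) + v_j, K(w, j-1)).

    items: list of (weight, value) pairs; W: capacity.
    """
    n = len(items)
    # K[w][j] for capacities 0..W and item prefixes 0..n
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for j in range(1, n + 1):
        wj, vj = items[j - 1]
        for w in range(W + 1):
            K[w][j] = K[w][j - 1]                        # skip item j
            if wj <= w:                                  # or take it once
                K[w][j] = max(K[w][j], K[w - wj][j - 1] + vj)
    return K[W][n]

# One of each item: the best knapsack holds items 1 and 3, worth $46.
print(knapsack_without_repetition(10, [(6, 30), (3, 14), (4, 16), (2, 9)]))  # 46
```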