
2 1 Algorithmic Paradigms Jeff Edmonds York University COSC 2011 Lecture 9 Brute Force: Optimization Problems Greedy Algorithm: Minimal Spanning Tree Dual Hill Climbing: Max Flow / Min Cut Linear Programming: Hotdogs Recursive Back Tracking: Bellman-Ford Dynamic Programming: Bellman-Ford NP-Complete Problems

3 2 Optimization Problems Ingredients: Instances: the possible inputs to the problem. Solutions for Instance: each instance has an exponentially large set of solutions. Cost of Solution: each solution has an easy-to-compute cost or value. Specification: Precondition: the input is one instance. Postcondition: output a valid solution with optimal (minimum or maximum) cost.

4 3 The Brute Force Algorithm: Try every solution! Exponential time, because there are exponentially many solutions.
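
The brute-force idea in code, as a minimal sketch (not from the slides): `objects`, `is_valid`, and `value` are hypothetical problem-specific pieces; the only point is that every one of the exponentially many candidate solutions gets tried.

```python
from itertools import combinations

def brute_force_best(objects, m, is_valid, value):
    """Try every size-m subset (exponentially many) and keep the best valid one."""
    best = None
    for candidate in combinations(objects, m):
        if is_valid(candidate) and (best is None or value(candidate) > value(best)):
            best = candidate
    return best

# Hypothetical usage: pick the 2 best prizes, where prizes starting with the
# same letter conflict with each other.
prizes = {"lion": 10, "leopard": 9, "eagle": 7, "bear": 5}
no_conflict = lambda S: len({p[0] for p in S}) == len(S)
print(brute_force_best(prizes, 2, no_conflict, lambda S: sum(prizes[p] for p in S)))
# -> ('lion', 'eagle')
```

Trying all subsets is exponential in general, which is exactly why the rest of the lecture looks for something smarter.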

5 4 Greedy Algorithms Every two-year-old knows the greedy algorithm: in order to get what you want, just start grabbing what looks best. Surprisingly, many important and practical computational problems can be solved this way.

6 5 Greedy Algorithms Instances: a set of objects and a relationship between them. Solutions for Instance: a subset of the objects (or some other choice about each object). Some subsets are not allowed because some objects conflict.

7 6 Greedy Algorithms Instances: a set of objects and a relationship between them. Solutions for Instance: a subset of the objects (or some other choice about each object). Cost of Solution: the number of objects in the solution, or the sum of the costs of the objects.

8 7 Instances: A set of objects and a relationship between them. Goal: Find an optimal non-conflicting solution. Greedy Algorithms

9 8 Greedy Algorithms Commit to the object that looks the “best.” Must prove that this locally greedy choice does not have negative global consequences.

10 9 Problem: Choose the best m prizes. Greedy Algorithms

11 10 Problem: Choose the best m prizes. Greedy: Start by grabbing the best. Consequences: If you take the lion, you can't take the elephant. But greedy algorithms do not try to predict the future and do not back track. Greedy Algorithms

12 11 Greedy Algorithms Problem: Choose the best m prizes. Iterative Greedy Algorithm: Loop, grabbing the best, then the second best, … If the next best object conflicts with the committed objects or fulfills no new requirements, reject it; else commit to it.
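
The same loop as a minimal Python sketch (an illustration, not the course's code); `value` and `conflicts` are assumed problem-specific helpers:

```python
def iterative_greedy(objects, value, conflicts, m):
    """Grab objects from best to worst; reject one only if it conflicts with
    what has already been committed to; never backtrack."""
    committed = []
    for obj in sorted(objects, key=value, reverse=True):  # best first
        if conflicts(obj, committed):
            continue                 # reject this next-best object
        committed.append(obj)        # commit to it, permanently
        if len(committed) == m:      # we only want the best m prizes
            break
    return committed
```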

13 12 Greedy Algorithms Problem: Choose the best m prizes. Recursive Greedy Algorithm: Makes a greedy first choice and then recurses. (See Recursive Backtracking Algorithms.)

14 13 Greedy Algorithms Loop Invariant: We have not gone wrong. There is at least one optimal solution S_t that extends the choices A_t made so far. Take the lion because it looks best. Consequences: if you take the lion, you can't take the elephant. Maybe some optimal solutions do not contain the lion, but at least one does.

15 14 Minimal Spanning Tree

16 15 Minimal Spanning Tree Instance: An undirected graph with weights on the edges. (Figure: an example weighted graph on nodes s, a, b, c, d, f, g, h, i, j, k.)

17 16 Minimal Spanning Tree (Figure: the same example graph.) Instance: An undirected graph with weights on the edges. Solution: A subset of the edges that forms a tree (no cycles, not rooted) and spans the graph (nodes that were connected are still connected). Cost: Sum of the edge weights. Goal: Find a Minimal Spanning Tree.

18 17 Minimal Spanning Tree (Figure: the example graph with the greedily chosen edges highlighted.) Instance: An undirected graph with weights on the edges. Solution: A subset of the edges that forms a tree (no cycles) and spans the graph (connected nodes stay connected). Cost: Sum of the edge weights. Goal: Find a Minimal Spanning Tree. Greedy Alg: Commit to the edge that looks the “best.” Some edges can't be added because they would create a cycle. Must prove that the result is acyclic, spanning, and optimal. Done.

19 18 Sort the E edges: O(E log(E)). Loop over them: O(E). Time = O(E log(E)).

20 19 Minimal Spanning Tree How does the algorithm detect a cycle? (Figure: two candidate edges on the example graph, one creating no cycle and one creating a cycle.)

21 20 Minimal Spanning Tree Cycle detection algorithm: Keep track of the sets of nodes in the connected components formed so far. If both endpoints of an edge are within one component, then adding it creates a cycle. If the edge bridges two components, then no cycle is created; merge the two components.

22 21 Minimal Spanning Tree Cycle detection algorithm: Keep track of the sets of nodes in the connected components. If both endpoints of the next edge are in one component, it can't be added because it would create a cycle. If the edge bridges two components, no cycle is created; add the edge and merge the components.

23 22 Minimal Spanning Tree

24 23 Minimal Spanning Tree Algorithm for keeping track of components: the Union-Find data structure. Average time per operation = Ackermann's inverse α(E) ≤ 4.
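
Putting the last few slides together as a hedged Python sketch (one standard way to realize them, not necessarily the course's code): the edges are sorted once, and a union-find structure with path compression and union by rank does the cycle detection. The (weight, u, v) edge-list representation is an assumption.

```python
def kruskal_mst(n, edges):
    """Greedy MST: commit to the cheapest edge that does not create a cycle.
    n: number of nodes, labelled 0..n-1 (assumed).
    edges: list of (weight, u, v) tuples (assumed representation)."""
    parent = list(range(n))          # each node starts in its own component
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False             # same component: the edge would close a cycle
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx              # bridges two components: merge them
        if rank[rx] == rank[ry]:
            rank[rx] += 1
        return True

    mst = []
    for w, u, v in sorted(edges):    # O(E log E): sorting dominates the running time
        if union(u, v):              # no cycle, so the greedy algorithm commits
            mst.append((u, v, w))
    return mst
```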

25 24 Adaptive Greedy (Figure: the example graph explored outward from s.) Another application (e.g. the web): suppose we don't know the edges until we find them by searching from s. Another Greedy Alg: Expand out from s, always committing to the best edge connected to the current component. Some edges can't be added because they would create a cycle; some edges are never even found. Is this a greedy algorithm? (The priorities on the edges keep changing.)

26 25 Adaptive Greedy Fixed Priority: Sort the objects from best to worst and loop through them. Adaptive Priority: – The greedy criterion depends on which objects have been committed to so far. – At each step, the next “best” object is chosen according to the current greedy criterion. – Searching or re-sorting at every step takes too much time. – Use a priority queue.
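
The adaptive-greedy MST of the previous slide (expand out from s), sketched with a binary heap standing in for the priority queue; the adjacency-list format adj[u] = [(weight, v), ...] is an assumption, not from the slides.

```python
import heapq

def prim_mst(adj, s):
    """Grow one component outward from s, always committing to the cheapest
    edge that leaves the component; the heap reorders edges as they are found."""
    in_tree = {s}
    frontier = list(adj[s])              # (weight, v) edges leaving the component
    heapq.heapify(frontier)
    mst = []
    while frontier:
        w, v = heapq.heappop(frontier)   # current "best" edge
        if v in in_tree:
            continue                     # both ends already inside: would make a cycle
        in_tree.add(v)
        mst.append((w, v))
        for weight, nxt in adj[v]:       # newly discovered edges join the queue
            if nxt not in in_tree:
                heapq.heappush(frontier, (weight, nxt))
    return mst
```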

27 26 Adaptive Greedy

28 27 Adaptive Greedy

29 28 Adaptive Greedy Dijkstra's shortest-weighted-path algorithm can be considered a greedy algorithm with an adaptive priority criterion.
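
A hedged sketch of Dijkstra in the same style as the Prim sketch above; the only change is the adaptive priority, which is now the total distance from s rather than a single edge weight. Again adj[u] = [(weight, v), ...] is an assumed representation, with non-negative weights so the greedy commitment is safe.

```python
import heapq

def dijkstra(adj, s):
    """Greedily commit to the unreached node that is currently cheapest to
    reach from s; priorities change as shorter routes are discovered."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry; u already settled cheaper
        for w, v in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # adaptive priority: depends on choices so far
                heapq.heappush(heap, (d + w, v))
    return dist
```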

30 29 Network Flow Instance: A network is a directed graph G. Edges represent pipes that carry flow. Each edge ⟨u,v⟩ has a maximum capacity c⟨u,v⟩. A source node s out of which flow leaves. A sink node t into which flow arrives. Goal: Max Flow.

31 30 Network Flow Instance: A network is a directed graph G. Edges represent pipes that carry flow. Each edge ⟨u,v⟩ has a maximum capacity c⟨u,v⟩. A source node s out of which flow leaves. A sink node t into which flow arrives.

32 31 Network Flow Solution: The amount of flow F⟨u,v⟩ through each edge. Flow can't exceed capacity: F⟨u,v⟩ ≤ c⟨u,v⟩. Unidirectional flow. No leaks, no extra flow.

33 32 Network Flow Solution: The amount of flow F⟨u,v⟩ through each edge. Flow can't exceed capacity: F⟨u,v⟩ ≤ c⟨u,v⟩. Unidirectional flow. No leaks, no extra flow: for each node v (except s and t), flow in = flow out, i.e. Σ_u F⟨u,v⟩ = Σ_w F⟨v,w⟩.

34 33 Network Flow Value of Solution: The flow from s into the network, rate(F) = Σ_u F⟨s,u⟩. Goal: Max Flow.

35 34 An Application: Matching (Figure: a bipartite graph of Sam, Bob, John, Fred and Mary, Beth, Sue, Ann showing who loves whom.) Who should be matched with whom so that as many as possible are matched and nobody is matched twice? A first attempt gives 3 matches. Can we do better? Yes: 4 matches.

36 35 An Application: Matching Add a source s with an edge to each boy u and a sink t with an edge from each girl v, every edge with capacity c = 1. Total flow out of u = flow into u ≤ 1, so boy u is matched to at most one girl. Total flow into v = flow out of v ≤ 1, so girl v is matched to at most one boy.

37 36 Min Cut Instance: A network is a directed graph G with special nodes s and t. Edges represent pipes that carry flow. Each edge ⟨u,v⟩ has a maximum capacity c⟨u,v⟩. (Figure: an example network.)

38 37 Min Cut Instance: A network is a directed graph G with special nodes s and t. Edges represent pipes that carry flow. Each edge ⟨u,v⟩ has a maximum capacity c⟨u,v⟩.

39 38 Min Cut Solution: A cut C = ⟨U,V⟩, a partition of the nodes with s ∈ U and t ∈ V. (Figure: U = Canada containing York, V = USA containing UC Berkeley, with s on one side and t on the other.)

40 39 Min Cut Solution: A cut C = ⟨U,V⟩, a partition of the nodes with s ∈ U and t ∈ V.

41 40 Min Cut Solution: A cut C = ⟨U,V⟩, a partition of the nodes with s ∈ U and t ∈ V.

42 41 Min Cut Value of Solution C = ⟨U,V⟩: cap(C) = how much can flow from U to V = Σ_{u ∈ U, v ∈ V} c⟨u,v⟩. Goal: Min Cut.

43 42 Primal-Dual Hill Climbing Hill climbing: initially have the “zero flow,” a valid solution (not necessarily optimal). Measure of progress: the value of our solution. Take a step that goes up: make small local changes to your solution to construct a slightly better solution. Problems: Exit: what if we can't take a step that goes up, but are only at a local max, not the global max? Can our Network Flow algorithm get stuck in a local maximum? Running time: if you take small steps, it could take exponential time.

44 43 Primal-Dual Hill Climbing Prove: from every location L where the alg might stand, either the alg takes a step up, or the alg gives a reason that explains why not, by giving a ceiling of equal height. I.e. ∀L [∃L′ with height(L′) > height(L), or ∃R with height(R) = height(L)]. But ∀R ∀L, height(R) ≥ height(L). Hence there is no gap.

45 44 Primal-Dual Hill Climbing No gap: the location L where the alg stops is its witness that height(L_max) is no smaller than height(L); the ceiling R it produces is its witness that height(L_max) is no bigger. Prove: from every location L, either the alg takes a step up, or it gives a ceiling of equal height: ∀L [∃L′ with height(L′) > height(L), or ∃R with height(R) = height(L)].

46 45 Primal-Dual Hill Climbing No gap: the flow the alg finds is its witness that the network has a flow of this value; the cut it finds is its witness that the network has no bigger flow. Prove: from every location L, either the alg takes a step up, or it gives a ceiling of equal height: ∀L [∃L′ with height(L′) > height(L), or ∃R with height(R) = height(L)].
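
A hedged sketch of the primal-dual hill climbing for network flow (an Edmonds-Karp-style version chosen for brevity, not necessarily the exact algorithm the course develops). Each "step up" is an augmenting path found by BFS in the residual graph; when no step exists, the nodes reachable from s form the side U of a cut whose capacity equals the flow, the ceiling that certifies there is no gap. The capacity-matrix representation cap[u][v] is an assumption.

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Returns (max flow value, U side of a matching min cut).
    cap[u][v]: capacity matrix over nodes 0..n-1, 0 where there is no edge (assumed)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # Look for a step up: a BFS augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            # No step up exists: the reachable nodes form the min cut's U side.
            U = {v for v in range(n) if parent[v] != -1}
            return total, U
        # Push as much flow as the path's bottleneck residual capacity allows.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck     # flow forward along the path
            flow[v][u] -= bottleneck     # record the ability to undo it later
            v = u
        total += bottleneck
```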

47 46 Linear Programming A combination of pork, grain, and sawdust, …

48 47 Linear Programming Constraints: amount of moisture, amount of protein, …

49 48 Linear Programming Given today's prices, what is a fast algorithm to find the cheapest hotdog?

50 49 Linear Programming (Abstract Out Essentials) Ingredients: pork, grain, water, sawdust. Cost per unit: 29, 8, 1, 2. Amount to add: x1, x2, x3, x4. Cost of Hotdog: 29x1 + 8x2 + 1x3 + 2x4. Constraints (moisture, protein, …): 3x1 + 4x2 - 7x3 + 8x4 ≤ 12; 2x1 - 8x2 + 4x3 - 3x4 ≤ 24; -8x1 + 2x2 - 3x3 - 9x4 ≤ 8; x1 + 2x2 + 9x3 - 3x4 ≤ 31.

51 50 Linear Programming (Abstract Out Essentials) Minimize: 29x1 + 8x2 + 1x3 + 2x4. Subject to: 3x1 + 4x2 - 7x3 + 8x4 ≤ 12; 2x1 - 8x2 + 4x3 - 3x4 ≤ 24; -8x1 + 2x2 - 3x3 - 9x4 ≤ 8; x1 + 2x2 + 9x3 - 3x4 ≤ 31.
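
How the abstracted hotdog program above could be handed to an off-the-shelf LP solver, as a sketch assuming SciPy is available (its linprog routine is not part of the course material). With these toy numbers every constraint is a ≤ and all costs are positive, so the optimum is trivially x = 0; the point is only the mechanics of cost vector plus constraint matrix.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([29, 8, 1, 2])              # cost per unit of pork, grain, water, sawdust

A_ub = np.array([[ 3,  4, -7,  8],       # moisture, protein, ... constraint rows
                 [ 2, -8,  4, -3],
                 [-8,  2, -3, -9],
                 [ 1,  2,  9, -3]])
b_ub = np.array([12, 24, 8, 31])

# Minimize c.x subject to A_ub @ x <= b_ub and x >= 0 (linprog's default bounds).
result = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(result.x, result.fun)
```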

52 51 Linear Programming For decades people thought that there was no fast algorithm. Then one was found! Theoretical Computer Science finds new algorithms every day. Minimize: 29x1 + 8x2 + 1x3 + 2x4. Subject to: 3x1 + 4x2 - 7x3 + 8x4 ≥ 12; 2x1 - 8x2 + 4x3 - 3x4 ≥ 24; -8x1 + 2x2 - 3x3 - 9x4 ≥ 8; x1 + 2x2 + 9x3 - 3x4 ≥ 31.

53 52 Linear Programming Given an instance of Network Flow, express it as a Linear Program. The variables: the flows F⟨u,v⟩ for each edge. Maximize: rate(F) = Σ_u F⟨s,u⟩ - Σ_v F⟨v,s⟩. Subject to: ∀⟨u,v⟩: F⟨u,v⟩ ≤ c⟨u,v⟩ (flow can't exceed capacity); ∀v: Σ_u F⟨u,v⟩ = Σ_w F⟨v,w⟩ (flow in = flow out).

54 53

55 54 Primal Dual

56 55 Recursive Back Tracking: Bellman-Ford Consider your instance I. Ask a little question (to the little bird) about its optimal solution. Try all answers k. Knowing answer k about the solution restricts your instance to a subinstance subI. Ask your recursive friend for an optimal solution subsol for it. Construct a solution optS = subsol + k for your instance that is the best of those consistent with the bird's answer k. Return the best of these best solutions.

57 56 Recursive Back Tracking: Bellman-Ford Specification: All-Nodes Shortest-Weighted Paths. Precondition: the input is a graph G (directed or undirected) with edge weights (possibly negative). Postcondition: for each pair ⟨u,v⟩, find a shortest path from u to v with at most l edges, for an integer l; store the answers in a matrix Dist[u,v]. (Figure: an example weighted graph with u-to-v paths of l = 3 and l = 4 edges.) For a recursive algorithm, we must give our friend a smaller subinstance. How can this instance be made smaller? Remove a node? An edge?

58 57 Recursive Back Tracking: Bellman-Ford (Figure: the example graph with a u-to-v path of l = 4 edges passing through a middle node k.) Consider your instance I = ⟨u,v,l⟩. Ask a little question (to the little bird) about its optimal solution: “What node is in the middle of the path?” She answers node k. I ask one friend subI = ⟨u,k,l/2⟩ and another friend subI = ⟨k,v,l/2⟩. optS = subsol⟨u,k,l/2⟩ + k + subsol⟨k,v,l/2⟩ is the best solution for I consistent with the bird's answer k. Try all k and return the best of these best solutions.
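
The bird-and-friend recursion as a hedged Python sketch with memoization (so repeated subinstances ⟨u,k,l/2⟩ are solved only once). It returns only the path lengths, l is assumed to be a power of two as in the slides' doubling scheme, and weight[u][v] = float('inf') where there is no edge is an assumed representation.

```python
from functools import lru_cache

def shortest_dist_recursive(weight, l):
    """Dist(u, v, l): length of a shortest u-to-v path using at most l edges."""
    n = len(weight)

    @lru_cache(maxsize=None)
    def dist(u, v, edges):
        if edges == 1:                     # smallest instances: one edge (or staying put)
            return 0 if u == v else weight[u][v]
        best = dist(u, v, edges // 2)      # the path may already fit in half the budget
        for k in range(n):                 # try every bird answer: k is the middle node
            best = min(best, dist(u, k, edges // 2) + dist(k, v, edges // 2))
        return best

    return [[dist(u, v, l) for v in range(n)] for u in range(n)]
```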

59 58 Dynamic Programming Algorithm Given an instance I, imagine running the recursive algorithm on it. Determine the complete set of subinstances subI ever given to you, your friends, their friends, … Build a table indexed by these subI. Fill in the table in an order so that nobody has to wait. Here: given the graph G, find Dist[u,v,l] for l = 1, 2, 4, 8, …

60 59 Dynamic Programming Algorithm Loop Invariant: for each ⟨u,v⟩, Dist[u,v,l] = the length of a shortest path from u to v with ≤ l edges.
    for l = 2, 4, 8, 16, …, 2n
        % Find Dist[u,v,l] from Dist[u,v,l/2]
        for all u,v ∈ Vertices
            Dist[u,v,l] = Dist[u,v,l/2]
            for all k ∈ Vertices
                Dist[u,v,l] = min( Dist[u,v,l], Dist[u,k,l/2] + Dist[k,v,l/2] )

61 60 Dynamic Programming Algorithm Loop Invariant: for each ⟨u,v⟩, Dist[u,v,l] = the length of a shortest path from u to v with ≤ l edges. When l = 1: e.g. Dist[b,c,1] = 10 if ⟨b,c⟩ is an edge of weight 10, and Dist[u,v,1] = ∞ if there is no edge ⟨u,v⟩.
    % Smallest Instances
    for all u,v ∈ Vertices
        if ⟨u,v⟩ ∈ Edges
            Dist[u,v,1] = weight[u,v]
        else
            Dist[u,v,1] = ∞
    Dist[u,u,1] = 0 (sometimes useful)

62 61 Dynamic Programming Algorithm Loop Invariant: for each ⟨u,v⟩, Dist[u,v,l] = the length of a shortest path from u to v with ≤ l edges. When to exit? A simple path never uses a node more than once and so has no more than n-1 edges.
    for all u,v ∈ Vertices
        Dist[u,v] = Dist[u,v,n]

63 62 Dynamic Programming Algorithm Dealing with negative cycles. (Figure: a graph with a u-to-v path of weight 25 + 3 + 2 and a cycle of total weight 3 + 1 - 5 = -1 along the way.)
Dist[u,v,2] = ∞
Dist[u,v,4] = 25+3+2 = 30
Dist[u,v,8] = 25+(3+1-5)+3+2 = 29
Dist[u,v,9] = 25+(3+1-5)×2+3+2 = 28
Dist[u,v,12] = 25+(3+1-5)×3+3+2 = 27
Dist[u,v,303] = 25+(3+1-5)×300+3+2 = 30-300 = -270
Dist[u,v,∞] = 25+(3+1-5)×∞+3+2 = 30-∞ = -∞
There is a negative cycle affecting ⟨u,v⟩ if Dist[u,v,2n] < Dist[u,v,n].
    % Check for negative cycles
    for all u,v ∈ Vertices
        if( Dist[u,v,2n] < Dist[u,v,n] )
            Dist[u,v] = ∞

64 63 Dynamic Programming Algorithm
    Algorithm BellmanFord(G)
        % Smallest Instances
        for all u,v ∈ Vertices
            if ⟨u,v⟩ ∈ Edges
                Dist[u,v,1] = weight[u,v]
            else
                Dist[u,v,1] = ∞
        for l = 2, 4, 8, 16, …, 2n
            % Find Dist[u,v,l] from Dist[u,v,l/2]
            for all u,v ∈ Vertices
                Dist[u,v,l] = Dist[u,v,l/2]
                for all k ∈ Vertices
                    Dist[u,v,l] = min( Dist[u,v,l], Dist[u,k,l/2] + Dist[k,v,l/2] )
        % Check for negative cycles
        for all u,v ∈ Vertices
            if( Dist[u,v,2n] == Dist[u,v,n] )
                Dist[u,v] = Dist[u,v,n]
            else
                Dist[u,v] = ∞
Don't actually need to keep old and new values. Time = O(n³ log n).

65 64 Dynamic Programming Algorithm
    Algorithm BellmanFord(G)
        % Smallest Instances
        for all u,v ∈ Vertices
            if ⟨u,v⟩ ∈ Edges
                Dist[u,v] = weight[u,v]
            else
                Dist[u,v] = ∞
        for l = 2, 4, 8, 16, …, 2n
            % Find Dist[u,v,l] from Dist[u,v,l/2]
            for all u,v ∈ Vertices
                for all k ∈ Vertices
                    Dist[u,v] = min( Dist[u,v], Dist[u,k] + Dist[k,v] )
        % Check for negative cycles
        for all u,v ∈ Vertices
            if( changed last iteration )
                Dist[u,v] = ∞
Don't actually need to keep old and new values.
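
The whole dynamic program as a hedged Python sketch (the doubling variant the slides describe, returning distances only; not claimed to be the course's exact code). weight[u][v] = float('inf') where there is no edge is an assumed representation; entries affected by a negative cycle are set to infinity, as in the pseudocode above.

```python
INF = float("inf")

def bellman_ford_all_pairs(weight):
    """dist[u][v] = length of a shortest u-to-v path with at most l edges;
    l doubles each round, so O(log n) rounds of an O(n^3) update: O(n^3 log n)."""
    n = len(weight)

    def improve(d):
        """One doubling round: splice two half-length paths at every midpoint k."""
        return [[min(d[u][v], min(d[u][k] + d[k][v] for k in range(n)))
                 for v in range(n)] for u in range(n)]

    # Smallest instances: paths with at most one edge.
    dist = [[0 if u == v else weight[u][v] for v in range(n)] for u in range(n)]
    l = 1
    while l < n:                 # a simple path has at most n-1 edges
        dist = improve(dist)
        l *= 2

    # Negative-cycle check: if allowing still-longer paths improves anything,
    # that pair can be driven down forever; mark it with infinity as on the slide.
    longer = improve(dist)
    for u in range(n):
        for v in range(n):
            if longer[u][v] < dist[u][v]:
                dist[u][v] = INF
    return dist
```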

66 NP-Complete Problems (Figure: a diagram of problem classes Computable, Exp, Poly, with GCD and Matching known to be in Poly and the Halting problem not even computable; photos of Jack Edmonds and Steve Cook.) NP = Non-Deterministic Polynomial Time: may take exponential time to search for a solution, but only polynomial time to verify it given a witness. The Circuit-SAT (SAT) Problem: does a circuit have a satisfying assignment?

67 NP-Complete Problems Industry would love a free lunch: Given a description of a good plane, automatically find one. Given a circuit, find a satisfying assignment. Given a graph, find a bichromatic coloring. Given course descriptions, find a schedule.

68 Find the biggest clique, i.e. a subset of nodes that are all pairwise connected.

69 NP-Complete Problems Find the LONGEST simple s-t path.

70 NP-Complete Problems Find a partition of the nodes into two sets with the most edges between them.

71 NP-Complete Problems Colour each node. Nodes with lines between them must have different colours. Use the fewest number of colours.

72 NP-Complete Problems Try all possible colourings? Too many to try: a 50-node graph has more colourings than the number of atoms.
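
What "try all possible colourings" looks like in code, as a sketch (node labels 0..n-1 and an edge list are assumptions). With k colours there are k^n assignments to check, which is why even 50 nodes are already hopeless.

```python
from itertools import product

def brute_force_colouring(n, edges, k):
    """Return some proper k-colouring of nodes 0..n-1, or None if none exists."""
    for colouring in product(range(k), repeat=n):            # k**n candidates
        if all(colouring[u] != colouring[v] for u, v in edges):
            return colouring                                  # adjacent nodes differ
    return None

# Hypothetical usage: a triangle cannot be 2-coloured but can be 3-coloured.
triangle = [(0, 1), (1, 2), (0, 2)]
print(brute_force_colouring(3, triangle, 2))   # None
print(brute_force_colouring(3, triangle, 3))   # (0, 1, 2)
```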

73 Is there a fast algorithm? Most people think not. We have not been able to prove that there is not. It is one of the biggest open problems in the field. NP-Complete Problems

74 73 End

