1 Algorithms (2ILC0) 2nd Quartile, 2016-2017
canvas.tue.nl/courses/406
Lecturer: Bart Jansen (MF 4.099)

2 Organization of the course
▪ similar to Datastructures
▪ 6 homework sets, due Thursdays 23:59 (first one due 24-11)
▪ 2 tutorials/week (for help in solving homework + discussing solutions)
  - register for a tutorial group on Canvas (tomorrow)
  - go to the instructions of your own group, to balance the load
▪ course administration & assignment grading on Canvas (a trial run; we might see some glitches)
▪ video lectures
▪ course registration via OASE is mandatory
▪ literature: same as Datastructures: T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein: Introduction to Algorithms (3rd edition)

3 What is this course about?
▪ learn general techniques for developing algorithms
▪ wanted: algorithms that are provably correct and provably efficient
▪ efficient: running time is polynomial in the input size, say O(n^4)
▪ unfortunately, this is not always possible!
▪ we also study methods to show that a problem (probably) cannot be solved by an algorithm that is always correct and fast: NP-completeness

4 Contents
Part I: Techniques for optimization
▪ backtracking + exercises
▪ greedy algorithms + exercises
▪ dynamic programming I + exercises
▪ dynamic programming II + exercises
Part II: Graph algorithms
▪ shortest paths I + exercises
▪ shortest paths II + exercises
▪ guest lecture
▪ maximum flow + exercises
▪ matching + exercises
Part III: Selected topics
▪ NP-hardness I + exercises
▪ NP-hardness II + exercises
▪ algorithmic puzzles
Homework sets 1-6 run alongside the lectures.

5 Planning

6 Grading scheme
▪ six sets of homework exercises, each scored 0-10 points
▪ your homework score is calculated as follows: take the sum of the 6 homework scores, subtract the lowest score among sets 1-5, and divide the result by 5
  - the worst-scoring homework of the first 5 sets does not count
  - there are no further possibilities to increase your homework grade
▪ on the exam you can score up to 10 points
▪ the final grade is determined as follows:
  - exam ≥ 5.0: final grade is the average of the homework and exam grades
  - exam < 5.0: final grade is min(5.0, average of homework and exam grades)
  - so if your exam grade is below 5.0, you fail the course in any case
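
A minimal Python sketch of this arithmetic (the function and its interface are illustrative only, not part of the course materials):

def final_grade(homework_scores, exam):
    """Grading-scheme sketch: homework_scores = list of 6 scores (0-10),
    exam = exam score (0-10)."""
    assert len(homework_scores) == 6
    # Drop the worst of the first five sets, then divide by 5.
    hw = (sum(homework_scores) - min(homework_scores[:5])) / 5
    avg = (hw + exam) / 2
    # An exam grade below 5.0 caps the final grade at 5.0, i.e. a fail.
    return avg if exam >= 5.0 else min(5.0, avg)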

7 Some requests from me to you
▪ let me know when something can be improved: microphone levels, website, grading, slides, etc.
▪ feel free to interrupt me at any time ("Bart!")
▪ do not disturb fellow students during the lecture
▪ nod in agreement if things make sense; frown if they do not
▪ talk to me before or after the lecture if you have feedback or want some more information
▪ try to participate actively and interact during the lectures
▪ let me know when my hand gestures show the timeline right-to-left (mirrored from your point of view)

8 The Traveling Salesman Problem
xkcd.com/399/

9 Part I: Techniques for optimization

10 Optimization problems
▪ for each instance there are (possibly) multiple valid solutions
▪ the goal is to find an optimal solution
▪ minimization problem: associate a cost to every solution, find a min-cost solution
▪ maximization problem: associate a profit to every solution, find a max-profit solution

11 Optimization problems: examples
Traveling Salesman Problem
▪ input = set of n cities with distances between them
▪ valid solution = tour visiting all cities
▪ cost = length of tour
[figure: distance matrix for cities A-F and an example tour]

12 Optimization problems: examples
Traveling Salesman Problem
▪ input = set of n cities with distances between them
▪ valid solution = tour visiting all cities
▪ cost = length of tour
Knapsack
▪ input = n items, each with a weight and a profit, and a value W
▪ valid solution = subset of items whose total weight is ≤ W
▪ profit = total profit of all items in the subset
[table: six items with weights and profits, W = 18; only partially recoverable]
Example solutions:
▪ items 1, 2, 6: weight 18, profit 18
▪ items 2, 5: weight 18, profit 21
▪ etcetera
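
Small Knapsack instances like this one can be checked by exhaustive search. A hedged Python sketch (the function name is made up; since the item table above is only partially recoverable, supply your own weights and profits):

from itertools import combinations

def knapsack_brute_force(weights, profits, W):
    """Try every subset of items and keep the most profitable one
    that fits; O(2^n) time, fine for a handful of items."""
    n = len(weights)
    best_profit, best_subset = 0, ()
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= W:
                p = sum(profits[i] for i in subset)
                if p > best_profit:
                    best_profit, best_subset = p, subset
    return best_profit, best_subset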

13 Optimization problems: examples
Traveling Salesman Problem
▪ input = set of n cities with distances between them
▪ valid solution = tour visiting all cities
▪ cost = length of tour
Knapsack
▪ input = n items, each with a weight and a profit, and a value W
▪ valid solution = subset of items whose total weight is ≤ W
▪ profit = total profit of all items in the subset
Linear Programming (here it can even be hard to find any valid solution!)
minimize:   c1 x1 + ∙∙∙ + cn xn
subject to: a1,1 x1 + ∙∙∙ + a1,n xn ≤ b1
            ∙∙∙
            am,1 x1 + ∙∙∙ + am,n xn ≤ bm
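
For illustration, a tiny LP instance solved with SciPy (the numbers are made up; scipy.optimize.linprog minimizes c·x subject to A_ub·x ≤ b_ub, with variables non-negative by default):

from scipy.optimize import linprog

# Maximize x1 + 2*x2 (i.e. minimize its negation) subject to:
#   -x1 +   x2 <= 1
#  3*x1 + 2*x2 <= 12,   x1, x2 >= 0
c = [-1.0, -2.0]
A_ub = [[-1.0, 1.0], [3.0, 2.0]]
b_ub = [1.0, 12.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)  # expect x = [2, 3], fun = -8 (max value 8)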

14 Techniques for optimization
▪ optimization problems typically involve making choices
▪ backtracking: just try all solutions
  - can be applied to almost all problems, but gives very slow algorithms
  - try all options for the first choice; for each option, recursively make the other choices
▪ greedy algorithms: construct the solution iteratively, always making the choice that seems best
  - can be applied to few problems, but gives fast algorithms
  - only try the option that seems best for the first choice (the greedy choice), then recursively make the other choices
▪ dynamic programming: in between, not as fast as greedy, but works for more problems

15 Today: backtracking + how to (slightly?) speed it up
Example 1: Traveling Salesman Problem (TSP)
▪ given: n cities and the (non-negative) distances between them
  - input: matrix Dist[1..n, 1..n], where Dist[i, j] = distance from i to j
▪ goal: find the shortest tour visiting all cities and returning to the starting city
  - output: permutation of {1, …, n} such that visiting the cities in that order gives a min-length tour
▪ start w.l.o.g. in city 1
▪ choices: what is the first city to visit? the second city? … the last city?
[figure: example instance with a tour starting at city 1]

16 Backtracking for TSP
▪ the first city is city 1
▪ try all remaining cities as the next city
▪ for each option for the next city, recursively try all ways to finish the tour
▪ for each recursive call: remember which choices we already made (= the part of the tour we fixed in earlier calls) and which choices we still need to make (= the remaining cities, for which we need to decide the visiting order); these are the parameters of the algorithm
▪ when all choices have been made: compute the length of the tour and compare it to the length of the shortest tour found so far

17 Parameters:
R = sequence of already visited cities (initially: R = city 1)
S = set of remaining cities (initially: S = { 2, …, n })

We want to compute a shortest tour visiting all cities in R ∪ S, under the condition that the tour starts by visiting the cities from R in the given order.

Algorithm TSP_BruteForce1(R, S)
1. if S is empty
2.    then minCost ← length of the tour represented by R          ▹ all choices have been made
3.    else minCost ← ∞
4.         for each city i in S                                   ▹ try all remaining cities as next city
5.             do Remove i from S, and append i to R.             ▹ i is the next city
6.                minCost ← min(minCost, TSP_BruteForce1(R, S))   ▹ recursively compute the best way to make the remaining choices
7.                Reinsert i in S, and remove i from R.           ▹ undo the choice
8. return minCost
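
A direct Python transcription of TSP_BruteForce1 (a sketch: 0-indexed, with R as a list and S as a set; names chosen for readability):

def tsp_brute_force(dist, R, S):
    """Length of the best tour that starts with the cities in R (in
    order) and then visits the cities in S in some order."""
    if not S:
        # All choices have been made: close the tour back to the start.
        tour = R + [R[0]]
        return sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    min_cost = float('inf')
    for i in list(S):           # try all remaining cities as the next city
        S.remove(i)
        R.append(i)             # i is the next city
        min_cost = min(min_cost, tsp_brute_force(dist, R, S))
        R.pop()                 # undo the choice
        S.add(i)
    return min_cost

# Usage on a made-up symmetric 4-city instance:
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(tsp_brute_force(dist, [0], {1, 2, 3}))  # 18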

18 (same algorithm as on the previous slide)
Note: this algorithm computes the length of an optimal tour, not the tour itself.

19 Analysis of TSP_BruteForce1
Possible implementation: R and S are linked lists.
▪ line 2 takes O(nR) time; lines 5 and 7 take O(1) time, where nR = size of R and nS = size of S
▪ this gives the recurrence T(nR, nS) = nS ∙ ( O(1) + T(nR+1, nS−1) ), with T(nR, 0) = O(nR)
▪ the recurrence solves to T(nR, nS) = O( (nR + nS) ∙ nS! )
▪ total running time: T(1, n−1) = O( n! )
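
To see where the O(n!) comes from, the recurrence can be unrolled as follows (a sketch; the additive O(1)-terms per level are absorbed into the final bound):

T(1, n−1) = (n−1) ∙ T(2, n−2) + O(n−1)
          = (n−1)(n−2) ∙∙∙ 1 ∙ T(n, 0) + lower-order terms
          = (n−1)! ∙ O(n)
          = O(n!)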

20 Alternative implementation:
▪ A[1..n] = array of cities, parameter 1 ≤ k ≤ n
▪ A[1..k] contains R, A[k+1..n] contains S
▪ improvement: maintain lengthSoFar = length of the initial part of the tour given by A[1]..A[k]

Algorithm TSP_BruteForce1(A, k, lengthSoFar)
1. n ← length[A]
2. if k = n
3.    then minCost ← lengthSoFar + Dist[ A[n], A[1] ]
4.    else minCost ← ∞
5.         for i ← k+1 to n
6.             do Swap A[i] and A[k+1]
7.                newLength ← lengthSoFar + Dist[ A[k], A[k+1] ]
8.                minCost ← min(minCost, TSP_BruteForce1(A, k+1, newLength))
9.                Swap A[k+1] and A[i]
10. return minCost

21 (same algorithm as on the previous slide)
Old running time: O(n) work for each of the permutations of 2, …, n, so O(n!) in total.
New running time: O((n−1)!), which is still very, very slow.
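
The array-based variant with the incremental lengthSoFar, sketched in Python (0-indexed; k = number of cities already fixed):

def tsp_brute_force_2(dist, A, k, length_so_far):
    """A[0..k-1] is the fixed initial part of the tour (A[0] = start),
    A[k..n-1] are the remaining cities; O(1) work per permutation."""
    n = len(A)
    if k == n:
        return length_so_far + dist[A[n - 1]][A[0]]    # close the tour
    min_cost = float('inf')
    for i in range(k, n):
        A[i], A[k] = A[k], A[i]                        # A[i] becomes the next city
        new_length = length_so_far + dist[A[k - 1]][A[k]]
        min_cost = min(min_cost, tsp_brute_force_2(dist, A, k + 1, new_length))
        A[k], A[i] = A[i], A[k]                        # undo the swap
    return min_cost

# Same instance as before: tsp_brute_force_2(dist, [0, 1, 2, 3], 1, 0) == 18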

22 Backtracking with pruning
▪ A[1..n] = array of cities, parameter 1 ≤ k ≤ n
▪ A[1..k] contains the initial part of the tour, A[k+1..n] contains the remaining cities
Pruning: don't recurse if the initial part cannot lead to an optimal tour. The best cost found so far, minCost, is now passed as a parameter.

Algorithm TSP_BruteForce1(A, k, lengthSoFar, minCost)
1. n ← length[A]
2. if k = n
3.    then minCost ← min(minCost, lengthSoFar + Dist[ A[n], A[1] ])
4.    else for i ← k+1 to n
5.             do Swap A[i] and A[k+1]
6.                newLength ← lengthSoFar + Dist[ A[k], A[k+1] ]
7.                if newLength ≥ minCost
8.                   then skip        ▹ prune: this initial part cannot improve on minCost
9.                   else minCost ← min(minCost, TSP_BruteForce1(A, k+1, newLength, minCost))
10.               Swap A[k+1] and A[i]
11. return minCost
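
The pruned (branch-and-bound) variant in Python. The pruning is only safe because distances are non-negative, so extending a partial tour never decreases its length:

def tsp_branch_and_bound(dist, A, k, length_so_far, min_cost):
    """min_cost = best tour length found so far, threaded through the
    recursion; partial tours of length >= min_cost are abandoned."""
    n = len(A)
    if k == n:
        return min(min_cost, length_so_far + dist[A[n - 1]][A[0]])
    for i in range(k, n):
        A[i], A[k] = A[k], A[i]
        new_length = length_so_far + dist[A[k - 1]][A[k]]
        if new_length < min_cost:    # prune: recurse only if it can still improve
            min_cost = tsp_branch_and_bound(dist, A, k + 1, new_length, min_cost)
        A[k], A[i] = A[i], A[k]
    return min_cost

# Initial call: tsp_branch_and_bound(dist, [0, 1, 2, 3], 1, 0, float('inf')) == 18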

23 Intermezzo: The n-queens problem
Place n queens on an n×n chessboard such that they cannot attack each other.
▪ backtracking: not just for optimization problems
▪ what are the parameters of the algorithm? what is the running time?
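
One possible choice of parameters, sketched in Python: place one queen per row, and let the recursion carry the sets of attacked columns and diagonals:

def n_queens(n):
    """Count the ways to place n non-attacking queens, one per row."""
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1
        count = 0
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue           # square is attacked: prune this choice
            count += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return count
    return place(0, set(), set(), set())

print(n_queens(8))  # 92 solutions on the standard 8x8 board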

24 Backtracking + pruning gives a branch-and-bound algorithm
Example 2: 1-dimensional clustering
▪ given: X = set of n numbers (points in 1D), parameter k
▪ goal: find a partitioning of X into k clusters of minimum cost
▪ cost of a single cluster: sum of distances of its points to the cluster average
▪ cost of the total clustering: sum of the costs of its clusters
▪ example: the cluster {1, 3, 4, 6} has average 3.5 and cost 2.5 + 0.5 + 0.5 + 2.5 = 6
[figure: points on a line partitioned into cluster 1, cluster 2, cluster 3]

25 Example 2: 1-dimensional clustering (continued)
▪ assume the points are sorted from left to right
▪ choices to be made: cluster 1 always starts at point 1; we have a choice where to start clusters 2, …, k

26 Backtracking for 1-dimensional clustering
▪ the first cluster starts at point 1
▪ try all remaining points as the starting point for the next cluster
▪ for each option, recursively try all ways to finish the clustering
▪ for each recursive call: remember which choices we already made (= the starting points we fixed in earlier calls) and which choices we still need to make (= the starting points of the remaining clusters); these are the parameters of the algorithm
▪ when all choices have been made: compute the cost of the clustering and compare it to the best clustering found so far

27 Parameters:
X[1..n] = sorted set of numbers (points in 1-dimensional space)
k = desired number of clusters
A[1..k] = starting points of the clusters
m = number of starting points that have already been fixed

We want to compute an optimal partitioning of X into k clusters, under the condition that the starting points of clusters 1, …, m are as given by A[1..m].

Algorithm Clustering(X, k, A, m)
1. if m = k
2.    then minCost ← total cost of the clustering represented by A[1..k]
3.    else minCost ← ∞
4.         for i ← A[m] + 1 to n − (k−m−1)       ▹ leave room for the remaining clusters
5.             do A[m+1] ← i
6.                minCost ← min(minCost, Clustering(X, k, A, m+1))
7.                A[m+1] ← nil
8. return minCost

Number of clusterings: (n choose k) = O(n^k); running time: O(n^(k+1)).
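
A Python transcription of Clustering (a sketch: 0-indexed, with A grown and shrunk like a stack; the example points are made up):

def cluster_cost(X, start, end):
    """Cost of the cluster X[start:end]: sum of distances to its average."""
    pts = X[start:end]
    avg = sum(pts) / len(pts)
    return sum(abs(p - avg) for p in pts)

def clustering(X, k, A, m):
    """Min cost over all clusterings whose first m starting indices
    are fixed in A (A[0] = 0 always)."""
    n = len(X)
    if m == k:
        bounds = A + [n]          # all choices made: sum the k cluster costs
        return sum(cluster_cost(X, bounds[j], bounds[j + 1]) for j in range(k))
    min_cost = float('inf')
    # Leave room for the k - m - 1 clusters that still need a starting point.
    for i in range(A[m - 1] + 1, n - (k - m - 1)):
        A.append(i)
        min_cost = min(min_cost, clustering(X, k, A, m + 1))
        A.pop()                   # undo the choice
    return min_cost

X = [1, 3, 4, 6, 10, 11, 14, 17, 18]
print(clustering(X, 3, [0], 1))   # cost of the best 3-clustering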

28 Pruning: do not recurse if the solution cannot get better than the current best.
Parameters as before, plus minCost (best cost found so far) and costSoFar (total cost of the already completed clusters).

Algorithm Clustering(X, k, A, m, minCost, costSoFar)
1. if m = k
2.    then minCost ← min(minCost, costSoFar + cost of the last cluster)
3.    else for i ← A[m] + 1 to n − (k−m−1)
4.             do A[m+1] ← i
5.                newCost ← costSoFar + cost of the m-th cluster
6.                if newCost ≥ minCost
7.                   then skip        ▹ prune
8.                   else minCost ← min(minCost, Clustering(X, k, A, m+1, minCost, newCost))
9.                A[m+1] ← nil
10. return minCost
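
The pruned variant, sketched in Python (it reuses cluster_cost from the previous sketch; costSoFar grows as each cluster is completed):

def clustering_pruned(X, k, A, m, min_cost, cost_so_far):
    """Partial clusterings whose cost already reaches min_cost are
    abandoned; cost_so_far covers the m - 1 completed clusters."""
    n = len(X)
    if m == k:
        return min(min_cost, cost_so_far + cluster_cost(X, A[-1], n))
    for i in range(A[m - 1] + 1, n - (k - m - 1)):
        # Choosing i as the next starting point completes the m-th cluster.
        new_cost = cost_so_far + cluster_cost(X, A[m - 1], i)
        if new_cost < min_cost:             # prune hopeless partial solutions
            A.append(i)
            min_cost = clustering_pruned(X, k, A, m + 1, min_cost, new_cost)
            A.pop()
    return min_cost

# Initial call: clustering_pruned(X, 3, [0], 1, float('inf'), 0.0)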

29 (same algorithm as on the previous slide)
NOTE: this problem can actually be solved much more efficiently, for example using dynamic programming (as we will see later).

30 Intermezzo: Map coloring
▪ give each region a color; adjacent regions must have different colors
▪ minimize the total number of colors
▪ what are the parameters of the algorithm? what is the running time?
▪ what if we only want to check whether 2 colors are enough?
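
For the "are 2 colors enough?" question, a hedged Python sketch of a backtracking feasibility check (the mini-map instance is made up; minimizing the number of colors means trying num_colors = 1, 2, 3, … until this succeeds):

def can_color(adj, num_colors):
    """Can every region get one of num_colors colors so that adjacent
    regions differ? adj maps each region to the set of its neighbors."""
    regions = list(adj)
    coloring = {}
    def place(idx):
        if idx == len(regions):
            return True
        r = regions[idx]
        for c in range(num_colors):
            if all(coloring.get(nb) != c for nb in adj[r]):
                coloring[r] = c
                if place(idx + 1):
                    return True
                del coloring[r]    # undo the choice
        return False
    return place(0)

adj = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B', 'D'}, 'D': {'C'}}
print(can_color(adj, 2), can_color(adj, 3))  # False True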

31 Summary
▪ backtracking solves an optimization problem by generating all solutions
▪ this is best done using a recursive algorithm
▪ backtracking is typically very slow
▪ speed can be improved using pruning, but it is usually hard to prove anything about how much the running time improves (and the algorithm usually remains slow)
▪ there is a trade-off: more complicated pruning tests may result in fewer recursive calls, but the pruning itself becomes more expensive

