E.G.M. Petrakis, Algorithms and Complexity: The sphere of algorithmic problems


Traveling Salesman Problem (TSP): find the shortest route that passes through each node of a graph exactly once and returns to the starting node. In the example graph on nodes A, B, C, D, E (A is the starting point), the shortest route is ABDECA, with Length(ABDECA) = 11.

Exhaustive search: compute all (n-1)! possible tours and select the one with the minimum cost. For the example graph:
- Length(ABCDEA) = 18
- Length(ACBDEA) = 15
- ...
- Length(ABDECA) = 11 (best path)
- Length(AEBCDA) = 19
There are (n-1)! tours in total, so the complexity is Ω(2^n).
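The exhaustive search described above can be sketched in a few lines of Python. This is an illustration, not part of the original slides; the 4-city distance matrix is a hypothetical example, not the A-E graph of the lecture:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search: try all (n-1)! tours that start and end at city 0."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# hypothetical symmetric 4-city distance matrix
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, length = tsp_brute_force(dist)  # best tour (0, 1, 3, 2, 0), length 18
```

Even for this tiny instance the loop examines all (n-1)! = 6 permutations, which is why the method becomes hopeless as n grows.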

Hard problems: an exponential algorithm that solves the problem is known to exist, e.g., TSP.
- Is there a better algorithm?
- For how long do we keep trying to find a better algorithm?
- Prove that the problem is at least as hard as another hard problem for which no better solution has ever been found.
- Then, stop searching for a better solution for the first problem.

NP-Complete problems (NPC): hard problems for which only exponential algorithms are known to exist.
- Polynomial solutions might exist, but none has been found yet!
- Examples: Traveling Salesman Problem (TSP), Hamiltonian Path problem, 0-1 Knapsack problem.
- Properties of NP-Completeness: non-deterministic polynomial; completeness.

Non-deterministic polynomial: a polynomial algorithm doesn't guarantee optimality.
- The polynomial algorithm is non-deterministic: it tries to find a solution using heuristics.
- E.g., TSP: take as the next node the one closest to the current node. In the example graph this gives Length(ABEDCA) = 14, which is not optimal.
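The nearest-neighbor heuristic mentioned above can be sketched as follows; this is an illustration added to the transcript, and the 5-city distance matrix is a hypothetical instance chosen so that the heuristic is visibly misled:

```python
def tsp_nearest_neighbor(dist, start=0):
    """Heuristic: always move to the closest unvisited city, then return home."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[cur][c])
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)
    return tour, sum(dist[tour[i]][tour[i + 1]] for i in range(n))

# hypothetical 5-city distance matrix where greedily chaining cheap edges
# forces an expensive closing edge (cost 10) back to the start
d = [[0, 1, 3, 2, 10],
     [1, 0, 1, 5, 3],
     [3, 1, 0, 1, 2],
     [2, 5, 1, 0, 1],
     [10, 3, 2, 1, 0]]
tour, length = tsp_nearest_neighbor(d)  # tour [0,1,2,3,4,0], length 14
```

Exhaustive search on this instance finds a tour of length 7, so the polynomial heuristic runs fast but lands twice as far from the optimum.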

Completeness: if a polynomial algorithm exists for one of them, then a polynomial algorithm can be found for all of them.
- It is possible to transform each problem into another using a polynomial algorithm.
- The complexity of solving a different problem is the complexity of transforming it into the original problem (polynomial time) plus the complexity of solving the original problem (polynomial time again, under this assumption)!

Hamiltonian Path (HP) problem: is there a path that visits each node of a graph exactly once?
- The exhaustive search algorithm checks n! paths.
- HP is NP-Complete.
- It is easy to transform HP to TSP in polynomial time: create a complete graph G' with cost 1 on the edges that exist in G and cost 2 on the edges that do not belong to G.

The transformation of HP to TSP takes polynomial time.
- Cost of creating G' plus the cost of adding the extra edges: polynomial in the size of G.
- HP on G is then equivalent to searching for a TSP tour on G' with length n.

Algorithms
- Types of heuristic algorithms: Greedy, Local search.
- Types of optimal algorithms: Exhaustive Search (ES), Divide and Conquer (D&C), Branch and Bound (B&B), Dynamic Programming (DP).

Greedy algorithms: whenever they make a selection, they choose to increase the cost by the minimum amount.
- E.g., TSP: take as the next node the one closest to the current node. In the example graph, starting from B, the greedy algorithm gives Length(BECADB) = 12, which is not optimal; the optimal tour has Length(BACEDB) = 11.

In some cases, greedy algorithms find the optimal solution.
- Knapsack problem: choosing among n objects with different values and equal sizes, fill a knapsack of capacity C so that the value in the knapsack is maximum. Fill the knapsack with the most valuable objects.
- Job scheduling: which service order of n customers minimizes the average waiting time? Serve the customers with the shortest service times first.
- Minimum cost spanning tree: next transparency.
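The job-scheduling rule above (shortest service time first) is easy to check on a small instance. A minimal sketch, added for illustration; the service times are hypothetical:

```python
from itertools import permutations

def average_waiting_time(service_times):
    """Average time a customer waits before service starts, in the given order."""
    waiting, elapsed = 0, 0
    for t in service_times:
        waiting += elapsed     # this customer waits for everyone before them
        elapsed += t
    return waiting / len(service_times)

jobs = [8, 3, 5]               # hypothetical service times
greedy = sorted(jobs)          # shortest service time first: [3, 5, 8]
best = min(average_waiting_time(list(p)) for p in permutations(jobs))
```

Here the greedy order gives an average wait of 11/3, and checking all 3! orders confirms no other order does better, matching the slide's claim that this greedy rule is optimal.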

Minimum Cost Spanning Tree (MCST)
- In an undirected graph G with edge costs, find the set of edges that has minimum total cost and keeps all nodes connected.
- Application: connect cities by telephone in a way that requires the minimum amount of wire.
- An MCST contains no cycles (it's a tree).
- Exhaustive search takes exponential time (there are up to n^(n-2) spanning trees to choose among).
- Two standard algorithms (next transparency).

- Prim's algorithm: at each step, add the minimum cost edge that connects a new node to the tree.
- Kruskal's algorithm: at each step, take the minimum cost edge that creates no cycle in the tree.
- Both algorithms are optimal!! In the example graph with |V| = 5, the MST has cost 7.

Prim's algorithm: graph G = (V, E), V = {1, 2, ..., n}

function Prim(G: graph, T: set of edges) {
    U: set of vertices; u, v: vertices;
    T = {}; U = {1};
    while (U != V) {
        (u, v) = minimum cost edge with u in U and v in V - U;
        T = T + {(u, v)};
        U = U + {v};
    }
}

Complexity: O(n^2), because each of the n - 1 iterations scans the candidate edges crossing the cut. The tree contains n - 1 edges.
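The pseudocode above can be turned into a runnable O(n^2) version on an adjacency matrix. This sketch is added for illustration; the 5-node graph is a hypothetical instance whose MST happens to have cost 7, echoing the example on the previous transparency:

```python
import math

def prim(adj):
    """Prim's algorithm on an adjacency matrix (math.inf = no edge).
    Grows the tree from vertex 0, always adding the cheapest crossing edge."""
    n = len(adj)
    in_tree = [False] * n
    dist = [math.inf] * n          # cheapest known edge from the tree to v
    parent = [-1] * n
    dist[0] = 0
    edges, total = [], 0
    for _ in range(n):
        # pick the non-tree vertex reachable by the cheapest edge: O(n)
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: dist[v])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
            total += adj[parent[u]][u]
        for v in range(n):         # update the cheapest crossing edges: O(n)
            if not in_tree[v] and adj[u][v] < dist[v]:
                dist[v], parent[v] = adj[u][v], u
    return edges, total

INF = math.inf
# hypothetical 5-node graph; its MST has cost 7
adj = [[0,   1,   4,   2,   INF],
       [1,   0,   2,   5,   INF],
       [4,   2,   0,   INF, 6],
       [2,   5,   INF, 0,   2],
       [INF, INF, 6,   2,   0]]
mst_edges, cost = prim(adj)        # 4 edges, total cost 7
```

The two O(n) loops inside the n iterations are exactly why the slide states O(n^2), and the returned tree has n - 1 = 4 edges.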

(Figure: the example graph G = (V, E) and the step-by-step construction of its spanning tree by Prim's algorithm.)

Local Search
- Heuristic algorithms that improve a non-optimal solution by applying a local transformation, e.g., to a solution obtained by a greedy algorithm.
- Apply it many times, for as long as the solution keeps improving.
- Useful in hard (NP-complete) problems such as TSP: e.g., apply a greedy algorithm to obtain a solution, then improve this solution using local search.
- The complexity of the transformation must be polynomial.

(Figure: local search on TSP, starting from B. The greedy tour has cost 12; a local transformation replaces a pair of edges of local cost 5 with a pair of local cost 4, yielding a tour of cost 11, which is optimal.)
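One classic local transformation for TSP is the 2-opt move: reverse a segment of the tour whenever that shortens it. This sketch is an illustration added to the transcript (2-opt is one standard choice of transformation, not necessarily the one in the figure), and the 5-city matrix is a hypothetical instance:

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def two_opt(tour, dist):
    """Local search: keep reversing a segment (a 2-opt move) while that
    shortens the tour; stop at a local optimum."""
    best, improved = tour[:], True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best

# hypothetical 5-city instance: the nearest-neighbor tour costs 14
d = [[0, 1, 3, 2, 10],
     [1, 0, 1, 5, 3],
     [3, 1, 0, 1, 2],
     [2, 5, 1, 0, 1],
     [10, 3, 2, 1, 0]]
better = two_opt([0, 1, 2, 3, 4, 0], d)   # improves the tour from 14 to 9
```

Here local search improves the greedy tour from 14 to 9, but the global optimum on this instance is 7: each 2-opt pass is polynomial, yet the search can stop at a local optimum, which is exactly the trade-off the slide describes.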

Divide and Conquer
- The problem is split into smaller sub-problems.
- Solve the sub-problems; combine their solutions to obtain the solution of the original problem.
- The smaller a sub-problem is, the easier it is to solve.
- Try to get sub-problems of equal size.
- D&C is often expressed by recursive algorithms, e.g., Mergesort.

Mergesort

list mergesort(list L, int n) {
    if (n == 1) return L;
    L1 = lower half of L;
    L2 = upper half of L;
    return merge(mergesort(L1, n/2), mergesort(L2, n/2));
}

n: size of the list L (assume n is a power of 2).
merge: merges the sorted L1 and L2 into a single sorted list.
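A runnable version of the pseudocode above, added for illustration (it drops the power-of-2 assumption by splitting at the midpoint):

```python
def mergesort(L):
    """Divide and conquer: sort each half recursively, then merge."""
    if len(L) <= 1:
        return L
    mid = len(L) // 2
    return merge(mergesort(L[:mid]), mergesort(L[mid:]))

def merge(a, b):
    """Merge two sorted lists into one sorted list in O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```

Each level of recursion does O(n) merging work and there are O(log n) levels, which gives the complexity stated on the next transparency.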

Complexity: O(n log n), from the recurrence T(n) = 2T(n/2) + O(n).

Dynamic Programming
- The original problem is split into smaller sub-problems; solve the sub-problems and store their solutions.
- Reuse these stored solutions whenever the same partial result is needed more than once in solving the main problem.
- DP is often based on a recursive formula for solving larger problems in terms of smaller ones.
- Similar to Divide and Conquer, but the recursion in D&C doesn't reuse partial solutions.

DP example (1): Fibonacci numbers.
- Using plain recursion, the complexity is exponential.
- Using a DP table F(0), F(1), ..., F(n-1), F(n), the complexity is O(n).
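The table-based computation can be sketched directly from the slide; an illustrative addition to the transcript:

```python
def fib(n):
    """Bottom-up DP: fill the table F(0), F(1), ..., F(n) in O(n).
    Plain recursion recomputes the same F(i) exponentially many times."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # each value computed once
    return table[n]
```

Each table entry is computed exactly once from two stored values, which is the reuse of partial solutions that distinguishes DP from plain divide and conquer.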

0-1 Knapsack
- 0-1 knapsack problem: given a set of objects that vary in size and value, what is the maximum value that can be carried in a knapsack of capacity C?
- How should we fill up the capacity in order to achieve the maximum value?

Exhaustive (1)
- The obvious method to solve the 0-1 knapsack problem is to try all possible combinations of the n objects.
- Each object corresponds to a cell of a bit vector: if the object is included, its cell becomes 1; if it is left out, its cell becomes 0.
- There are 2^n different combinations.

Notation
- n objects
- s1, s2, s3, ..., sn: sizes
- v1, v2, v3, ..., vn: values
- C: knapsack capacity
- Let 0 <= k <= n and A <= C.
- V(k, A): maximum value that can be carried in a knapsack of capacity A, given that we choose its contents from among the first k objects.
- V(n, C): maximum value that can be carried in the original knapsack when we choose from among all objects.
- V(k, A) = 0 if k = 0 or A <= 0, for any k.

DP Formulation
- V(k, A) = max{ V(k-1, A), V(k-1, A - s_k) + v_k }: solving the knapsack problem for the next object k, there are two choices: we can include it or leave it out.
- If it is left out, the best we can do is choose from among the previous k-1 objects.
- If it is included, its value v_k is added, but the capacity of the knapsack is reduced by the size s_k of the k-th object.

Exhaustive (2)
- If we try to compute V(n, C) by recursive substitution, the complexity is Ω(2^n): two values of V(n-1, A) must be determined for different A, four values of V(n-2, A), and so on; the number of values to be determined doubles at each step.
- Number of partial solutions = number of nodes in a full binary tree of depth n: Ω(2^n).

Dynamic Programming
- Many of these values are likely to be the same, especially when C is small compared with 2^n.
- DP computes and stores these values in a table of n + 1 rows and C + 1 columns.

Dynamic Programming (DP): compute and store V(0, A), V(1, A), ..., V(k, A), ..., V(n, C), where 0 <= k <= n and 0 <= A <= C, using V(k, A) = max{ V(k-1, A), V(k-1, A - s_k) + v_k }. V(n, C) is the solution. (Table: n + 1 rows, one per k, by capacities A = 0..C, filled row by row starting from V(0, 0).)

DP on Knapsack
- Each partial solution V(k, A) takes constant time to compute, and the entire table is filled with values.
- The total time to fill the table is proportional to its size, so the complexity of the algorithm is O(nC): faster than exhaustive search when C << 2^n.
- Which objects are included? Keep this information in a second table X(k, A): X(k, A) = 1 if object k is included, 0 otherwise.
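The table-filling scheme above can be sketched as follows. This is an illustrative addition with hypothetical sizes and values; instead of a second table X, it recovers the chosen objects by walking the V table backwards, a common equivalent technique:

```python
def knapsack(sizes, values, C):
    """Bottom-up 0-1 knapsack DP: V[k][A] = best value using the first k
    objects with capacity A. O(nC) time and space."""
    n = len(sizes)
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        for A in range(C + 1):
            V[k][A] = V[k - 1][A]                # leave object k out
            if sizes[k - 1] <= A:                # or include it, if it fits
                V[k][A] = max(V[k][A],
                              V[k - 1][A - sizes[k - 1]] + values[k - 1])
    # walk the table backwards to recover which objects were included
    chosen, A = [], C
    for k in range(n, 0, -1):
        if V[k][A] != V[k - 1][A]:               # object k made the difference
            chosen.append(k - 1)
            A -= sizes[k - 1]
    return V[n][C], sorted(chosen)

best_value, items = knapsack([2, 3, 4], [3, 4, 5], 5)  # value 7: objects 0 and 1
```

With n = 3 objects and C = 5, the table has 4 x 6 entries, each filled in constant time, versus 2^3 combinations for exhaustive search; the gap widens rapidly as n grows.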

Shortest Path
- Find the shortest path from a given node to every other node in a graph G.
- No better algorithm is known for the case of a single destination node.
- Notation:
  - G = (V, E): input graph
  - C[i, j]: distance between nodes i and j
  - v: starting node
  - S: set of nodes for which the shortest path from v has been computed
  - D(w): length of the shortest path from v to w passing through nodes in S

(Table: trace of the algorithm on an example graph with starting point v = 1. Over five steps, S grows from {1} to {1,2}, {1,2,4}, {1,2,4,3}, {1,2,4,3,5}, while the tentative distances D(2), D(3), D(4), D(5) shrink as shorter paths through S are found.)

Dijkstra's Algorithm

function Dijkstra(G: graph, int v) {
    S = {1};
    for i = 2 to n: D[i] = C[1, i];
    while (S != V) {
        choose w in V - S with minimum D[w];
        S = S + {w};
        for each u in V - S:
            if D[w] + C[w, u] < D[u] {
                D[u] = D[w] + C[w, u];
                P[u] = w;      /* keep the path in array P */
            }
    }
}

Complexity: O(n^2)
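A runnable version of the pseudocode, added for illustration; the 5-node adjacency matrix is a hypothetical example:

```python
import math

def dijkstra(C, v):
    """Dijkstra's algorithm on an adjacency matrix C (math.inf = no edge).
    Returns D, where D[u] = length of the shortest path from v to u. O(n^2)."""
    n = len(C)
    D = C[v][:]                  # tentative distances, seeded with direct edges
    D[v] = 0
    S = {v}
    while len(S) < n:
        # pick the closest node not yet settled: O(n)
        w = min((u for u in range(n) if u not in S), key=lambda u: D[u])
        S.add(w)
        for u in range(n):       # relax edges leaving w: O(n)
            if u not in S and D[w] + C[w][u] < D[u]:
                D[u] = D[w] + C[w][u]
    return D

INF = math.inf
# hypothetical 5-node graph
C = [[0,   10,  INF, 30,  100],
     [10,  0,   50,  INF, INF],
     [INF, 50,  0,   20,  10],
     [30,  INF, 20,  0,   60],
     [100, INF, 10,  60,  0]]
D = dijkstra(C, 0)               # [0, 10, 50, 30, 60]
```

Each of the n iterations performs two O(n) scans, giving the O(n^2) bound stated on the slide.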

DP on TSP
- The standard exhaustive algorithm is Ω((n-1)!).
- If the same partial paths are used many times, then DP can be faster!
- Notation:
  - Number the cities 1, 2, ..., n; city 1 is the starting and ending point.
  - d(i, j): distance from node i to node j.
  - D(b, S): length of the shortest path that starts at b, visits all cities in S in some order, and ends at node 1.
  - S is a subset of {1, 2, ..., n} - {1, b}.

DP Solution on TSP
- TSP needs to compute D(1, {2, ..., n}) and the corresponding order of nodes.
- DP computes and stores D(b, S) for all b.
- It computes all D(b, S) starting from the empty S and proceeding until S = {2, 3, ..., n}.
- Base case: D(b, ∅) = d(b, 1).

DP Solution on TSP (cont'd)
- For each of the n possible values of b, there are at most 2^(n-1) sets S (S is a subset of a set of size n - 1), so there are at most n * 2^(n-1) values of D(b, S).
- For each D(b, S), choose the minimum over the n possible next cities a.
- The complexity is O(n^2 * 2^(n-1)): still exponential, but better than exhaustive search!
- The values D(a, S - {a}) are not recomputed (they are stored).
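This scheme is commonly known as the Held-Karp algorithm; a sketch added for illustration, representing S as a bitmask and memoizing D(b, S) so no value is recomputed (here city 0 plays the role of the slide's city 1):

```python
import math
from functools import lru_cache

def held_karp(dist):
    """DP for TSP: D(b, S) = length of the shortest path that starts at b,
    visits every city in bitmask S, and ends at city 0.
    Memoized results give O(n^2 * 2^n) work instead of Omega((n-1)!)."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def D(b, S):
        if S == 0:                     # no cities left: return home to 0
            return dist[b][0]
        best = math.inf
        for a in range(1, n):
            if S & (1 << a):           # try a as the next city after b
                best = min(best, dist[b][a] + D(a, S & ~(1 << a)))
        return best

    return D(0, (1 << n) - 2)          # visit every city except 0

# hypothetical 4-city distance matrix; the optimal tour has length 18
tour_len = held_karp([[0, 2, 9, 10],
                      [2, 0, 6, 4],
                      [9, 6, 0, 3],
                      [10, 4, 3, 0]])
```

The memo table holds at most n * 2^(n-1) entries, each computed with a minimum over n candidates, matching the O(n^2 * 2^(n-1)) bound on the slide.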

Branch and Bound
- Explores all solutions using constraints (bounds):
  - Lower bound: minimum possible value of a solution.
  - Upper bound: maximum possible value of a solution.
- The problem is split into sub-problems.
- Each sub-problem is expanded until a solution is obtained, as long as its cost doesn't exceed the bounds: its cost must be greater than the lower bound and lower than the upper bound.

Upper and Lower Bounds
- The upper bound can be set to oo initially; it takes the cost of the first complete solution as soon as one is found.
- A greedy algorithm can provide the initial upper bound (e.g., in TSP, shortest path, etc.); it is revised in later steps if better solutions are found.
- The lower bound is not always easy to compute; it depends on the problem (there is a theorem for TSP).

(Figure: a branch-and-bound search tree on a graph with nodes s, a, b, c, d, e, f, g, h, k, t. Partial paths from s are expanded level by level; a greedy solution provides the initial upper bound: length(saefkt) = 8.)

Basic Branch and Bound Algorithm

S = {1, 2, ..., n}          /* initial set of open sub-problems */
U = oo                      /* upper bound */
while S != empty {
    choose k in S; S = S - {k};
    C = {c1, c2, ..., cm}   /* children of k */
    compute partial solutions Zc1, Zc2, ..., Zcm;
    for i = 1 to m {
        if Zci >= U kill ci                   /* bound */
        else if (ci is a complete solution) {
            U = Zci; best solution = ci;
        }
        else S = S + {ci}                     /* branch */
    }
}
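The skeleton above can be made concrete on the 0-1 knapsack problem. This is an illustrative sketch, not from the original slides: it branches on include/exclude decisions and prunes with the standard fractional (greedy) upper bound; sizes and values are hypothetical:

```python
def knapsack_bb(sizes, values, C):
    """Branch and bound for 0-1 knapsack: depth-first search over
    include/exclude decisions, pruned with a fractional upper bound."""
    # sort by value density so the fractional bound is tight
    items = sorted(zip(sizes, values), key=lambda sv: sv[1] / sv[0], reverse=True)
    n = len(items)
    best = 0

    def bound(k, cap, val):
        # optimistic estimate: fill the remaining capacity fractionally
        for s, v in items[k:]:
            if s <= cap:
                cap -= s; val += v
            else:
                return val + v * cap / s
        return val

    def branch(k, cap, val):
        nonlocal best
        best = max(best, val)                 # val is a feasible solution
        if k == n or bound(k, cap, val) <= best:
            return                            # bound: this subtree can't win
        s, v = items[k]
        if s <= cap:
            branch(k + 1, cap - s, val + v)   # branch: include item k
        branch(k + 1, cap, val)               # branch: exclude item k

    branch(0, C, 0)
    return best

best_value = knapsack_bb([2, 3, 4], [3, 4, 5], 5)   # optimal value 7
```

The first complete solution found supplies the initial incumbent (the slide's upper bound U, mirrored as a best-so-far lower bound for this maximization), and any node whose optimistic bound cannot beat it is killed, exactly as in the skeleton.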