Lecture 2: General Problem-Solving Methods. Greedy Method Divide-and-Conquer Backtracking Dynamic Programming Graph Traversal Linear Programming Reduction.


Lecture 2: General Problem-Solving Methods

Greedy Method
Divide-and-Conquer
Backtracking
Dynamic Programming
Graph Traversal
Linear Programming
Reduction Method

Greedy Method

The greedy method consists of an iteration of the following computations:

Selection procedure - choose the next item to add to the growing partial solution, according to some locally optimal criterion.
Feasibility check - test whether the current set of choices satisfies the problem's constraints, i.e., can still be extended to a solution.
Solution check - when a complete set is obtained, verify that it constitutes a solution to the original problem.

A question that should come to mind is: what is meant by locally optimal? The term refers to the amount of work necessary to make a choice or to measure the level to which a criterion is met. If the computation leading to a choice, or the evaluation of a criterion, does not involve an exhaustive search, then that activity may be considered local.
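The three computations can be seen in a small greedy sketch for making change (an illustrative example, not from the slides); the selection procedure always picks the largest denomination that still fits:

```python
def greedy_coin_change(amount, denominations):
    """Greedy change-making: selection picks the largest coin,
    feasibility checks that it still fits, and the solution check
    verifies the amount was fully covered."""
    coins = []
    for d in sorted(denominations, reverse=True):  # selection order
        while amount >= d:                         # feasibility check
            coins.append(d)
            amount -= d
    return coins if amount == 0 else None          # solution check
```

Note that this locally optimal choice happens to be globally optimal for canonical coin systems such as US coinage, but not for arbitrary denominations; greedy methods in general trade optimality guarantees for speed.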

Minimum Spanning Trees

[Figure: a weighted graph on the vertices A through G.]

The minimum spanning tree problem is to find the minimum-weight tree embedded in a weighted graph that includes all the vertices. Two common data representations for a weighted graph:

Edge list (edge, weight): AB 1, AE 2, BC 1, BD 2, BE 5, BF 2, BG 2, CG 4, DE 3, DG 1, EF 1, FG 2
Adjacency matrix: rows and columns indexed by the vertices A through G.

Which data representation would you use in an implementation of a minimum spanning tree algorithm? Why?
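Kruskal's algorithm is one greedy MST method (Prim's is another); a minimal sketch, using the edge list from this slide as test data:

```python
def kruskal(vertices, edges):
    """Kruskal's greedy MST: repeatedly take the cheapest edge that
    does not create a cycle, tracked with a union-find structure."""
    parent = {v: v for v in vertices}
    def find(v):                         # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # selection
        ru, rv = find(u), find(v)
        if ru != rv:                     # feasibility: no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

This also bears on the slide's question: Kruskal's algorithm only needs the edges sorted by weight, so the edge-list representation suits it directly.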

Divide-and-Conquer

As its name implies, this method involves dividing a problem into smaller problems that can be more easily solved. While the specifics vary from one application to another, divide-and-conquer always includes the following three steps in some form:

Divide - Typically this step splits one problem into two subproblems, each approximately half the size of the original.
Conquer - The divide step is repeated (usually recursively) until the individual problem sizes are small enough to be solved (conquered) directly.
Recombine - The solution to the original problem is obtained by combining the solutions to the subproblems.

Divide-and-conquer is not applicable to every problem class, and even when it works it may not produce an efficient solution.
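The three steps can be sketched with merge sort (an illustrative example, not the course's code); divide splits the list in half, conquer recurses, and recombine merges the two sorted halves:

```python
def merge_sort(items):
    """Divide-and-conquer sort: split, recurse, then merge."""
    if len(items) <= 1:                  # small enough: conquered directly
        return items
    mid = len(items) // 2                # divide
    left = merge_sort(items[:mid])       # conquer (recursively)
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0              # recombine: merge sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```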

Quicksort Example

[Figure: a quicksort pass showing the indices i and j, the pivot value, the items being swapped, and the new sublists for the next pass of quicksort.]
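A runnable sketch of the quicksort pass (an illustration, assuming a last-element pivot, which the original figure may or may not have used):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort: partition around a pivot by swapping,
    then recurse on the two resulting sublists."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]                    # pivot value
    i = lo
    for j in range(lo, hi):          # items being compared/swapped
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # pivot into final position
    quicksort(a, lo, i - 1)          # new sublists for next pass
    quicksort(a, i + 1, hi)
    return a
```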

Backtracking

Backtracking is used to solve problems in which a feasible solution is needed rather than an optimal one, such as finding a path through a maze or an arrangement of squares in the 15-puzzle. Typically, the solution to a backtracking problem is a sequence of items (or objects) chosen from a set of alternatives that satisfies some criterion.

A backtracking algorithm is a scheme for solving a series of subproblems, each of which may have multiple possible solutions, and where the solution chosen for one subproblem can affect the possible solutions of later subproblems. To solve the overall problem, we find a solution to the first subproblem and then attempt to recursively solve the other subproblems based on this first solution. If we cannot, or if we want all possible solutions, we backtrack and try the next possible solution to the first subproblem, and so on. Backtracking terminates when there are no more solutions to the first subproblem or when a solution to the overall problem is found.

N-Queens Problem

A classic backtracking algorithm is the solution to the N-Queens problem. In this problem you are to place queens (chess pieces) on an NxN chessboard in such a way that no two queens attack one another; that is, no two queens share the same row, column, or diagonal on the board.

Backtracking approach (version 1): Until all queens are placed, choose the first available location and put the next queen in this position. If queens remain to be placed and no space is left, backtrack by removing the last queen placed and moving it to the next available position.
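Version 1 can be sketched in Python (an illustration, not the course's code); each row gets one queen, and we backtrack when no column in the current row is safe:

```python
def n_queens(n):
    """Backtracking N-Queens: cols[r] holds the column of the queen
    in row r; backtrack by popping the last placement."""
    solutions = []
    cols = []
    def safe(col):
        row = len(cols)                  # row being filled
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))
    def place():
        if len(cols) == n:               # all queens placed
            solutions.append(list(cols))
            return
        for col in range(n):             # next available positions
            if safe(col):
                cols.append(col)
                place()
                cols.pop()               # backtrack
    place()
    return solutions
```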

Dynamic Programming

In mathematics and computer science, dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of overlapping subproblems and optimal substructure.

Overlapping subproblems means that the space of subproblems is small: a naive recursive algorithm solves the same subproblems over and over, rather than generating new subproblems. Dynamic programming takes account of this fact and solves each subproblem only once.

Optimal substructure means that the solution to a given optimization problem can be obtained by combining optimal solutions to its subproblems. Consequently, the first step toward devising a dynamic programming solution is to check whether the problem exhibits optimal substructure.

The Binomial Coefficient

(a+b)^0 = 1
(a+b)^1 = a + b
(a+b)^2 = a^2 + 2ab + b^2
(a+b)^3 = a^3 + 3a^2b + 3ab^2 + b^3
(a+b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4

The binomial theorem gives a closed-form expression for the coefficient of any term in the expansion of a binomial raised to the nth power: C(n, k) = n! / (k!(n-k)!). The binomial coefficient is also the number of combinations of n items taken k at a time, sometimes called n-choose-k.
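A bottom-up dynamic-programming sketch (not from the slides) that computes C(n, k) by building Pascal's triangle row by row with the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k), solving each overlapping subproblem once:

```python
def binomial(n, k):
    """DP binomial coefficient: each new triangle row is built from
    the previous one, so no subproblem is recomputed."""
    row = [1]                                    # row for n = 0
    for _ in range(n):
        row = ([1]
               + [row[i] + row[i + 1] for i in range(len(row) - 1)]
               + [1])
    return row[k]
```

For example, binomial(4, 2) reproduces the coefficient 6 of the a^2 b^2 term above.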

Graph Traversal

Graph traversal refers to the problem of visiting all the nodes in a graph in a particular manner. Graph traversal can be used as a problem-solving method when a problem's state space can be represented as a graph and its solution can be represented as a path in that graph. When the graph is a tree, it can represent the problem space for a wide variety of combinatorial problems; in these cases the solution is usually at one of the leaf nodes of the tree, or is the path to a particular leaf node. Techniques such as branch-and-bound can be used to reduce the number of operations required to search the graph or tree problem space by eliminating infeasible or unpromising branches.
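As one illustration (an assumed example, not from the slides), a breadth-first traversal that returns a path from a start state to a goal state in a graph stored as an adjacency dict:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first traversal: explores states level by level, so the
    first path reaching the goal has the fewest edges. Returns None if
    the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:       # prune revisited states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```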

Traveling Salesperson with Branch-and-Bound

[Figure: a complete graph on the cities A through H and its distance matrix.]

In the most general case the distance between each pair of cities is a positive value with dist(A,B) ≠ dist(B,A). In the matrix representation, the main-diagonal values are omitted (i.e., dist(A,A) is undefined).
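A minimal branch-and-bound sketch for TSP (an illustration; practical implementations use much stronger bounds than the raw partial-tour cost used here):

```python
def tsp_branch_and_bound(dist):
    """Branch: extend a partial tour one city at a time.
    Bound: prune a branch as soon as its partial cost reaches the
    cost of the best complete tour found so far."""
    n = len(dist)
    best = [float("inf"), None]          # [best cost, best tour]
    def extend(tour, cost):
        if cost >= best[0]:              # bound: prune this branch
            return
        if len(tour) == n:               # complete tour: close it
            total = cost + dist[tour[-1]][tour[0]]
            if total < best[0]:
                best[0], best[1] = total, tour + [tour[0]]
            return
        for city in range(n):            # branch on the next city
            if city not in tour:
                extend(tour + [city], cost + dist[tour[-1]][city])
    extend([0], 0)                       # fix the starting city
    return best[0], best[1]
```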

Linear Programming (LP)

Linear programming (LP) is a mathematical method for determining a way to achieve the best outcome in a given mathematical model for some list of requirements represented as linear relationships. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Linear programs are problems that can be expressed in canonical form:

Maximize: c^T x
Subject to: Ax ≤ b, x ≥ 0

where x represents the vector of variables, and c, b, and A are vectors and a matrix of coefficients. The expression to be maximized or minimized, c^T x, is called the objective function; the inequalities Ax ≤ b are the constraints.
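For intuition only, here is a toy two-variable "solver" (a hypothetical illustration, not a real LP method): it exploits the fact that an optimum of a bounded LP lies at a vertex of the feasible polygon, and enumerates intersections of constraint boundaries rather than pivoting like the simplex method described next.

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b for two variables by checking
    every intersection of two constraint boundaries (candidate
    vertices) for feasibility. Exponential in spirit; for intuition."""
    def feasible(x):
        return all(a[0] * x[0] + a[1] * x[1] <= bi + 1e-9
                   for a, bi in zip(A, b))
    best, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(list(zip(A, b)), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                     # parallel boundaries
        x = ((b1 * a2[1] - b2 * a1[1]) / det,    # Cramer's rule
             (a1[0] * b2 - a2[0] * b1) / det)
        if feasible(x):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x
```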

The Simplex Method

The simplex method is a method for solving linear programming problems. Invented by George Dantzig in 1947, it tests adjacent vertices of the feasible set (which is a polytope) in sequence, so that at each new vertex the objective function improves or is unchanged. The simplex method is very efficient in practice, generally taking 2m to 3m iterations at most (where m is the number of equality constraints), and converging in expected polynomial time for certain distributions of random inputs. However, its worst-case complexity is exponential.

[Figure: the feasible solution set; each facet represents a limiting constraint, and the simplex moves along the surface to an optimal point.]

Reduction Method

In computability theory and computational complexity theory, a reduction is a transformation of one problem into another problem. Depending on the type of transformation allowed, reductions can be used to define complexity classes on a set of problems. Intuitively, problem A is reducible to problem B if an algorithm for solving B could also be used to solve A; thus, solving A cannot be harder than solving B. We write A ≤ B, usually with a subscript on the ≤ to indicate the type of reduction being used.

Using Reduction to Show that Vertex Cover is NP-Complete

3-SATISFIABILITY (3SAT). Instance: a set U of variables and a collection C of clauses over U such that each clause c in C has size exactly 3. Question: is there a truth assignment for U satisfying C?

VERTEX COVER. Instance: an undirected graph G = (V, E) and an integer K. Question: is there a vertex cover of size K or less for G, i.e., a subset V' of V with |V'| ≤ K such that every edge has at least one endpoint in V'?

Claim: VERTEX COVER is NP-complete.

Proof: It was proved in 1971, by Cook, that 3SAT is NP-complete. Next, we know that VERTEX COVER is in NP because we could verify any solution in polynomial time with a simple O(n^2) examination of all the edges for endpoint inclusion in the given vertex cover.
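The polynomial-time verification step used in the membership argument can be sketched directly (an illustrative helper, not part of the original proof):

```python
def is_vertex_cover(edges, cover, k):
    """Polynomial-time VERTEX COVER verifier: check that the proposed
    cover has size at most k and that every edge has at least one
    endpoint in it."""
    cover = set(cover)
    return len(cover) <= k and all(u in cover or v in cover
                                   for u, v in edges)
```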