Major Design Strategies

The Greedy Method

Philosophy: optimize (maximize or minimize) short-term gain and hope for the best, without regard for long-term consequences.

Positives: greedy algorithms are simple, easy to code, and efficient.
Negatives: they often lead to less-than-optimal results.

E.g.: find the shortest path between two vertices in a weighted graph; determine a minimum spanning tree of a weighted graph.
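
As a minimal sketch of the greedy philosophy, assuming the fractional knapsack problem as the example (the function name and item format are illustrative, not from the slides): taking the item with the best value-to-weight ratio is locally optimal at each step and, for this particular problem, also globally optimal.

```python
def fractional_knapsack(items, capacity):
    """Greedy sketch: items is a list of (value, weight) pairs.

    Repeatedly take as much as possible of the item with the best
    value/weight ratio -- a locally optimal choice that happens to be
    globally optimal for the *fractional* version of the problem.
    """
    # Sort by value-to-weight ratio, best first (the greedy criterion).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)   # take all of it, or what fits
        total += value * (take / weight)
        capacity -= take
    return total

# Example: capacity 50 -> items taken greedily by ratio.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```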

Examples
- Minimum spanning tree: Kruskal's algorithm, Prim's algorithm
- Shortest paths: Dijkstra's algorithm; Bellman-Ford algorithm (handles negative edge weights)
- Connected components
- Knapsack problem (the fractional version)
- Huffman codes
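
To make one of these concrete, here is a hedged sketch of Kruskal's algorithm with a simple union-find; the edge representation (weight, u, v) is an assumption for illustration.

```python
def kruskal(n, edges):
    """Greedy MST sketch: n vertices 0..n-1, edges given as (weight, u, v).

    Repeatedly add the cheapest edge that does not create a cycle,
    using union-find to detect cycles.
    """
    parent = list(range(n))

    def find(x):                      # find root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # endpoints in different trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Example: 4-vertex graph; the MST has total weight 1 + 2 + 3 = 6.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))
```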

Parallel

Building up a solution sequentially through a sequence of greedy choices leads to a Θ(n) (linear) worst-case complexity, since the n choices are made one after another. To do better, we must look for ways to build up multiple partial solutions in parallel.

Divide and Conquer

One of the most powerful strategies. The problem input is divided, according to some criterion, into a set of smaller inputs to the same problem. The problem is then solved for each of the smaller inputs, either recursively (by further division into smaller inputs) or by invoking a prior solution. The solution for the original input is then obtained by expressing it in the same form, as a combination of the solutions for these smaller inputs.
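
A minimal sketch of the pattern, using one of the problems listed on the next slide (finding the maximum and minimum elements of a list): divide the range in half, solve each half recursively, and combine the partial answers.

```python
def max_min(a, lo, hi):
    """Divide-and-conquer sketch: return (max, min) of a[lo..hi].

    Divide the range in half, solve each half recursively, then
    combine by comparing the two partial answers.
    """
    if lo == hi:                       # base case: one element
        return a[lo], a[lo]
    if hi == lo + 1:                   # base case: two elements
        return (a[hi], a[lo]) if a[hi] > a[lo] else (a[lo], a[hi])
    mid = (lo + hi) // 2
    max1, min1 = max_min(a, lo, mid)        # solve left half
    max2, min2 = max_min(a, mid + 1, hi)    # solve right half
    return max(max1, max2), min(min1, min2)  # combine

print(max_min([3, 1, 4, 1, 5, 9, 2, 6], 0, 7))  # (9, 1)
```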

Examples
- Selecting the k-th smallest element in a list
- Finding the maximum and minimum elements in a list
- Symbolic algebraic operations on polynomials
- Multiplication of polynomials
- Multiplication of large integers
- Matrix multiplication
- Discrete Fourier transform; fast Fourier transform (on a PRAM, on a butterfly network); inverse Fourier transform
- Inverting triangular matrices; inverting general matrices
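
As a sketch of one entry in this list, Karatsuba multiplication of large integers replaces four recursive half-size products with three; using Python integers here is purely illustrative.

```python
def karatsuba(x, y):
    """Divide-and-conquer sketch of large-integer multiplication.

    Split each number into high and low halves and compute only three
    recursive products instead of four.
    """
    if x < 10 or y < 10:               # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                   # low * low
    z2 = karatsuba(high_x, high_y)                 # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652
```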

Dynamic Programming

Involves constructing a solution S to a given problem by building it up dynamically from solutions to smaller (or simpler) problems S1, S2, ..., Sn of the same type. The solution to any given smaller problem Si is itself built up from the solutions to even smaller (simpler) subproblems, and so on. We start with the known solutions to the smallest (simplest) problem instances and build from there in a bottom-up fashion. To be able to reconstruct S from S1, S2, ..., Sn, some additional information is usually required.
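
A minimal sketch of this bottom-up construction, using the coin-change problem as an assumed example (the denominations are illustrative): each entry dp[a] is built from already-computed solutions to smaller amounts, starting from the trivial instance dp[0] = 0.

```python
def min_coins(coins, amount):
    """Bottom-up DP sketch: fewest coins summing to `amount`.

    dp[a] is built from the already-computed solutions dp[a - c] to
    smaller subproblems, starting from the known base case dp[0] = 0.
    """
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):           # smallest subproblems first
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 6, 9], 11))  # 2 (5 + 6)
```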

Dynamic programming is similar to divide-and-conquer: both use recursive division of the problem into smaller subproblems, and both use a function Combine that merges S1, S2, ..., Sn (together with the additional information) to obtain S = Combine(S1, S2, ..., Sn). They differ in direction: dynamic programming uses a bottom-up approach, while divide-and-conquer uses a top-down approach.

Dynamic programming never considers a given subproblem more than once, and in general it avoids generating suboptimal subproblems when the Principle of Optimality holds, which increases efficiency.

Examples:
- Optimal chained matrix product (choosing among parenthesizations such as M1(M2 M3) or M1(M2(M3 M4)))
- Optimal binary search trees
- All-pairs shortest paths (contrast Dijkstra's algorithm, which considers a single source and must be applied from every source, with the Bellman-Ford-style dynamic program, which considers any source and extends to all pairs)
- Traveling Salesman Problem
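
A hedged sketch of the optimal chained matrix product mentioned above; dims[i-1] x dims[i] is assumed to be the shape of matrix Mi.

```python
def matrix_chain_cost(dims):
    """DP sketch for the optimal chained matrix product.

    dims[i-1] x dims[i] is the shape of matrix M_i.  cost[i][j] is the
    minimum number of scalar multiplications to compute M_i ... M_j,
    built bottom-up from shorter chains.
    """
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)        # try every split point
            )
    return cost[1][n]

# M1: 10x30, M2: 30x5, M3: 5x60 -> (M1 M2) M3 costs 4500 multiplications.
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```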

Dynamic Programming: Optimization Problems and the Principle of Optimality

Dynamic programming is most effective when the Principle of Optimality holds. Given an optimization problem, consider the set of all feasible solutions, i.e., those that satisfy the constraints of the problem. An optimal solution S is a feasible solution that optimizes (minimizes or maximizes) the objective function.

We need to optimize over all subproblems S1, S2, ..., Sn such that S = Combine(S1, S2, ..., Sn). This might be intractable (there might be exponentially many subproblems), but we can reduce the number of subproblems to consider when the Principle of Optimality holds.

Given an optimization problem and an associated function Combine, the Principle of Optimality holds if the following is always true: if S = Combine(S1, S2, ..., Sn) and S is an optimal solution to the problem, then S1, S2, ..., Sn are optimal solutions to their associated subproblems.

E.g., consider finding the shortest path in a graph (or digraph) G from vertex a to vertex b. Assume P is a path from a to b, and assume v is a vertex of G on the path P.

Let P1 be the portion of P from a to v and P2 the portion from v to b, so that P = Combine(P1, P2) is the union of the two paths. If P is a shortest path from a to b, then P1 is a shortest path from a to v and P2 is a shortest path from v to b.
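
A minimal Dijkstra sketch that relies on exactly this subpath property (the adjacency-list representation is an assumption).

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path sketch exploiting the subpath property above.

    graph maps a vertex to a list of (neighbor, weight) pairs.  Once a
    vertex is settled (popped with its final distance), the shortest
    path to it is known, because any shorter route would have to pass
    through a vertex with a larger tentative distance.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # found a shorter path via u
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"a": [("v", 1)], "v": [("b", 2)], "b": []}
print(dijkstra(g, "a"))  # {'a': 0, 'v': 1, 'b': 3}
```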

The Specialized Principle of Optimality holds if, given any sequence of decisions D1, ..., Dn yielding an optimal solution S to the given problem, the subsequence of decisions D2, ..., Dn yields an optimal solution S' to the single-instance derived problem resulting from having made the first decision D1. This is the special case of the Principle in which m = 1 and Combine(S') = S; the additional information required to reconstruct S from S' is the information associated with decision D1.

E.g., the problem of finding the shortest path in G from a to b can be viewed as making a sequence of decisions D1, D2, ..., Dp about the vertices in the path (the last decision choosing b), where D1 is the choice of a vertex v adjacent to a. The path S' determined by the remaining decisions D2, ..., Dp must itself be a shortest path from v to b, so the Specialized Principle of Optimality holds.

Dynamic Programming in Parallel

The recurrence relation is evaluated level by level, and concurrency can be exploited within each level. For all-pairs shortest paths there is a linear number of levels, so a level-by-level parallelization takes Ω(n) time. Can such problems be solved on a PRAM in polylogarithmic time using a polynomial number of processors? Using p = n^2 processors on an EREW PRAM gives T_n = O(n) and cost = p * T_n = n^3; by exploiting the Principle of Optimality, the goal is T_n = O(log^m n) using n^p processors.

Backtracking and Branch-and-Bound

These design strategies apply to problems whose solutions can be expressed as sequences of decisions. The sequence of decisions can be modeled differently for a given problem, giving rise to different state space trees. The state space tree is implicit in backtracking but explicitly implemented in branch-and-bound. Both backtracking and branch-and-bound use objective functions to limit the number of nodes in the state space tree that need to be examined.
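
A hedged backtracking sketch using the N-queens problem as an assumed example: the state space tree is implicit in the recursion, and a feasibility test prunes branches before they are expanded.

```python
def n_queens(n, cols=()):
    """Backtracking sketch: count placements of n non-attacking queens.

    Each recursion level decides one row; the state space tree is
    implicit in the call stack, and infeasible branches are pruned
    before they are expanded.
    """
    row = len(cols)
    if row == n:
        return 1                          # one complete solution found
    count = 0
    for c in range(n):
        # Prune: same column or same diagonal as an earlier queen.
        if all(c != pc and abs(c - pc) != row - pr
               for pr, pc in enumerate(cols)):
            count += n_queens(n, cols + (c,))   # expand feasible child
    return count

print(n_queens(6))  # 4
```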

Backtracking performs a depth-first search of the state space tree: when a node is accessed it becomes the current node being expanded (the E-node), and its first not-yet-visited child immediately becomes the new E-node. Branch-and-bound performs searches of the state space tree that generate all the children of the E-node when a node is accessed. Consequently, in backtracking a node can be the E-node many times, while in branch-and-bound a node can be the E-node only once. Branch-and-bound has variations with respect to which node is expanded next.

The live nodes can be kept in a FIFO queue, a LIFO stack, or a priority queue (least-cost branch-and-bound). Backtracking and branch-and-bound are inefficient in the worst case; the choice of the objective (bounding) function is essential in making these two strategies more efficient, and utilizing heuristics can reduce the amount of the state space tree that is searched. The least-cost branch-and-bound strategy uses a heuristic cost function associated with the nodes of the state space tree: the set of live nodes is maintained as a priority queue with respect to this cost function, so the next node to become the E-node is the one that appears most promising to lead quickly to a goal.
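
A hedged sketch of least-cost branch-and-bound on a tiny 0/1 knapsack instance: live nodes sit in a priority queue keyed by an optimistic bound (the fractional relaxation), and nodes whose bound cannot beat the best known solution are pruned. The item format and bound function are illustrative assumptions.

```python
import heapq

def knapsack_bb(items, capacity):
    """Least-cost B&B sketch for 0/1 knapsack (a maximization problem).

    items is a list of (value, weight), pre-sorted by value/weight.
    The most promising live node (largest optimistic bound) is always
    expanded next.
    """
    def bound(i, value, room):
        # Optimistic bound: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]  # (key, i, value, room)
    while heap:
        key, i, value, room = heapq.heappop(heap)
        if -key <= best or i == len(items):
            continue                       # prune: bound cannot beat best
        v, w = items[i]
        if w <= room:                      # branch 1: take item i
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(i + 1, value + v, room - w),
                                  i + 1, value + v, room - w))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(i + 1, value, room), i + 1, value, room))
    return best

items = [(60, 10), (100, 20), (120, 30)]   # sorted by value/weight ratio
print(knapsack_bb(items, 50))              # 220 (take the last two items)
```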

Least-cost branch-and-bound is related to the general heuristic search strategy called A* search. A* search can be applied to state space digraphs, not only to state space trees, and is used in artificial intelligence. A related AI strategy for playing two-person games is the alpha-beta heuristic: look ahead a fixed number of moves, assign a heuristic value to the positions reached there, and obtain an estimate of the best move by working back to the current position using the minimax strategy.
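
A minimal alpha-beta minimax sketch over an explicit game tree; encoding the tree as nested lists is an illustrative assumption.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning, sketched on a nested-list tree.

    A leaf is a number (its heuristic value); an internal node is a
    list of children.  Branches that cannot influence the final
    minimax value are cut off.
    """
    if not isinstance(node, list):        # leaf: fixed look-ahead depth
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cutoff: MIN avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                 # alpha cutoff: MAX avoids this line
            break
    return value

# Two-ply tree: MAX to move, then MIN picks the worst leaf of each branch.
print(alphabeta([[3, 5], [2, 9]], True))  # 3 (the leaf 9 is never examined)
```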