Major Design Strategies


1 Major Design Strategies

2 The Greedy Method
Philosophy: optimize (maximize or minimize) short-term gain and hope for the best, without regard for long-term consequences.
Algorithm positives: simple, easy to code, efficient.
Algorithm negatives: often leads to less-than-optimal results.
E.g.: find the shortest path between two vertices in a weighted graph; determine the minimum spanning tree of a weighted graph.
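To make the greedy philosophy concrete, here is a minimal sketch (an illustrative example of mine, not from the slides): greedy coin change always grabs the largest coin that fits, which is exactly the kind of short-term optimization that can miss the global optimum.

def greedy_coin_change(amount, denominations):
    # Greedy choice: always take the largest coin that still fits.
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins if amount == 0 else None   # may fail or be suboptimal

With denominations {1, 3, 4} and amount 6, the greedy choice yields [4, 1, 1] (3 coins) while the optimum is [3, 3] (2 coins): short-term gain, suboptimal long-term result.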

3 Examples
Minimum spanning tree: Kruskal's algorithm (sketched below), Prim's algorithm
Bellman-Ford algorithm
Connected components
Knapsack problem
Huffman codes
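As a hedged sketch of the first item above, Kruskal's greedy algorithm for the minimum spanning tree (the edge-list representation and the union-find helper are my own choices):

def kruskal(n, edges):
    # Greedy MST: take the cheapest edge that does not close a cycle.
    # edges is a list of (weight, u, v) with vertices numbered 0..n-1.
    parent = list(range(n))
    def find(x):                     # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):    # cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
    return mst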

4 Parallel
Building up a solution sequentially through a sequence of greedy choices leads to a Θ(n) (linear) worst-case complexity. To do better, we must look for ways to build up multiple partial solutions in parallel.

5 Divide and Conquer
One of the most powerful strategies.
The problem input is divided, according to a given criterion, into a set of smaller inputs to the same problem. The problem is then solved for each of the smaller inputs, either recursively (by further division into smaller inputs) or by invoking a prior solution. The solution for the original input is then obtained by expressing it as a combination of the solutions for these smaller inputs.
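A minimal sketch of the three steps (divide, solve the smaller inputs, combine), using the max/min problem from the next slide as the running example (the function name and interface are my own):

def max_min(a, lo, hi):
    # Divide and conquer over the range a[lo:hi], assumed non-empty.
    if hi - lo == 1:                          # smallest input: one element
        return a[lo], a[lo]
    mid = (lo + hi) // 2                      # divide
    max1, min1 = max_min(a, lo, mid)          # solve left half
    max2, min2 = max_min(a, mid, hi)          # solve right half
    return max(max1, max2), min(min1, min2)   # combine

For example, max_min([3, 1, 4, 1, 5], 0, 5) returns (5, 1).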

6 Examples
Selecting the k-th smallest element in a list (sketched below)
Finding the maximum and minimum elements in a list
Symbolic algebraic operations on polynomials
Multiplication of polynomials
Multiplication of large integers
Matrix multiplication
Discrete Fourier transform
Fast Fourier transform – on PRAM, on the butterfly network
Inverse Fourier transform
Inverting triangular matrices
Inverting general matrices
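For the first item, a hedged quickselect sketch (the pivot choice and the 1-based k are my own conventions): partition around a pivot, then recurse into only the part that can contain the answer.

def kth_smallest(a, k):
    # Select the k-th smallest element of a (k is 1-based).
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k <= len(less):                 # answer lies among the smaller elements
        return kth_smallest(less, k)
    if k <= len(less) + len(equal):    # answer is the pivot itself
        return pivot
    return kth_smallest(greater, k - len(less) - len(equal))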

7 Dynamic Programming
Involves constructing a solution S to a given problem by building it up dynamically from solutions to smaller (or simpler) problems S1, S2, …, Sn of the same type. The solution to any given smaller problem Si is itself built up from the solutions to even smaller (simpler) problems, and so on. We start with the known solutions to the smallest (simplest) problem instances and build from there in a bottom-up fashion. To be able to reconstruct S from S1, S2, …, Sn, some additional information is usually required.
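A minimal bottom-up sketch (Fibonacci is my illustrative choice, not an example from the slides): the solutions to the two smallest instances are known, and every larger instance is combined from earlier ones.

def fib_bottom_up(n):
    if n < 2:
        return n
    prev, curr = 0, 1              # known solutions to the smallest instances
    for _ in range(2, n + 1):      # bottom-up: combine earlier solutions
        prev, curr = curr, prev + curr
    return curr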

8 Dynamic Programming compared to Divide-and-Conquer
Combine – a function that combines S1, S2, …, Sn, using the additional information, to obtain S: S = Combine(S1, S2, …, Sn).
Difference: dynamic programming uses a bottom-up approach; divide-and-conquer uses a top-down approach.
Similarity: both use recursive division of the problem into smaller subproblems.
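To make the bottom-up vs. top-down contrast concrete, here is the same recurrence as after slide 7, computed top-down in divide-and-conquer style (again an illustrative sketch of mine); the memo cache is what keeps each subproblem from being solved more than once.

from functools import lru_cache

@lru_cache(maxsize=None)           # memo: each subproblem is solved once
def fib_top_down(n):
    # Top-down: start from the full instance and recurse downward.
    return n if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)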

9 Never consider a given subproblem more than once
In general, dynamic programming avoids generating suboptimal subproblems when the Principle of Optimality holds => increased efficiency.
Examples:
Optimal chained matrix products: M1(M2M3), M1(M2(M3M4)) (sketched below)
Optimal binary search trees
All-pairs shortest paths (Dijkstra and Bellman-Ford consider a single source; the all-pairs problem considers every source)
Traveling Salesman Problem
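For the chained matrix product example, a hedged sketch of the classic dynamic program (the interface is my own choice: matrix Mi has dimensions dims[i-1] x dims[i]):

def matrix_chain_cost(dims):
    # cost[i][j] = minimum scalar multiplications for the chain Mi..Mj,
    # built bottom-up from shorter chains.
    n = len(dims) - 1                          # number of matrices
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):             # chain length, smallest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(                  # best split point k
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

For dims = [10, 20, 5, 15], this returns 1750, achieved by (M1 M2) M3 rather than M1 (M2 M3).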

10 Optimization Problems and Principle of Optimality
Dynamic programming is most effective when the Principle of Optimality holds.
Given: an optimization problem and the set of ALL FEASIBLE SOLUTIONS that satisfy the constraints of the problem.
An optimal solution S is a feasible solution that optimizes (minimizes or maximizes) the objective function.
Need to: optimize over all subproblems S1, S2, …, Sn such that S = Combine(S1, S2, …, Sn). This might be intractable (there might be exponentially many subproblems) => the number of subproblems to consider can be reduced if the Principle of Optimality holds.

11 Given an optimization problem and an associated function Combine, the Principle of Optimality holds if the following is always true: if S = Combine(S1, S2, …, Sn) and S is an optimal solution to the problem, then S1, S2, …, Sn are optimal solutions to their associated subproblems.
E.g.: find the shortest path in a graph (or digraph) G from vertex a to vertex b:
- assume P is a path from a to b
- assume v is a vertex of G on the path P

12 P = Combine(P1, P2) is the union of the two paths P1 and P2
P1 is a path from a to v; P2 is a path from v to b.
If P is a shortest path from a to b, then P1 is a shortest path from a to v and P2 is a shortest path from v to b.
[diagram: a path from a to b passing through v]

13 The Specialized Principle of Optimality holds if, given any sequence of decisions D1, …, Dn yielding an optimal solution S to the given problem, the subsequence of decisions D2, …, Dn yields an optimal solution S1 to the subproblem that results from having made the first decision D1. This is the special case of the Principle with a single subproblem, where Combine(S1) = S. The additional information required to reconstruct S from S1 is the information associated with decision D1.
E.g.: the problem of finding the shortest path in G from a to b can be viewed as making a sequence of decisions D1, D2, …, Dp for the selection of the vertices in the path, where D1 is the choice of a vertex r adjacent to a => the path S1 determined by the remaining decisions D2, …, Dp must be a shortest path from r to b, so the Specialized Principle of Optimality holds.

14 Dynamic Programming in Parallel
The recurrence relation is evaluated level by level; concurrency can be exploited within each level.
All-pairs shortest paths: there is a linear number of levels => level-by-level parallelization takes Ω(n) time.
Can the problem be solved on a PRAM in polylogarithmic time using a polynomial number of processors?
With n² PEs on an EREW PRAM, using the Principle of Optimality: T(n) = O(n) and cost = p·T(n) = n³.
Goal: polylogarithmic T(n) using a polynomial number of processors.
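As a concrete (sequential, illustrative) sketch of the level structure, Floyd-Warshall for all-pairs shortest paths: the outer loop over k is the level, and the n² relaxations inside one level are mutually independent, so each level could be distributed across processors while the n levels themselves run in order.

import math

def floyd_warshall(dist):
    # dist[i][j] = edge weight (math.inf if absent), dist[i][i] = 0.
    n = len(dist)
    for k in range(n):          # level k: allow vertex k as an intermediate
        for i in range(n):      # these n^2 updates are independent
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist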

15 BACKTRACKING AND BRANCH AND BOUND
Design strategies applicable to problems whose solutions can be expressed as sequences of decisions.
The sequence of decisions can be modeled differently for a given problem => different state space trees.
The state space tree is implicit in Backtracking, explicitly implemented in Branch & Bound.
Both Backtracking and Branch & Bound utilize objective functions to limit the number of nodes in the state space tree that need to be examined.

16 Backtracking: depth-first search of the state space tree. When a node is accessed it becomes the E-node (the node currently being expanded), and immediately its first not-yet-visited child becomes the new E-node.
Branch & Bound: searches of the state space tree that generate all the children of the E-node when a node is accessed.
Backtracking: a node can be the E-node many times.
Branch & Bound: a node can be the E-node only once.
Variations exist with respect to which node is expanded next.
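A minimal backtracking sketch (the n-queens instance is my illustrative choice): depth-first search in which the node reached by the current partial placement is the E-node, and a dead end causes a return to the parent, i.e., a backtrack.

def n_queens(n, cols=()):
    # cols[r] is the column of the queen in row r of the partial solution.
    row = len(cols)
    if row == n:                    # all queens placed: a goal node
        return cols
    for c in range(n):              # children of the current E-node
        if all(c != pc and abs(c - pc) != row - pr
               for pr, pc in enumerate(cols)):
            result = n_queens(n, cols + (c,))
            if result is not None:
                return result
    return None                     # no feasible child: backtrack

For example, n_queens(4) returns (1, 3, 0, 2).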

17 FIFO, LIFO, or priority queue (least-cost Branch & Bound).
Backtracking and B&B are inefficient in the worst case.
The choice of the objective function (or bounding function) is essential in making these two algorithms more efficient.
Utilizing heuristics can reduce the portion of the state space tree that must be searched.
The least-cost B&B strategy utilizes a heuristic cost function associated with the nodes of the state space tree, where the set of "live" nodes is maintained as a priority queue with respect to this cost function => the next node to become the E-node is the one most promising to lead quickly to a goal.
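A hedged skeleton of least-cost B&B (is_goal, children, and cost are placeholder callables of mine, standing in for the problem-specific parts):

import heapq, itertools

def least_cost_bb(start, is_goal, children, cost):
    tie = itertools.count()                  # tie-breaker so nodes are never compared
    live = [(cost(start), next(tie), start)] # live nodes as a priority queue
    while live:
        _, _, node = heapq.heappop(live)     # most promising node becomes the E-node
        if is_goal(node):
            return node
        for child in children(node):         # B&B: generate all children at once
            heapq.heappush(live, (cost(child), next(tie), child))
    return None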

18 Least-cost B&B is related to the general heuristic search strategy called A*-search
A*-search can be applied to state space digraphs, not only to state space trees.
A* is widely used in Artificial Intelligence.
Strategies for playing two-person games, e.g., the alpha-beta heuristic:
- look ahead a fixed number of moves
- assign a heuristic value to the positions reached there
- an estimate for the best move is obtained by working back to the current position using the minimax strategy (sketched below).
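A minimal fixed-depth minimax sketch matching the bullets above (moves and heuristic are placeholders of mine; alpha-beta pruning would additionally skip branches that cannot influence the backed-up value):

def minimax(position, depth, maximizing, moves, heuristic):
    ms = moves(position)
    if depth == 0 or not ms:        # look-ahead frontier (or terminal position)
        return heuristic(position)  # heuristic value assigned at the frontier
    values = [minimax(m, depth - 1, not maximizing, moves, heuristic)
              for m in ms]
    return max(values) if maximizing else min(values)  # minimax back-up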

