1
**Analysis of Algorithms**

The Greedy Approach

2
**Greedy Algorithms**

Greedy algorithms work in stages, considering one input at a time. At each stage a decision is made as to whether a particular input belongs in an optimal solution. Inputs are considered in an order determined by some selection procedure. If including an input in the partially constructed solution would make that solution infeasible, the input is not added to the partial solution.

3
**Greedy Algorithms**

A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. At each decision point, the choice that seems best at the moment is taken. This heuristic strategy does not always produce an optimal solution. How can one tell whether a greedy algorithm will solve a particular optimization problem? There is no way in general, but there are some key ingredients exhibited by most problems that lend themselves to a greedy strategy.

4
**Greedy Algorithm Example**

Sales clerks often encounter the problem of giving change for a purchase. Customers usually don’t want to receive a lot of coins, so the clerk’s goal is not only to give the correct change, but to do so with as few coins as possible. A solution to an instance of the change problem is a set of coins that adds up to the required amount. An optimal solution is such a set of minimum size.

5
**Greedy Algorithm Example**

A greedy approach to the problem could proceed as follows. Initially there are no coins in the change. The clerk starts by looking for the largest coin (in value) he can find; that is, his criterion for deciding which coin is best (locally optimal) is the value of the coin. This is called the selection procedure in a greedy algorithm.

6
**Greedy Algorithm Example**

Next he sees if adding this coin to the change would make the total value of the change exceed the amount required. This is called the feasibility check in a greedy algorithm. If adding the coin would not make the change exceed the amount required, he adds the coin to the change. Next he checks to see if the value of the change is now equal to the amount required. This is the solution check in the greedy algorithm.

7
**Greedy Algorithm Example**

If they are not equal, he gets another coin using his selection procedure and repeats the process. He does this until the value of the change equals the amount required or he runs out of coins. In the latter case, he is not able to return the exact amount required.

8
**Greedy Algorithm Example**

while there are more coins and the instance is not solved do
    grab the largest remaining coin                                       // selection procedure
    if adding the coin makes the change exceed the amount required then   // feasibility check
        reject the coin
    else
        add the coin to the change
    if the total value of the change equals the amount required then      // solution check
        the instance is solved
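The loop above translates directly into code. Here is a minimal Python sketch, assuming the clerk’s coin supply is given as parallel lists of denominations and counts (function and parameter names are illustrative, not from the slides):

```python
def make_change(amount, coin_values, counts):
    """Greedy change-making: repeatedly take the largest usable coin.

    amount      -- change still owed
    coin_values -- available denominations
    counts      -- how many coins of each denomination the clerk has
    """
    change = []
    remaining = dict(zip(coin_values, counts))
    # Selection procedure: consider coins from largest to smallest value.
    for coin in sorted(coin_values, reverse=True):
        # Feasibility check: only take the coin while it does not
        # make the change exceed the amount required.
        while remaining[coin] > 0 and coin <= amount:
            change.append(coin)
            remaining[coin] -= 1
            amount -= coin
        if amount == 0:  # solution check
            break
    # If coins ran out before reaching the amount, no exact change exists.
    return change if amount == 0 else None
```

For example, with US denominations `[25, 10, 5, 1]` and four coins of each kind, changing 67 cents yields two quarters, one dime, one nickel, and two pennies.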

9
**Greedy Algorithm Example**

In the feasibility check, when we determine that adding a coin would make the change exceed the amount required, we learn that the set obtained by adding that coin cannot be completed to give a solution to the instance. Therefore that set is infeasible and is rejected.

10
**Greedy Algorithms: Greedy Choice Property**

A globally optimal solution can be arrived at by making a locally optimal (greedy) choice. In dynamic programming, we also make a choice at each step, but that choice may depend on the solutions to subproblems.

11
**Greedy Algorithms: In a Greedy Algorithm**

We make whatever choice seems best at the moment and then solve the subproblems arising after the choice is made. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to subproblems. A greedy algorithm starts with a locally optimal choice and continues making locally optimal choices until a solution is found.

12
**Greedy Algorithms: Optimal Substructure**

An optimal solution to the problem contains within it optimal solutions to subproblems. This is a key ingredient for assessing the applicability of dynamic programming as well as greedy algorithms.

13
**Minimum Spanning Tree**

A spanning tree for a connected, undirected graph G = (V, E) is a subgraph of G that is an undirected tree and contains all the vertices of G. In a weighted graph G = (V, E, W), the weight of a subgraph is the sum of the weights of the edges in the subgraph. A minimum spanning tree (MST) for a weighted graph is a spanning tree with minimum weight.

14
**Minimum Spanning Tree: Example**

[Figure: a weighted graph on vertices A, B, C, D with edge weights 1.0, 2.0, 3.0, and 4.0, together with its three possible spanning trees. Two of the spanning trees have weight 6 and are minimum spanning trees; the third has weight 7.]

15
**Minimum Spanning Tree**

Minimum spanning trees are useful when we want to find the cheapest way to connect a set of cities by roads, a set of electrical terminals or computers by wires or telephone lines, and so on.

16
**Prim’s Algorithm for Minimum Spanning Tree**

Prim’s algorithm begins by selecting an arbitrary starting vertex, and then “branches out” from the part of the tree constructed so far by choosing a new vertex and edge at each iteration. The new edge connects the new vertex to the previous tree. During the course of the algorithm, the vertices may be thought of as divided into three disjoint categories: tree vertices, which are in the tree constructed so far; fringe vertices, which are not in the tree but are adjacent to some vertex in the tree; and unseen vertices, all others.

17
**Prim’s Algorithm for Minimum Spanning Tree**

The key step in the algorithm is the selection of a vertex from the fringe and an incident edge. Prim’s algorithm always chooses an edge of minimum weight from a tree vertex to a fringe vertex. The general algorithm structure is:

18
**Prim’s Algorithm for Minimum Spanning Tree**

primMST(G, n)
    initialize all vertices as unseen
    select an arbitrary vertex s to start the tree; reclassify it as tree
    reclassify all vertices adjacent to s as fringe
    while there are fringe vertices do
        select an edge of minimum weight between a tree vertex t and a fringe vertex v
        reclassify v as tree; add edge tv to the tree
        reclassify all unseen vertices adjacent to v as fringe
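As an illustration of the tree/fringe/unseen bookkeeping, here is a minimal Python sketch; the adjacency-dict graph representation and all names are assumptions of this sketch, not from the slides:

```python
def prim_mst(graph, start):
    """graph: dict mapping vertex -> {neighbor: edge weight}.

    Vertices absent from both `tree` and `fringe` are the unseen ones.
    """
    tree = {start}
    # Fringe: vertex -> (best known edge weight into the tree, tree endpoint)
    fringe = {v: (w, start) for v, w in graph[start].items()}
    mst_edges = []
    while fringe:
        # Select a fringe vertex reachable by a minimum-weight edge.
        v = min(fringe, key=lambda u: fringe[u][0])
        weight, t = fringe.pop(v)
        tree.add(v)                       # reclassify v as tree
        mst_edges.append((t, v, weight))  # add edge tv to the tree
        # Reclassify unseen neighbors of v as fringe, and update fringe
        # vertices that are now cheaper to reach through v.
        for u, w in graph[v].items():
            if u not in tree and (u not in fringe or w < fringe[u][0]):
                fringe[u] = (w, v)
    return mst_edges
```

The `min` over the fringe is the selection step; a priority queue would make it faster, as discussed in the analysis slide below.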

19
**Prim’s Algorithm for Minimum Spanning Tree**

[Figure: the tree and fringe after the starting vertex A is selected. The tree so far contains only A; its neighbors B, G, and F are fringe vertices, reachable by edges of weight 2, 3, and 7.]

20
**Prim’s Algorithm for Minimum Spanning Tree**

[Figure: after selecting the minimum-weight edge and vertex, B joins the tree; the fringe now contains G, F, and C. Edge BG is not shown because AG is a better choice to reach G.]

21
**Prim’s Algorithm for Minimum Spanning Tree**

[Figure: after selecting edge AG, G joins the tree, and the unseen vertices I and H adjacent to G join the fringe. Edge GB is not shown because vertex B is already included in the tree.]

22
**Prim’s Algorithm for Minimum Spanning Tree**

[Figure: the final minimum spanning tree produced by Prim’s algorithm on the example graph, spanning vertices A through I.]

23
Prim’s Algorithm

Algorithm prim(G)
    F = ∅
    for i = 2 to n
        nearest[i] = 1; distance[i] = W[1, i]
    repeat n − 1 times
        min = ∞
        for i = 2 to n
            if 0 ≤ distance[i] < min
                min = distance[i]; near = i
        e = the edge connecting the vertices indexed by near and nearest[near]
        add e to F
        distance[near] = −1
        for i = 2 to n
            if W[i, near] < distance[i]
                distance[i] = W[i, near]; nearest[i] = near
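The adjacency-matrix pseudocode above can be sketched in Python; this sketch assumes `W` is an n × n matrix with `W[i][j]` holding the edge weight (or infinity for a missing edge) and uses 0-indexed vertices:

```python
import math

def prim(W):
    """Return the MST edges of a connected graph given as a weight matrix."""
    n = len(W)
    nearest = [0] * n                       # nearest[i]: tree vertex closest to i
    distance = [W[0][i] for i in range(n)]  # distance[i]: weight of that edge
    distance[0] = -1                        # -1 marks vertices already in the tree
    F = []                                  # edges of the minimum spanning tree
    for _ in range(n - 1):
        # Find the non-tree vertex nearest to the tree so far.
        best = math.inf
        for i in range(n):
            if 0 <= distance[i] < best:
                best = distance[i]
                near = i
        F.append((nearest[near], near, best))
        distance[near] = -1                 # move `near` into the tree
        # Update distances through the newly added vertex.
        for i in range(n):
            if W[i][near] < distance[i]:
                distance[i] = W[i][near]
                nearest[i] = near
    return F
```

The two inner loops over the vertices are what give the Θ(n²) bound derived in the analysis below.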

24
**Analysis**

Using an adjacency matrix, the running time is (n − 1) · 2(n − 1) ∈ Θ(n²): each of the n − 1 iterations scans the vertices twice, once to find the minimum and once to update distances. The bound changes if the data structure is changed. Implemented via a min-heap, the complexity is (V − 1 + E) log(V) = O(E log(V)): the algorithm performs V − 1 deletions of the minimum element and, for each edge, possibly a change of element priority, in a heap of size not greater than V, and each such operation takes O(log V) time.

25
**Kruskal's Algorithm**

An edge-based algorithm: add the edges one at a time, in increasing weight order. The algorithm maintains A, a forest of trees; an edge is accepted if it connects vertices of distinct trees. We need a data structure that maintains a partition, i.e., a collection S of disjoint sets:

MakeSet(S, x): S ← S ∪ {{x}}
Union(Si, Sj): S ← S − {Si, Sj} ∪ {Si ∪ Sj}
FindSet(S, x): returns the unique Si ∈ S such that x ∈ Si

26
Kruskal's Algorithm

The algorithm adds the cheapest edge that connects two trees of the forest.

MST-Kruskal(G, w)
    A ← ∅
    for each vertex v ∈ V[G] do
        Make-Set(v)
    sort the edges of E by non-decreasing weight w
    for each edge (u, v) ∈ E, in order of non-decreasing weight do
        if Find-Set(u) ≠ Find-Set(v) then
            A ← A ∪ {(u, v)}
            Union(u, v)
    return A
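Here is a minimal Python sketch of MST-Kruskal with a simple disjoint-set structure; the edge-list representation and all names are illustrative assumptions of this sketch:

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples; returns the MST edge list."""
    parent = {v: v for v in vertices}   # Make-Set for every vertex

    def find(x):                        # Find-Set with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    A = []
    for w, u, v in sorted(edges):       # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                    # u and v lie in distinct trees
            A.append((u, v, w))
            parent[ru] = rv             # Union of the two trees
    return A
```

Putting the weight first in each tuple lets the plain `sorted` call order the edges; a production implementation would also use union by rank, which tightens the disjoint-set cost discussed in the running-time slides below.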

27
Kruskal Example

28
Kruskal Example (2)

29
Kruskal Example (3)

30
Kruskal Example (4)

31
Kruskal Running Time

A detailed analysis shows O(V) + O(E log(E)) + O(E log(V)). We need O(V) operations to build the initial forest of |V| trees, each containing one node. The edges are stored in a priority queue and the smallest edge is retrieved each time, hence we need O(E log(E)) operations to process the edges. Finally, the disjoint-set operations are implemented on trees with V nodes, giving O(E log(V)), since in the worst case a comparison is performed for each edge.

32
Disregarding the lower-order term O(V), we get O(E(log(V) + log(E))). In the worst case E = O(V²), hence log(E) = O(log(V²)) = O(2 log(V)) = O(log(V)). Thus we get complexity O(E log(V)). On the other hand, V = O(E) for a connected graph, so the bound can equivalently be written as O(E log(E)).

33
Prim’s vs. Kruskal

For sparse graphs, Kruskal's algorithm is better, since it is guided by the edges. For dense graphs, Prim's algorithm is better, since the process is limited by the number of vertices processed.
