Presentation on theme: "Decrease and Conquer."— Presentation transcript:

1 Decrease and Conquer

2 Basics A decrease-and-conquer algorithm works as follows:
Establish the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem. Exploit this relationship either top down (recursively) or bottom up (iteratively, without recursion).

3 Decrease & Conquer variations
Decrease by a constant: the size of an instance is reduced by the same constant (usually one) on each iteration of the algorithm.
Decrease by a constant factor: the size of an instance is reduced by the same constant factor (usually two) on each iteration of the algorithm.
Variable size decrease: the size-reduction pattern varies from one iteration to another.

4 Decrease by Constant

5 Insertion Sort Insertion sort is based on the decrease (by one)-and-conquer approach: Provided that the smaller array A[1..n-1] is already sorted, sort the original array A[1..n]: find an appropriate position for the element A[n] among the sorted n-1 elements and insert it there. Right-to-left scan: scan the sorted subarray from right to left until the first element smaller than or equal to A[n] is encountered, then insert A[n] right after that element.

6 Algorithm InsertionSort(A[1..n])
for i ← 2 to n do
    v ← A[i]
    j ← i - 1
    while j > 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j - 1
    A[j + 1] ← v
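The pseudocode above maps directly onto Python; a minimal sketch (function name ours, 0-based indexing instead of the slides' 1-based):

```python
def insertion_sort(a):
    """Sort list a in place using decrease-by-one insertion sort."""
    for i in range(1, len(a)):        # a[0..i-1] is already sorted
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:    # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v                  # insert v into its position
    return a
```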

7 Efficiency Input size: n. Basic operation: key comparison A[j] > v.
The worst case occurs when the input is an array of strictly decreasing values (the inner loop makes i-1 comparisons on iteration i):
W(n) = Σ_{i=2}^{n} (i-1) = n(n-1)/2 ∈ Θ(n²)
Best case (already sorted, one comparison per iteration):
B(n) = Σ_{i=2}^{n} 1 = n-1 ∈ Θ(n)

8 Average Case For a given i there are i slots where x = A[i] can be inserted, and each slot is equally likely to be its destination, so each slot has probability 1/i. The number of comparisons for each slot:

slot   comparisons
i      1
i-1    2
i-2    3
…      …
2      i-1
1      i-1

The count for slot 1 is i-1 (not i) because there the scan stops with the while condition j > 0 false, so the key comparison is not evaluated.

9 The average number of comparisons needed to insert x is
1(1/i) + 2(1/i) + … + (i-1)(1/i) + (i-1)(1/i)
= (1/i)(1 + 2 + … + (i-1)) + (i-1)/i
= (1/i)(i(i-1)/2) + (i-1)/i
= (i-1)/2 + (i-1)/i
= (i+1)/2 - 1/i                                   … (eq. 1)
This is the number of comparisons per for-loop iteration, so
A(n) = Σ_{i=2}^{n} [(i+1)/2 - 1/i] = (1/2) Σ_{i=2}^{n} (i+1) - Σ_{i=2}^{n} 1/i
Σ_{i=2}^{n} (i+1) = (n-1)(3 + (n+1))/2 = (n-1)(n+4)/2   (number of terms × (first term + last term)/2)
Σ_{i=2}^{n} 1/i ≈ ln n
A(n) = (n+4)(n-1)/4 - ln n ≈ n²/4 ∈ Θ(n²)

10 Depth First Search When given a graph, we are often interested in visiting its vertices in some organized way. Depth-first search (DFS) starts at some arbitrary unvisited vertex. When a vertex is visited, it is flagged as visited. At each vertex v, DFS recursively visits an unvisited neighbour of v; if there is no unvisited neighbour, the recursion backs up. It is convenient to use a stack in DFS: push a vertex onto the stack when it is visited for the first time, and pop it when it becomes a dead end (all its neighbours have been visited).

11 DFS
set DFSnumber[v] = -1 for all vertices v
count = 0
for each unvisited vertex v (DFSnumber[v] == -1)
    dfs(v, count)

dfs(v, count):
    DFSnumber[v] = count++
    for each w adjacent to v
        if DFSnumber[w] == -1
            dfs(w, count)
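The pseudocode above can be written as a short Python sketch that records each vertex's DFS preorder number (the graph representation and function name are ours):

```python
def dfs_all(adj):
    """Number the vertices of graph adj (dict: vertex -> list of neighbours)
    in DFS preorder; -1 marks an unvisited vertex, as in the slide."""
    number = {v: -1 for v in adj}
    count = 0

    def dfs(v):
        nonlocal count
        number[v] = count          # assign preorder number on first visit
        count += 1
        for w in adj[v]:
            if number[w] == -1:
                dfs(w)

    for v in adj:                  # restart DFS in every unvisited component
        if number[v] == -1:
            dfs(v)
    return number
```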

12 Stack ordering example (diagram): vertices a–f with the order in which each is pushed onto and popped off the DFS stack.

13 DFS Every vertex is traversed once.
For each vertex, we need to process all its neighbours. Adjacency matrix: Θ(n²) operations. DFS yields two distinct orderings of the vertices: preorder — as vertices are first encountered (pushed onto the stack); postorder — as vertices become dead ends (popped off the stack). Applications: checking connectivity, finding connected components, checking acyclicity, searching the state space of problems for a solution (AI).

14 Breadth First Search Explore the graph by moving across to all the neighbours of the last visited vertex. Similar to a level-by-level tree traversal. Instead of a stack, breadth-first search uses a queue. Applications: same as DFS, but BFS can also find paths from a vertex to all other vertices with the smallest number of edges.

15 BFS(G)
count := 0
mark each vertex with 0
for each vertex v ∈ V do
    if v is marked with 0
        bfs(v)

bfs(v):
    count := count + 1
    mark v with count
    initialize queue with v
    while queue is not empty do
        a := front of queue
        for each vertex w adjacent to a do
            if w is marked with 0
                mark w with count
                add w to the end of the queue
        remove a from the front of the queue
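Note that in this pseudocode every vertex reached from v gets the same count, so count effectively labels connected components. A Python sketch following that behaviour (graph representation and names are ours):

```python
from collections import deque

def bfs_components(adj):
    """Label each vertex of graph adj (dict: vertex -> list of neighbours)
    with its connected-component number via BFS; 0 marks unvisited."""
    mark = {v: 0 for v in adj}
    count = 0
    for v in adj:
        if mark[v] == 0:
            count += 1                 # new component found
            mark[v] = count
            queue = deque([v])
            while queue:
                a = queue.popleft()    # remove vertex from front of queue
                for w in adj[a]:
                    if mark[w] == 0:
                        mark[w] = count
                        queue.append(w)
    return mark
```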

16 BFS example (diagram): the same graph with vertices a–f and the order in which BFS visits them.

17 Efficiency Every vertex is traversed once.
For each vertex, we need to process all its neighbours. Adjacency matrix: Θ(n²) operations. BFS yields a single ordering of the vertices (the order vertices are added to the queue is the same as the order they are removed). Applications: checking connectivity, cycle detection, shortest paths in an unweighted graph.

18 DFS vs BFS Data structures: stack vs. queue
Implementation: recursion vs. explicit queue manipulation Complexity: same

19 Decrease by Constant Factor

20 The Fake Coin Problem The fake coin problem is to detect a single fake coin in a set of coins using a balance scale. Supposing the fake coin is lighter, we can repeatedly weigh one half of the set against the other half and eliminate the heavier half. This halves the number of coins on each iteration, so only ≈ log2 n weighings are needed.
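The halving strategy can be simulated in Python. A sketch under our own assumptions (coins are given as a list of weights, exactly one coin is lighter; when the count is odd, one coin is set aside and identified as fake if the two pans balance):

```python
def find_fake(coins):
    """Return (index_of_fake_coin, number_of_weighings).
    coins: list of weights with exactly one entry lighter than the rest."""
    idx = list(range(len(coins)))          # indices of still-suspect coins
    weighings = 0
    while len(idx) > 1:
        half = len(idx) // 2
        left, right = idx[:half], idx[half:2 * half]
        extra = idx[2 * half:]             # set-aside coin when count is odd
        weighings += 1
        wl = sum(coins[i] for i in left)   # one weighing: pan vs. pan
        wr = sum(coins[i] for i in right)
        if wl < wr:
            idx = left                     # lighter pan contains the fake
        elif wr < wl:
            idx = right
        else:
            idx = extra                    # pans balance: fake was set aside
    return idx[0], weighings
```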

21 Russian Peasant multiplication
To multiply two positive integers n and m by this method:
n · m = m                    if n = 1
n · m = (n/2) · 2m           if n is even
n · m = ⌊n/2⌋ · 2m + m       if n is odd

22 Example: 50 · 65
n     m
50    65
25    130    (n odd: record +130)
12    260
6     520
3     1040   (n odd: record +1040)
1     2080
50 · 65 = 2080 + (130 + 1040) = 3250
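The halving/doubling scheme from the table is a few lines of Python (function name ours):

```python
def russian_peasant(n, m):
    """Multiply positive integers n and m by repeated halving and doubling."""
    total = 0
    while n > 1:
        if n % 2 == 1:     # odd n: record the current m (the "+m" term)
            total += m
        n //= 2            # halve n ...
        m *= 2             # ... and double m
    return total + m       # n == 1 contributes the final m
```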

23 Decrease by Variable Size

24 Interpolation Search Interpolation search (sometimes referred to as extrapolation search) is an algorithm for searching for a given key in an indexed array that has been ordered by the values of the key — e.g., looking up a name in a telephone directory or a word in a dictionary.

25 Interpolation Search (Contd…)
Works well for uniformly distributed data. The calculation of mid is modified to
mid = low + ⌊(x - s[low]) · (high - low) / (s[high] - s[low])⌋
Average case: O(lg lg n). Worst case: O(n).
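A runnable Python sketch of the search using that mid formula (names are ours; the guard against s[high] == s[low] avoids division by zero):

```python
def interpolation_search(s, x):
    """Return the index of x in sorted list s, or -1 if absent.
    Probes where x 'should' be for uniformly distributed keys."""
    low, high = 0, len(s) - 1
    while low <= high and s[low] <= x <= s[high]:
        if s[high] == s[low]:
            mid = low                       # all remaining keys are equal
        else:
            # interpolate the likely position of x between s[low] and s[high]
            mid = low + (x - s[low]) * (high - low) // (s[high] - s[low])
        if s[mid] == x:
            return mid
        elif s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```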

26 Robust Interpolation Search
gap = ⌊(high - low)^(1/2)⌋
mid = low + ⌊(x - s[low]) · (high - low) / (s[high] - s[low])⌋
mid = min(high - gap, max(mid, low + gap))
The index used for the comparison is thus at least gap positions away from both low and high.

27 Finding kth Smallest Element
function selection(low, high, k)
    if low == high
        selection = s[low]
    else
        partition(low, high, pp)
        if k == pp
            selection = s[pp]
        else if k < pp
            selection = selection(low, pp - 1, k)
        else
            selection = selection(pp + 1, high, k)

28 procedure partition(low, high, pivotpoint)
pivotitem = s[low]
j = low
for i = low + 1 to high
    if s[i] < pivotitem
        j = j + 1
        exchange s[i] and s[j]
pivotpoint = j
exchange s[low] and s[pivotpoint]

29 Worst case w(n) = w(n-1) + n-1
= w(n-2) + n-2 + n-1
= w(n-3) + n-3 + n-2 + n-1
…
= 1 + 2 + … + (n-1) = n(n-1)/2 ∈ Θ(n²)

30 Average Case We assume that all inputs are equally likely, i.e. each pivot point pp and each k are equally probable.

Input size of recursive call    Number of outcomes yielding that size
0                               n        (k = pp)
1                               2        (pp = 2 with k = 1, or pp = n-1 with k = n)
2                               4        (pp = 3 with k = 1, 2, or pp = n-2 with k = n-1, n)
3                               6
i                               2i
n-1                             2(n-1)

31 A(n) = (sum over outcomes of A(input size of recursive call)) / (number of outcomes) + cost of partition
= [n·A(0) + 2(A(1) + 2A(2) + 3A(3) + … + (n-1)A(n-1))] / [n + 2(1 + 2 + … + (n-1))] + (n-1)
With A(0) = 0 and n + 2·n(n-1)/2 = n²:
A(n) = 2(A(1) + 2A(2) + 3A(3) + … + (n-1)A(n-1)) / n² + (n-1)
n²A(n) = 2(A(1) + 2A(2) + … + (n-1)A(n-1)) + n²(n-1)               … (eq. 1)
Replacing n with n-1:
(n-1)²A(n-1) = 2(A(1) + 2A(2) + … + (n-2)A(n-2)) + (n-1)²(n-2)     … (eq. 2)
Subtracting eq. 2 from eq. 1:
n²A(n) - (n-1)²A(n-1) = 2(n-1)A(n-1) + (n-1)(n² - (n-1)(n-2))
n²A(n) = (n² - 2n + 1 + 2n - 2)A(n-1) + (n-1)(n² - n² + 3n - 2)
       = (n² - 1)A(n-1) + (n-1)(3n - 2)
A(n) = (n² - 1)A(n-1)/n² + (n-1)(3n - 2)/n²
For large n: A(n) ≈ A(n-1) + 3 ≈ A(n-2) + 3 + 3 ≈ A(n-i) + 3i
Taking A(0) = 0 and i = n: A(n) ≈ 3n ∈ Θ(n)

32 Selection Through Median
Algorithm select(n, S, k)
    select = selection(n, S, 1, n, k)

Algorithm selection(n, S, low, high, k)
    if high == low
        selection = S[low]
    else
        partition(n, S, low, high, pivotpoint)
        if k == pivotpoint
            selection = S[pivotpoint]
        else if k < pivotpoint
            selection = selection(n, S, low, pivotpoint - 1, k)
        else
            selection = selection(n, S, pivotpoint + 1, high, k)

33 Algorithm partition(n, S, low, high, pivotpoint)
arraysize = high - low + 1
r = ⌈arraysize / 5⌉
for i = 1 to r
    first = low + 5i - 5
    last = min(low + 5i - 1, high)
    T[i] = median of S[first] through S[last]
pivotitem = select(r, T, ⌈(r+1)/2⌉)
j = low
for i = low to high
    if S[i] == pivotitem
        exchange S[i] and S[j]
        mark = j
        j = j + 1
    else if S[i] < pivotitem
        exchange S[i] and S[j]
        j = j + 1
pivotpoint = j - 1
exchange S[mark] and S[pivotpoint]

34 Arrange the n elements into n/5 groups of 5 elements each, ignoring the at most four extra elements. (Constant time to compute bucket, linear time to put into bucket) Find the median of each group. This gives a list M of n/5 medians. (time Ө(n) if we use the same median selection algorithm as this one or hard-code it) Find the median of M. Return this as the partition element. (Call partition selection recursively using M as the input set)
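The three steps above can be sketched in Python. This simplified version, under our own assumptions, keeps the up-to-four extra elements as a smaller final group instead of ignoring them, and partitions by building sublists rather than in place:

```python
def mom_select(s, k):
    """Return the k-th smallest element (0-based) of list s,
    using a median-of-medians pivot for worst-case linear time."""
    if len(s) <= 5:
        return sorted(s)[k]              # small base case: sort directly
    # Step 1-2: medians of groups of 5 (last group may be smaller)
    medians = [sorted(s[i:i + 5])[len(s[i:i + 5]) // 2]
               for i in range(0, len(s), 5)]
    # Step 3: recursively take the median of the medians as the pivot
    pivot = mom_select(medians, len(medians) // 2)
    lows = [x for x in s if x < pivot]
    highs = [x for x in s if x > pivot]
    pivots = len(s) - len(lows) - len(highs)   # copies equal to the pivot
    if k < len(lows):
        return mom_select(lows, k)
    elif k < len(lows) + pivots:
        return pivot
    else:
        return mom_select(highs, k - len(lows) - pivots)
```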

35 Number of comparisons required to partition the array: n (i.e., high - low + 1).
Number of comparisons required to find the median of 5 elements: 6 per group — 4 comparisons to find the median element and 2 comparisons to put its left and right neighbours in order — so finding all the group medians takes 6n/5 comparisons.

36 Recursive call to selection from partition: size n/5.
Recursive call to selection from selection: size at most 7n/10. If x is less than the median of medians, half of the medians — ½(n/5) of them — are greater than x; 3 elements in each such group (including the median element) are greater than x, so 3n/10 elements are discarded and 7n/10 remain.

37 Thus the recurrence is
w(n) = w(n/5) + w(7n/10) + 6n/5 + n
w(n) is linear in n: assume w(n) ≤ cn. Then
w(n/5) + w(7n/10) + 6n/5 + n ≤ cn
9cn/10 + 11n/5 ≤ cn
22n ≤ 10cn - 9cn
22n ≤ cn
22 ≤ c
so w(n) ≤ 22n. (The same bound can be read off the recurrence tree.)

38 Euclid’s Algorithm Euclid’s algorithm is based on repeated application of the equality
gcd(m, n) = gcd(n, m mod n)
The size of an instance, measured by the second number, decreases with each iteration.
Example: gcd(80, 44) = gcd(44, 36) = gcd(36, 8) = gcd(8, 4) = gcd(4, 0) = 4
One can prove that the size decreases at least by half after two consecutive iterations.

39 Assume m > n. On each iteration m is replaced by m mod n; there are two cases:
Case 1: if n ≤ m/2, then m mod n < n ≤ m/2
Case 2: if n > m/2, then m mod n = m - n < m/2
In both cases m mod n < m/2, so the size is at least halved every second iteration and T(n) ∈ O(lg n).
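The repeated application of gcd(m, n) = gcd(n, m mod n) is a two-line loop in Python:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m
```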

