
1 Algorithms Course, Dr. Aref Rashad, February 2013. Part 4: Search Algorithms

2 Search Algorithms. A search algorithm is a method of locating a specific item of information in a larger collection of data. Why search? Everyday life: we are always looking for something (yellow pages, universities, hospitals, etc.). The World Wide Web: different searching mechanisms. Databases: used to search for a record.

3 Topics:
Sequential search: basic sequential search
Sorted array search: binary search
Hashing: hash functions
Recursive structure search: binary search tree
Multidimensional search

4 Linear Search. This is a very simple algorithm. It uses a loop to step sequentially through an array, starting with the first element. It compares each element with the value being searched for (the key) and stops when that value is found or the end of the array is reached.

5 Algorithm Pseudocode:
    found = false
    position = -1
    index = 0
    while index < number of elements and found = false
        if list[index] is equal to search value
            found = true
            position = index
        end if
        index = index + 1
    end while
    return position
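A runnable Python version of this pseudocode (a sketch; the function name and sample list are illustrative):

    def linear_search(items, key):
        """Return the index of key in items, or -1 if absent."""
        for index, value in enumerate(items):
            if value == key:
                return index
        return -1

    print(linear_search([9, 4, 7, 3], 7))  # prints 2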

6 Linear Search Example

7 Linear Search Tradeoffs. Benefits: the algorithm is easy to understand, and the array can be in any order. Disadvantages: inefficient (slow). For an array of N elements, it examines N/2 elements on average when the value is in the array, and all N elements when it is not.

8 Efficiency of a Sequential Search of an Array. In the best case, the desired item is the first one in the array: you make only one comparison, so the search is O(1). In the worst case, you search the entire array: the desired item is found at the end of the array or not at all. In either event you make n comparisons for an array of n elements, so sequential search in the worst case is O(n). In the average case, you look at about half of the elements in the array; that is O(n/2), which is just O(n).

9 Binary Search. Requires the array elements to be in sorted order. 1. Divide the array into three sections: the middle element, the elements on one side of it, and the elements on the other side. 2. If the middle element is the correct value, done. Otherwise, repeat step 1 using only the half of the array that may contain the correct value. 3. Continue steps 1 and 2 until either the value is found or there are no more elements to examine.
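A minimal Python sketch of this procedure (iterative, assuming an ascending sorted list):

    def binary_search(items, key):
        """Return the index of key in sorted items, or -1 if absent."""
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2        # middle element
            if items[mid] == key:
                return mid
            elif items[mid] < key:
                low = mid + 1              # discard the lower half
            else:
                high = mid - 1             # discard the upper half
        return -1

    data = [-1, 5, 6, 18, 19, 25, 46, 78, 102, 114]
    print(binary_search(data, 6))    # 2 (found)
    print(binary_search(data, 103))  # -1 (absent)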

10 Binary Search Example
Example 1: find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1: the middle element is 19 > 6, so search the left half.
Step 2: the middle element is 5 < 6, so search the right half.
Step 3: the middle element is 6 = 6; found.
Example 2: find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1: the middle element is 19 < 103. Step 2: the middle element is 78 < 103. Step 3: the middle element is 102 < 103. Step 4: the middle element is 114 > 103. Step 5: no elements remain, so the searched value is absent.

11 How a Binary Search Works. Always look at the center value; each step lets you discard half of the remaining list.

12 Complexity Analysis. The huge advantage of this algorithm is that its worst-case complexity depends on the array size logarithmically. In practice this means the algorithm does at most log2(n) iterations, which is a very small number even for big arrays. On every step the size of the searched part is halved; the algorithm stops when there are no elements left to search.

13 Binary Search Tradeoffs. Benefits: much more efficient than linear search; for an array of n elements, it performs at most log2(n) comparisons. Disadvantages: requires that the array elements be sorted.

14 Linear vs Binary Search. Compared to linear search, whose worst-case behavior is n iterations, binary search is substantially faster as n grows large. For example, searching a list of one million items takes as many as one million iterations with linear search, but never more than twenty iterations with binary search. However, a binary search can only be performed if the list is in sorted order.
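The twenty-iteration figure follows from log2 of one million being just under 20; a quick check in Python:

    import math
    print(math.ceil(math.log2(1_000_000)))  # 20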

15 Hashing. An important and widely useful technique for implementing dictionaries: constant time per operation on average, but worst-case time proportional to the size of the set for each operation.

16 Basic Idea. Use a hash function to map keys into positions in a hash table. Ideally, if element e has key k and h is the hash function, then e is stored in position h(k) of the table. To search for e, compute h(k) to locate its position; if no element is there, the dictionary does not contain e.

17 Example: Student Records
The hash function maps IDs into distinct table positions 0-1000.
Hash function: h(k) = k - 951000

Key     Name  Age  Grade
951000  xxxx  25   A
951100  yyyy  30   B
951200  ...

To search for 951100: h(951100) = 951100 - 951000 = 100, so check hash table bucket 100.
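A minimal sketch of this direct-address scheme in Python (the record fields and sample values are illustrative):

    TABLE_SIZE = 1001                 # positions 0..1000
    table = [None] * TABLE_SIZE

    def h(k):
        # Maps IDs 951000..952000 onto buckets 0..1000.
        return k - 951000

    def insert(record):
        table[h(record["key"])] = record

    def search(key):
        return table[h(key)]          # None means "not present"

    insert({"key": 951100, "name": "yyyy", "age": 30, "grade": "B"})
    print(search(951100))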

18 Analysis (Ideal Case, Unrealistic). O(b) time to initialize the hash table (b = number of positions, or buckets, in the hash table). O(1) time to perform insert, remove, and search. This works for implementing dictionaries, but many applications have key ranges that are too large to allow a 1-1 mapping between buckets and keys. Example: suppose keys can take values from 0 to 65,535 (2-byte unsigned integers), but we expect about 1,000 records at any given time. It is impractical to use a hash table with 65,536 slots!

19 Hash Functions. If the key range is too large, use a hash table with fewer buckets and a hash function that maps multiple keys to the same bucket: h(k1) = β = h(k2) means k1 and k2 collide at slot β. A popular hash function is hashing by division: h(k) = k % D, where D is the number of buckets in the hash table (% is the mod operator: the remainder of division). Example: for a hash table with 11 buckets, h(k) = k % 11, so 80 maps to 3 (80 % 11 = 3), 40 to 7, 65 to 10, and 58 to 3: a collision!
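The division example, checked in Python:

    def h(k, D=11):
        # Hashing by division: the remainder of k divided by D.
        return k % D

    for key in (80, 40, 65, 58):
        print(key, "->", h(key))
    # 80 -> 3, 40 -> 7, 65 -> 10, 58 -> 3 (collision with 80)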

20 Hashing. [Figure: a universe of keys U containing the actual keys K = {k1, ..., k5}, hashed into a table of slots 0..m-1; h(k2) = h(k5), a collision.]

21 Collision Resolution Policies. Two classes: open hashing (separate chaining) and closed hashing (open addressing). The difference has to do with whether collisions are stored outside the table (open hashing) or whether a collision results in storing the record at another slot inside the table (closed hashing).

22 Methods of Resolution
Chaining (open hashing): store all elements that hash to the same slot in a linked list, and store a pointer to the head of that linked list in the hash table slot.
Open addressing (closed hashing): all elements are stored in the hash table itself; when collisions occur, use a systematic (consistent) procedure to store elements in free slots of the table.
[Figure: a table with slots 0..m-1 holding keys k1..k8.]

23 Open Hashing. Each bucket in the hash table is the head of a linked list. All elements that hash to a particular bucket are placed on that bucket's linked list. Records within a bucket can be ordered by order of insertion or by key value.
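A minimal chained (open) hash table in Python, using built-in lists as the per-bucket chains in place of hand-rolled linked lists (the class and method names are illustrative):

    class ChainedHashTable:
        def __init__(self, num_buckets=11):
            self.buckets = [[] for _ in range(num_buckets)]

        def _hash(self, key):
            return key % len(self.buckets)   # hashing by division

        def insert(self, key):
            self.buckets[self._hash(key)].append(key)

        def search(self, key):
            return key in self.buckets[self._hash(key)]

        def remove(self, key):
            bucket = self.buckets[self._hash(key)]
            if key in bucket:
                bucket.remove(key)

    t = ChainedHashTable()
    for k in (80, 40, 65, 58):   # 80 and 58 share bucket 3
        t.insert(k)
    print(t.search(58))  # True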

24 Collision Resolution by Chaining. [Figure: keys k1..k8 from universe U, with h(k1) = h(k4), h(k2) = h(k5) = h(k6), and h(k3) = h(k7); the colliding keys will be chained in their slots.]

25 Collision Resolution by Chaining. [Figure: the same table with each slot holding a linked list of the keys that hash to it.]

26 Open Hashing: Analysis. Open hashing is most appropriate when the hash table is kept in main memory, implemented with a standard in-memory linked list. We hope the buckets are roughly equal in size, so that the lists stay short. If there are n elements in the set, then each bucket holds roughly n/D elements, where D is the number of buckets in the hash table. If we can estimate n and choose D to be roughly as large, then the average bucket has only one or two members.

27 Open Hashing: Analysis. Average time per operation: with D buckets and n elements, there are n/D elements per bucket on average, so the insert, search, and remove operations each take O(1 + n/D) time. If we can choose D to be about n, this is constant time.

28 Closed Hashing. To search for key k: examine slot h(k); examining a slot is known as a probe. If slot h(k) contains key k, the search is successful. If the slot contains NIL, the search is unsuccessful. There is a third possibility: slot h(k) contains a key that is not k. In that case, compute the index of some other slot, based on k and on which probe we are on, and keep probing until we either find key k or find a slot holding NIL. Advantage: avoids pointers, so the same memory can hold a larger table.

29 Closed Hashing. Associated with closed hashing is a rehash strategy: if we try to place x in bucket h(x) and find it occupied, we look for an alternative location h1(x), h2(x), etc., trying each in order; if none is empty, the table is full. In general, the collision resolution strategy is to generate a sequence of hash table slots (the probe sequence) that can hold the record, and to test each slot until an empty one is found (probing).

30 Computing Probe Sequences
Common rehash strategies: linear probing, quadratic probing, double hashing.
The simplest rehash strategy is called linear probing:
hi(x) = (h(x) + i) % D
so h1(d) = (h(d)+1) % D, h2(d) = (h(d)+2) % D, h3(d) = (h(d)+3) % D, and so on. A sketch of insertion under this strategy follows.
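A Python sketch of insertion with this linear rehash strategy (the function name is illustrative; None plays the role of an empty slot):

    def insert_linear(table, key, h):
        """Insert key at the first empty slot along h(key), h(key)+1, ..."""
        D = len(table)
        for i in range(D):
            j = (h(key) + i) % D     # the i-th probe
            if table[j] is None:
                table[j] = key
                return j
        raise RuntimeError("table is full")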

31 Example: Linear (Closed) Hashing
D = 8; keys a, b, c, d have hash values h(a) = 3, h(b) = 0, h(c) = 4, h(d) = 3.
After inserting a, b, and c, slot 0 holds b, slot 3 holds a, and slot 4 holds c.
Where do we insert d? Slot 3 is already filled. The probe sequence using linear probing is:
h1(d) = (h(d)+1) % 8 = 4 % 8 = 4 (filled by c)
h2(d) = (h(d)+2) % 8 = 5 % 8 = 5 (empty: d goes here)
h3(d) = (h(d)+3) % 8 = 6 % 8 = 6, etc., then 7, 0, 1, 2: the sequence wraps around the beginning of the table.

32 Pseudocode for Search
Hash-Search(T, k)
1. i ← 0
2. repeat j ← h(k, i)
3.     if T[j] = k
4.         then return j
5.     i ← i + 1
6. until T[j] = NIL or i = m
7. return NIL
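A direct Python rendering of this pseudocode, where None stands for NIL and h(k, i) yields the i-th probe slot (a sketch, with linear probing supplied as an example probe function):

    def hash_search(T, k, h):
        """Return the slot holding key k, or None if k is absent."""
        m = len(T)
        for i in range(m):
            j = h(k, i)
            if T[j] == k:
                return j             # successful search
            if T[j] is None:
                return None          # hit an empty slot: unsuccessful
        return None                  # probed all m slots

    def linear_probe(k, i, m=8):
        return (k + i) % m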

33 Example: Linear Probing
h'(x) = x mod 13
h(x, i) = (h'(x) + i) mod 13
Insert keys 18, 41, 22, 44, 59, 32, 31, 73, in this order. The resulting table:

Slot: 0   1   2   3   4   5   6   7   8   9   10  11  12
Key:  -   -   41  -   -   18  44  59  32  22  31  73  -

(44 collides with 18 at slot 5 and probes to 6; 32 probes from 6 through 7 to 8; 31 probes from 5 all the way to 10; 73 probes from 8 to 11.)
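The final table above can be reproduced in a few lines of Python:

    T = [None] * 13
    for x in (18, 41, 22, 44, 59, 32, 31, 73):
        i = 0
        while T[(x + i) % 13] is not None:   # linear probing from h'(x)
            i += 1
        T[(x + i) % 13] = x
    print(T)
    # [None, None, 41, None, None, 18, 44, 59, 32, 22, 31, 73, None]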

34 Example: Insert 1052
h(k) = k % 11
The table already holds: slot 0: 1001, slot 1: 9537, slot 2: 3016, slot 7: 9874, slot 8: 2009, slot 9: 9875.
h(1052) = 1052 % 11 = 7 (occupied)
h1(1052) = (7+1) % 11 = 8 (occupied)
h2(1052) = (7+2) % 11 = 9 (occupied)
h3(1052) = (7+3) % 11 = 10 (empty: 1052 goes in slot 10)
Note the clustering: if the next element has home bucket 0, 1, or 2, it goes to bucket 3, so only a record with home position 3 is guaranteed to stay there. Only records hashing to 4 will end up in 4 (probability 1/11); the same holds for 5 and 6.

35 Linear Probing
h(k, i) = (h(k) + i) mod m, where k is the key, i is the probe number, and h is an auxiliary hash function.
The initial probe determines the entire probe sequence: T[h(k)], T[h(k)+1], ..., T[m-1], T[0], T[1], ..., T[h(k)-1]. Hence, only m distinct probe sequences are possible.
Linear probing suffers from primary clustering: long runs of occupied slots build up, and long runs tend to get longer, since an empty slot preceded by i full slots gets filled next with probability (i+1)/m. Hence, average search and insertion times increase.

36 Quadratic Probing
h(k, i) = (h(k) + c1 i + c2 i^2) mod m, where k is the key, i is the probe number, h is an auxiliary hash function, and c1, c2 are constants with c1 ≠ c2.
The initial probe position is T[h(k)]; later probe positions are offset by amounts that depend on a quadratic function of the probe number i.
Must constrain c1, c2, and m to ensure that we get a full permutation of 0, 1, ..., m-1.
Can suffer from secondary clustering: if two keys have the same initial probe position, then their probe sequences are the same.

37 Double Hashing
h(k, i) = (h1(k) + i h2(k)) mod m, with two auxiliary hash functions: h1 gives the initial probe and h2 gives the step for the remaining probes.
h2(k) must be relatively prime to m, so that the probe sequence is a full permutation of 0, 1, ..., m-1. Either choose m to be a power of 2 and have h2(k) always return an odd number, or let m be prime and have 1 < h2(k) < m.
This gives Θ(m^2) different probe sequences, one for each possible combination of h1(k) and h2(k): close to the ideal of uniform hashing.
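A sketch of a double-hashing probe sequence in Python, using the common textbook choices h1(k) = k mod m and h2(k) = 1 + (k mod (m - 1)) for prime m (these auxiliary functions are an assumption, not taken from the slides):

    def double_hash_probes(k, m=11):
        # m prime; h2 falls in 1..m-1, hence relatively prime to m.
        h1 = k % m
        h2 = 1 + (k % (m - 1))
        return [(h1 + i * h2) % m for i in range(m)]

    print(double_hash_probes(1052))
    # [7, 10, 2, 5, 8, 0, 3, 6, 9, 1, 4], a full permutation of 0..10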

38 Performance Analysis: Worst Case. Initialization: O(b), where b is the number of buckets. Insert and search: O(n), where n is the number of elements in the table; this happens when all n key values have the same home bucket. In the worst case, hashing is no better than a linear list for maintaining a dictionary!

39 Performance Analysis: Average Case
Distinguish between successful and unsuccessful searches: a delete is a successful search for the record to be deleted, and an insert is an unsuccessful search along its probe sequence.
The expected cost of hashing is a function of how full the table is: the load factor α = n/b.
Average costs under linear hashing (probing):
Insertion: (1/2)(1 + 1/(1 - α)^2)
Deletion: (1/2)(1 + 1/(1 - α))
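These formulas are easy to evaluate; for example, at load factor α = 0.5 an insertion probes 2.5 slots on average (a quick check, assuming the formulas above):

    def linear_probing_costs(alpha):
        """Expected probe counts under linear probing at load factor alpha."""
        insertion = 0.5 * (1 + 1 / (1 - alpha) ** 2)   # unsuccessful search
        deletion = 0.5 * (1 + 1 / (1 - alpha))         # successful search
        return insertion, deletion

    print(linear_probing_costs(0.5))  # (2.5, 1.5)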

