CS 3343: Analysis of Algorithms Lecture 26: String Matching Algorithms.


1 CS 3343: Analysis of Algorithms Lecture 26: String Matching Algorithms

2 Definitions Text: a longer string T (length m) Pattern: a shorter string P (length n) Exact matching: find all occurrences of P in T

3 The naïve algorithm Slide P (length n) along T (length m) one position at a time; at each alignment, compare characters left to right until a mismatch or a complete match of P.
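The naïve algorithm as a minimal runnable sketch (function and variable names are mine, not from the slides):

```python
def naive_match(T, P):
    """Return every 0-based position where P occurs in T."""
    m, n = len(T), len(P)
    hits = []
    for i in range(m - n + 1):          # try every alignment of P against T
        j = 0
        while j < n and T[i + j] == P[j]:
            j += 1                      # compare left to right
        if j == n:                      # all n chars of P matched
            hits.append(i)
    return hits
```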

4 Time complexity Worst case: O(mn) Best case: O(m) –e.g. T = aaaaaaaaaaaaaa vs. P = baaaaaaa Average case? –Alphabet size = k –Assume all chars occur with equal probability –How many chars do you need to compare before finding a mismatch? On average: k / (k-1) –Therefore average-case complexity: mk / (k-1) –For a large alphabet, this is ~ m –Not as bad as you thought, huh?

5 Real strings are not random T: aaaaaaaaaaaaaaaaaaaaaaaaa P: aaaab Plus: O(m) average case is still bad for long strings! Smarter algorithms: –O(m + n) in the worst case –sub-linear in practice How is this possible?

6 How to speed up? Pre-process T or P. Why can pre-processing save time? –It uncovers the structure of T or P –It determines when we can skip ahead without missing anything –It determines when we can infer the result of character comparisons without actually doing them. Example: T = ACGTAXACXTAXACGXAX, P = ACGTACA

7 Cost of exact string matching Total cost = cost(preprocessing) + cost(comparison) + cost(output) –Output cost: constant (per occurrence) –Comparison cost: what we want to minimize –Preprocessing cost: overhead Hope: gain > overhead

8 String matching scenarios One T and one P –Search for a word in a document One T and many P all at once –Search for a set of words in a document –Spell checking One fixed T, many P –Search a completed genome for short sequences Two (or many) T's, for common patterns Would you preprocess P or T? –Always pre-process the shorter sequence, or the one that is used repeatedly

9 Pattern pre-processing algorithms –Karp–Rabin algorithm Small alphabet and small pattern –Boyer–Moore algorithm The choice in most cases Typically sub-linear time –Knuth–Morris–Pratt algorithm (KMP) –Aho–Corasick algorithm The algorithm behind the unix utility fgrep –Suffix tree One of the most useful preprocessing techniques Many applications

10 Algorithm KMP Not the fastest, but the best known Good for "real-time matching" –i.e. the text arrives one char at a time –No memory of previous chars needed Idea –Left-to-right comparison –Shift P by more than one char whenever possible

11 Intuitive example 1
T: ...abcxabc?...
P:    abcxabcde      (mismatch when comparing P[8] with T[i])
Observation: by reasoning on the pattern alone, we can determine that if a mismatch happens when comparing P[8] with T[i], we can shift P by four chars and compare P[4] with T[i], without missing any possible matches:
P:        abcxabcde
The naïve approach would shift P by only one char. Number of comparisons saved: 6

12 Intuitive example 2
T: ...abcxab?...     (the mismatched char in T cannot be c)
P:    abcxabcde      (mismatch when comparing P[7] with T[j])
Observation: by reasoning on the pattern alone, we can determine that if a mismatch happens between P[7] and T[j], we can shift P by six chars and compare T[j] with P[1], without missing any possible matches:
P:          abcxabcde
A shift of three (aligning the prefix ab under the matched suffix ab) is ruled out, because the char after that prefix would have to be c, and we already know the char at T[j] is not c. Number of comparisons saved: 7

13 KMP algorithm: pre-processing Key: the reasoning is done without even knowing what string T is. Only the location of the mismatch in P needs to be known. Pre-processing: for any position i in P, find the longest proper suffix t = P[j..i] of P[1..i] that matches a prefix t' of P, such that the char following t (y = P[i+1]) differs from the char following t' (call it z), i.e., y ≠ z. For each i, let sp(i) = length(t). [figure: t' as a prefix of P and t as a suffix of P[1..i], with following chars z and y]

14 KMP algorithm: shift rule Shift rule: when a mismatch occurs between P[i+1] and T[k], shift P to the right by i – sp(i) chars and compare x = T[k] with z = P[sp(i)+1]. This shift rule can be represented implicitly by creating a failure link between y and z. Meaning: when a mismatch occurs between x in T and P[i+1], resume comparison between x and P[sp(i)+1]. [figure: after the shift, the prefix t' of P lies under the suffix t just matched in T]

15 Failure Link Example P: aataac
i:     1 2 3 4 5 6
P:     a a t a a c
sp(i): 0 1 0 0 2 0
If a char in T fails to match at pos 6, re-compare it with the char at pos 3 (= sp(5) + 1 = 2 + 1)
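These values can be computed in linear time. The sketch below computes the classic weak failure function — the longest proper suffix of P[1..i] that is also a prefix — which is what most KMP implementations use; the slide's sp additionally requires the following chars to differ (which is why sp(4) = 0 on the slide, while the weak value is 1). Names are mine:

```python
def prefix_function(P):
    """pi[i] = length of the longest proper suffix of P[:i+1]
    that is also a prefix of P (weak version, 0-based)."""
    n = len(P)
    pi = [0] * n
    for i in range(1, n):
        k = pi[i - 1]
        while k > 0 and P[i] != P[k]:
            k = pi[k - 1]          # fall back to the next shorter border
        if P[i] == P[k]:
            k += 1
        pi[i] = k
    return pi
```

For P = aataac this gives [0, 1, 0, 1, 2, 0], matching the slide's sp everywhere except position 4, where the strong condition zeroes the entry.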

16 Another example P: abababc
i:     1 2 3 4 5 6 7
P:     a b a b a b c
sp(i): 0 0 0 0 0 4 0
If a char in T fails to match at pos 7, re-compare it with the char at pos 5 (= sp(6) + 1 = 4 + 1)

17 KMP Example using Failure Link T: aacaataaaaataaccttacta Align P = aataac and scan left to right; on a mismatch, follow the failure link instead of re-comparing the chars already matched (some comparisons become implicit and are never performed). Time complexity analysis: each char in T may be compared up to n times, so a lousy analysis gives O(mn) time. A more careful analysis breaks the comparisons into two phases: –Comparison phase: the first time a char in T is compared to P. Total: exactly m. –Shift phase: the first comparison made after a shift. Total: at most m. Time complexity: O(2m) = O(m)
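The two-phase O(2m) argument corresponds directly to the two loops below. A self-contained KMP search sketch using the weak failure function (identifiers are mine):

```python
def kmp_search(T, P):
    """Return 0-based start positions of P in T in O(m + n) time."""
    n = len(P)
    pi = [0] * n                         # failure function of P
    for i in range(1, n):
        k = pi[i - 1]
        while k > 0 and P[i] != P[k]:
            k = pi[k - 1]
        pi[i] = k + 1 if P[i] == P[k] else k
    hits, q = [], 0                      # q = number of chars of P matched so far
    for i, ch in enumerate(T):
        while q > 0 and ch != P[q]:
            q = pi[q - 1]                # shift P; never move backwards in T
        if ch == P[q]:
            q += 1
        if q == n:                       # full match ending at position i
            hits.append(i - n + 1)
            q = pi[q - 1]
    return hits
```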

18 KMP algorithm using a DFA (Deterministic Finite Automaton) P: aataac States 0–6; from state i, matching P[i+1] advances to state i+1. The failure links induce the remaining transitions: e.g. if the next char in T is t after matching 5 chars (state 5), go to state 3 — the DFA form of "if a char in T fails to match at pos 6, re-compare it with the char at pos 3". All other inputs go to state 0.

19 DFA Example T: aacaataataataaccttacta State sequence as each char of T is consumed: 1 2 0 1 2 3 4 5 3 4 5 3 4 5 6 0 0 0 1 0 0 1 (state 6 marks an occurrence of P). Each char in T is examined exactly once, so exactly m comparisons are made. But the DFA takes longer to pre-process and needs more space to store.
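A sketch of building the full transition table (the standard KMP-DFA construction; the auxiliary state x tracks where the pattern shifted by one char would be; names are mine):

```python
def build_dfa(P, alphabet):
    """dfa[state][ch] = next state; reaching state len(P) means a full match."""
    n = len(P)
    dfa = [dict.fromkeys(alphabet, 0) for _ in range(n)]
    dfa[0][P[0]] = 1
    x = 0                               # state reached by P[1:j], the shifted pattern
    for j in range(1, n):
        for ch in alphabet:
            dfa[j][ch] = dfa[x][ch]     # on mismatch, behave like state x
        dfa[j][P[j]] = j + 1            # on match, advance
        x = dfa[x][P[j]]
    return dfa
```

For P = aataac over {a, t, c} this reproduces the slide's transition dfa[5]['t'] == 3.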

20 Difference between Failure Link and DFA Failure link –Preprocessing time and space are O(n), regardless of alphabet size –Comparison time is at most 2m (at least m) DFA –Preprocessing time and space are O(n |Σ|) May be a problem for a very large alphabet For example, when each "char" is a big integer, or a Chinese character –Comparison time is always m.

21 The set matching problem Find all occurrences of a set of patterns in T First idea: run KMP or BM for each P –O(km + n) k: number of patterns m: length of text n: total length of patterns Better idea: combine all patterns together and search in one run

22 A simpler problem: spell-checking A dictionary contains five words: –potato –poetry –pottery –science –school Given a document, check if any word is (not) in the dictionary –Words in document are separated by special chars. –Relatively easy.

23 Keyword tree for spell checking O(n) time to construct (n: total length of patterns). Search time: O(m) (m: length of text). A common prefix only needs to be compared once. What if there is no space between words? [figure: keyword tree for potato, poetry, pottery, science, school, with leaves numbered 1–5]
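A keyword tree is just a trie. A minimal nested-dict sketch for the five-word dictionary above (naming is mine):

```python
def build_trie(words):
    """Nested-dict keyword tree; '$' marks the end of a word."""
    root = {}
    for w in words:
        node = root
        for ch in w:                     # shared prefixes reuse existing nodes
            node = node.setdefault(ch, {})
        node['$'] = True
    return root

def in_dict(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return '$' in node                   # must end exactly at a word boundary
```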

24 Aho-Corasick algorithm Basis of the fgrep algorithm Generalizing KMP –Using failure links Example: given the following 4 patterns: –potato –tattoo –theater –other

25 Keyword tree [figure: keyword tree for potato, tattoo, theater, other, with leaves numbered 1–4]

26 [figure: the keyword tree being matched naïvely against T = potherotathxythopotattooattoo]

27 Keyword tree Naïve use of the keyword tree takes O(mn) time m: length of text n: length of the longest pattern [figure: keyword tree and T = potherotathxythopotattooattoo]

28 Keyword Tree with a failure link [figure: the keyword tree with one failure link added]

29 Keyword Tree with a failure link [figure: following the failure link while scanning T]

30 Keyword Tree with all failure links [figure: the keyword tree with all failure links]

31–35 Example [figures: step-by-step scan of T = potherotathxythopotattooattoo using the failure links]

36 Aho-Corasick algorithm O(n) preprocessing, and O(m+k) searching –n: total length of patterns –m: length of text –k: number of occurrences Can create a DFA similar to the one in KMP –Requires more space –Preprocessing time depends on alphabet size –Search time is constant per char
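A compact Aho–Corasick sketch: the keyword tree as nested dicts, with failure links computed by BFS. All identifiers are mine (this illustrates the algorithm, not fgrep's actual internals):

```python
from collections import deque

def aho_corasick(patterns):
    """Build goto/fail/output tables for the pattern set."""
    goto, fail, out = [{}], [0], [[]]        # state 0 is the root
    for p in patterns:                       # 1) build the keyword tree
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append([])
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].append(p)
    q = deque(goto[0].values())              # 2) BFS to set failure links
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] = out[s] + out[fail[s]]   # inherit outputs via the failure link
    return goto, fail, out

def ac_search(T, patterns):
    """All (position, pattern) occurrences in one left-to-right scan of T."""
    goto, fail, out = aho_corasick(patterns)
    hits, s = [], 0
    for i, ch in enumerate(T):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits
```

On the slides' example text, the patterns other and tattoo are found in a single pass.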

37 Suffix Tree All algorithms discussed so far preprocess the pattern(s) –Karp-Rabin: small pattern, small alphabet –Boyer-Moore: fastest in practice; O(m) worst case –KMP: O(m) –Aho-Corasick: O(m) In some cases we may prefer to pre-process T –Fixed T, varying P Suffix tree: basically a keyword tree of all suffixes of T

38 Suffix tree T: xabxac Suffixes: 1. xabxac 2. abxac 3. bxac 4. xac 5. ac 6. c [figure: suffix tree with leaves numbered 1–6] Naïve construction: O(m²) using Aho-Corasick. Smarter: O(m), but very technical, with a big constant factor. Difference from a keyword tree: an internal node is created only where there is a branch.
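A naïve O(m²) sketch using a trie of all suffixes — a suffix trie rather than the compressed suffix tree (single-char edges are simpler but use more space); naming is mine:

```python
def suffix_trie(T):
    """Insert every suffix of T + '$' into a nested-dict trie;
    each leaf records the suffix's 0-based start position."""
    root = {}
    for i in range(len(T)):
        node = root
        for ch in T[i:] + '$':
            node = node.setdefault(ch, {})
        node['leaf'] = i                 # unique leaf thanks to '$'
    return root

def occurrences(trie, P):
    """Walk down P, then collect every leaf label below that node."""
    node = trie
    for ch in P:
        if ch not in node:
            return []
        node = node[ch]
    hits, stack = [], [node]
    while stack:
        nd = stack.pop()
        for key, child in nd.items():
            if key == 'leaf':
                hits.append(child)       # child is the start position here
            else:
                stack.append(child)
    return sorted(hits)
```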

39 Suffix tree implementation: explicitly labeling the sequence end For T: xabxa, the suffixes xa and a are prefixes of longer suffixes and end in the middle of edges, so only leaves 1–3 exist. Appending a terminator (T: xabxa$) guarantees every suffix ends at its own leaf (leaves 1–5). [figures: suffix trees for xabxa and for xabxa$]

40 Suffix tree implementation: implicitly labeling edges T: xabxa$ Store each edge label as a pair of indices into T (e.g. 2:2, 3:$) rather than as an explicit substring, so the whole tree takes O(m) space. [figure: the same tree with edges labeled by index pairs such as 1:2 and 3:$]

41 Suffix links Similar to failure links in a keyword tree Only internal (branching) nodes are linked [figure: a suffix link from the node with path label xα to the node with path label α]

42–51 Suffix tree construction [figures: building the suffix tree of T = acatgacatt step by step, inserting the suffixes starting at positions 1 through 10; edge labels are index pairs such as 1:$, 2:$, 4:$, 5:$]

52 ST Application 1: pattern matching Find all occurrences of P = xa in T = xabxac –Find the node v in the ST whose path matches P –Traverse the subtree rooted at v to get the locations O(m) to construct the ST (large constant factor) O(n) to find v – linear in the length of P instead of T! O(k) to collect all leaves, where k is the number of occurrences Asymptotic time is the same as KMP. ST wins if T is fixed; KMP wins otherwise.

53 ST Application 2: set matching Find all occurrences of a set of patterns in T –Build an ST from T –Match each P against the ST (e.g. T: xabxac, P: xab) O(m) to construct the ST (large constant factor) O(n) to find v – linear in the total length of the P's O(k) to collect all leaves, where k is the number of occurrences Asymptotic time is the same as Aho-Corasick. ST wins if T is fixed; AC wins if the P's are fixed; otherwise it depends on the relative sizes.

54 ST application 3: repeat finding Genomes contain many repeated DNA sequences Repeat length varies from 1 nucleotide to millions –Genes may have multiple copies (50 to 10,000) –Highly repetitive DNA in some non-coding regions 6 to 10 bp repeated 100,000 to 1,000,000 times Problem: find all repeats that are at least k residues long and appear at least p times in the genome

55 Repeat finding: at least k residues long, appearing at least p times in the sequence –Phase 1 (top-down): compute the label length L from the root to each node –Phase 2 (bottom-up): count the number N of leaves descended from each internal node For each node with L >= k and N >= p, print all of its leaves O(m) to traverse the tree

56 Maximal repeat finding 1. Right-maximal repeat –S[i+1..i+k] = S[j+1..j+k], but S[i+k+1] ≠ S[j+k+1] 2. Left-maximal repeat –S[i+1..i+k] = S[j+1..j+k], but S[i] ≠ S[j] 3. Maximal repeat –S[i+1..i+k] = S[j+1..j+k], but S[i] ≠ S[j] and S[i+k+1] ≠ S[j+k+1] Example (S = acatgacatt): 1. cat 2. aca 3. acat

57 Maximal repeat finding Find repeats with at least 3 bases and 2 occurrences in S = acatgacatt –Right-maximal: cat –Maximal: acat –Left-maximal: aca [figure: the suffix tree of acatgacatt, with edge labels as index pairs]

58 Maximal repeat finding How to find maximal repeats? –A right-maximal repeat whose occurrences have different left chars is maximal [figure: the suffix tree of acatgacatt, with each internal node annotated by the set of chars immediately left of its leaf occurrences]

59 ST application 4: word enumeration Find all k-mers that occur at least p times –Compute (L, N) for each node L: total label length from root to the node N: number of leaves –Find nodes v with L >= k and L(parent) < k, such that N >= p –Traverse the subtree rooted at v to get the locations This can be used in many applications, e.g. to find words that appear frequently in a genome or a document
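For comparison, the same word-enumeration query has a simple hash-table sketch — O(mk) time rather than the suffix tree's O(m), but easy to check against (names are mine):

```python
from collections import Counter

def frequent_kmers(T, k, p):
    """All length-k substrings of T that occur at least p times."""
    counts = Counter(T[i:i + k] for i in range(len(T) - k + 1))
    return sorted(w for w, c in counts.items() if c >= p)
```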

60 Joint Suffix Tree Build an ST for two or more strings Two strings S 1 and S 2 : S* = S 1 & S 2, where & is a separator char appearing in neither string Build a suffix tree for S* in time O(|S 1 | + |S 2 |) The separator will only appear on edges ending in a leaf

61 S1 = abcd S2 = abca S* = abcd&abca$ [figure: the suffix tree of abcd&abca$; each leaf is labeled (string, position), from (1,1) to (2,4); the suffixes beginning at or after & are useless]

62 To simplify We don't really need to do anything, since all edge labels are implicit (index pairs). The right-hand side, with everything from the separator onward trimmed from edge labels, is just more convenient to look at. [figures: the same joint suffix tree before and after trimming the labels past &]

63 Application of JST Longest common substring (not subsequence!) –For each internal node v, keep a bit vector B –B[1] = 1 if some leaf under v is a suffix of S1 (similarly B[2] for S2) –Find all internal nodes with B[1] = B[2] = 1 –Report the one with the longest path label –Can be extended to k sequences: just use a longer bit vector
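For two short strings the same answer can be checked with a quadratic dynamic-programming sketch — O(|S1|·|S2|) versus the joint suffix tree's O(|S1|+|S2|), but handy for verification (identifiers mine):

```python
def longest_common_substring(s1, s2):
    """Longest string occurring in both s1 and s2 (contiguous, not a subsequence)."""
    best_len, best_end = 0, 0
    prev = [0] * (len(s2) + 1)           # prev[j]: match length ending at i-1, j-1
    for i in range(1, len(s1) + 1):
        cur = [0] * (len(s2) + 1)
        for j in range(1, len(s2) + 1):
            if s1[i - 1] == s2[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the diagonal run of matches
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return s1[best_end - best_len:best_end]
```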

64 Application of JST Given K strings, find all k-mers that appear in at least d of the strings –At each node keep a bit vector B over the K strings, e.g. B = (1, 0, 1, 1) –Report nodes with L >= k, L(parent) < k, and cardinality(B) >= d

65 Many other applications Reproduce the behavior of Aho-Corasick Recognizing computer viruses –A database of known computer viruses –Does a file contain a virus? DNA fingerprinting –A database of people's DNA sequences –Given a short DNA sequence, which person is it from? … Catch –Large constant factor for space –Large constant factor for construction –Suffix array: trades time for space

66 Summary One T, one P –Boyer-Moore is the choice –KMP works but is not the best One T, many P –Aho-Corasick –Suffix tree One fixed T, many varying P –Suffix tree Two or more T's –Suffix tree, joint suffix tree, suffix array [slide annotations: alphabet independent vs. alphabet dependent]

67 Pattern pre-processing algorithms –Karp–Rabin algorithm Small alphabet and small pattern –Boyer–Moore algorithm The choice in most cases Typically sub-linear time –Knuth–Morris–Pratt algorithm (KMP) –Aho–Corasick algorithm The algorithm behind the unix utility fgrep –Suffix tree One of the most useful preprocessing techniques Many applications

68 Karp–Rabin Algorithm Let's say we are dealing with binary strings Text: 01010001011001010101001 Pattern: 101100 Convert the pattern to an integer: 101100 = 2^5 + 2^3 + 2^2 = 44

69 Karp–Rabin algorithm Pattern: 101100 = 44 decimal Slide a 6-bit window along the text and update its value in constant time (double, drop the leading bit, add the new bit). Scanning the string 10111011001010101001:
101110 = 2^5 + 0 + 2^3 + 2^2 + 2^1 = 46
011101 = 46 * 2 – 64 + 1 = 29
111011 = 29 * 2 – 0 + 1 = 59
110110 = 59 * 2 – 64 + 0 = 54
101100 = 54 * 2 – 64 + 0 = 44 → match
Θ(m+n)

70 Karp–Rabin algorithm What if the pattern is too long to fit into a single integer? Pattern: 101100 — what if each word in our computer has only 4 bits? Basic idea: hashing. 44 % 13 = 5
101110 = 46 (% 13 = 7)
011101 = 46 * 2 – 64 + 1 = 29 (% 13 = 3)
111011 = 29 * 2 – 0 + 1 = 59 (% 13 = 7)
110110 = 59 * 2 – 64 + 0 = 54 (% 13 = 2)
101100 = 54 * 2 – 64 + 0 = 44 (% 13 = 5) → hash match; verify the chars
Θ(m+n) expected running time
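The rolling-hash idea as a runnable sketch for digit strings, with mod 13 as on the slide (a real implementation would pick a large random prime; names are mine):

```python
def rabin_karp(T, P, base=2, mod=13):
    """Rolling-hash search over a digit string; verifies chars on hash hits."""
    m, n = len(T), len(P)
    if n > m:
        return []
    lead = pow(base, n - 1, mod)            # weight of the window's leading digit
    hp = ht = 0
    for i in range(n):
        hp = (hp * base + int(P[i])) % mod
        ht = (ht * base + int(T[i])) % mod
    hits = []
    for i in range(m - n + 1):
        if ht == hp and T[i:i + n] == P:    # hash match: verify (collisions!)
            hits.append(i)
        if i + n < m:                       # roll: drop T[i], append T[i+n]
            ht = ((ht - int(T[i]) * lead) * base + int(T[i + n])) % mod
    return hits
```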

71 Boyer – Moore algorithm Three ideas: –Right-to-left comparison –Bad character rule –Good suffix rule

72 Boyer–Moore algorithm Right-to-left comparison: at each alignment, P is compared against T starting from P's right end. Skip some chars without missing any occurrence. But how? [figure: P under T, with the mismatched chars x in T and y in P]

73 Bad character rule
T: xpbctbxabpqqaabpq
P:   tpabxab
       *^^^^
Comparing right to left, P[4..7] match T, then P[3] = a mismatches against T(k) = t. What would you do now?

74 Bad character rule Shift P so that the rightmost t in P (position 1) aligns under the mismatched t in T:
T: xpbctbxabpqqaabpq
P:     tpabxab

75 Bad character rule After the shift, comparison restarts from the right end of P and mismatches immediately, on a char (q) that does not occur in P at all:
T: xpbctbxabpqqaabpqz
P:     tpabxab
             *

76 Basic bad character rule Pre-processing: O(n). For P = tpabxab:
char | rightmost position in P
  a  | 6
  b  | 7
  p  | 2
  t  | 1
  x  | 5

77 Basic bad character rule When the rightmost occurrence of T(k) in P is to the left of the mismatch position i, shift P to align T(k) with that rightmost occurrence. Example (T: xpbctbxabpqqaabpqz, P: tpabxab): mismatch at i = 3 against T(k) = t, whose rightmost position in P is 1, so shift 3 – 1 = 2.

78 Basic bad character rule When T(k) does not occur in P at all, shift the left end of P past T(k), i.e. align it with T(k+1). Example: mismatch at i = 7 against T(k) = q, which is not in P (treat its rightmost position as 0), so shift 7 – 0 = 7.

79 Basic bad character rule When the rightmost occurrence of T(k) in P is to the right of i, the rule would give a negative shift, so just shift P by one position. Example: mismatch at i = 5 against T(k) = a; the rightmost a in P is at position 6, and 5 – 6 < 0, so shift 1.

80 Extended bad character rule Keep all positions of each char in P (P = tpabxab):
char | positions in P
  a  | 6, 3
  b  | 7, 4
  p  | 2
  t  | 1
  x  | 5
Find the occurrence of T(k) in P that is immediately to the left of i, and shift P to align T(k) with that position. Example: mismatch at i = 5 against T(k) = a; the nearest a to the left of position 5 is at position 3, so shift 5 – 3 = 2. Preprocessing is still O(n).

81 Extended bad character rule Best possible: m / n comparisons Works better for large alphabets In some cases the extended bad character rule alone is sufficiently good Worst case is still O(mn) What else can we do?
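A sketch of right-to-left search driven by the basic bad character rule alone, using the rightmost-position table and shift logic from the slides (good suffix rule omitted; names are mine):

```python
def bm_bad_char(T, P):
    """Boyer–Moore style search using only the basic bad character rule."""
    rightmost = {ch: i + 1 for i, ch in enumerate(P)}  # 1-based rightmost positions
    m, n = len(T), len(P)
    hits, k = [], 0                      # k: 0-based alignment of P within T
    while k <= m - n:
        j = n - 1
        while j >= 0 and T[k + j] == P[j]:
            j -= 1                       # compare right to left
        if j < 0:
            hits.append(k)
            k += 1
        else:
            i = j + 1                    # 1-based mismatch position in P
            r = rightmost.get(T[k + j], 0)
            k += max(1, i - r)           # shift i - r, but never by less than 1
    return hits
```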

82 Where the bad character rule is weak
T: prstabstubabvqxrst
P:    qcabdabdab
            *^^
The suffix ab matches, then b vs. d mismatches at P[8]. According to the extended bad character rule, the nearest b to the left of position 8 is at position 7, so P shifts by only 8 – 7 = 1.

83 (weak) good suffix rule
T: prstabstubabvqxrst
P:    qcabdabdab
            *^^
The matched suffix ab also occurs earlier in P (ending at position 7). Shift P so that this rightmost earlier copy of ab aligns under the ab just matched in T — a shift of 3:
P:       qcabdabdab

84 (Weak) good suffix rule Preprocessing: for any suffix t of P, find the rightmost copy of t in P, denoted t'. On a mismatch after matching suffix t, shift P so that t' aligns under the t just matched in T. How can t' be found efficiently? [figure: t matched in T by the suffix t of P; after the shift, the copy t' sits under it]

85–87 (Strong) good suffix rule
T: prstabstubabvqxrst
P:    qcabdabdab
            *^^
The matched suffix ab is preceded in P by the very char (d) that just mismatched. The weak rule would shift to the copy of ab ending at position 7, but that copy is also preceded by d and would fail again. The strong rule shifts instead to the copy at positions 3–4, which is preceded by c ≠ d — a shift of 6:
P:          qcabdabdab

88 (Strong) good suffix rule In preprocessing: for any suffix t of P, find the rightmost copy t' of t such that the char z to the left of t' differs from the char y to the left of t (z ≠ y). Pre-processing can be done in linear time. If P occurs in T, searching may take O(mn); if P does not occur in T, worst-case searching is O(m+n). [figure: t in T and in P with preceding chars y and z, z ≠ y]

89 Example preprocessing for P = qcabdabdab
Bad char rule (where to shift also depends on the mismatched char in T):
char | positions in P
  a  | 9, 6, 3
  b  | 10, 7, 4
  c  | 2
  d  | 8, 5
  q  | 1
Good suffix rule (does not depend on T):
pos:  1 2 3 4 5 6 7 8 9 10
P:    q c a b d a b d a b
      0 0 0 0 0 0 0 2 0 0
(e.g. for the suffix dab there is an earlier copy preceded by a different char: cab)

