Index Compression Adapted from Lectures by Prabhakar Raghavan (Google)


1 Index Compression Adapted from Lectures by Prabhakar Raghavan (Google)
and Christopher Manning (Stanford)

Done so far: Token -> Term -> Postings lists (docIDs, refined to docIDs with hit positions); we also maintain cumulative frequency in the dataset, document occurrence count, and term frequency in each document. Large dictionary structure for efficient search: B-trees. Tolerant IR: map queries to sets of terms and on to postings lists (robust w.r.t. spelling errors, from typing mistakes to sound-based misspellings, plus flexible expression of alternative terms).

Improving performance: hardware-oriented vs. content-oriented. Dictionary compression enables a bigger dictionary to fit into main memory; postings compression reduces disk-to-memory transfer time.

2 Last lecture – index construction
Key step in indexing: sort. This sort was implemented by exploiting disk-based sorting: blocking for fewer disk seeks, hierarchical merge of blocks, and distributed indexing using MapReduce. Running example of document collection: RCV1.

3 Today Collection statistics in more detail (RCV1)
Dictionary compression. Postings compression.

Heaps' law: vocabulary size as a function of collection size; impacts the size of the dictionary. Zipf's law: distribution of collection frequency over terms; impacts the size of the postings and enables gap analysis. Dictionary compression lets a bigger dictionary fit into main memory; postings compression reduces the disk footprint and speeds up disk-to-memory transfer.

4 Recall Reuters RCV1
symbol   statistic                                        value
N        documents                                        800,000
L        avg. # tokens per doc                            200
M        terms (= word types)                             ~400,000
         avg. # bytes per token (incl. spaces/punct.)     6
         avg. # bytes per token (without spaces/punct.)   4.5
         avg. # bytes per term                            7.5
         non-positional postings                          100,000,000

Stopwords are normally very short tokens. Terms: after stop-word elimination and stemming. Total tokens = 160M; terms = 0.4M; non-positional postings = 100M (more than 0.4M because multiple copies of the same term can occur). Postings per document = terms per document = 125 (= 100M / 0.8M). Terms : tokens is roughly 100 : 160, which is roughly 4.5 : 7.5.

5 Why compression? Keep more stuff in memory (increases speed via caching) and on disk (scale); enables deployment on smaller/mobile devices. Increase data transfer from disk to memory: [read compressed data and decompress] is faster than [read uncompressed data]. Premise: decompression algorithms are fast, which is true of the decompression algorithms we use. Assumes we keep the compressed form in memory. DMA: during disk-to-memory transfer the CPU is free, so it can decompress concurrently.

6 Compression in inverted indexes
First, we will consider space for the dictionary: make it small enough to keep in main memory. Then the postings: reduce the disk space needed and decrease the time to read from disk. Large search engines keep a significant part of the postings in memory. (Each postings entry is a docID.) Compression allows a larger dictionary to stay in memory and gives faster access to the postings lists.

7 Index parameters vs. what we index (details Table 5.1 p80)
                 dictionary               non-positional index         positional index
                 size (K)  ∆%   cumul %   size (K)   ∆%    cumul %     size (K)   ∆%    cumul %
Unfiltered       484                      109,971                      197,879
No numbers       474       -2   -2        100,680    -8    -8          179,158    -9    -9
Case folding     392      -17  -19         96,969    -3   -12          179,158    -0    -9
30 stopwords     391       -0  -19         83,390   -14   -24          121,858   -31   -38
150 stopwords    391       -0  -19         67,002   -30   -39           94,517   -47   -52
Stemming         322      -17  -33         63,812    -4   -42           94,517    -0   -52

(The three groups give the size of the dictionary of word types (terms), the non-positional postings, and the positional postings.)

Effectiveness notes. Case folding, ∆ = 0 for the positional index: lowercasing does not alter the number of occurrences, so the positional index is unchanged; the non-positional index does shrink because some documents contain several case variants of the same word. Stopwords, ∆ ≈ 0 for the dictionary: removing 30 or 150 stop words barely changes the dictionary size, but both indexes shrink substantially because many entries are removed (and gaps are reduced). Stemming, ∆ = 0 for the positional index: same reason as case folding. Exercise: give intuitions for all the '0' entries. Why do some zero entries correspond to big deltas in other columns?

8 Lossless vs. lossy compression
Lossless compression: all information is preserved; this is what we mostly do in IR. Lossy compression: discard some information. Several of the preprocessing steps can be viewed as lossy compression: case folding, stop-word removal, stemming, number elimination. Chap./Lecture 7: prune postings entries that are unlikely to turn up in the top-k list for any query, with almost no loss of quality for the top-k list. Latent Semantic Indexing and other dimension-reduction techniques are lossy.

9 Vocabulary vs. collection size
Heaps' law: M = k·T^b, where M is the size of the vocabulary and T is the number of tokens in the collection. Typical values: 30 ≤ k ≤ 100 and b ≈ 0.5. In a log-log plot of vocabulary size vs. T, Heaps' law is a straight line. Check it out on the test document collections in the assignment. (What about the variation across individual documents?) Implication: as the document collection grows, the vocabulary grows monotonically rather than saturating, so the dictionary will eventually overflow main memory.

10 Heaps’ Law For RCV1, the dashed line
Fig 5.1, p. 81. For RCV1, the dashed line log10 M = 0.49 log10 T + 1.64 is the best least-squares fit. Thus M = 10^1.64 · T^0.49, so k = 10^1.64 ≈ 44 and b = 0.49. What is the effect of including spelling errors, vs. automatically correcting spelling errors, on Heaps' law? b increases with errors and decreases with corrections.
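Below is a minimal Python sketch (not from the original lecture) that plugs the RCV1 fit above (k ≈ 44, b = 0.49) into Heaps' law; the token counts are illustrative.

```python
# Heaps' law sketch: M = k * T^b, using the RCV1 fit from the slide
# (k ≈ 44, b = 0.49). Token counts below are illustrative.

def heaps_vocabulary(tokens: int, k: float = 44.0, b: float = 0.49) -> float:
    """Predicted vocabulary size M for a collection of `tokens` tokens."""
    return k * tokens ** b

if __name__ == "__main__":
    for T in (1_000_000, 10_000_000, 100_000_000):
        M = heaps_vocabulary(T)
        print(f"T = {T:>11,d} tokens -> predicted M ≈ {M:,.0f} terms")
    # For T ≈ 10^8 tokens this gives roughly 3.7 * 10^5 terms,
    # consistent with the ~400,000 terms reported for RCV1 above.
```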

11 Zipf’s law We also study the relative frequencies of terms.
In natural language, there are a few very frequent terms and very many rare terms. Zipf's law: the i-th most frequent term has frequency proportional to 1/i, i.e., cf_i ∝ 1/i, or cf_i = c/i where c is a normalizing constant. Here cf_i is the collection frequency: the number of occurrences of the term t_i in the collection. Heaps' law gives the vocabulary size of a collection; Zipf's law describes how frequency is distributed over that vocabulary. For example, the 2nd most frequent term occurs half as often as the most frequent term. (Is the distribution similar within a single document and in the whole collection in practice?)

12 Zipf consequences If the most frequent term (the) occurs cf1 times
then the second most frequent term (of) occurs cf1/2 times, the third most frequent term (and) occurs cf1/3 times, and so on. Equivalently: cf_i = c/i where c is a normalizing factor, so log cf_i = log c - log i, a linear relationship between log cf_i and log i. This can be used for gap analysis to estimate the index size (see the old "Index Compression" lecture). Example: if the most frequent term has 60 occurrences, the 2nd has 30, the 3rd has 20, ..., and the 60th has 1. Then terms of rank 1 through 60 appear in roughly every document (gap = 1), terms of rank 61 through 120 in every other document (gap = 2), and so on. A small sketch of these Zipf frequencies follows below.
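A minimal Python sketch of the Zipf frequencies in the toy example above; the constant c = 60 is just the slide's illustrative choice.

```python
# Zipf's law sketch: cf_i = c / i. Reproduces the toy example where the most
# frequent term occurs 60 times, the 2nd 30 times, ..., the 60th just once.

def zipf_frequency(rank: int, c: float) -> float:
    """Collection frequency of the term at a given rank under Zipf's law."""
    return c / rank

if __name__ == "__main__":
    c = 60.0  # normalizing constant chosen so that cf_1 = 60
    for rank in (1, 2, 3, 6, 30, 60):
        print(f"rank {rank:>2d}: cf ≈ {zipf_frequency(rank, c):.1f}")
```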

13 Ranking vs. collection frequency

14 Compression First, we will consider space for dictionary and postings
Basic Boolean index only; no study of positional indexes, etc. We will devise compression schemes.

15 Dictionary Compression

16 Why compress the dictionary
Must keep it in memory: search begins with the dictionary. Memory footprint competition. Embedded/mobile devices.

17 Dictionary storage - first cut
Array of fixed-width entries: ~400,000 terms at 28 bytes/term = 11.2 MB. Each entry: 20 bytes for the term, plus 4 bytes each for the frequency and the pointer to the postings list, with a dictionary search structure on top. Addresses are word-sized (plus machine word-boundary alignment issues).

18 Fixed-width terms are wasteful
Most of the bytes in the Term column are wasted: we allot 20 bytes even for 1-3 letter terms, and we still can't handle supercalifragilisticexpialidocious. Written English averages ~4.5 characters/word; the average dictionary word in English is ~8 characters. How do we get to ~8 characters per dictionary term? Short words dominate token counts but not the average over word types (terms). Exercise: why is/isn't 4.5 characters/word the right number to use for estimating dictionary size? (Stop words, which tend to be short, are eliminated from the dictionary.)

19 Compressing the term list: Dictionary-as-a-String
Store the dictionary as a single (long) string of characters; the pointer to the next word marks the end of the current word. Hope to save up to 60% of dictionary space.

….systilesyzygeticsyzygialsyzygyszaibelyiteszczecinszomo….

Total string length = 400K x 8 B = 3.2 MB. How do we map words to pointers so that we can binary-search over the pointers? We can still do binary search as usual as long as the strings are in sorted order; the only difference is the additional step of dereferencing a term pointer into the string. Average term = 8 bytes; total number of terms = 400,000. Addressing 3.2 MB requires log2(3.2M) = 22 bits, approx. 3 bytes per term pointer. So each array entry = 4 (freq) + 4 (postings ptr) + 3 (term ptr) = 11 bytes (about 40% of 28 bytes). Table size = 400K x 11 B = 4.4 MB; adding the 3.2 MB string gives a total compressed dictionary size of 7.6 MB. Total saving = (11.2 - 7.6)/11.2 ≈ 32%.

20 Space for dictionary as a string
4 bytes per term for frequency. 4 bytes per term for the pointer to postings. 3 bytes per term pointer into the string. Avg. 8 bytes per term in the term string. 400K terms x 19 bytes ≈ 7.6 MB (against 11.2 MB for fixed width). The term itself now averages 11 bytes (3-byte pointer + 8-byte string entry), not 20. Per term: 11 bytes per array entry + avg. 8 bytes in the string = 19 bytes (as opposed to 28). Total savings ≈ 32% (1 - 7.6/11.2). A small sketch of this layout follows below.
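The following is a rough Python sketch (not the lecture's code) of the dictionary-as-a-string idea: one long string plus fixed-size table entries holding frequency, a postings pointer, and a term offset, with binary search dereferencing the offsets. Names such as TermEntry and build_string_dictionary are made up for illustration.

```python
# Minimal "dictionary as a string" sketch: all terms concatenated into one
# string; each table entry keeps only an offset into it (a plain Python int
# standing in for the 3-byte pointer), plus frequency and postings pointer.

from typing import List, NamedTuple

class TermEntry(NamedTuple):
    freq: int          # document frequency (4 bytes in the slide's accounting)
    postings_ptr: int  # pointer/offset to the postings list (4 bytes)
    term_offset: int   # offset of the term inside the long string (3 bytes)

def build_string_dictionary(sorted_terms: List[str], freqs: List[int]):
    parts, entries, offset = [], [], 0
    for i, term in enumerate(sorted_terms):
        entries.append(TermEntry(freqs[i], postings_ptr=i, term_offset=offset))
        parts.append(term)
        offset += len(term)
    return "".join(parts), entries

def term_at(big_string: str, entries: List[TermEntry], i: int) -> str:
    start = entries[i].term_offset
    end = entries[i + 1].term_offset if i + 1 < len(entries) else len(big_string)
    return big_string[start:end]

def lookup(big_string: str, entries: List[TermEntry], query: str):
    # Binary search over the entries, dereferencing term pointers as we go.
    lo, hi = 0, len(entries) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        t = term_at(big_string, entries, mid)
        if t == query:
            return entries[mid]
        if t < query:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

if __name__ == "__main__":
    terms = ["syzygetic", "syzygial", "syzygy", "szaibelyite"]
    big, table = build_string_dictionary(terms, freqs=[42, 7, 3, 1])
    print(lookup(big, table, "syzygy"))  # TermEntry(freq=3, postings_ptr=2, term_offset=17)
```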

21 Blocking Store pointers to every kth term string.
Example below: k = 4. We need to store term lengths (1 extra byte per term):

….7systile9syzygetic8syzygial6syzygy11szaibelyite8szczecin9szomo….

For every four terms: spend 4 extra bytes on string lengths but save 3 x 3 bytes of term pointers. Net change per term = (4 - 9)/4 = -5/4 bytes, so average storage per term = 19 - 1.25 = 17.75 bytes. In short: save 9 bytes on 3 pointers, lose 4 bytes on term lengths.

22 Net Shaved another ~0.5MB; can save more with larger k.
Where we used 3 bytes/pointer without blocking (3 x 4 = 12 bytes for k = 4 pointers), we now use 3 + 4 = 7 bytes for 4 pointers. Shaved another ~0.5 MB; we can save more with a larger k: 5 bytes saved for every 4 terms, so the overall saving is 5/4 x 400K terms = 0.5 MB (see the sketch below). Space-time tradeoff: we must search linearly within a block at run time, and for large k the run-time overhead can be prohibitive (non-responsive search). Why not go with larger k?
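A small Python sketch (assuming the 19 bytes/term baseline and the 400K terms from the slides) that reproduces the blocking arithmetic for several block sizes.

```python
# Blocking arithmetic from the slides: with block size k we keep one 3-byte
# term pointer per block instead of per term, but spend 1 extra byte per term
# on its length. Baseline is 19 bytes/term (slide 20).

BASELINE_BYTES_PER_TERM = 19   # 4 freq + 4 postings ptr + 3 term ptr + 8 string
NUM_TERMS = 400_000

def bytes_per_term_with_blocking(k: int) -> float:
    # per block of k terms: pay k bytes of lengths, save 3*(k-1) bytes of pointers
    net_change = (k - 3 * (k - 1)) / k
    return BASELINE_BYTES_PER_TERM + net_change

if __name__ == "__main__":
    for k in (1, 4, 8, 16):
        per_term = bytes_per_term_with_blocking(k) if k > 1 else BASELINE_BYTES_PER_TERM
        total_mb = per_term * NUM_TERMS / 1e6
        print(f"k={k:>2d}: {per_term:5.2f} bytes/term, dictionary ≈ {total_mb:.2f} MB")
    # k=4 gives 17.75 bytes/term ≈ 7.1 MB, matching the slide.
```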

23 Exercise Estimate the space usage (and savings compared to 7.6 MB) with blocking, for block sizes of k = 4, 8 and 16. For k = 8: every block of 8 needs 8 extra bytes for lengths but saves 7 x 3 = 21 bytes of term pointers, so the saving is (21 - 8)/8 x 400K = 0.65 MB. For k = 16: every block of 16 needs 16 extra bytes for lengths but saves 15 x 3 = 45 bytes of term pointers, so the saving is (45 - 16)/16 x 400K ≈ 0.73 MB. The search trade-off is explained on the next slide.

24 Dictionary search without blocking
Assuming each dictionary term is equally likely in a query (not really so in practice!), the average number of comparisons is (1 + 2∙2 + 4∙3 + 4)/8 ≈ 2.6.

25 Dictionary search with blocking
Binary search down to the 4-term block, then linear search through the terms in the block. With blocks of 4 (binary tree over block leaders), the average is (1 + 2∙2 + 2∙3 + 2∙4 + 5)/8 = 3 comparisons.

26 Exercise Estimate the impact on search performance (and the slowdown compared to k = 1) with blocking, for block sizes of k = 4, 8 and 16. Logarithmic search time to get to one of the n/k block "leaves", then linear time proportional to k/2 for the subsequent scan through the block; a closed-form solution is not obvious.

27 Front coding Front-coding:
Sorted words commonly share a long common prefix, so store the differences only (for the last k-1 terms in a block of k).

8automata8automate9automatic10automation becomes 8automat*a1⋄e2⋄ic3⋄ion

Here '*' marks the end of the shared prefix ('automat' is encoded once), and each digit gives the extra length beyond 'automat'. This begins to resemble general string compression. A small front-coding sketch follows below.
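A tiny Python sketch of front coding for one block of sorted terms; the byte layout is simplified and the function name is made up.

```python
# Front-coding sketch for one block of sorted terms, following the slide's
# example: the common prefix is stored once, then only the extra suffixes.

from typing import List, Tuple
import os

def front_code_block(terms: List[str]) -> Tuple[str, List[str]]:
    """Return (common_prefix, suffixes) for a block of sorted terms."""
    prefix = os.path.commonprefix(terms)
    return prefix, [t[len(prefix):] for t in terms]

if __name__ == "__main__":
    block = ["automata", "automate", "automatic", "automation"]
    prefix, suffixes = front_code_block(block)
    print(prefix, suffixes)  # automat ['a', 'e', 'ic', 'ion'], as on the slide
    print(sum(map(len, block)), "chars before,",
          len(prefix) + sum(map(len, suffixes)), "after")
```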

28 RCV1 dictionary compression
Technique                               Size in MB
Fixed width                             11.2
String with pointers to every term       7.6
Blocking, k = 4                          7.1
Blocking + front coding                  5.9

No free lunch: a space-time tradeoff all through.

29 Postings compression

30 Postings compression The postings file is much larger than the dictionary, by a factor of at least 10. Key desideratum: store each posting compactly. A posting for our purposes is a docID. For Reuters (800,000 documents), we would use 32 bits per docID when using 4-byte integers. Alternatively, we can use log2 800,000 ≈ 20 bits per docID. Our goal: use a lot less than 20 bits per docID.

31 Postings: two conflicting forces
A term like arachnocentric occurs in maybe one doc out of a million; we would like to store this posting using log2 1M ≈ 20 bits. A term like the occurs in virtually every doc, so 20 bits/posting is too expensive; prefer a 0/1 bitmap vector in this case. (Arachnocentric hits seem to be just IR lectures! Arachnids are a class, Arachnida, of joint-legged invertebrate animals, including spiders, scorpions, harvestmen, ticks, and mites.) Represent docIDs via numbers or bit-vectors. Goal: devise an encoding scheme that requires a comparable amount of storage even though postings lists have very different lengths.

32 Postings file entry We store the list of docs containing a term in increasing order of docID, e.g. computer: 33, 47, 154, 159, 202, … Consequence: it suffices to store gaps: 33, 14, 107, 5, 43, … Hope: most gaps can be encoded/stored with far fewer than 20 bits. A postings list of increasing docIDs becomes a gap list (which is not sorted). There are lots of small gaps, and fewer large ones. A small sketch of gap encoding follows below.
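A minimal Python sketch of the docID-to-gap transformation, using the 'computer' postings list above.

```python
# Gap encoding sketch: store the first docID and then the differences between
# consecutive docIDs, as in the 'computer' example.

from typing import List

def to_gaps(doc_ids: List[int]) -> List[int]:
    """Convert a sorted docID list into a gap list (first entry kept as-is)."""
    return [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]

def from_gaps(gaps: List[int]) -> List[int]:
    """Rebuild the docID list by cumulatively summing the gaps."""
    ids, total = [], 0
    for g in gaps:
        total += g
        ids.append(total)
    return ids

if __name__ == "__main__":
    postings = [33, 47, 154, 159, 202]
    gaps = to_gaps(postings)        # [33, 14, 107, 5, 43], as on the slide
    assert from_gaps(gaps) == postings
    print(gaps)
```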

33 Three postings entries

34 Variable length encoding
Aim: for arachnocentric, use ~20 bits per gap entry; for the, use ~1 bit per gap entry. If the average gap for a term is G, we want to use ~log2 G bits per gap entry. Key challenge: encode every integer (gap) with about as few bits as needed for that integer. Variable-length codes achieve this by using short codes for small numbers. Even though we store variable-length codes on disk, we can always expand them after moving to main memory for efficient handling (merging, word-boundary access, etc.).

35 Variable Byte (VB) codes
For a gap value G, use close to the fewest bytes needed to hold log2 G bits. Begin with one byte to store G and dedicate 1 bit in it as a continuation bit c. If G ≤ 127, binary-encode it in the 7 available bits and set c = 1. Otherwise, encode G's lower-order 7 bits and then use additional bytes to encode the higher-order bits with the same algorithm. At the end, set the continuation bit of the last byte to 1 (c = 1) and of the other bytes to 0 (c = 0). A sketch in code follows below; the worked example is on the next slide.
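A possible Python implementation of this VB scheme (7 payload bits per byte, continuation bit set only on the last byte of a gap), shown as a sketch rather than a reference implementation; it reproduces the example on the next slide.

```python
# Variable Byte (VB) encoding sketch following the scheme above.

from typing import List

def vb_encode_number(g: int) -> List[int]:
    bytes_out = []
    while True:
        bytes_out.insert(0, g % 128)   # low-order 7 bits first
        if g < 128:
            break
        g //= 128
    bytes_out[-1] += 128               # set continuation bit on the last byte
    return bytes_out

def vb_encode(gaps: List[int]) -> List[int]:
    return [b for g in gaps for b in vb_encode_number(g)]

def vb_decode(stream: List[int]) -> List[int]:
    gaps, n = [], 0
    for byte in stream:
        if byte < 128:                 # continuation bit 0: more bytes follow
            n = 128 * n + byte
        else:                          # continuation bit 1: last byte of this gap
            gaps.append(128 * n + (byte - 128))
            n = 0
    return gaps

if __name__ == "__main__":
    gaps = [824, 5, 214577]            # numbers from the example on the next slide
    encoded = vb_encode(gaps)
    assert vb_decode(encoded) == gaps
    print([format(b, "08b") for b in encoded])
```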

36 Example
docIDs:   824                 829         215406
gaps:                         5           214577
VB code:  00000110 10111000   10000101    00001101 00001100 10110001

For a small gap (5), VB uses a whole byte. Postings are stored as the concatenation of these bytes. Key property: VB-encoded postings are uniquely prefix-decodable.

Working: 2^7 = 128. 824 = 6 x 128 + 56, i.e. 7-bit groups 0000110 and 0111000, giving the bytes 00000110 and 10111000. 214577 = 13 x 128 x 128 + 12 x 128 + 49, i.e. 7-bit groups 0001101, 0001100, 0110001, giving 00001101, 00001100, 10110001. To decode, read the first bit of each byte: if it is 0, the remaining 7 bits are higher-order bits and more bytes follow; if it is 1, this is the last (lowest-order) byte of the number. The continuation bit itself is not part of the value.

37 Other variable codes Instead of bytes, we can also use a different "unit of alignment": 32 bits (words), 16 bits, 4 bits (nibbles) etc. Variable byte alignment wastes space if you have many small gaps; nibbles do better in such cases.

38 Gamma codes Can compress better with bit-level codes
The gamma code is the best known of these. Represent a gap G as a pair (length, offset). The offset is G in binary with the leading bit cut off; for example 13 → 1101 → 101. The length is the length of the offset in bits; for 13 (offset 101) this is 3, encoded in unary as 1110. The gamma code of 13 is the concatenation of length and offset: 1110,101.

More examples. Gap = 5: offset 01, length 110, gamma code (110,01). Gap = 4: offset 00, length 110, gamma code (110,00). Gap = 9: offset 001, length 1110, gamma code (1110,001). Gap = 2: offset 0, length 10, gamma code (10,0). Exercise: what is the gamma code for 1? Does zero have a gamma code? Gap = 1: the offset is empty and the length is 0 (unary: a single 0 bit), so the gamma code is just 0. Zero does not have a gamma code. A small encoder/decoder sketch follows below.
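A small Python sketch of gamma encoding and stream decoding as described above; it also decodes the gap sequence used in the exercise two slides below.

```python
# Gamma code sketch: offset = binary(G) without its leading 1-bit,
# length = unary code for len(offset) (that many 1s followed by a 0).
# Works for G >= 1; zero has no gamma code.

def gamma_encode(g: int) -> str:
    assert g >= 1, "zero has no gamma code"
    offset = bin(g)[3:]                      # binary with the leading '1' cut off
    length = "1" * len(offset) + "0"         # unary-coded length
    return length + offset

def gamma_decode_stream(bits: str):
    gaps, i = [], 0
    while i < len(bits):
        length = 0
        while bits[i] == "1":                # read the unary length
            length += 1
            i += 1
        i += 1                               # skip the terminating '0'
        offset = bits[i:i + length]
        i += length
        gaps.append((1 << length) + int(offset, 2) if length else 1)
    return gaps

if __name__ == "__main__":
    for g in (1, 2, 4, 5, 9, 13, 24):
        print(g, "->", gamma_encode(g))      # 13 -> 1110101, as on the slide
    stream = "".join(gamma_encode(g) for g in (9, 6, 3, 59, 7))
    assert gamma_decode_stream(stream) == [9, 6, 3, 59, 7]
```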

39 Gamma code examples
number  length        offset       γ-code
0                                  none
1       0                          0
2       10            0            10,0
3       10            1            10,1
4       110           00           110,00
9       1110          001          1110,001
13      1110          101          1110,101
24      11110         1000         11110,1000
511     111111110     11111111     111111110,11111111
1025    11111111110   0000000001   11111111110,0000000001

Gamma-code(0) is unnecessary if we are encoding gaps. Example: gamma-code(9): 9 = 1001, chop the leftmost bit to get offset 001, length → 1110, so gamma-code(9) = 1110,001. Decoding: decode(1110101) = decode(1110,101) = 2^3 + 5 = 8 + 5 = 13; decode(111101000) = decode(11110,1000) = 2^4 + 8 = 16 + 8 = 24.

40 Exercise Given the following sequence of gamma-coded gaps, reconstruct the postings sequence: 1110001110101011111101101111011. Reminder: a gamma code is UNARY-LENGTH (a run of 1s), a terminating 0, then BINARY-OFFSET; the number of 1s in the unary part equals the length of the offset, and the binary value of the number is '1' followed by the offset. Decoded gaps: 9, 6, 3, 59 (= 32 + 27), 7 (= 4 + 3). Postings: 9, 15, 18, 77, 84. From these gamma codes, decode and reconstruct the gaps, then the full postings.

41 Gamma code properties Uniquely prefix-decodable, like VB
All gamma codes have an odd number of bits. G is encoded using 2⌊log2 G⌋ + 1 bits: ~log2 G bits of offset, ~log2 G bits of length, plus the terminating bit in the middle. This is almost within a factor of 2 of the best possible.

42 Gamma seldom used in practice
Machines have word boundaries: 8, 16, 32 bits. Compressing and manipulating at individual-bit granularity will slow down query processing. Variable byte alignment is potentially more efficient; regardless of efficiency, variable byte is conceptually simpler at little additional space cost. But bit-level codes do provide good lower bounds for us…

43 RCV1 compression
Data structure                              Size in MB
dictionary, fixed-width                         11.2
dictionary, term pointers into string            7.6
  with blocking, k = 4                           7.1
  with blocking & front coding                   5.9
collection (text, xml markup etc.)           3,600.0
collection (text)                              960.0
term-document incidence matrix              40,000.0
postings, uncompressed (32-bit words)          400.0
postings, uncompressed (20 bits)               250.0
postings, variable byte encoded                116.0
postings, γ-encoded                            101.0

Term incidence matrix ≈ 400K terms x 800K docs bits ≈ 40 GB. Compression ratio (vs. text) = 101/960 ≈ 10%; compression ratio (vs. the whole collection) = 101/3,600 ≈ 3%.

44 Index compression summary
We can now create an index for highly efficient Boolean retrieval that is very space-efficient: only about 3% of the total size of the collection, and only 10-15% of the total size of the text in the collection. However, we've ignored positional information, so space savings are smaller for the indexes used in practice; the techniques are substantially the same, though.

45 Models of Natural Language
Necessary for analysis: size estimation.

46 Text properties/model
How are different words distributed inside each document? Zipf's law: the frequency of the i-th most frequent word is 1/i times that of the most frequent word. Roughly 50% of the word occurrences are stopwords. How are words distributed across the documents in the collection? The fraction of documents containing a word k times follows a binomial distribution (cf. Pascal's triangle).

47 Binomial Distribution Shape
Binomial (cf. power law). [Aside: curse-word frequencies are power-law distributed!]

48 Probability of occurrence of a symbol depends on previous symbol
The probability of occurrence of a symbol depends on the previous symbol (a finite-context or Markovian model). The number of distinct words in a document (its vocabulary) grows as roughly the square root of the size of the document (Heaps' law). The average length of non-stop words is 6 to 7 letters.

49 Index Size Estimation Using Zipf’s Law

50 Corpus size for estimates
Consider N = 1M documents, each with about L = 1K terms. At an average of 6 bytes/term (incl. spaces/punctuation) that is 6 GB of data. Say there are m = 500K distinct terms among these.

51 Recall: Don’t build the matrix
A 500K x 1M matrix has half a trillion 0's and 1's (500 billion), but it has no more than one billion 1's: the matrix is extremely sparse. So we devised the inverted index and query processing for it. Now let us estimate the size of the index.

52 Estimating Index Size: Rough analysis based on Zipf’s Law
The i-th most frequent term has frequency proportional to 1/i. Let this frequency be c/i. Then sum_{i=1..m} c/i = 1. The m-th harmonic number is H_m = sum_{i=1..m} 1/i ≈ ln m, so c = 1/H_m ≈ 1/ln m = 1/ln(500K) ≈ 1/13. So the i-th most frequent term has frequency roughly 1/(13i). Here frequency is a (unitless) fraction of the word occurrences, not a count. Rationale: one unit is divided as 1/13 + 1/26 + 1/39 + … over the 500,000 terms; the harmonic sum is approximated by the area under the curve 1/x (see the next slide). In a document with 1000 terms, the most frequent term appears about 76 times and the 2nd most frequent about 38 times. A small numeric check follows below.
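A quick numeric check in Python of the estimate above, under the slide's assumptions (m = 500K terms, L = 1000 terms per document); note that the exact harmonic number gives c ≈ 1/13.7 rather than the rounder 1/13.

```python
# Numeric check of the Zipf-based estimate: m = 500K distinct terms,
# documents of L = 1000 terms, frequencies proportional to 1/i.

import math

m = 500_000
H_m = sum(1.0 / i for i in range(1, m + 1))   # m-th harmonic number
c = 1.0 / H_m                                 # normalizing constant

print(f"H_m ≈ {H_m:.2f}  (ln m ≈ {math.log(m):.2f})")
print(f"c   ≈ 1/{1 / c:.1f}")                 # ≈ 1/13.7; the slide rounds to 1/13

L = 1000
for i in (1, 2, 76):
    # the slide's rounder c ≈ 1/13 gives ≈ 76, 38, 1 occurrences respectively
    print(f"expected occurrences of term ranked {i}: {L * c / i:.1f}")
```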

53 Summing reciprocals: an approximation via integration
ln(t) < sum_{i=1..t} 1/i < 1 + ln(t). Use unit-width steps under and over the curve 1/x to bound the summation.

54 Postings analysis (contd.)
The expected number of occurrences of the i-th most frequent term in a doc of length L is L·c/i ≈ L/(13i) ≈ 76/i for L = 1000. Let J = L·c ≈ 76. Then the J most frequent terms are likely to occur in every document. Now imagine the term-document incidence matrix with rows sorted in decreasing order of term frequency: the most frequent term is likely to occur 76 times in a document with 1K terms, and the 76th most frequent term is likely to occur once in every document, so the gap size is 1 in both these cases.

55 Informal Observations
The most frequent term appears approx. 76 times in each document, the 2nd most frequent approx. 38 times, ..., and the 76th most frequent approx. once per document. So the first 76 terms appear at least once in each document, the next 76 terms at least once in every two documents, and so on.

56 Rows by decreasing frequency
With N docs and m terms, rows sorted by decreasing frequency: the J most frequent terms have N gaps of 1 each; the next J most frequent terms have N/2 gaps of 2 each; the next J have N/3 gaps of 3 each; etc.

57 J-row blocks In the i-th of these J-row blocks, we have J rows, each with N/i gaps of i each. Encoding a gap of i using gamma codes takes 2·log2(i) + 1 bits, so such a row uses ~ (2N log2 i)/i bits and the entire block uses ~ (2NJ log2 i)/i bits, which in our case is ~ 1.52 x 10^8 (log2 i)/i bits (since 2NJ = 2 x 1M x 76 = 1.52 x 10^8). Sum this over i from 1 up to m/J = 500K/76 ≈ 6,500, since there are m/J blocks.

Approximating the sum by an integral: sum_{i=1..6500} (log2 i)/i ≈ (1/ln 2) ∫_1^6500 (ln i)/i di = (ln 6500)^2 / (2 ln 2) ≈ (8.78)^2 / (2 x 0.69) ≈ 38.5 / 0.69 ≈ 55.

58 Exercise Work out the above sum and show it adds up to about 55 x 150 Mbits, which is about 1 GByte. So we've taken 6 GB of text and produced from it a 1 GB index that can handle Boolean queries! Neat! (about 16.7%) Make sure you understand all the approximations in our probabilistic calculation. The sum adds up to roughly 55 x 152 Mbits ≈ 8.4 Gbits ≈ 1.05 GB, as given in the original lecture; a numeric check follows below.
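A short Python check (not part of the original exercise) that evaluates the block sum numerically with the slide's numbers (N = 1M, J = 76, m = 500K).

```python
# Numeric check: sum the per-block gamma-coded sizes, 2*N*J*log2(i)/i bits for
# block i, over i = 1 .. m/J, and convert to GB.

import math

N, J, m = 1_000_000, 76, 500_000
num_blocks = m // J                      # ≈ 6,500 blocks

total_bits = sum(2 * N * J * math.log2(i) / i for i in range(1, num_blocks + 1))
print(f"sum over blocks ≈ {total_bits / 152e6:.1f} x 152 Mbits")
print(f"total ≈ {total_bits / 8 / 1e9:.2f} GB")   # roughly 1 GB, as claimed
```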




