Finding Similar Items

Set Similarity Problem: Find similar sets. Motivation: Many things can be modeled/represented as sets. Applications:
–Face Recognition
–Recommender Systems (Collaborative Filtering)
–Document Similarity/Clustering
–Similar (but infrequent) Items

Application: Face Recognition We have a database of (say) 1 million face images, and we want to find the most similar images in the database. Represent faces by (relatively) invariant values, e.g., ratio of nose width to eye width. Many-one problem: given a new face, see if it is close to any of the 1 million old faces. Many-many problem: determine which pairs of the 1 million faces are similar.

Simple Solution Represent each face by a vector of 1000 values and score the comparisons. Sort-of OK for the many-one problem. Out of the question for the many-many problem (10^6 × 10^6 × 1000 / 2 numerical comparisons). We can do better!

Applications - Collaborative Filtering (1) Represent a customer, e.g., of Netflix, by the set of movies they rented. –Similar customers have a relatively large fraction of their choices in common. Motivation: A customer can be pitched an item that a similar customer bought, but which they did not buy.

Applications - Collaborative Filtering (2) Products are similar if they are bought by many of the same customers. –E.g., movies of the same genre are typically rented by similar sets of Netflix customers. Motivation: A customer can be pitched an item that is similar to an item they already bought. …isn’t this mining frequent pairs… Well, not exactly…

Applications: Similar Items Problem Search for pairs of items that appear together a large fraction of the times that either appears,
–even if neither item appears in very many baskets.
–Such items are considered "similar".
–E.g., caviar and crackers.
Modeling Each item is a set: the set of baskets in which it appears.
–Thus, the problem becomes: Find similar sets!

Applications: Similar Documents (1) Given a body of documents, e.g., Web pages, find pairs of docs that have a lot of text in common, e.g.:
–Mirror sites, or approximate mirrors.
–Plagiarism, including large quotations.
–Repetitions of news articles at news sites.
How do you represent a document so it is easy to compare with others?
–Special cases are easy, e.g., identical documents, or one document contained verbatim in another.
–The general case, where many small pieces of one doc appear out of order in another, is hard.

Applications: Similar Documents (2) Represent a doc by its set of shingles (or k-grams). A k-shingle (or k-gram) for a document is a sequence of k characters that appears in the document. Example: k = 2; doc = abcab. Set of 2-shingles = {ab, bc, ca}. The document problem then becomes finding similar sets.
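A minimal sketch of character shingling in Python (the function name `shingles` and the set-of-strings representation are illustrative choices, not from the slides):

```python
def shingles(doc: str, k: int) -> set:
    """Return the set of k-shingles (k-grams) of a document string."""
    return {doc[i:i + k] for i in range(len(doc) - k + 1)}

# Reproduces the slide's example: k = 2, doc = "abcab".
assert shingles("abcab", 2) == {"ab", "bc", "ca"}
```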

The Jaccard Measure of Similarity The similarity of sets S and T is the ratio of the sizes of their intersection and union: Sim(S, T) = |S ∩ T| / |S ∪ T| = Jaccard similarity. Disjoint sets have a similarity of 0, and the similarity of a set with itself is 1. Another example: the similarity of sets {1, 2, 3} and {1, 3, 4, 5} is 2/5 (the intersection {1, 3} has size 2; the union has size 5).
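A direct Python translation of the definition (the function name `jaccard` and the empty-set convention are my choices):

```python
def jaccard(s: set, t: set) -> float:
    """Jaccard similarity: |S intersect T| / |S union T|."""
    if not s and not t:
        return 1.0  # assumed convention: two empty sets are identical
    return len(s & t) / len(s | t)

assert jaccard({1, 2, 3}, {1, 3, 4, 5}) == 2 / 5
```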

Solutions
1. Divide-Compute-Merge (DCM) uses external sorting and merging.
2. Locality-Sensitive Hashing (LSH) can be carried out in main memory, but admits some false negatives.

Divide-Compute-Merge Originally designed for “shingles” and docs. –Applicable to other set similarity problems. Based on the merge-sort paradigm.

DCM Steps (pipeline)
1. Start: each doc as its list of shingles, e.g., doc1: s11,s12,…,s1k; doc2: s21,s22,…,s2k; …
2. Invert: produce <shingleId, docId> pairs: <s11,doc1>, <s12,doc1>, …, <s21,doc2>, …
3. Sort on shingleId: groups together all docs containing each shingle.
4. Invert and pair: for each shingle, emit a triplet <docId1, docId2, 1> for every pair of docs that contain it.
5. Sort on <docId1, docId2>.
6. Merge: sum the counts, producing triplets <docId1, docId2, count>.

DCM Summary
1. Start with the <shingleId, docId> pairs.
2. Sort by shingleId.
3. In a sequential scan, generate triplets <docId1, docId2, 1> for pairs of docs that share a shingle.
4. Sort on <docId1, docId2>.
5. Merge triplets with common docIds to generate triplets of the form <docId1, docId2, count>.
6. Output document pairs with count > threshold.
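An in-memory sketch of these counting steps in Python (a real DCM run would use external sorting on disk; the name `dcm_similar_pairs` and the dict-based grouping are illustrative stand-ins for the sort phases):

```python
from collections import defaultdict
from itertools import combinations

def dcm_similar_pairs(doc_shingles: dict, threshold: int):
    """doc_shingles maps docId -> set of shingles.
    Returns (docId1, docId2, count) for pairs sharing more than `threshold` shingles."""
    # Steps 1-2: invert to shingleId -> [docIds]; the dict grouping
    # simulates the external sort on shingleId.
    by_shingle = defaultdict(list)
    for doc, shs in doc_shingles.items():
        for s in shs:
            by_shingle[s].append(doc)
    # Steps 3-5: emit <docId1, docId2, 1> per shared shingle, then merge counts.
    counts = defaultdict(int)
    for docs in by_shingle.values():
        for d1, d2 in combinations(sorted(docs), 2):
            counts[(d1, d2)] += 1
    # Step 6: keep pairs above the threshold.
    return [(d1, d2, c) for (d1, d2), c in counts.items() if c > threshold]
```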

Some Optimizations “Invert and Pair” is the most expensive step. Speed it up by eliminating very common shingles. –“the”, “404 not found”, “<A HREF”, etc. Also, eliminate exact-duplicate docs first.

Locality-Sensitive Hashing First represent sets as bit vectors, and organize them as the columns of a matrix M. Big idea: hash the columns of M several times. Arrange that (only) similar columns are likely to hash to the same bucket. Candidate pairs are those that hash at least once to the same bucket.

LSH Roadmap Documents → (Technique: Shingling) → Sets (e.g., similar customers, similar products) → (Technique: Minhashing) → Signatures → (Technique: Locality-Sensitive Hashing) → Buckets containing mostly similar items (sets).

Minhashing Suppose that the elements of each set are chosen from a "universal" set of n elements e_0, e_1, …, e_{n-1}. Pick a random permutation of the n elements. The minhash value of a set S is the first element, in the permuted order, that is a member of S. Example Suppose the universal set is {1, 2, 3, 4, 5} and the permutation we choose is (3,5,4,2,1).
–Set {2, 3, 5} hashes to 3.
–Set {1, 2, 5} hashes to 5.
–Set {1, 2} hashes to 2.
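A direct Python rendering of this definition (the function name `minhash_perm` is mine):

```python
def minhash_perm(permutation, s):
    """Minhash of set s: the first element of the permutation that is in s."""
    for e in permutation:
        if e in s:
            return e
    return None  # s is empty

# The slide's example: permutation (3,5,4,2,1) over the universe {1,...,5}.
assert minhash_perm((3, 5, 4, 2, 1), {2, 3, 5}) == 3
assert minhash_perm((3, 5, 4, 2, 1), {1, 2, 5}) == 5
assert minhash_perm((3, 5, 4, 2, 1), {1, 2}) == 2
```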

Minhash signatures Compute signatures for the sets by using m permutations.
–Typically, m would be about 100.
The signature of a set S is the list of the minhash values of S, for each of the m permutations, in order. Example Universal set is {1,2,3,4,5}, m = 3, and the permutations are:
–π1 = (1,2,3,4,5),
–π2 = (5,4,3,2,1),
–π3 = (3,5,1,4,2).
Signature of S = {2,3,4} is (2,4,3).

Minhashing and Jaccard Distance Surprising relationship If we choose a permutation at random, the probability that it produces the same minhash value for two sets equals the Jaccard similarity of those sets. Why? (see the next slides) Thus, if we have the signatures of two sets S and T, we can estimate the Jaccard similarity of S and T by the fraction of corresponding minhash values that agree.
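A sketch of the resulting estimator, assuming signatures are equal-length lists of minhash values (the name `estimate_jaccard` is illustrative):

```python
def estimate_jaccard(sig_s, sig_t):
    """Estimate Jaccard similarity as the fraction of signature positions that agree."""
    assert len(sig_s) == len(sig_t)
    matches = sum(1 for a, b in zip(sig_s, sig_t) if a == b)
    return matches / len(sig_s)
```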

Four Types of Rows Given columns C1 and C2, each representing a set, rows may be classified as:

Type  C1  C2
a     1   1
b     1   0
c     0   1
d     0   0

Also, let a = # rows of type a, etc. Note Sim(C1, C2) = a/(a+b+c).

Minhashing – put differently Imagine the rows permuted randomly. Define the “hash” function h(C) = the number of the first row (in the permuted order) in which column C has a 1. Use several independent hash functions to create a signature.

Minhashing and Jaccard Distance (2) The probability (over all permutations of the rows) that h(C1) = h(C2) is the same as Sim(C1, C2). Both are a/(a+b+c)! Why?
–Look down columns C1 and C2 until we see a 1.
–If it’s a type-a row, then h(C1) = h(C2). If a type-b or type-c row, then not.
–Type-d rows play no role for either column.

Implementing Minhashing Generating a permutation of the entire universe of objects is infeasible. Rather, simulate the choice of a random permutation by picking a random hash function h. –Pretend that the permutation represented by h places element e in position h(e). Several elements might wind up in the same position. As long as the number of buckets is large, we can break ties as we like, and the simulated permutations will be sufficiently random that the relationship between signatures and similarity still holds.

Algorithm for minhashing To compute the minhash value for a set S = {a_1, a_2, …, a_n} using a hash function h, we can execute:

V := infinity;
FOR i := 1 TO n DO
    IF h(a_i) < V THEN V := h(a_i);

As a result, V will be set to the hash value of the element of S that has the smallest hash value.

Algorithm for set signature If we have m hash functions h_1, h_2, …, h_m, then we can compute the m minhash values in parallel, as we process each member of S:

FOR j := 1 TO m DO
    V_j := infinity;
FOR i := 1 TO n DO
    FOR j := 1 TO m DO
        IF h_j(a_i) < V_j THEN V_j := h_j(a_i);
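The same double loop in runnable Python; here the signature stores the minimum hash values themselves (the name `minhash_signature` is illustrative):

```python
def minhash_signature(s, hash_funcs):
    """One minhash value per hash function: the smallest h_j(a) over a in s."""
    sig = [float("inf")] * len(hash_funcs)
    for a in s:
        for j, h in enumerate(hash_funcs):
            if h(a) < sig[j]:
                sig[j] = h(a)
    return sig
```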

Example S = {1,3,4}, T = {2,3,5}, with h(x) = x mod 5 and g(x) = 2x+1 mod 5.

For S: h(1) = 1, h(3) = 3, h(4) = 4; g(1) = 3, g(3) = 2, g(4) = 4.
For T: h(2) = 2, h(3) = 3, h(5) = 0; g(2) = 0, g(3) = 2, g(5) = 1.

Recording, for each hash function, the element that attains the smallest hash value (the "first element in permuted order" of the earlier definition):
sig(S) = (1, 3), since 1 minimizes h over S and 3 minimizes g over S.
sig(T) = (5, 2), since 5 minimizes h over T and 2 minimizes g over T.
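A sketch reproducing this example with the element-recording variant (min over elements keyed by their hash value; names are mine):

```python
def minhash_element(s, h):
    """Element of s with the smallest hash value under h."""
    return min(s, key=h)

h = lambda x: x % 5
g = lambda x: (2 * x + 1) % 5
S, T = {1, 3, 4}, {2, 3, 5}
assert (minhash_element(S, h), minhash_element(S, g)) == (1, 3)
assert (minhash_element(T, h), minhash_element(T, g)) == (5, 2)
```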

Locality-Sensitive Hashing of Signatures Organize signatures of the various sets as a matrix M, with a column for each set's signature and a row for each hash function. Big idea: hash columns of signature matrix M several times. Arrange that (only) similar columns are likely to hash to the same bucket. Candidate pairs are those that hash at least once to the same bucket.

Partition Into Bands Divide the rows of the signature matrix M into b bands of r rows each.

Partition Into Bands For each band, hash its portion of each column of M to a hash table with k buckets. Candidate column pairs are those that hash to the same bucket for at least one band.
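A minimal in-memory sketch of the banding technique (Python dicts stand in for the slide's k-bucket hash tables; the name `lsh_candidate_pairs` is illustrative):

```python
from collections import defaultdict
from itertools import combinations

def lsh_candidate_pairs(signatures, b, r):
    """signatures: dict mapping setId -> signature (list of b*r minhash values).
    Returns candidate pairs: sets whose signatures agree on all rows of some band."""
    candidates = set()
    for band in range(b):
        buckets = defaultdict(list)  # one fresh hash table per band
        for set_id, sig in signatures.items():
            # Hash this band's slice of the signature; tuple hashing
            # plays the role of the hash table with k buckets.
            buckets[tuple(sig[band * r:(band + 1) * r])].append(set_id)
        for ids in buckets.values():
            for pair in combinations(sorted(ids), 2):
                candidates.add(pair)
    return candidates
```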

Analysis
–Probability that the signatures agree on one row: s (the Jaccard similarity).
–Probability that they agree on all r rows of a given band: s^r.
–Probability that they do not agree on all the rows of a band: 1 − s^r.
–Probability that for none of the b bands do they agree in all rows of that band: (1 − s^r)^b.
–Probability that the signatures agree in all rows of at least one band: 1 − (1 − s^r)^b.
This last function is the probability that the signatures will be compared for similarity.

Example Suppose 100,000 columns and signatures of 100 integers. The signatures then take 40 MB. But the 5,000,000,000 pairs of signatures would take a while to compare. Choose 20 bands of 5 integers/band.

Suppose C1, C2 are 80% Similar Probability C1, C2 agree on one particular band: –(0.8)^5 ≈ 0.328. Probability C1, C2 do not agree on any of the 20 bands: –(1 − 0.328)^20 ≈ 0.00035 –i.e., we miss about 1/3000th of the 80%-similar column pairs. The chance that we do find this pair of signatures together in at least one bucket is 1 − 0.00035 ≈ 0.99965, or about 99.97%.

Suppose C1, C2 Only 40% Similar Probability C1, C2 agree on one particular band: –(0.4)^5 ≈ 0.01. Probability C1, C2 do not agree on any of the 20 bands: –(1 − 0.01)^20 ≈ 0.80 –i.e., we miss a lot... The chance that we do find this pair of signatures together in at least one bucket is 1 − 0.80 = 0.20, or only 20%.
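Both examples follow from the formula 1 − (1 − s^r)^b; a quick check in Python (the function name is mine):

```python
def prob_candidate(s, r, b):
    """Probability that two columns with similarity s become a candidate pair."""
    return 1 - (1 - s ** r) ** b

print(prob_candidate(0.8, 5, 20))  # ~0.9996: 80%-similar pairs are almost always found
print(prob_candidate(0.4, 5, 20))  # ~0.186: 40%-similar pairs are usually missed
```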

Analysis of LSH – What We Want Ideally, the probability of two columns sharing a bucket would be a step function of their similarity s: no chance if s < t, probability 1 if s > t, for some chosen threshold t.

What One Row Gives You With a single row, the probability of two columns sharing a bucket grows linearly with their similarity s. Remember: probability of equal hash values = similarity.

What b Bands of r Rows Give You The probability of sharing a bucket, as a function of the similarity s of two columns:
–s^r: all rows of a band are equal.
–1 − s^r: some row of a band is unequal.
–(1 − s^r)^b: no bands are identical.
–1 − (1 − s^r)^b: at least one band is identical.
This is an S-curve whose threshold t (where the probability rises most steeply) is approximately t ≈ (1/b)^(1/r); e.g., for b = 20 and r = 5, t ≈ 0.55.

LSH Summary Tune b and r to get almost all pairs with similar signatures, but eliminate most pairs that do not have similar signatures. Check in main memory that candidate pairs really do have similar signatures. Optional: in another pass through the data, check that the remaining candidate pairs really are similar columns.