
1 CS295: Info Quality & Entity Resolution
University of California, Irvine, Fall 2010
Course introduction slides
Instructor: Dmitri V. Kalashnikov
Copyright © Dmitri V. Kalashnikov, 2010

2 Class Organizational Issues
Class webpage
– http://www.ics.uci.edu/~dvk/CS295.html
– These slides will be posted there
Rescheduling of class time
– Now: Tue/Thu 3:30-4:50 PM @ ICS 209 (twice a week)
– New: Thu 3:00-5:20 PM @ ? (once a week, 10-minute break in the middle)
– Easier on students
– Is this time slot OK?

3 Class Structure
Student presentation-based class
– Students will present publications
– Papers cover recent trends (not comprehensive)
– Prepare slides; slides will be collected after the presentation
– Discussion
Final grade
– Quality of slides
– Quality of presentations
– Participation & attendance
– We are a small class, so please attend!
– No exams!

4 Tentative Syllabus

5 Presentation Topics
Topics
– All papers are split into "topics"
– One topic is covered per class day
Student presentations
– Each student will choose a day/topic to present
– A student presents on only one day per quarter, if possible, to reduce the workload
– A student will present 2(?) papers on that day
– Please start preparing early!

6 Tentative List of Publications

7 How to Present a Paper
Present the "high-level" idea
– The main idea of the paper [should be clear]
Present the technical depth of the techniques
1) Cover the techniques in detail
2) Try to analyze the paper [if you can]
– Discuss what you like about the paper
– Criticize the technique
– Do you see flaws/weaknesses in the proposed methodology?
– Do you think the techniques can be improved?
– Do you think the authors should have included additional info/algorithms?
Present the experiments
– Explain the datasets/setup/graphs (analyze results)
– Criticize the experiments [if you can]
– Is the data large enough? Are more experiments needed? Unexplained trends? etc.

8 Who wants to present first?

9 Talk Overview
– Class organizational issues
– Intro to Data Quality & Entity Resolution

10 Data Processing Flow: Data → Analysis → Decisions
Data
– Organizations & people collect large amounts of data
– Many types of data: textual, semi-structured, multimodal
Analysis
– Data is analyzed for a variety of purposes
– Automated analysis: data mining
– Human in the loop: OLAP
– Ad hoc analysis
Decisions
– Analysis supports decision making (business decision making, etc.)

11 Quality of Decisions Depends on Quality of Data
– Quality of Data → Quality of Analysis → Quality of Decisions
– Quality of data is critical: a $1 billion market (estimated by Forrester Group)
Data Quality
– A very old research area
– But no comprehensive textbook exists yet!

12 Example of Analysis on Bad Data: CiteSeer
– CiteSeer: top-k most cited authors
– Unexpected entries appear in the list
– Let's check two people in DBLP: "A. Gupta" and "L. Zhang"
– Analysis on bad data can lead to incorrect results; fix errors before analysis
– More than 80% of researchers working on data mining projects spend more than 40% of their project time on cleaning and preparing the data

13 *Why* Do Data Quality Issues Arise?
Types of DQ problems
– Ambiguity
– Uncertainty
– Erroneous data values
– Missing values
– Duplication
– etc.

14 Example of Ambiguity
– Ambiguity in categorical data
– "Location: Washington"
– D.C.? The state? Something else?

15 Example of Uncertainty
– Uncertainty in numeric data
– "John's salary is between $50K and $80K"
– Query: find all people with salary > $70K

16 Example of Erroneous Data
– Erroneous data values (example table shown on slide)

17 Example of Missing Values
– Missing values (example table shown on slide)

18 Example of Duplication
– Duplication (example records shown on slide)
– Same? Different?

19 Inherent Problems vs. Errors in Preprocessing
Inherent problems with data
– The dataset itself contains errors (as in the previous slides)
Errors in preprocessing
– The dataset might not contain errors
– But a preprocessing algorithm (e.g., extraction) fails
– Text: "John Smith lives in Irvine CA at 100 main st, his salary is $25K"
– Extractor output: <Person: Irvine, Location: CA, Salary: $100K, Address: null>

20 *When* Do Data Quality Issues Arise? Past.
Manual entering of data
– The prime reason in the past [Winkler, etc.]
– People make mistakes while typing in information (e.g., census data)
Tools to prevent entering bad data in the first place
– E.g., field redundancy
– Trying hard to avoid the problem altogether
– Sometimes it is not possible (e.g., missing data in a census)
Tools to detect problems with data
– If-then-else and other types of rules to fix problems
– Applied by people inconsistently, even though strict guidelines existed
– Asking people to fix problems is not always a good idea!

21 *When* Do DQ Issues Arise? Present.
Automated generation of DB content
– The prime reason for DQ issues nowadays
Analyzing unstructured or semi-structured raw data
– Text / Web
– Extraction
Merging DBs or data sources
– Duplicate information
– Inconsistent information
– Missing data
Inherent problems with well-structured data
– As in the examples shown

22 Data Flow wrt Data Quality
Raw Data → Handle Data Quality → Analysis → Decisions
Two general ways to deal with DQ problems
1) Resolve them, then apply analysis on clean data
– The classic Data Quality approach
2) Account for them during analysis of the dirty data
– E.g., put the data into a probabilistic DBMS
– Often not considered as DQ

23 Resolve Only What Is Needed!
Raw Data → Handle Data Quality → Analysis → Decisions
– Data might have many different (types of) problems in it
– Solve only those that might impact your analysis
Example: publication DB
– All papers by John Smith
– Venues might have errors; the rest is accurate
– Task: count the papers => do not fix venues!

24 Focus of This Class: Entity Resolution (ER)
– ER is a very common Data Quality challenge
– Disambiguating uncertain references to objects
Multiple variations
– Record Linkage [winkler:tr99]
– Merge/Purge [hernandez:sigmod95]
– De-duplication [ananthakrishna:vldb02, sarawagi:kdd02]
– Hardening soft databases [cohen:kdd00]
– Reference Matching [mccallum:kdd00]
– Object Identification [tejada:kdd02]
– Identity Uncertainty [pasula:nips02, mccallum:iiweb03]
– Coreference Resolution [ng:acl02]
– Fuzzy Match and Fuzzy Grouping [@microsoft]
– Name Disambiguation [han:jcdl04, li:aaai04]
– Reference Disambiguation [km:siam05]
– Object Consolidation [mccallum:kdd03wkshp, chen:iqis05]
– Reference Reconciliation [dong:sigmod05]
– Ironically, some of these names refer to the same thing (duplication)!

25 Entity Resolution: Lookup and Grouping
Lookup
– A list of all objects is given
– Match references to objects
Grouping
– No list of objects is given
– Group references that co-refer

26 When Does the ER Challenge Arise?
Merging multiple data sources (even structured ones)
– "J. Smith" in Database1, "John Smith" in Database2
– Do they co-refer?
References to people/objects/organizations in raw data
– Who is the "J. Smith" mentioned as an author of a publication?
Location ambiguity
– "Washington" (D.C.? WA? Other?)
Automated extraction from text
– "He's got his PhD/BS from UCSD and UCLA respectively."
– PhD: UCSD or UCLA?
Natural Language Processing (NLP)
– "John met Jim and then he went to school"
– "he": John or Jim?

27 Standard Approach to Entity Resolution
– Choosing features to use (for comparing two references)
– Choosing blocking functions (to avoid comparing all pairs)
– Choosing a similarity function (outputs how similar two references are)
– Choosing a problem representation (how to represent it internally, e.g., as a graph)
– Choosing a clustering algorithm (how to group references)
– Choosing a quality metric (for experimental work: how to measure the quality of the results)

28 Inherent Features: Standard Approach
– Deciding if two references u and v co-refer by analyzing their features ("feature-based similarity")
– Example references: u = {name "J. Smith", email js@google.com, ...}, v = {name "John Smith", email sm@yahoo.com, ...}
– A "similarity function" s(u,v) = f(u,v)
– If s(u,v) > t, then u and v are declared to co-refer
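A minimal sketch of this feature-based decision rule in Python; the feature set, weights, and threshold below are illustrative assumptions, not the specific functions used in the class papers:

```python
# Hypothetical feature-based co-reference decision: compute s(u, v) = f(u, v)
# and declare u, v co-referent if s(u, v) > t. All names/weights are made up.

def name_similarity(a: str, b: str) -> float:
    """Crude name similarity: 1.0 if equal, 0.7 if one abbreviates the other, else 0."""
    a, b = a.lower().strip(), b.lower().strip()
    if a == b:
        return 1.0
    # "j. smith" vs "john smith": same last token and same first initial
    if a.split()[-1] == b.split()[-1] and a[0] == b[0]:
        return 0.7
    return 0.0

def similarity(u: dict, v: dict) -> float:
    """Combine per-feature similarities into a single score s(u, v)."""
    s_name = name_similarity(u["name"], v["name"])
    s_email = 1.0 if u.get("email") == v.get("email") else 0.0
    return 0.8 * s_name + 0.2 * s_email   # weights are an assumption

u = {"name": "J. Smith", "email": "js@google.com"}
v = {"name": "John Smith", "email": "sm@yahoo.com"}

t = 0.5  # decision threshold (assumed)
print(similarity(u, v) > t)   # True here: 0.8 * 0.7 + 0.2 * 0 = 0.56 > 0.5
```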

29 Advanced Approach: Information Used
Besides inherent features (e.g., names "J. Smith" / "John Smith", emails js@google.com / sm@yahoo.com), the decision can use:
– Context features
– The entity-relationship graph (social network) derived from the dataset
– External data: the Web
– Public datasets (e.g., DBLP, IMDB)
– Encyclopedias (e.g., Wikipedia)
– Ontologies (e.g., DMOZ)
– Asking a person (not frequently; might not work well)
– (Conditional) functional dependencies & consistency constraints

30 Blocking Functions
Comparing each reference pair is too expensive
– N >> 1 references in the dataset
– Each can co-refer with the remaining N-1
– Complexity N(N-1)/2 is too high
Blocking functions
– A fast function that finds potential matches quickly
– Figure: candidate sets for R1 under the naïve approach, BF1 (one extra candidate), and BF2 (one lost, one extra), compared against the ground truth
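A sketch of how a blocking function cuts down the N(N-1)/2 comparisons: records are grouped by a cheap key, and only records that share a key become candidate pairs. The key here (first three letters of the last name) and the toy records are illustrative assumptions:

```python
from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "name": "John Smith"},
    {"id": 2, "name": "J. Smith"},
    {"id": 3, "name": "Jane Smithson"},
    {"id": 4, "name": "Alice Jones"},
]

def blocking_key(rec: dict) -> str:
    """Cheap key: first 3 letters of the last name token (illustrative choice)."""
    return rec["name"].split()[-1][:3].lower()

blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

# Only pairs inside the same block are compared by the expensive similarity function.
candidate_pairs = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
]
print(candidate_pairs)   # [(1, 2), (1, 3), (2, 3)] -- pairs sharing key "smi"
```

Note that, as on the slide's figure, the block brings in an extra non-match (record 3) but never has to look at record 4.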

31 Blocking Functions (cont'd)
Multiple BFs can be used
– Better if independent
– Use different record fields for blocking
Examples from [Winkler 1984]
1) Fst3(ZipCode) + Fst4(NAME)
2) Fst5(ZipCode) + Fst6(Street name)
3) 10-digit phone #
4) Fst3(ZipCode) + Fst4(LngstSubstring(NAME))
5) Fst10(NAME)
– BF4 is the #1 single blocking function
– BF1 + BF4 is the #1 pair
– BF1 + BF5 is the #2 pair
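A sketch of combining several blocking keys in the spirit of the [Winkler 1984] examples: a pair becomes a candidate if any of the keys agree. The field names, the reading of FstN as "first N characters", and the sample records are assumptions:

```python
# Multiple blocking functions; take the union of their candidates.
def bf1(r):   # first 3 chars of zip code + first 4 chars of name
    return r["zip"][:3] + r["name"][:4].lower()

def bf2(r):   # first 5 chars of zip code + first 6 chars of street name
    return r["zip"][:5] + r["street"][:6].lower()

def bf3(r):   # 10-digit phone number
    return r["phone"]

def candidate(r1, r2) -> bool:
    """Candidate pair if any blocking key agrees (keys use different fields)."""
    return any(bf(r1) == bf(r2) for bf in (bf1, bf2, bf3))

a = {"name": "John Smith", "zip": "92697", "street": "Main St", "phone": "9495551234"}
b = {"name": "Jon Smith",  "zip": "92612", "street": "Main Street", "phone": "9495551234"}
print(candidate(a, b))   # True: the phone key matches even though name/zip keys differ
```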

32 BFs: Other Interpretations
The dataset is split into smaller blocks
– Matching operations are performed within blocks
– A block is a clique
Blocking as a first pass
– Apply somebody else's technique first
– It will not only find candidates
– But will also merge many (even most) cases
– Leaving only the "tough cases"
– Then apply your technique on these "tough cases"

33 Basic Similarity Functions
s(u,v) = f(u,v)
Figure: per-attribute scores for u = {J. Smith, js@google.com, ...} and v = {John Smith, sm@yahoo.com, ...}, e.g., 0.8, 0.2, 0.3, 0.0
– How to compare attribute values
– Lots of metrics, e.g., Edit Distance, Jaro, ad hoc
– Cross-attribute comparisons
– How to combine attribute similarities
– Many methods, e.g., supervised learning, ad hoc
– How to mix it with other types of evidence
– Not only inherent features
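One way to combine per-attribute similarities, as mentioned above, is supervised learning. A minimal sketch using scikit-learn; the tiny labeled training set and the choice of logistic regression are assumptions, not the method of any particular paper:

```python
# Learn how to combine attribute similarities from labeled reference pairs.
from sklearn.linear_model import LogisticRegression

# Each row: [name_similarity, email_similarity] for one pair of references.
X = [
    [0.9, 1.0],   # labeled: same person
    [0.8, 0.0],   # labeled: same person, different emails
    [0.3, 0.0],   # labeled: different people
    [0.1, 0.0],   # labeled: different people
]
y = [1, 1, 0, 0]  # 1 = co-refer, 0 = do not

model = LogisticRegression().fit(X, y)

# Combined similarity s(u, v) = predicted probability of co-reference.
print(model.predict_proba([[0.7, 0.0]])[0][1])
```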

34 Standardization & Parsing
Standardization
– Converting attribute values into the same format, for proper comparison
Examples
– Convert all dates into MM/DD/YYYY format, so that "Jun 1, 2010" matches "6/01/10"
– Convert time into HH:mm:ss format, so that 3:00PM and 15:00 match
– Convert Doctor -> Dr.; Professor -> Prof.
Parsing
– Subdividing a value into its proper fields
– "Dr. John Smith Jr." becomes separate fields (e.g., title, first name, last name, suffix)
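A minimal sketch of the date standardization step described above, converting a few input formats into canonical MM/DD/YYYY so that "Jun 1, 2010" and "6/01/10" compare equal. The list of accepted input formats is an assumption:

```python
from datetime import datetime

# Try a few known input formats and emit a canonical MM/DD/YYYY string.
FORMATS = ["%b %d, %Y", "%m/%d/%y", "%m/%d/%Y", "%Y-%m-%d"]

def standardize_date(s: str) -> str:
    for fmt in FORMATS:
        try:
            return datetime.strptime(s.strip(), fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {s!r}")

print(standardize_date("Jun 1, 2010"))  # 06/01/2010
print(standardize_date("6/01/10"))      # 06/01/2010
```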

35 Example of Similarity Function: Edit Distance (1965)
– Compares two strings
– The minimum number of edits (insertions, deletions, substitutions) needed to transform one string into the other
– Ex.: "Smith" vs. "Smithx" -- one deletion is needed
– Dynamic programming solution
Example of an advanced version
– Assign different costs to insertions, deletions, substitutions
– Some errors are more expensive (i.e., less likely) than others
– The distance d(s1,s2) is the minimum-cost transformation
– Costs can be learned from data (supervised learning)
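A sketch of the classic dynamic-programming edit distance with unit costs; as noted above, "Smith" vs. "Smithx" comes out as 1. A weighted variant would replace the three unit costs with learned or hand-set costs:

```python
def edit_distance(s1: str, s2: str) -> int:
    """Minimum number of insertions, deletions, substitutions (unit costs)."""
    m, n = len(s1), len(s2)
    # d[i][j] = distance between prefixes s1[:i] and s2[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

print(edit_distance("Smith", "Smithx"))  # 1
```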

36 Clustering
Lots of methods exist -- really a lot!
Basic methods
– Hierarchical
– Agglomerative: decide a threshold t; if s(u,v) > t then merge(u,v)
– Partitioning
Advanced issues
– How to decide the number of clusters K
– How to handle negative evidence & constraints
– Two-step clustering & cluster refinement
– Etc.; a very vast area
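A minimal sketch of the threshold-based agglomerative step above: keep merging the two most similar clusters while their similarity exceeds t. The toy similarity values and the threshold are assumptions:

```python
# Naive single-link agglomerative clustering: merge while best similarity > t.
def agglomerative(items, sim, t):
    clusters = [{x} for x in items]
    while True:
        best, bi, bj = -1.0, None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(sim(a, b) for a in clusters[i] for b in clusters[j])
                if s > best:
                    best, bi, bj = s, i, j
        if bi is None or best <= t:
            break
        clusters[bi] |= clusters[bj]   # merge(u, v)
        del clusters[bj]
    return clusters

# Toy pairwise similarities between four references (assumed numbers).
S = {("r1", "r2"): 0.9, ("r1", "r3"): 0.2, ("r1", "r4"): 0.1,
     ("r2", "r3"): 0.3, ("r2", "r4"): 0.1, ("r3", "r4"): 0.8}
sim = lambda a, b: S.get((a, b), S.get((b, a), 0.0))

print(agglomerative(["r1", "r2", "r3", "r4"], sim, t=0.5))
# two clusters: {r1, r2} and {r3, r4}
```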

37 Quality Metrics
Purity of clusters
– Do clusters contain mixed elements? (~precision)
Completeness of clusters
– Does each cluster contain all of its elements? (~recall)
Tradeoff between them
– A single metric that combines them (~F-measure)

38 Precision, Recall, and F-measure
Assume
– You perform an operation to find relevant ("+") items
– E.g., Google "UCI" or some other terms
– R is the ground truth set, i.e., the set of relevant entries
– A is the answer returned by some algorithm
Precision
– P = |A ∩ R| / |A|
– Which fraction of the answer A are correct ("+") elements
Recall
– R = |A ∩ R| / |R|
– Which fraction of the ground-truth elements were found (are in A)
F-measure
– F = 2 / (1/P + 1/R), the harmonic mean of precision and recall
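These definitions transcribed directly into a few lines of Python, using the example sets that appear on the next slide:

```python
def precision_recall_f(A: set, R: set):
    """P = |A ∩ R| / |A|, Rec = |A ∩ R| / |R|, F = harmonic mean of P and Rec."""
    hits = len(A & R)
    p = hits / len(A)
    r = hits / len(R)
    f = 0.0 if hits == 0 else 2 / (1 / p + 1 / r)
    return p, r, f

R = {"a1", "a2", "a5", "a6", "a9", "a10"}   # ground truth
A = {"a1", "a3", "a5", "a7", "a9"}          # algorithm's answer
print(precision_recall_f(A, R))   # (0.6, 0.5, 0.5454...)
```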

39 Quality Metric: Pairwise F-measure
Example
– R = {a1, a2, a5, a6, a9, a10}
– A = {a1, a3, a5, a7, a9}
– A ∩ R = {a1, a5, a9}
– Pre = |A ∩ R| / |A| = 3/5
– Rec = |A ∩ R| / |R| = 3/6 = 1/2
Pairwise F-measure
– "+" are the pairs of references that should be merged
– "-" are the pairs that should not be merged
– Given an answer, compute Pre, Rec, and F-measure over these pairs
– A widely used metric in ER
– But a bad choice in many circumstances! What is a good choice?
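A sketch of the pairwise F-measure: the "items" are the pairs of references that fall in the same cluster, derived from a ground-truth clustering and an answer clustering. Both clusterings below are made up for illustration:

```python
from itertools import combinations

def positive_pairs(clusters):
    """All unordered pairs that fall in the same cluster (the '+' pairs)."""
    return {frozenset(p) for c in clusters for p in combinations(sorted(c), 2)}

def pairwise_f(truth_clusters, answer_clusters):
    R = positive_pairs(truth_clusters)    # pairs that should be merged
    A = positive_pairs(answer_clusters)   # pairs the algorithm merged
    hits = len(A & R)
    p = hits / len(A) if A else 0.0
    r = hits / len(R) if R else 0.0
    return 2 / (1 / p + 1 / r) if p and r else 0.0

truth  = [{"r1", "r2", "r3"}, {"r4", "r5"}]
answer = [{"r1", "r2"}, {"r3", "r4", "r5"}]
print(pairwise_f(truth, answer))   # 0.5: pairwise Pre = Rec = 2/4
```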

40 Web People Search (WePS)
A very active research area in the Web domain; many problem variations (e.g., with context keywords)
1. Query Google with a person name, e.g., "John Smith"
2. Get the top-K webpages (related to any John Smith)
3. Task: cluster the webpages, one cluster per person (Person 1, Person 2, ..., Person N)
– The number of distinct people is unknown beforehand
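A rough sketch of step 3: represent each returned page by a TF-IDF vector and group pages whose cosine similarity is high. The toy snippets, the use of scikit-learn, the greedy grouping, and the threshold are all assumptions; real WePS systems use richer features (named entities, URLs, ER graphs), as in the architecture slide below:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy snippets of top-K pages returned for the query "John Smith" (made up).
pages = [
    "John Smith, professor of computer science at UC Irvine",
    "Professor John Smith teaches computer science at UC Irvine",
    "John Smith scored two goals for the local soccer club",
    "Striker John Smith signs with the soccer team",
]

sim = cosine_similarity(TfidfVectorizer(stop_words="english").fit_transform(pages))

# Greedy grouping: put each page into the first cluster it is similar enough to.
clusters, t = [], 0.2   # threshold t is an assumption
for i in range(len(pages)):
    for c in clusters:
        if any(sim[i][j] > t for j in c):
            c.append(i)
            break
    else:
        clusters.append([i])
print(clusters)   # e.g., [[0, 1], [2, 3]]: one cluster per person
```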

41 Recall that...
Lookup
– A list of all objects is given
– Match references to objects
Grouping
– No list of objects is given
– Group references that co-refer
WePS is a grouping task

42 User Interface
– (Screenshot: user input form and clustered results)

43 System Architecture
– The user query ("John Smith") goes to a search engine, which returns the top-K webpages
– Preprocessing: TF/IDF, NE/URL extraction, ER graph construction
– Clustering of the preprocessed webpages, using auxiliary information
– Postprocessing: cluster sketches, cluster rank, webpage rank
– Results: Person 1, Person 2, Person 3, ...

