Presentation on theme: "Data Quality and Data Cleaning: An Overview"— Presentation transcript:
1 Data Quality and Data Cleaning: An Overview
Tamraparni Dasu, Theodore Johnson
AT&T Labs - Research
2 Acknowledgements
We would like to thank the following people who contributed to this tutorial and to our book, Exploratory Data Mining and Data Quality (Wiley): Deepak Agarwal, Dave Belanger, Bob Bell, Simon Byers, Corinna Cortes, Ken Church, Christos Faloutsos, Mary Fernandez, Joel Gottlieb, Andrew Hume, Nick Koudas, Elefteris Koutsofios, Bala Krishnamurthy, Ken Lyons, David Poole, Daryl Pregibon, Matthew Roughan, Gregg Vesonder, and Jon Wright.
4 Learning Objectives
Data Quality: the nature of the beast
- DQ is pervasive and expensive
- The problems are messy and unstructured
- They cannot be pigeonholed into clearly defined problems or disciplines
- As a consequence, there is very little research and few systematic solutions
Purpose of the tutorial
- Define a framework for the problem of data quality, and place the existing work within this structure
- Discuss methods for detecting, defining, measuring and resolving data quality issues
- Identify challenges and opportunities
- Make research directions clear
5 Overview
- Data quality motivation
- The meaning of data quality (1)
- The data quality continuum
- The meaning of data quality (2)
- Data quality metrics
- Data quality process: where do problems come from, and how can they be resolved
- Technical tools: process management, database, metadata, statistics
- Case study
- Research directions and references
6 Why DQ? Data quality problems are expensive and pervasive
- DQ problems cost hundreds of billions of dollars each year: lost revenues, credibility, customer retention
- Resolving data quality problems is often the biggest effort in a data mining study; 50%-80% of the time in data mining projects is spent on DQ
- Interest in streamlining business operations databases to increase operational efficiency (e.g. cycle times), reduce costs, and conform to legal requirements
8 Meaning of Data Quality (1)
Data do not conform to expectations, and the expectations themselves are opaque
- Data do not match up to specifications: violations of data types and schema; glitches such as missing, inconsistent and incomplete data, typos
- The specs are inaccessible: complex, lacking metadata, hidden rules, no documentation, word of mouth, changing
Many sources and manifestations
- Manual errors, HW/SW constraints, shortcuts in data processing
9 Example
Two sample records (pipe-delimited):
T.Das | 55536o8327 | 24.95 | Y | - | 0.0 | 1000
Ted J. | | 2000 | N | M | NY | 1000
Can we interpret the data?
- What do the fields mean? What are the units of measurement? Field three is Revenue: in dollars or cents?
- Field 4 is a censored flag. How do we handle censored data?
- Field seven is Usage. Is it censored?
Data glitches
- Typos, multiple formats, missing / default values
Metadata and domain expertise are needed to answer these questions.
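To make the glitch detection concrete, here is a minimal Python sketch that parses records like the two above and flags obvious problems. The field layout (name, phone, revenue, censored flag, two unidentified fields, usage) is an assumption for illustration; in practice it would come from metadata.

# Sketch only: the field names below are hypothetical, not taken from the tutorial's schema.
import re

FIELDS = ["name", "phone", "revenue", "censored_flag", "field5", "field6", "usage"]

def check_record(line):
    values = [v.strip() for v in line.split("|")]
    issues = []
    if len(values) != len(FIELDS):
        issues.append(f"expected {len(FIELDS)} fields, got {len(values)}")
        return issues
    rec = dict(zip(FIELDS, values))
    if not re.fullmatch(r"\d{10}", rec["phone"]):
        issues.append(f"phone not 10 digits: {rec['phone']!r}")   # catches the letter 'o' typo and missing values
    try:
        float(rec["revenue"])
    except ValueError:
        issues.append(f"revenue not numeric: {rec['revenue']!r}")
    if rec["usage"] == "0":
        issues.append("usage is 0: missing value, default, or true zero?")
    return issues

for line in ["T.Das|55536o8327|24.95|Y|-|0.0|1000",
             "Ted J.||2000|N|M|NY|1000"]:
    print(line, "->", check_record(line) or "no obvious glitches")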
12 Common Data Glitches
Systemic changes to data which are external to the recorded process:
- Changes in data layout / data types: integer becomes string, fields swap positions, etc.
- Changes in scale / format: dollars vs. euros
- Temporary reversion to defaults: failure of a processing step
- Missing and default values: "0 represents both missing and default values"
- Gaps in time series, especially when records represent incremental changes
13 Conventional Definition of Data Quality
- Accuracy: the data was recorded correctly.
- Completeness: all relevant data was recorded.
- Uniqueness: entities are recorded once.
- Timeliness: the data is kept up to date.
- Consistency: the data agrees with itself.
14 Problems
- Not measurable: accuracy and completeness are difficult, perhaps impossible, to measure.
- Rigid and static.
- Context independent: no accounting for what is important. Aggregates can tolerate inaccuracies, but signatures cannot.
- Incomplete: what about interpretability, accessibility, metadata, relevance to the analysis, etc.?
- Vague: the conventional definitions provide no guidance towards practical improvements of the data.
15 Finding a modern definition
We need a definition of data quality which
- reflects the use of the data,
- leads to improvements in processes,
- is measurable (we can define metrics).
First, we need a better understanding of how and where data quality problems occur: the data quality continuum.
16 The Data Quality Continuum
Data/information is not static; it flows in a data collection and usage process:
- Data gathering
- Data delivery
- Data storage
- Data integration
- Data retrieval
- Data mining/analysis
Problems can and do arise at all of these stages; end-to-end, continuous monitoring is needed.
17 Data Gathering
How does the data enter the system? Sources of problems:
- Manual entry
- No uniform standards for content and formats
- Parallel data entry (duplicates)
- Approximations, surrogates - SW/HW constraints
- Measurement errors
18 Solutions
Potential solutions:
- Preemptive: process architecture (build in integrity checks); process management (reward accurate data entry, data sharing, documentation, data stewards)
- Retrospective: cleaning focus (duplicate removal, merge/purge, name & address matching, field value standardization); diagnostic focus (automated detection of glitches)
Process tools help preemptively, DB tools help with cleaning, and statistical tools help with analysis/diagnosis.
19 Data Delivery
Data is sent from the source to the destination in hops. Problems:
- Destroying or mutilating information by inappropriate pre-processing: inappropriate aggregation, nulls converted to default values
- Loss of data: buffer overflows, transmission problems, no checks (did all the files arrive in their entirety?)
20 Solutions
- Build reliable transmission protocols; use a relay server
- Verification: checksums, verification parser; do the uploaded files fit an expected pattern?
- Relationships: are there dependencies between data streams and processing steps?
- Interface agreements: data quality commitment from the data stream supplier
21 Data Storage
Problems in physical storage
- Can be an issue, but terabytes are cheap.
Problems in logical storage
- Poor metadata: data feeds are often derived from application programs or legacy data sources.
- Inappropriate data models: missing timestamps, incorrect normalization, etc.
- Ad-hoc modifications: structuring the data to fit the GUI.
- Hardware / software constraints: data transmission via Excel spreadsheets, Y2K.
22 Solutions
- Metadata: document and publish data specifications in real time.
- Planning: provide for worst case scenarios; size the resources to the data feeds.
- Data exploration: use data browsing and data mining tools to examine the data. Does it meet the specifications? Has something changed? Any departure from expected values and processes?
23 Data Integration
Combine data sets (acquisitions, across departments). A common source of problems:
- Heterogeneous data: no common key, different field formats; approximate matching (e.g., names and addresses)
- Different definitions: what is a customer - an account, an individual, a contract ...
- Time synchronization: does the data relate to the same time periods? Are the time windows compatible?
- Legacy data: IMS, spreadsheets, ad-hoc structures
- Sociological factors: reluctance to share - loss of power
24 Solutions
- Commercial tools: there is a significant body of research in data integration, and many tools for address matching and schema mapping are available.
- Data browsing and integration: many hidden problems and meanings - extract metadata; view before and after results - any anomalies in integration?
- Manage people and processes: end-to-end accountability (a data steward?); reward data sharing and data maintenance.
25 Data Retrieval
Exported data sets are often an extract of the actual data. Problems occur because:
- The source data is not properly understood.
- The need for derived data is not understood.
- Simple mistakes: inner join vs. outer join, misunderstanding NULL values.
- Computational constraints: e.g., too expensive to give a full history, so a snapshot is used instead.
- Incompatibility: EBCDIC?
26 Solutions
- Tools: use appropriate ETL, EDM and XML tools
- Testing: test queries to make sure that the result matches what is expected
- Plan ahead: make sure that the retrieved data can be stored and delivered as per specifications
27 Data Mining and Analysis
Data collected for analysis and mining. Problems in the analysis:
- Scale and performance
- Confidence bounds?
- Black boxes and dart boards ("don't need no analysts")
- Attachment to models
- Insufficient domain expertise
- Casual empiricism
28 Solutions
- Data exploration: determine which models and techniques are appropriate, find data bugs, develop domain expertise.
- Continuous analysis: are the results stable? How and why do they change?
- Accountability: validate the analysis; make the analysis part of the feedback loop to improve data processes.
30 Meaning of Data Quality Revisited
- Conventional definitions: completeness, uniqueness, consistency, accuracy, etc. - measurable?
- Modernize the definition of DQ in the context of the DQ continuum.
- Depends on data paradigms (data gathering, storage): federated data, high dimensional data, descriptive data, longitudinal data, streaming data, web (scraped) data, numeric vs. categorical vs. text data.
31 DQ Meaning Revisited
- Depends on applications (delivery, integration, analysis): business operations; aggregate analysis, prediction; customer relations ...
- Data interpretation: know all the rules used to generate the data.
- Data suitability: use of proxy data; relevant data is missing.
- Increased DQ leads to increased reliability and usability (directionally correct).
33 Data Quality Constraints
- Many data quality problems can be captured by static constraints based on the schema: nulls not allowed, field domains, foreign key constraints, etc.
- Many others are due to problems in workflow, and can be captured by dynamic constraints, e.g., orders above $200 are processed by Biller 2.
- The constraints follow an 80-20 rule: a few constraints capture most cases, while thousands of constraints are needed to capture the last few cases.
- Constraints are measurable. Data quality metrics?
34 Data Quality Metrics
Need to quantify data quality
- DQ is complex; no set of numbers will be perfect.
- Context and domain dependent.
- Should indicate what is wrong and how to improve.
- Should be measurable, practical and implementable.
Types of metrics
- Static vs. dynamic constraints
- Operational vs. diagnostic
Metrics should be directionally correct with an improvement in use of the data. A very large number of metrics are possible; choose the most important ones depending on context.
35 Examples of Data Quality Metrics
Usability and reliability of the data:
- Conformance to schema (static): evaluate constraints on a snapshot.
- Conformance to business rules (dynamic): evaluate constraints on changes in the database, across time or across databases.
- Accuracy: perform an inventory (expensive), or use a proxy (track complaints).
- Accessibility, interpretability.
- Glitches in analysis.
- Successful completion of the end-to-end process.
- Increase in automation, others ...
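As a rough illustration of how static constraints become a measurable metric, here is a minimal Python sketch that evaluates a few declared constraints on a snapshot and reports the fraction of conforming records. The table and constraints are hypothetical.

records = [
    {"id": 1, "state": "NY", "amount": 120.0},
    {"id": 2, "state": "ny", "amount": -5.0},   # bad case and negative amount
    {"id": 3, "state": None, "amount": 40.0},   # null not allowed
]

constraints = {
    "state_not_null": lambda r: r["state"] is not None,
    "state_upper_2ch": lambda r: r["state"] is not None and r["state"].isupper() and len(r["state"]) == 2,
    "amount_nonneg":   lambda r: r["amount"] >= 0,
}

violations = {name: 0 for name in constraints}
clean = 0
for r in records:
    ok = True
    for name, check in constraints.items():
        if not check(r):
            violations[name] += 1
            ok = False
    clean += ok

print("conformance:", clean / len(records))     # operational metric
print("violations by constraint:", violations)   # diagnostic detail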
37 Technical Approaches
Need a multi-disciplinary approach; no single approach solves all problems.
- Process management: pertains to data processes and flows; checks and controls, audits.
- Database: storage, access, manipulation and retrieval.
- Metadata / domain expertise: interpretation and understanding.
- Statistics: analysis, diagnosis, model fitting, prediction, decision making ...
38 Process Management
Business processes which encourage DQ:
- Assign dollars to quality problems
- Standardize content and formats
- Enter data once, enter it correctly (incentives for sales, customer care)
- Automation
- Assign responsibility: data stewards
- Continuous end-to-end data audits and reviews, especially at transitions between organizations
- Data monitoring
- Data publishing
- Feedback loops
39 Data Monitoring
- Data processing systems are often thought of as open-loop systems: process and forget ... computers don't make mistakes, do they?
- Analogy to control systems: feedback loops.
- Monitor the system to detect differences between actual and intended behavior.
- Feedback loop to correct the behavior of earlier components.
40 Example
Sales, provisioning, and billing for telecommunications service:
- Many stages involving handoffs between organizations and databases (simplified picture).
- Transition between organizational boundaries is a common cause of problems.
- Natural feedback loops: the customer complains if the bill is too high.
- Missing feedback loops: no complaints if we undercharge.
41 Example
[Diagram: data flows among Sales, Order, Customer, Customer Care, Customer Account Information, Billing, and Provisioning, showing existing data flows and missing data flows.]
42 Data Monitoring
Use data monitoring to add missing feedback loops. Methods:
- Data tracking / auditing: follow a sample of transactions through the workflow.
- Reconciliation of incrementally updated databases with original sources.
- Mandated consistency with a Database of Record (DBOR).
- Data publishing.
43 Data Publishing
Make the contents of a database available in a readily accessible and digestible way:
- Web interface (universal client).
- Data squashing: publish aggregates, cubes, samples, parametric representations.
- Publish the metadata.
Close feedback loops: many people look at the data, use different sections for different purposes in different ways, and "test" the data.
Surprisingly difficult sometimes: organizational boundaries, loss of control interpreted as loss of power, desire to hide problems.
44 Database Technology
Why use databases? Most data lives in databases, and statisticians spend a lot of time on EDA, sanity checks and summarization. Databases offer:
- Powerful data analysis and query tools.
- Extensive data import/export/access facilities.
- Data validation.
- Integration of data from multiple sources.
45 Relational Databases
- A database is a collection of tables. Each table is a collection of records. Each record contains values of named fields.
- The values can be NULL, meaning either "don't know" or "not applicable".
- A key is a field (or set of fields) whose value is unique in every record of a table; it identifies the thing described by the record.
- Data from different tables is associated by matching on field values (join).
- Foreign key join: all values of one field are contained in the set of values of another field, which is a key.
46 SalesForce and Orders (foreign key example)
SalesForce table: Name | ID | BaseSalary | Commission, with rows for Joe, Ted, Mary and Sunita.
Orders table: ID | SalesForceID | SaleDate | DeliveryDate, with sale/delivery date pairs such as (../3/2002, 8/9/2002), (../8/2002, 8/23/2002), (../15/2002, NULL), (../24/2002, 9/4/2002).
SalesForceID in Orders is a foreign key referencing ID in SalesForce.
47 SQL (Structured Query Language)
How many sales are pending delivery, by salesperson? Specify the attributes, the data sources, and how to integrate them:
Select S.Name, count(*)
From SalesForce S, Orders O
Where S.ID = O.SalesForceID
  And O.DeliveryDate IS NULL
Group By S.Name
DeliveryDate IS NULL means the delivery is pending; the query counts by salesperson name.
48 Database Tools
Most DBMSs provide many data consistency tools:
- Data types: string, date, float, integer
- Domains (restricted sets of field values): restrictions on field values, e.g. telephone number
- Constraints: column constraints (Not Null, Unique, restriction of values); table constraints (primary and foreign key constraints)
- Powerful query language
- Triggers
- Timestamps, temporal DBMS
49 Then why is every DB dirty?
- Consistency constraints are often not used: cost of enforcing the constraint (e.g., foreign key constraints, triggers); loss of flexibility; constraints not understood (e.g., large, complex databases with rapidly changing requirements); the DBA does not know / does not care.
- Complex, heterogeneous, poorly understood data: merged, federated, web-scraped DBs.
- Undetectable problems: incorrect values, missing data.
- Metadata not maintained.
- The database is too complex to understand.
50 Semantic Complexity
Different ideas about exactly what the data represents lead to errors. Example:
- HR uses the SalesForce table to record the current status of the sales staff. When a person leaves employment, their record is deleted.
- The CFO uses the SalesForce and Orders tables to compute the volume of sales each quarter. Strangely, their numbers are always too low ...
- Enforcing the foreign key join by deletion would drop records from the Orders table, which is even worse.
- The CFO needs a historical view of the SalesForce table to get accurate answers.
DQ problems arise when information is not fully communicated to the users.
52 Data Loading
- The data might be derived from a questionable source: federated databases, merged databases, text files, log records, web scraping.
- The source database might admit only a limited set of queries, e.g., querying a web page by filling in a few fields.
- The data might need restructuring: field value transformation; table transformations (e.g. denormalize, pivot, fold).
Denormalize means doing the foreign key joins to put all the relevant information in one record - for example, joining SalesForce to Orders so that you can do the analysis on a single table.
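A minimal pandas sketch of that denormalization, joining Orders to SalesForce on the foreign key so the analysis can run on a single table. Only the column names follow the SalesForce/Orders example in the slides; the data values are made up.

import pandas as pd

salesforce = pd.DataFrame({
    "ID": [1, 2],
    "Name": ["Joe", "Ted"],
    "BaseSalary": [50000, 52000],   # hypothetical values
    "Commission": [0.05, 0.04],
})
orders = pd.DataFrame({
    "ID": [101, 102, 103],
    "SalesForceID": [1, 1, 2],
    "SaleDate": ["2002-08-03", "2002-08-08", "2002-08-15"],
    "DeliveryDate": ["2002-08-09", "2002-08-23", None],
})

# Left join keeps every order even when the salesperson record is missing (itself a DQ signal).
flat = orders.merge(salesforce, left_on="SalesForceID", right_on="ID",
                    how="left", suffixes=("_order", "_sf"))
print(flat)

# Pending-delivery count per salesperson, as in the earlier SQL example.
print(flat[flat["DeliveryDate"].isna()].groupby("Name").size())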
53 ETL
Provides tools to:
- Access data (DB drivers, web page fetch, parse tools)
- Validate data (ensure constraints)
- Transform data (e.g. addresses, phone numbers)
- Transform tables (pivot, etc.)
- Load data
Design automation:
- Schema mapping
- Queries to data sets with limited query interfaces (web queries)
- Trade off time by preprocessing data for validation/transformation etc. to speed up loading
Design automation tools will automatically generate queries to bridge the gap between the canned queries and the custom queries that you want to ask.
54 (Example of pivot / unpivot)
Unpivoted table: Customer | Part | Sales, with rows Bob-bolt, Bob-nail, Bob-rivet, Sue-glue, Sue-nail, Pete-bolt, Pete-glue.
Pivoted table: Customer | bolt | nail | rivet | glue, with one row each for Bob, Sue and Pete.
Different representations trade flexibility (e.g. adding a new part) against ease of analysis.
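A minimal pandas sketch of the two table transformations; the Customer and Part values come from the slide, but the sales figures are invented for illustration.

import pandas as pd

unpivoted = pd.DataFrame({
    "Customer": ["Bob", "Bob", "Bob", "Sue", "Sue", "Pete", "Pete"],
    "Part":     ["bolt", "nail", "rivet", "glue", "nail", "bolt", "glue"],
    "Sales":    [10, 20, 30, 15, 25, 12, 18],   # hypothetical numbers
})

# Pivot: one row per customer, one column per part
# (convenient for analysis, but adding a new part means adding a column).
pivoted = unpivoted.pivot(index="Customer", columns="Part", values="Sales")
print(pivoted)

# Unpivot (melt): back to one row per (customer, part) observation.
melted = pivoted.reset_index().melt(id_vars="Customer", var_name="Part",
                                    value_name="Sales").dropna()
print(melted)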
55 (Example of schema mapping [MHH00])
[Diagram: source relations Professor(Name, Sal), Student(Name, GPA, Yr), PayRate(Rank, HrRate) and WorksOn(Name, Proj, Hrs, ProjRank) are related by Mapping 1 and Mapping 2 to a target Personnel relation (ID, Name, Sal, Address); the tool generates queries to get answers.]
56 Web Scraping
Lots of data is on the web, but embedded in unwanted data, e.g., track sales rank on Amazon. Problems:
- Limited query interfaces: fill-in forms
- "Free text" fields, e.g. addresses
- Inconsistent output: the html tags which mark interesting fields might be different on different pages
- Rapid change without notice
57 Tools
- Automated generation of web scrapers; Excel will load html tables
- Automatic translation of queries, given a description of the allowable queries on a particular source
- Monitor results to detect quality deterioration
- Extraction of data from free-form text, e.g. addresses, names, phone numbers
- Auto-detection of field domains
58 Approximate Matching
Relate tuples whose fields are "close".
- Approximate string matching: generally based on edit distance; fast SQL expressions using a q-gram index.
- Approximate tree matching: for XML; much more expensive than string matching; recent research in fast approximations.
- Feature vector matching: similarity search; many techniques discussed in the data mining literature.
- Ad-hoc matching: look for a clever trick.
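To make the string-matching case concrete, here is a minimal Python sketch of edit distance plus a q-gram overlap score of the kind a q-gram index makes fast inside a DBMS. The test strings and q=3 choice are illustrative assumptions.

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def qgrams(s, q=3):
    padded = "#" * (q - 1) + s.lower() + "#" * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def qgram_similarity(a, b, q=3):
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)   # Jaccard overlap of q-gram sets

print(edit_distance("AT&T Labs Research", "ATT Labs - Research"))
print(qgram_similarity("AT&T Labs Research", "ATT Labs - Research"))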
59 Approximate Joins and Duplicate Elimination
Perform joins based on incomplete or corrupted information.
- Approximate join: between two different tables. Duplicate elimination: within the same table.
- More general than approximate matching.
- Correlating information: verification from other sources, e.g. usage correlates with billing.
- Missing data: need to use several orthogonal search and scoring criteria.
- Scoring functions measure the similarity of two records.
- Special transforms, e.g. name = "first name, last name" or "last name, first name".
- Orthogonal search: test independent fields if the primary field is missing.
60 Potentially Incorrectly Merged Records
[Figure: usage over time for two records, A and B, both keyed to the same identifier (111-BAD-DATA), illustrating records that may have been incorrectly merged.]
62 Database Exploration
Tools for finding problems in a database; similar to data quality mining. Simple queries are effective:
Select Field, count(*) as Cnt
From Table
Group By Field
Order By Cnt Desc
Hidden NULL values show up at the head of the list, typos at the end of the list. Also look at a sample of the data in the table.
63 Database Profiling
Systematically collect summaries of the data in the database:
- Number of rows in each table (completeness)
- Number of unique and null values of each field (missing data)
- Skewness of the distribution of field values (outliers)
- Data type and length of each field (constraint satisfaction)
- Use free-text field extraction to guess field types (address, name, zip code, etc.)
- Functional dependencies, keys
- Join paths
Does the database contain what we think it contains? Usually not.
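A minimal profiling pass over one table might look like the pandas sketch below: row count, per-field null and distinct counts, and skewness of numeric fields. The tiny DataFrame stands in for a table that would normally be loaded from the database; all values are hypothetical.

import pandas as pd

df = pd.DataFrame({
    "account_id": [1, 2, 3, 4, 4],          # duplicate key candidate
    "state": ["NY", "NJ", None, "NY", "NY"],
    "usage": [100.0, 0.0, 0.0, 2500.0, 90.0],
})

profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "nulls":    df.isna().sum(),
    "distinct": df.nunique(dropna=True),
})
profile["unique_ratio"] = profile["distinct"] / len(df)   # near 1.0 suggests a key
print("rows:", len(df))
print(profile)
print("skewness of numeric fields:")
print(df.select_dtypes("number").skew())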
64 Finding Join Paths
Needed to correlate information. In large databases: hundreds of tables, thousands of fields.
- Our experience: field names are very unreliable.
- Use data types and field characterization to narrow the search space.
- More sophisticated techniques: min hash sampling.
65 Finding Keys and Functional Dependencies
- Key: a set of fields whose value is unique in every row.
- Functional dependency: a set of fields which determine the value of another field, e.g., ZipCode determines the value of State.
- Problems: keys not identified, uniqueness not enforced, hidden keys and functional dependencies.
- Key finding is expensive: O(f^k) Count Distinct queries to find all keys of up to k fields out of f fields.
- Fortunately, we can prune the search space if we search only for minimal keys and FDs.
- Approximate keys: almost but not quite unique. Approximate FDs: similar idea ("zip code determines state" is not strictly true).
66 Domain Expertise
Data quality gurus: "We found these peculiar records in your database after running sophisticated algorithms!"
Domain experts: "Oh, those apples - we put them in the same baskets as oranges because there are too few apples to bother. Not a big deal. We knew that already."
67 Why Domain Expertise?
DE is important for understanding the data, the problem, and interpreting the results:
- "The counter resets to 0 if the number of calls exceeds N."
- "The missing values are represented by 0, but the default billed amount is 0 too."
Insufficient DE is a primary cause of poor DQ - the data become unusable. DE should be documented as metadata.
68 Where is the Domain Expertise?
- Usually in people's heads - seldom documented
- Fragmented across organizations
- Lost during personnel and project transitions
- If undocumented, it deteriorates and becomes fuzzy over time
69 Metadata
Data about the data. Data types, domains, and constraints help, but are often not enough.
- Interpretation of values: scale, units of measurement, meaning of labels
- Interpretation of tables: frequency of refresh, associations, view definitions
Most work has been done for scientific databases; metadata can include programs for interpreting the data set.
70 XML
Data interchange format, based on SGML. Tree structured: multiple field values, complex structure, etc.
"Self-describing": the schema is part of the record, along with field attributes. A DTD (Document Type Definition) is a minimal schema in an XML record.
<tutorial>
  <title> Data Quality and Data Cleaning: An Overview </title>
  <Conference area="database"> JSM </Conference>
  <author> T. Dasu
    <bio> Statistician </bio>
  </author>
  <author> T. Johnson
    <institution> AT&T Labs </institution>
  </author>
</tutorial>
71 What's Missing?
Most metadata relates to static properties:
- Database schema
- Field interpretation
Data use and interpretation require dynamic properties as well:
- Business process rules?
- The 80-20 rule
72 Lineage Tracing
Record the processing used to create the data.
- Coarse grained: record the processing of a table.
- Fine grained: record the processing of a record.
- Record the graph of data transformation steps.
Used for analysis, debugging, feedback loops.
73 Statistical Approaches
No explicit DQ methods: traditional statistical data is collected from carefully designed experiments, often tied to the analysis. Four broad categories can be adapted for DQ:
- Missing, incomplete, ambiguous or damaged data, e.g. truncated, censored
- Suspicious or abnormal data, e.g. outliers
- Testing for departure from models
- Goodness-of-fit
74 Missing Data
- Missing data: values, attributes, entire records, entire sections
- Missing values and defaults are indistinguishable
- Truncation/censoring: often we are not aware of it, and the mechanisms are not known
- Problem: misleading results, bias
75 Detecting Missing Data
Overtly missing data:
- Match data specifications against the data - are all the attributes present?
- Scan individual records - are there gaps?
- Rough checks: number of files, file sizes, number of records, number of duplicates
- Compare estimates (averages, frequencies, medians) with "expected" values and bounds; check at various levels of granularity, since aggregates can be misleading.
76 Missing Data Detection (cont.)
Hidden damage to data:
- Values are truncated or censored - check for spikes and dips in distributions and histograms
- Missing values and defaults are indistinguishable - too many missing values? Metadata or domain expertise can help
- Errors of omission, e.g. all calls from a particular area are missing - check whether data are missing randomly or are localized in some way
77 Imputing Values to Missing Data
In federated data, between 30% and 70% of the data points will have at least one missing attribute. If we ignore all records with a missing value:
- Data wastage
- The remaining data is seriously biased
- Lack of confidence in results
Understanding the pattern of missing data unearths data integrity issues.
78 Missing Value Imputation - 1
Standalone imputation
- Mean, median, other point estimates
- Assumes the distribution of the non-missing values
- Does not take into account inter-relationships
- Introduces bias
- Convenient, easy to implement
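A minimal sketch of standalone imputation: fill missing values with the median of the non-missing values of the same field. The data are hypothetical.

import numpy as np

usage = np.array([12.0, np.nan, 30.0, 7.0, np.nan, 25.0])
median = np.nanmedian(usage)
imputed = np.where(np.isnan(usage), median, usage)
print(median, imputed)
# Convenient, but every imputed point gets the same value, which shrinks the
# variance and ignores relationships with other attributes (the bias noted above).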
80 Missing Value Imputation - 3
Regression method
- Use linear regression, sweeping left-to-right:
  X3 = a + b*X2 + c*X1; X4 = d + e*X3 + f*X2 + g*X1; and so on
- X3 in the second equation is estimated from the first equation if it is missing
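A minimal sketch of the first step of that left-to-right sweep: fit X3 on (X1, X2) using the complete rows, then fill in missing X3 values from the fit; X4 would then be regressed on (X1, X2, X3), and so on. The data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=50)
X2 = rng.normal(size=50)
X3 = 1.0 + 2.0 * X2 + 0.5 * X1 + rng.normal(scale=0.1, size=50)
X3[[3, 17, 41]] = np.nan                      # some missing values in X3

obs = ~np.isnan(X3)
design = np.column_stack([np.ones(obs.sum()), X2[obs], X1[obs]])
coef, *_ = np.linalg.lstsq(design, X3[obs], rcond=None)   # [a, b, c] in X3 = a + b*X2 + c*X1

missing = np.isnan(X3)
X3[missing] = coef[0] + coef[1] * X2[missing] + coef[2] * X1[missing]
print(coef, X3[[3, 17, 41]])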
81 Missing Value Imputation - 3
Propensity scores (nonparametric)
- Let Yj = 1 if Xj is missing, 0 otherwise
- Estimate P(Yj = 1) based on X1 through X(j-1) using logistic regression
- Group by the propensity score P(Yj = 1)
- Within each group, estimate missing Xj's from known Xj's using the approximate Bayesian bootstrap
- Repeat until all attributes are populated
82 Missing Value Imputation - 4
Arbitrary missing pattern: Markov Chain Monte Carlo (MCMC)
- Assume multivariate Normal with parameter Q
- (1) Simulate the missing X, given Q estimated from the observed X; (2) re-compute Q using the filled-in X
- Repeat until stable
- Expensive: used most often to induce monotonicity
Note that imputed values are useful in aggregates but cannot be trusted individually.
83 Censoring and Truncation
Well studied in biostatistics; relevant to duration data.
- Censored: the measurement is bounded but not precise, e.g. call durations > 20 are recorded as 20
- Truncated: the data point is dropped if it exceeds or falls below a certain bound, e.g. customers with less than 2 minutes of calling per month
85 Censoring/Truncation (cont.)
- If the censoring/truncation mechanism is not known, the analysis can be inaccurate and biased.
- Metadata should record the existence as well as the nature of censoring/truncation.
86 Spikes usually indicate censored time intervals caused by resetting of timestamps to defaults
87 Suspicious Data
Consider the data points 3, 4, 7, 4, 8, 3, 9, 5, 7, 6, 92. The value "92" is suspicious - an outlier.
- Outliers are potentially legitimate.
- Often, they are data or model glitches.
- Or, they could be a data miner's dream, e.g. highly profitable customers.
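One quick way to flag the suspicious value above is a robust score: distance from the median in units of the median absolute deviation, which the outlier itself cannot inflate much. A minimal sketch, with the 3.5 cutoff as an illustrative assumption:

import statistics

data = [3, 4, 7, 4, 8, 3, 9, 5, 7, 6, 92]
med = statistics.median(data)
mad = statistics.median(abs(x - med) for x in data)
for x in data:
    score = abs(x - med) / (1.4826 * mad)     # roughly a z-score under normality
    if score > 3.5:
        print(x, "is suspicious (robust score %.1f)" % score)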
90 Outliers
Outlier: a "departure from the expected". Types of outliers are defined by what is "expected". Many approaches:
- Error bounds, tolerance limits - control charts
- Model based - regression depth, analysis of residuals
- Geometric
- Distributional
- Time series outliers
91 Control Charts
Quality control of production lots; typically univariate: X-Bar, R, CUSUM. Distributional assumptions are needed for charts not based on means, e.g. R-charts.
Main steps (based on statistical inference):
- Define "expected" and "departure", e.g. mean and standard error based on the sampling distribution of the sample mean (an aggregate)
- Compute the aggregate for each sample
- Plot the aggregates vs. the expected value and error bounds
- "Out of control" if the aggregates fall outside the bounds
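A minimal X-bar-style sketch of those steps, using the pooled sample standard deviation over sqrt(n) as a rough stand-in for the standard error (a textbook X-bar chart would use control-chart constants). The data and the injected shift are simulated.

import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=10.0, scale=2.0, size=(30, 5))   # 30 samples of size 5
samples[25] += 6.0                                         # inject a shifted lot

xbar = samples.mean(axis=1)
center = xbar.mean()
sigma_xbar = samples.std(axis=1, ddof=1).mean() / np.sqrt(samples.shape[1])
ucl, lcl = center + 3 * sigma_xbar, center - 3 * sigma_xbar

for i, m in enumerate(xbar):
    if m > ucl or m < lcl:
        print(f"sample {i} out of control: mean {m:.2f}, limits ({lcl:.2f}, {ucl:.2f})")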
93 Multivariate Control Charts - 1
Bivariate charts:
- based on bivariate Normal assumptions
- component-wise limits lead to Type I and Type II errors
Depth based control charts (nonparametric):
- map n-dimensional data to one dimension using a depth, e.g. Mahalanobis
- build control charts for the depth
- compare against a benchmark using depth, e.g. Q-Q plots of the depth of each data set
95 Multivariate Control Charts - 2
Multiscale process control with wavelets:
- Detects abnormalities at multiple scales as large wavelet coefficients
- Useful for data with heteroscedasticity
- Applied in chemical process control
96 Model Fitting and Outliers
Models summarize general trends in data:
- more complex than simple aggregates
- e.g. linear regression, logistic regression; focus on attribute relationships
Goodness of fit tests (DQ for analysis/mining):
- check the suitability of the model to the data
- verify the validity of assumptions
- is the data rich enough to answer the analysis/business question?
Data points that do not conform to well fitting models are potential outliers.
97 Set Comparison and Outlier Detection
- The "model" consists of partition based summaries.
- Perform nonparametric statistical tests for a rapid section-wise comparison of two or more massive data sets.
- If there exists a baseline "good" data set, this technique can detect potentially corrupt sections in the test data set.
98 Goodness of Fit - 1
Chi-square test:
- tests for attribute independence
- observed (discrete) distribution = assumed distribution?
Tests for Normality:
- Q-Q plots (visual)
- Kolmogorov-Smirnov test
- Kullback-Leibler divergence
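A minimal scipy sketch of two of these checks: a chi-square test of observed category counts against assumed proportions, and a Kolmogorov-Smirnov test for normality. The counts and proportions are hypothetical, and the simulated lognormal field is deliberately non-normal.

import numpy as np
from scipy import stats

# Chi-square: do observed category counts match the assumed proportions?
observed = np.array([48, 35, 17])
expected = np.array([0.5, 0.3, 0.2]) * observed.sum()
chi2, p_chi2 = stats.chisquare(observed, f_exp=expected)
print("chi-square p-value:", p_chi2)

# Kolmogorov-Smirnov: does a numeric field look normal?
x = np.random.default_rng(2).lognormal(size=500)
ks_stat, p_ks = stats.kstest((x - x.mean()) / x.std(), "norm")
print("KS p-value:", p_ks)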
99 Goodness of Fit - 2
Analysis of residuals:
- Departure of individual points from the model
- Patterns in the residuals reveal inadequacies of the model or violations of assumptions
- Reveals bias (the data are non-linear) and peculiarities in the data (the variance of one attribute is a function of other attributes)
- Residual plots
101 Goodness of Fit - 3
Regression depth:
- measures the "outlyingness" of a model, not of an individual data point
- indicates how well a regression plane represents the data
- if a regression plane needs to pass through many points to rotate to the vertical (non-fit) position, it has high regression depth
102 Geometric Outliers
Define outliers as those points at the periphery of the data set.
- Peeling: define layers of increasing depth; the outer layers contain the outlying points. Convex hull: peel off successive convex hull points. Depth contours: the layers are the data depth layers.
- Efficient algorithms exist for 2-D and 3-D, but computational complexity increases rapidly with dimension: Ω(N^⌈d/2⌉) for N points in d dimensions.
103 Distributional Outliers
- For each point, compute the maximum distance to its k nearest neighbors.
- DB(p,D)-outlier: at least a fraction p of the points in the database lie at distance greater than D.
- Fast algorithms: one is O(dN^2), one is O(c^d + N).
- Local outliers: adjust the definition of outlier based on the density of the nearest data clusters.
- Note: performance guarantees, but no accuracy guarantees.
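A minimal sketch of the distance-based idea: compute each point's distance to its k-th nearest neighbor and flag the largest values. This is the simple O(N^2) version rather than the fast algorithms cited above; the data, k=5, and the two planted outliers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(200, 2)), [[8.0, 8.0], [9.0, -7.0]]])  # two planted outliers

k = 5
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
knn_dist = np.sort(dists, axis=1)[:, k]                           # distance to k-th neighbor
top = np.argsort(knn_dist)[-2:]
print("flagged points:", X[top])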
104 Time Series Outliers
- Data is a time series of measurements of a large collection of entities (e.g. customer usage).
- The vector of measurements defines a trajectory for an entity.
- A trajectory can be glitched, or it can make radical but valid changes.
- Approach: develop models based on the entity's past behavior (within) and on all entities' behavior (relative).
- Find potential glitches: common glitch trajectories; deviations from within and relative behavior.
106 A Case Study in Data Cleaning DQ in Business Operations
107 Data Quality Process
- Data gathering
- Data loading (ETL)
- Data scrub: data profiling, validate data constraints
- Data integration: functional dependencies
- Develop business rules and metrics: interact with domain experts
- Validate business rules
- Stabilize business rules
- Verify business rules
- Recommendations
- Quantify results
- Summarize learning
- Data quality check
108 Case Study
Provisioning inventory database: identify the equipment needed to fulfill a sales order.
- False positives: provisioning delay
- False negatives: decline the order, purchase unnecessary equipment
The initiative:
- Validate the corporate inventory
- Build a database of record
- Has top management support
109 Task Description
- OPED: operations database; machine components available in each local warehouse
- IOWA: information warehouse; machine descriptions (components), owner descriptions (name, business, address)
- SAPDB: sales and provisioning database; inventory DB used by the sales force when selling
Audit the data flow OPED -> IOWA -> SAPDB.
110 Data Audit - 1
Data gathering:
- Manual extraction of data from IOWA and SAPDB; the format changes at every run, making audits difficult to automate
- Documented metadata was insufficient
- In OPED, warehouseid (a primary match key) is corrupted; an undocumented workaround process is used
- 70 machine types in OPED, when only 10 were defined and expected
- Unclear business rules for the flows between OPED and IOWA, and between IOWA and SAPDB
- "Satellite" databases at local sites not integrated with the main databases
- Informal and unstandardized means of data exchange, e.g. Excel spreadsheets, text messages, faxes, paper and pencil
111 Data Audit - 2
Data and process flows:
- SAPDB contains only 15% of the records in OPED or IOWA
- Business rules that govern data flows are unclear
- Numerous workaround processes: override prohibited machine types through manual intervention; push through incomplete information using dummy values; manual "correction" of keys ("But we cleaned the data!")
- Permitting multiple entry of the same information by multiple sources - duplicate records with minor variations
112 Data Audit - 3
Data interpretation:
- No consensus among experts on the definitions and use of machines and components; many conference calls, with top management support
Timing issues:
- Downstream systems are unaware of delays in upstream systems, e.g. sections of data are not updated but no one is aware
113 Data Improvements - 1
- Satellite databases integrated into the main databases (completeness).
- Address mismatches cleaned up (accuracy) - and so was the process which caused the mismatches.
- Metadata: defined and documented the business definitions and rules that govern data flows (interpretability, accessibility), e.g. only certain machine types should flow to SAPDB.
- Removed duplicates (uniqueness).
114 Data Improvements - 2
- Automated data gathering and eliminated manual workarounds (increase in automation).
- Increased usable inventory by ensuring compliance with business rules (usability).
- Successful end-to-end completion went up from 50% to 98% (conformance to business rules).
- Automated the auditing process (reliability): regular checks and cleanups.
115 DQ Metrics Used
- Proportion of data that flows through correctly
- Extent of automation
- Access and interpretability
- Documentation
- Metadata
- Others ...
116 Accomplishments
- A Database of Record (DBoR)!
- Repaired ~70% of the data
- Authoritative documentation of metadata and domain expertise
- Inventory up by 15% (tens of thousands of records, millions of dollars)
- Automated, end-to-end and continuous auditing system in place
- In 100 days!
117 What did we learn?
- Take nothing for granted: metadata is frequently wrong; data transfers never work the first time.
- Manual entry and intervention cause problems: automate processes, remove the need for manual intervention, and make the regular process reflect practice.
- Defining data quality metrics is key: it defines and measures the problem, and creates metadata.
- Organization-wide data quality: need to show the dollar impact and get the blessing of top brass; a data steward for the end-to-end process; data publishing to establish feedback loops.
119 SAS Data Quality - Cleanse
The SAS Data Quality Solution bundles the following tools:
- SAS/Warehouse Administrator software: structures operational data.
- SAS Data Quality - Cleanse software: prevents defective data, reduces redundancies, and standardizes data elements.
- dfPower Studio from DataFlux: normalizes inconsistent data, improves the merging capability of data from dissimilar sources, and identifies critical data quality issues.
120 IBM DataJoinerDB2 DataJoiner is the multidatabase server solution for accessing heterogeneous data — IBM or other relational or non-relational, local or remote — as if it were in a single data store.
122 Challenges in Data Quality
- Multifaceted nature: problems are introduced at all stages of the process, but especially at organization boundaries; many types of data and applications.
- Highly complex and context-dependent: the processes and entities are complex; many problems in many forms.
- No silver bullet: need an array of tools, and the discipline to use them.
123 Data Quality Research: Burning Issues
- Data quality mining
- Advanced browsing / exploratory data mining
- Reducing complexity
- Data quality metrics
The highlighted text represents opportunities for statisticians.
124 Data Quality Mining
Systematically searching data for inconsistencies, flaws and glitches. Opportunities:
- nonparametric goodness-of-fit techniques
- model based outlier detection, confidence limits
- finding "holes" in data
- anomaly analysis for new types of data: streaming, text, image
125 Data Quality Metrics
Measuring data quality - scoring? Need a flexible, composite score; weights?
- Constraint framework: constraint satisfaction, proportions and confidence limits
- Resampling methods
- Statistical distances, sampling distributions and confidence bounds
126 Interesting Data Quality Research
Recent research that is interesting and important for some aspect of data quality.
CAVEAT: this list is meant to be an example. It is not exhaustive; it contains a sampling of recent research and is subjective (my perspective).
127 Bellman
T. Dasu, T. Johnson, S. Muthukrishnan, V. Shkapenyuk, "Mining database structure; or, how to build a data quality browser", SIGMOD 2002.
- A "data quality" browser: summarizes the data and identifies anomalies.
- Finds potential ways of matching up hundreds of data sets containing thousands of attributes.
- Performs profiling on the database: counts, keys, join paths, substring associations.
- Use it to explore large databases and extract missing metadata.
128 Model Checking
Freeny, A. E. & Nair, V. N. (1994). Methods for assessing distributional assumptions in one and two sample problems, in J. Stanford & S. Vardeman (eds), Probabilistic and Statistical Methods in the Physical Sciences, Academic Press, New York, chapter 7.
- Empirical methods for assessing goodness-of-fit: graphical methods, formal goodness-of-fit methods
- Complete as well as censored data
129 Bayesian Model Averaging
Hoeting, J., Madigan, D., Raftery, A. and Volinsky, C. (1999). Bayesian Model Averaging, Statistical Science 14.
- Averaging over models to account for uncertainty in the models
- Need to choose the class of models
- Need to choose priors for the competing models
- Computational difficulties
130 Signatures
C. Cortes and D. Pregibon, "Signature-based methods for data streams", Data Mining and Knowledge Discovery, July 2001.
- For characterizing data streams
- Signatures capture typical behavior and are updated over time
- Outliers with respect to the signature could potentially represent fraud
131 Contaminated Data
Pearson, R. K., "Outliers in process modeling and identification", IEEE Transactions on Control Systems Technology, Volume 10, Issue 1, Jan 2002.
Methods:
- identifying outliers (Hampel limits)
- missing value imputation
- comparing results of a fixed analysis on similar data subsets
- others
132 Data Depth
Multivariate ordering: convex hull peeling, half plane, simplicial.
- Potential for nonparametric partitioning and data exploration
- Identifying differences in sets and isolating outliers
- Difficult to implement in higher dimensions
Work by Huber, Oja, Tukey, Liu & Singh, Rousseeuw et al., Ng et al., and others.
133 Depth Contours
S. Krishnan, N. Mustafa, S. Venkatasubramanian, "Hardware-Assisted Computation of Depth Contours", SODA.
- Parallel computation of depth contours using graphics card hardware (a cheap parallel processor).
- Depth contours: the multidimensional analog of the median; used for nonparametric statistics.
135 Data Quality Mining: Deviants
H.V. Jagadish, N. Koudas, S. Muthukrishnan, "Mining Deviants in a Time Series Database", VLDB.
- Deviants: points in a time series which, when removed, yield the best accuracy improvement in a histogram.
- Use deviants to find glitches in time series data.
136 Potters Wheel
V. Raman, J.M. Hellerstein, "Potter's Wheel: An Interactive Data Cleaning System", VLDB 2001.
An ETL tool, especially for web scraped data. Two interesting features:
- Scalable spreadsheet: an interactive view of the results of applying a data transformation.
- Field domain determination: apply domain patterns to fields, see which ones fit best, report exceptions.
137 Conclusions
- Now that processing is cheap and access is easy, the big problem is data quality.
- Where are the statisticians? Statistical metrics for data quality; rank based methods and nonparametric methods that scale; providing confidence guarantees.
- Considerable research exists (mostly from process management, CS, DB), but it is highly fragmented.
- Lots of opportunities for applied research.
139 References: Process Management; Missing Value Imputation
Missing value imputation:
- Schafer, J. L. (1997), Analysis of Incomplete Multivariate Data, New York: Chapman and Hall.
- Little, R. J. A. and D. B. Rubin, Statistical Analysis with Missing Data, New York: John Wiley & Sons.
- Release 8.2 of SAS/STAT - PROCs MI, MIANALYZE.
- Z. Ghahramani and M. I. Jordan, "Learning from incomplete data", AI Memo 1509, CBCL Paper 108, January 1995, 11 pages.
140 References: Censoring / Truncation; Control Charts
Censoring / truncation:
- John P. Klein and Melvin L. Moeschberger, "Survival Analysis: Techniques for Censored and Truncated Data".
- Galen R. Shorack and Jon A. Wellner, "Empirical Processes With Applications to Statistics", Wiley, New York, 1986.
Control charts:
- A.J. Duncan, Quality Control and Industrial Statistics, Richard D. Irwin, Inc., Ill., 1974.
- Liu, R. Y. and Singh, K. (1993). A quality index based on data depth and multivariate rank tests. J. Amer. Statist. Assoc.
- Aradhye, H. B., B. R. Bakshi, R. A. Strauss, and J. F. Davis (2001). Multiscale Statistical Process Control Using Wavelets - Theoretical Analysis and Properties. Technical Report, Ohio State University.
141 References: Set Comparison; Goodness of Fit
Set comparison:
- Theodore Johnson, Tamraparni Dasu: Comparing Massive High-Dimensional Data Sets. KDD 1998.
- Venkatesh Ganti, Johannes Gehrke, Raghu Ramakrishnan: A Framework for Measuring Changes in Data Characteristics. PODS 1999.
Goodness of fit:
- Rousseeuw, P.J. and Struyf, A. Computing location depth and regression depth in higher dimensions. Statistics and Computing 8.
- Belsley, D.A., Kuh, E., and Welsch, R.E. (1980), Regression Diagnostics, New York: John Wiley and Sons, Inc.
142 References: Geometric Outliers; Distributional Outliers; Time Series Outliers
Geometric outliers:
- "Computational Geometry: An Introduction", Preparata, Shamos, Springer-Verlag, 1988.
- "Fast Computation of 2-Dimensional Depth Contours", T. Johnson, I. Kwok, R. Ng, Proc. Conf. Knowledge Discovery and Data Mining.
Distributional outliers:
- "Algorithms for Mining Distance-Based Outliers in Large Datasets", E.M. Knorr, R. Ng, Proc. VLDB Conf. 1998.
- "LOF: Identifying Density-Based Local Outliers", M.M. Breunig, H.-P. Kriegel, R. Ng, J. Sander, Proc. SIGMOD Conf. 2000.
Time series outliers:
- "Hunting data glitches in massive time series data", T. Dasu, T. Johnson, MIT Workshop on Information Quality 2000.
143 References: ETL
- "Data Cleaning: Problems and Current Approaches", E. Rahm, H.H. Do, Data Engineering Bulletin 23(4) 3-13, 2000.
- "Declarative Data Cleaning: Language, Model, and Algorithms", H. Galhardas, D. Florescu, D. Shasha, E. Simon, C.-A. Saita, Proc. VLDB Conf. 2001.
- "Schema Mapping as Query Discovery", R.J. Miller, L.M. Haas, M.A. Hernandez, Proc. 26th VLDB Conf.
- "Answering Queries Using Views: A Survey", A. Halevy, VLDB Journal, 2001.
- "A Foundation for Multi-dimensional Databases", M. Gyssens, L.V.S. Lakshmanan, VLDB 1997.
- "SchemaSQL - An Extension to SQL for Multidatabase Interoperability", L.V.S. Lakshmanan, F. Sadri, S.N. Subramanian, ACM Transactions on Database Systems 26(4).
- "Don't Scrap It, Wrap It! A Wrapper Architecture for Legacy Data Sources", M.T. Roth, P.M. Schwarz, Proc. VLDB Conf.
144 References: Web Scraping; Approximate String Matching
Web scraping:
- "Automatically Extracting Structure from Free Text Addresses", V.R. Borkar, K. Deshmukh, S. Sarawagi, Data Engineering Bulletin 23(4) 27-32, 2000.
- "Potter's Wheel: An Interactive Data Cleaning System", V. Raman and J.M. Hellerstein, Proc. VLDB 2001.
- "Accurately and Reliably Extracting Data From the Web", C.A. Knoblock, K. Lerman, S. Minton, I. Muslea, Data Engineering Bulletin 23(4) 33-41, 2000.
Approximate string matching:
- "A Guided Tour to Approximate String Matching", G. Navarro, ACM Computing Surveys 33(1):31-88, 2001.
- "Using q-grams in a DBMS for Approximate String Processing", L. Gravano, P.G. Ipeirotis, H.V. Jagadish, N. Koudas, S. Muthukrishnan, L. Pietarinen, D. Srivastava, Data Engineering Bulletin 24(4):28-37, 2001.
145 References: Other Approximate Matching; Approximate Joins and Duplicate Detection
Other approximate matching:
- "Approximate XML Joins", N. Koudas, D. Srivastava, H.V. Jagadish, S. Guha, T. Yu, SIGMOD 2002.
- "Searching Multimedia Databases by Content", C. Faloutsos, Kluwer, 1996.
Approximate joins and duplicate detection:
- "The Merge/Purge Problem for Large Databases", M. Hernandez, S. Stolfo, Proc. SIGMOD Conf.
- "Real-World Data is Dirty: Data Cleansing and the Merge/Purge Problem", M. Hernandez, S. Stolfo, Data Mining and Knowledge Discovery 2(1):9-37, 1998.
- "Telcordia's Database Reconciliation and Data Quality Analysis Tool", F. Caruso, M. Cochinwala, U. Ganapathy, G. Lalk, P. Missier, Proc. VLDB Conf.
- "Hardening Soft Information Sources", W.W. Cohen, H. Kautz, D. McAllester, Proc. KDD Conf.
146 References: Data Profiling
- "Data Profiling and Mapping, The Essential First Step in Data Migration and Integration Projects", Evoke Software.
- "TANE: An Efficient Algorithm for Discovering Functional and Approximate Dependencies", Y. Huhtala, J. K., P. Porkka, H. Toivonen, The Computer Journal 42(2), 1999.
- "Mining Database Structure; Or, How to Build a Data Quality Browser", T. Dasu, T. Johnson, S. Muthukrishnan, V. Shkapenyuk, Proc. SIGMOD Conf. 2002.
- "Data-Driven Understanding and Refinement of Schema Mappings", L.-L. Yan, R. Miller, L.M. Haas, R. Fagin, Proc. SIGMOD Conf. 2001.
147 References: Metadata
- "A Metadata Resource to Promote Data Integration", L. Seligman, A. Rosenthal, IEEE Metadata Workshop, 1996.
- "Using Semantic Values to Facilitate Interoperability Among Heterogeneous Information Sources", E. Sciore, M. Siegel, A. Rosenthal, ACM Trans. on Database Systems 19(2).
- "XML Data: From Research to Standards", D. Florescu, J. Simeon, VLDB 2000 Tutorial.
- "XML's Impact on Databases and Data Sharing", A. Rosenthal, IEEE Computer.
- "Lineage Tracing for General Data Warehouse Transformations", Y. Cui, J. Widom, Proc. VLDB Conf.
148 Thank you! Please contact me if you have any questions.
150 Algorithm (Approximate Matches & Joins, Duplicate Elimination)
- Partition the data set: by hash on a computed key; by sort order on a computed key; by similarity search / approximate match on a computed key. An example of a computed key is soundex.
- Perform scoring within the partition: hash - all pairs; sort order or similarity search - target record against the retrieved records.
- Record pairs with high scores are matches.
- Use multiple computed keys / hash functions.
- Duplicate elimination: duplicate records form an equivalence class.
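A minimal sketch of the partition-and-score approach: hash records into blocks on a computed key (soundex of the name, as mentioned above), then score only the pairs within a block. The records and the scoring rule are hypothetical.

from collections import defaultdict
from itertools import combinations

def soundex(name):
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4", **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    out, prev = name[0], codes.get(name[0], "")
    for c in name[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            out += code
        if c not in "HW":          # H and W do not reset the previous code
            prev = code
    return (out + "000")[:4]

def score(a, b):
    # crude similarity on the remaining fields (the computed key already matched)
    return (a["zip"] == b["zip"]) + (a["street_no"] == b["street_no"])

records = [
    {"name": "Johnson", "zip": "07932", "street_no": "180"},
    {"name": "Jonson",  "zip": "07932", "street_no": "180"},
    {"name": "Dasu",    "zip": "07932", "street_no": "180"},
]

blocks = defaultdict(list)
for r in records:
    blocks[soundex(r["name"])].append(r)

for key, block in blocks.items():
    for a, b in combinations(block, 2):
        if score(a, b) >= 2:
            print("candidate duplicates in block", key, ":", a["name"], "/", b["name"])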
151 Effective Algorithm: Data Profiling, Finding Keys
- Eliminate "bad" fields: float data type, mostly NULL, etc.
- Collect an in-memory sample, perhaps storing a hash of the field value.
- Compute count distinct on the sample; if the count is high, verify by count distinct on the database table.
- Use TANE-style level-wise pruning (TANE is an algorithm for pruning).
- Stop after examining 3-way or 4-way keys: with enough attributes, false keys appear.
152 Min Hash Sampling: Finding Join Paths
- A special type of sampling which can estimate the resemblance of two sets: size of intersection / size of union.
- Apply it to the set of values in a field, and store the min hash sample in a database.
- Use an SQL query to find all fields with high resemblance to a given field; small sample sizes suffice.
- Problem: fields which join only after a small data transformation, e.g. "SS " vs. " ".
- Solution: collect min hash samples on the q-grams of a field. Alternative: collect sketches of q-gram frequency vectors.
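A minimal sketch of the resemblance estimate using a bottom-k min hash sample: keep the k smallest hash values of each field's value set, and estimate |intersection| / |union| from the samples alone. The two integer ranges standing in for field value sets, and k=100, are illustrative assumptions.

import hashlib

def minhash_sample(values, k=100):
    hashes = sorted({int(hashlib.md5(str(v).encode()).hexdigest(), 16) for v in values})
    return set(hashes[:k])

def estimated_resemblance(sample_a, sample_b, k=100):
    # bottom-k estimator: look at the k smallest hashes of the union of the samples
    union_k = set(sorted(sample_a | sample_b)[:k])
    return len(union_k & sample_a & sample_b) / len(union_k)

field_a = range(0, 1000)          # e.g. values of a candidate key field
field_b = range(500, 1500)        # another field that overlaps it
sa, sb = minhash_sample(field_a), minhash_sample(field_b)
print(estimated_resemblance(sa, sb))   # true resemblance is 500/1500, i.e. about 0.33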
154 DBXplorer
S. Agrawal, S. Chaudhuri, G. Das, "DBXplorer: A System for Keyword-Based Search over Relational Databases", ICDE 2002.
- Keyword search in a relational database, independent of the schema.
- Pre-processing to build inverted list indices (profiling).
- Builds join queries for multiple-keyword search.
155 Exploratory Data Mining
R.T. Ng, L.V.S. Lakshmanan, J. Han, A. Pang, "Exploratory Mining and Pruning Optimizations of Constrained Association Rules", SIGMOD 1998, pp. 13-24.
- Interactive exploration of data mining (association rule) results through constraint specification.
156 Exploratory Schema Mapping
M.A. Hernandez, R.J. Miller, L.M. Haas, "Clio: A Semi-Automatic Tool for Schema Mapping", SIGMOD 2001.
- Automatic generation and ranking of schema mapping queries.
- Tool for suggesting field mappings.
- Interactive display of alternate query results.
157 Data Quality Mining
F. Korn, S. Muthukrishnan, Y. Zhu, "Monitoring Data Quality Problems in Network Databases", submitted for publication.
- Defines probably approximately correct constraints for a data feed (network performance data): range, smoothness, balance, functional dependence, unique keys.
- Automates constraint selection and threshold setting.
- Raises an alarm when constraints fail.
158 Exploratory Data Mining
J.D. Becher, P. Berkhin, E. Freeman, "Automating Exploratory Data Analysis for Efficient Data Mining", KDD 2000.
- Use data mining and analysis tools to determine appropriate data models; in this paper, attribute selection for classification.
161 OLAP Exploration
S. Sarawagi, G. Sathe, "i3: Intelligent, Interactive Investigation of OLAP data cubes", SIGMOD 2000, pg. 589.
- A suite of tools (operators) to automate the browsing of a data cube and find "interesting" regions.
162 Approximate Matching
L. Gravano, P.G. Ipeirotis, N. Koudas, D. Srivastava, "Text Joins in an RDBMS for Web Data Integration", WWW 2003.
- Approximate string matching using IR techniques: weight the edit distance by the inverse frequency of the differing tokens (if "Corp." appears often, its presence or absence carries little weight).
- Defines an SQL-queryable index.