
1 Data Mining What is to be done before we get to Data Mining?

2 Agenda
– Data cleaning
– Techniques
– Data cleaning as a process

3 Data Cleaning
Importance
– "Data cleaning is one of the three biggest problems in data warehousing"—Ralph Kimball
– "Data cleaning is the number one problem in data warehousing"—DCI survey
Data cleaning tasks
– Fill in missing values
– Identify outliers and smooth out noisy data
– Correct inconsistent data
– Resolve redundancy caused by data integration
Note that a missing value does not always imply an error (for example, an attribute that allows nulls).

4 Missing Data
Data is not always available
– e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
– equipment malfunction
– inconsistency with other recorded data, leading to deletion
– data not entered due to misunderstanding
– certain data not being considered important at the time of entry
– failure to register history or changes of the data
Missing data may need to be inferred.

5 How to Handle Missing Data?
1- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
2- Fill in the missing value manually: tedious, and often infeasible
3- Fill it in automatically with
– a global constant: e.g., "unknown" (which may effectively form a new class!)
– the attribute mean
– the attribute mean for all samples belonging to the same class: smarter
– the most probable value: inference-based methods such as regression, decision trees, etc.
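The fill-in strategies above can be sketched with pandas. This is a sketch on hypothetical sales data; the column names and values are made up for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical sales data: "income" has missing values, "cls" is the class label.
df = pd.DataFrame({
    "cls":    ["A", "A", "B", "B", "A"],
    "income": [50.0, np.nan, 30.0, np.nan, 70.0],
})

# 1- Ignore the tuple: drop rows where income is missing.
dropped = df.dropna(subset=["income"])

# 3- Fill in automatically:
# with a global constant (the slide's "unknown"; a numeric sentinel here)
const_filled = df["income"].fillna(-1.0)

# with the attribute mean (mean of 50, 30, 70 is 50)
mean_filled = df["income"].fillna(df["income"].mean())

# with the mean of samples in the same class (smarter):
# class A mean is 60, class B mean is 30
class_filled = df.groupby("cls")["income"].transform(
    lambda s: s.fillna(s.mean())
)
```

The class-conditional fill is "smarter" because it uses the class label as context: a missing income in class A is filled with 60 rather than the global 50.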

6 Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may occur due to
– faulty data collection instruments
– data entry problems
– data transmission problems
– technology limitations
– inconsistency in naming conventions
Other data problems that require data cleaning
– duplicate records
– incomplete data
– inconsistent data

7 How to Handle Noisy Data?
1- Binning
– first sort the data and partition it into (equal-frequency) bins
– then one can
– smooth by bin means: each value in a bin is replaced by the mean value of the bin
– smooth by bin medians: each value in a bin is replaced by the bin median
– smooth by bin boundaries: the minimum and maximum values in a given bin are identified as the bin boundaries; each bin value is then replaced by the closest boundary value
– etc.

8 Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
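Slide 8's worked example can be reproduced in a few lines of plain Python. Note that the slide rounds the exact bin means 22.75 and 29.25 to 23 and 29:

```python
def smooth_by_bins(values, n_bins=3):
    """Equal-frequency binning, then smoothing, as on slide 8."""
    values = sorted(values)
    size = len(values) // n_bins
    bins = [values[i * size:(i + 1) * size] for i in range(n_bins)]

    # Smooth by bin means: every value becomes its bin's mean.
    means = [[sum(b) / len(b)] * len(b) for b in bins]

    # Smooth by bin boundaries: every value snaps to the nearest of
    # the bin's minimum and maximum.
    bounds = [
        [b[0] if abs(v - b[0]) <= abs(v - b[-1]) else b[-1] for v in b]
        for b in bins
    ]
    return bins, means, bounds

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins, means, bounds = smooth_by_bins(prices)
# bins   -> [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
# means  -> bin means 9.0, 22.75, 29.25 (the slide rounds to 9, 23, 29)
# bounds -> [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```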

9 How to Handle Noisy Data?
2- Regression
– smooth by fitting the data to regression functions
3- Clustering
– detect and remove outliers
4- Combined computer and human inspection
– detect suspicious values and have a human check them (e.g., to deal with possible outliers)
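Smoothing by regression can be sketched with ordinary least squares on one attribute; the data points here are made up (roughly y = 2x plus noise):

```python
# Smoothing by regression: fit a line y = a*x + b to noisy measurements
# and replace each y with its fitted value.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # noisy observations

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

smoothed = [a * x + b for x in xs]  # the de-noised values
```

Each original y is replaced by the point on the fitted line, removing the random fluctuation while keeping the trend.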

10 Data Cleaning and Data Reduction
Binning techniques reduce the number of distinct values per attribute
– e.g., binning may map product prices into inexpensive, moderate, and expensive (fewer distinct values; humans naturally tend to group this way)
Useful for decision tree induction
May be bad in some circumstances; can you name one?
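The price example above can be sketched with pandas, using equal-frequency discretization to match slide 8's binning style; the category names come from the slide:

```python
import pandas as pd

# Slide 8's prices, mapped to three equal-frequency categories.
prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
labels = pd.qcut(prices, q=3, labels=["inexpensive", "moderate", "expensive"])
# Each category receives 4 of the 12 prices.
```

After this step the attribute has 3 distinct values instead of 11, which is exactly the reduction a decision tree inducer benefits from.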

11 Data Cleaning as a Process
Data discrepancy detection is the first task.
– Types of discrepancies
  Errors in data collection
  Deliberate errors (data providers concealing data)
  Data decay (outdated data, such as changes in addresses)
  Inconsistent data representation and use of codes
  Data integration errors
– Use metadata (e.g., domain, range, dependency, distribution), i.e., data about data
– Identify and remove outliers
– Check for field overloading
  Field overloading is squeezing extra data into fields not originally intended for that purpose (e.g., if only 31 of a field's 32 bits are used, extra information is packed into the remaining bit)
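The 31-of-32-bit example can be made concrete. This sketch packs a one-bit flag into the unused top bit of a 32-bit field; the function names are made up for illustration:

```python
# Field overloading: a 32-bit field whose value only needs 31 bits,
# so a one-bit flag is smuggled into the top bit.
def pack(value, flag):
    assert 0 <= value < 2 ** 31 and flag in (0, 1)
    return (flag << 31) | value

def unpack(field):
    # Low 31 bits are the real value; the top bit is the hidden flag.
    return field & (2 ** 31 - 1), field >> 31

field = pack(5, 1)           # stores 5 plus a hidden flag
value, flag = unpack(field)  # recovers (5, 1)
```

Such tricks are why field overloading matters for cleaning: unless the packing convention is recorded in metadata, tools see one opaque integer instead of two attributes.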

12 Data Cleaning as a Process
– Check the uniqueness rule, consecutive rule, and null rule
  A uniqueness rule says that each value of the given attribute must be different from all other values for that attribute.
  A consecutive rule says that there can be no missing values between the lowest and highest values for the attribute, and that all values must also be unique (e.g., as in check numbers).
  A null rule specifies the use of blanks, question marks, special characters, or other strings that may indicate the null condition (e.g., where a value for a given attribute is not available), and how such values should be handled.
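The three rules can be checked mechanically. A sketch with pandas, on made-up check numbers and strings:

```python
import pandas as pd

checks = pd.Series([101, 102, 104, 104])  # hypothetical check numbers

# Uniqueness rule: every value must be distinct.
unique_ok = checks.is_unique  # False: 104 appears twice

# Consecutive rule: unique AND no gaps between the lowest and highest value.
def is_consecutive(s):
    return s.is_unique and sorted(s) == list(range(s.min(), s.max() + 1))

gap_ok = is_consecutive(checks)                  # False: 103 is missing
ok = is_consecutive(pd.Series([101, 102, 103]))  # True

# Null rule: agree on which strings indicate the null condition, and
# normalize them before further processing.
raw = pd.Series(["42", "?", "", "17"])
cleaned = raw.replace({"?": None, "": None})  # two nulls after cleaning
```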

13 Data Cleaning as a Process (continued)
Using commercial tools for data cleaning
– Data scrubbing: use simple domain knowledge (e.g., postal codes, spell-checking) to detect errors and make corrections
  Techniques used: parsing and fuzzy matching
– Data auditing: analyze the data to discover rules and relationships, and to detect violators
  Techniques used: correlation, clustering, and descriptive data summaries to find outliers
Data transformation: migration and integration
– Data migration tools: allow transformations to be specified
  e.g., replace age by birthdate
– ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
  Only specific transforms are allowed, which sometimes requires custom scripts
The two-step process involves discrepancy detection and transformation
– The process is error-prone and time-consuming
– It is iterative, and some problems are only removed after several iterations
  An incorrect year entry such as 20004 may only be fixed after correcting all date entries
– Recent techniques emphasize interactivity, e.g., Potter's Wheel: http://control.cs.berkeley.edu/abc
– Declarative languages have been developed for specifying data transformations as extensions to SQL
Metadata must be updated to speed up future cleaning
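The discrepancy-detection step for the 20004-style error can be sketched as a simple format check; the YYYY-MM-DD format here is an assumption for illustration:

```python
import re

# Flag date strings that do not match YYYY-MM-DD with a valid month and day.
DATE_RE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

dates = ["2004-05-01", "20004-05-02", "2004-13-01"]
suspicious = [d for d in dates if not DATE_RE.match(d)]
# -> ["20004-05-02", "2004-13-01"]  (bad year, bad month)
```

Flagged entries then go into the transformation step, and the pass is repeated until no new discrepancies surface, which is the iterative loop the slide describes.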

