Data Mining What is to be done before we get to Data Mining?

Agenda
– Data cleaning techniques
– Data cleaning as a process

Data Cleaning

Importance
– "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball)
– "Data cleaning is the number one problem in data warehousing" (DCI survey)

Data cleaning tasks
– Fill in missing values
– Identify outliers and smooth out noisy data
– Correct inconsistent data
– Resolve redundancy caused by data integration

Note that a missing value does not always imply an error (for example, an attribute that allows nulls).

Missing Data

Data is not always available
– e.g., many tuples have no recorded value for several attributes, such as customer income in sales data

Missing data may be due to
– equipment malfunction
– values inconsistent with other recorded data and thus deleted
– data not entered due to misunderstanding
– certain data not being considered important at the time of entry
– failure to register the history or changes of the data

Missing data may need to be inferred.

How to Handle Missing Data?

1. Ignore the tuple: usually done when the class label is missing (assuming a classification task); not effective when the percentage of missing values per attribute varies considerably.
2. Fill in the missing value manually: tedious, and often infeasible.
3. Fill it in automatically with
– a global constant, e.g., "unknown" (beware: this can effectively create a new class!)
– the attribute mean
– the attribute mean for all samples belonging to the same class (smarter)
– the most probable value: inference-based, e.g., regression or decision tree induction
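The automatic fill-in options above can be sketched in a few lines. This is a minimal illustration, not a library API; the function and field names (`fill_missing`, `income`, `cls`) are made up for the example. It implements both the attribute-mean and the "smarter" per-class-mean variants:

```python
from statistics import mean

def fill_missing(rows, attr, class_attr=None):
    """Replace None values of `attr` with the attribute mean,
    or with the per-class mean when `class_attr` is given."""
    # Collect observed values, grouped by class when requested.
    groups = {}
    for r in rows:
        if r[attr] is not None:
            key = r[class_attr] if class_attr else None
            groups.setdefault(key, []).append(r[attr])
    means = {k: mean(v) for k, v in groups.items()}
    # Substitute the appropriate mean for each missing value.
    filled = []
    for r in rows:
        if r[attr] is None:
            key = r[class_attr] if class_attr else None
            r = dict(r, **{attr: means[key]})
        filled.append(r)
    return filled
```

For example, a customer with a missing income would receive the mean income of all customers, or, with `class_attr` set, only of customers in the same class.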

Noisy Data

Noise: random error or variance in a measured variable.

Incorrect attribute values may occur due to
– faulty data collection instruments
– data entry problems
– data transmission problems
– technology limitations
– inconsistency in naming conventions

Other data problems that require data cleaning
– duplicate records
– incomplete data
– inconsistent data

How to Handle Noisy Data?

1. Binning
– first sort the data and partition it into (equal-frequency) bins
– then smooth by bin means: each value in a bin is replaced by the mean value of the bin
– or smooth by bin medians: each bin value is replaced by the bin median
– or smooth by bin boundaries: the minimum and maximum values in a given bin are identified as the bin boundaries, and each bin value is then replaced by the closest boundary value

Binning Methods for Data Smoothing

Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

Partition into equal-frequency (equi-depth) bins:
– Bin 1: 4, 8, 9, 15
– Bin 2: 21, 21, 24, 25
– Bin 3: 26, 28, 29, 34

Smoothing by bin means:
– Bin 1: 9, 9, 9, 9
– Bin 2: 23, 23, 23, 23
– Bin 3: 29, 29, 29, 29

Smoothing by bin boundaries:
– Bin 1: 4, 4, 4, 15
– Bin 2: 21, 21, 25, 25
– Bin 3: 26, 26, 26, 34
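The worked example above can be reproduced with a short, self-contained sketch. It assumes the number of values divides evenly by the bin count, as in the price data; ties in the boundary method go to the lower boundary:

```python
def equal_freq_bins(values, n_bins):
    """Sort the data and split it into equal-frequency (equi-depth) bins."""
    vals = sorted(values)
    size = len(vals) // n_bins
    return [vals[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by whichever bin boundary (min or max) is closer."""
    smoothed = []
    for b in bins:
        lo, hi = b[0], b[-1]          # bins are sorted, so these are the boundaries
        smoothed.append([lo if v - lo <= hi - v else hi for v in b])
    return smoothed
```

Running this on the price list yields exactly the bins and smoothed values shown on the slide.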

How to Handle Noisy Data?

2. Regression: smooth by fitting the data to regression functions.
3. Clustering: detect and remove outliers.
4. Combined computer and human inspection: automatically detect suspicious values, then have a human check them (e.g., to deal with possible outliers).
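The automated half of combined computer-and-human inspection can be as simple as a standard-score filter. This is a stand-in sketch, not the clustering or regression approach from the slide, and the threshold of two standard deviations is an arbitrary assumption:

```python
from statistics import mean, stdev

def flag_suspicious(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the
    mean as candidates for human review."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > threshold * s]
```

A human would then inspect the flagged values and decide whether each one is a genuine outlier or a legitimate extreme.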

Data Cleaning and Data Reduction

Binning techniques reduce the number of distinct values per attribute. For example, binning may map product prices into inexpensive, moderate, and expensive (fewer distinct values; humans naturally tend to categorize this way). This is useful for decision tree induction, but it may be harmful in some circumstances. Can you name one?

Data Cleaning as a Process

Data discrepancy detection is the first task.
– Types of discrepancies:
– errors in data collection
– deliberate errors (data providers concealing data)
– data decay (outdated data, such as changed addresses)
– inconsistent data representation and use of codes
– data integration errors
– Use metadata (data about the data), e.g., domain, range, dependency, and distribution, to identify outliers for removal.
– Check for field overloading: packing extra data into a field not intended for that purpose (e.g., if only 31 bits of a 32-bit field are used, extra information is squeezed into the remaining bit).
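Metadata-driven discrepancy detection can be sketched as a check of each value against its declared domain and range. The metadata table below is hypothetical, invented for the example:

```python
# Hypothetical metadata: declared type (domain) and range per attribute.
METADATA = {"age": {"type": int, "min": 0, "max": 120}}

def find_discrepancies(rows, metadata):
    """Return (row_index, attribute) pairs whose value violates the
    declared domain (type) or range."""
    bad = []
    for i, row in enumerate(rows):
        for attr, spec in metadata.items():
            v = row.get(attr)
            # Type check first; range check only applies to well-typed values.
            if not isinstance(v, spec["type"]) or not (spec["min"] <= v <= spec["max"]):
                bad.append((i, attr))
    return bad
```

A negative age and a non-numeric age would both be flagged as discrepancies for human or automated correction.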

Data Cleaning as a Process

Check the uniqueness rule, consecutive rule, and null rule:
– A uniqueness rule says that each value of the given attribute must be different from all other values for that attribute.
– A consecutive rule says that there can be no missing values between the lowest and highest values for the attribute, and that all values must also be unique (e.g., as in check numbers).
– A null rule specifies the use of blanks, question marks, special characters, or other strings that may indicate the null condition (e.g., where a value for a given attribute is not available), and how such values should be handled.
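All three rules can be checked mechanically. A minimal sketch, where the set of null markers is an assumption rather than a standard, and the consecutive rule is checked over integers (as with check numbers):

```python
NULL_MARKERS = {None, "", "?", "N/A"}  # assumed markers for the null rule

def uniqueness_violations(values):
    """Values that appear more than once (uniqueness rule)."""
    seen, dups = set(), set()
    for v in values:
        if v in seen:
            dups.add(v)
        seen.add(v)
    return sorted(dups)

def consecutive_gaps(values):
    """Integers missing between the lowest and highest value (consecutive rule)."""
    return sorted(set(range(min(values), max(values) + 1)) - set(values))

def null_positions(values):
    """Indices holding a marker that indicates the null condition (null rule)."""
    return [i for i, v in enumerate(values) if v in NULL_MARKERS]
```

For check numbers, for instance, `consecutive_gaps` reports any number missing from the sequence, and `uniqueness_violations` reports any number used twice.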

Data Cleaning as a Process

Using commercial tools for data cleaning
– Data scrubbing: use simple domain knowledge (e.g., postal codes, spell-checking) to detect errors and make corrections. Techniques used: parsing and fuzzy matching.
– Data auditing: analyze the data to discover rules and relationships and to detect violators. Techniques used: correlation, clustering, and descriptive data summaries to find outliers.

Data transformation: migration and integration
– Data migration tools allow transformations to be specified, e.g., replace "age" by "birthdate".
– ETL (Extraction/Transformation/Loading) tools allow users to specify transformations through a graphical user interface. Only specific transforms are allowed, which sometimes requires custom scripts.

The two-step process of discrepancy detection and transformation
– is error-prone and time-consuming
– is iterative, so some problems are removed only after several iterations: an incorrect year entry may be fixed only after all date entries have been corrected
– has motivated recent techniques that emphasize interactivity, e.g., Potter's Wheel
– is supported by declarative languages, developed as extensions to SQL, for specifying data transformations

Metadata must be updated to speed up future cleaning.