
Text Mining: Finding Nuggets in Mountains of Textual Data  Jochen Dörre, Peter Gerstl, Roland Seiffert  Presented by Shamil Mustafayev 04/16/2013 1

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 2

Definition  Text Mining: the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. Also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. 3

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 4

Motivation  A large portion of a company’s data is unstructured or semi-structured 5  Letters  s  Phone transcripts  Contracts  Technical documents  Patents  Web pages  Articles

Typical Applications  Summarizing documents  Discovering/monitoring relations among people, places, organizations, etc.  Customer profile analysis  Trend analysis  Spam identification  Public health early warning  Event tracking 6

Outline  Definition  Motivation  Methodology  Comparison with Data Mining  Feature Extraction  Clustering and Categorizing  Some Applications  Conclusion & Exam Questions 7

Methodology: Challenges  Information is in unstructured textual form  Natural language interpretation is a difficult and complex task (not fully possible); Google and Watson are a step closer  Text mining deals with huge collections of documents, impossible for humans to examine 8

Google vs Watson  Google justifies its answer by returning the text documents where it found the evidence.  Google finds the documents that best match a given keyword.  Watson tries to understand the semantics behind a given key phrase or question.  Watson then uses its huge knowledge base to find the correct answer.  Watson uses more AI. 9

Methodology: Two Aspects  Knowledge Discovery: extraction of codified information ○ Feature Extraction ○ Mining proper; determining some structure  Information Distillation: analysis of feature distributions 10

Two Text Mining Approaches  Extraction: extraction of codified information from a single document  Analysis: analysis of the features to detect patterns, trends, etc., over whole collections of documents 11

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 12

Feature Extraction  Recognize and classify “significant” vocabulary items from the text  Categories of vocabulary Proper names – Mrs. Albright or Dheli[sic], India Multiword terms – Joint venture, online document Abbreviations – CPU, CEO Relations – Jack Smith-age-42 Other useful things: numerical forms of numbers, percentages, money, etc 13

Canonical Form Examples  Normalize numbers and money: four = 4, five hundred dollars = $500  Conversion of dates to a normal form  Morphological variants: drive, drove, driven = drive  Proper names and other forms: Mr. Johnson, Bob Johnson, the author = Bob Johnson 14
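
As an illustration only (not the paper's implementation), a toy sketch of mapping surface forms to canonical forms; the lookup tables and the money pattern are invented for the example.

```python
import re

# Toy lookup tables standing in for real lexicons (illustrative only).
NUMBER_WORDS = {"four": "4", "five-hundred": "500"}
LEMMAS = {"drove": "drive", "driven": "drive", "drives": "drive"}
ALIASES = {"mr. johnson": "Bob Johnson", "the author": "Bob Johnson"}

def canonicalize(item):
    """Map a surface form to a canonical form, if one is known."""
    t = item.lower()
    if t in NUMBER_WORDS:
        return NUMBER_WORDS[t]
    if t in LEMMAS:
        return LEMMAS[t]
    if t in ALIASES:
        return ALIASES[t]
    # Normalize "500 dollars"-style money expressions to "$500".
    m = re.fullmatch(r"(\d+)\s*dollars?", t)
    if m:
        return "$" + m.group(1)
    return item

print([canonicalize(x) for x in ["Four", "drove", "The author", "500 dollars"]])
# ['4', 'drive', 'Bob Johnson', '$500']
```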

Feature Extraction Approach  Linguistically motivated heuristics  Pattern matching  Limited lexical information (part-of-speech)  Avoids analyzing with too much depth: does not use much lexical information, no in-depth syntactic or semantic analysis 15

IBM Intelligent Miner for Text  IBM introduced Intelligent Miner for Text in 1998  An SDK with feature extraction, clustering, categorization, and more  Traditional components (search engine, etc.)  The rest of the paper describes the text mining methodology of Intelligent Miner. 16

Advantages of IBM’s approach  Processing is very fast (helps when dealing with huge amounts of data)  Heuristics work reasonably well  Generally applicable to any domain 17

Outline  Definition  Motivation  Methodology  Comparison with Data Mining  Feature Extraction  Clustering and Categorizing  Some Applications  Conclusion & Exam Questions 18

Clustering  Fully automatic process  Documents are grouped according to similarity of their feature vectors  Each cluster is labeled by a listing of the common terms/keywords  Good for getting an overview of a document collection 19

Two Clustering Engines  Hierarchical clustering: orders the clusters into a tree reflecting various levels of similarity  Binary relational clustering: flat clustering, with relationships of different strengths between clusters reflecting similarity 20
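
The two engines above are components of Intelligent Miner; as a rough stand-in, the following sketch shows hierarchical and flat clustering over TF-IDF feature vectors, assuming scikit-learn is available. The documents and cluster counts are made up for illustration, and these are not IBM's algorithms.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans

docs = [
    "refund for a damaged product",         # complaints
    "product arrived broken, want refund",
    "how do I reset my password",           # account help
    "password reset link does not work",
]

# Feature extraction step: documents -> sparse TF-IDF feature vectors.
X = TfidfVectorizer().fit_transform(docs)

# Hierarchical clustering: builds a tree of clusters (levels of similarity).
hier = AgglomerativeClustering(n_clusters=2).fit(X.toarray())

# Flat clustering: a single partition of the collection, analogous to the
# flat (binary relational) engine described above.
flat = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(hier.labels_, flat.labels_)   # e.g. [0 0 1 1] [1 1 0 0]
```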

Clustering Model 21

Categorization  Assigns documents to preexisting categories  Classes of documents are defined by providing a set of sample documents.  Training phase produces “categorization schema”  Documents can be assigned to more than one category  If confidence is low, document is set aside for human intervention 22

Categorization Model 23

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 24

Applications  Customer Relationship Management application provided by IBM Intelligent Miner for Text called “Customer Relationship Intelligence” “Help companies better understand what their customers want and what they think about the company itself” 25

Customer Intelligence Process  Take as input a body of communications with customers  Cluster the documents to identify issues  Characterize the clusters to identify the conditions for problems  Assign new messages to the appropriate clusters 26
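
A rough, self-contained sketch of the clustering and routing steps of that process, again assuming scikit-learn; the messages and cluster count are illustrative, not taken from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Steps 1-2: cluster the existing body of customer messages to surface issues.
history = ["refund for a damaged product",
           "product arrived broken, want refund",
           "how do I reset my password",
           "password reset link does not work"]
vec = TfidfVectorizer().fit(history)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vec.transform(history))

# Step 4: route a new incoming message to the closest existing cluster.
new_msg = "my item showed up damaged, I would like a refund"
print(clusters.predict(vec.transform([new_msg]))[0])
```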

Customer Intelligence Usage  Knowledge Discovery: clustering is used to create a structure that can be interpreted  Information Distillation: refinement and extension of the clustering results ○ Interpreting the results ○ Tuning the clustering process ○ Selecting meaningful clusters 27

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 28

Comparison with Data Mining  Data mining: discovers hidden models; tries to generalize all of the data into a single model; typical domains: marketing, medicine, health care  Text mining: discovers hidden facts; tries to understand the details and cross-reference individual instances; typical domains: biosciences, customer profile analysis 29

Outline  Definition  Motivation  Methodology  Feature Extraction  Clustering and Categorizing  Some Applications  Comparison with Data Mining  Conclusion & Exam Questions 30

Conclusion  This paper introduced text mining and how it differs from data mining proper.  Focused on the tasks of feature extraction and clustering/categorization  Presented an overview of the tools/methods of IBM’s Intelligent Miner for Text 31

Exam Question #1  What are the two aspects of Text Mining? Knowledge Discovery: Discovering a common customer complaint in a large collection of documents containing customer feedback. Information Distillation: Filtering future comments into pre-defined categories 32

Exam Question #2  How does the procedure for text mining differ from the procedure for data mining? It adds a feature extraction phase  It is infeasible for humans to select features manually  The feature vectors are, in general, high-dimensional and sparse 33

Exam Question #3  In the Nominator program of IBM’s Intelligent Miner for Text, an objective of the design is to enable rapid extraction of names from large amounts of text. How does this decision affect the ability of the program to interpret the semantics of text? It does not perform in-depth syntactic or semantic analysis of the text; the results are fast but only heuristic with regard to the actual semantics of the text. 34
