
Introduction to Natural Language Processing Phenotype RCN Meeting Feb 2013

What is Natural Language Processing?
– Siri
– Optical Character Recognition
– Speech-to-Text
– IBM Watson (Jeopardy!)
– Translation
– Spell and grammar checks

What is Natural Language Processing?
Methods for processing human (natural) language input so that computers can derive meaning from it. A very general definition.
In the context of the Phenotype RCN meeting:
– Information Extraction (IE): automatic extraction of structured information from unstructured documents
– Text Mining: deriving high-quality information from text; extract features (IE) and use data mining or pattern recognition to find 'interesting' facts and relations
– BioNLP: text mining applied to the texts and literature of the biomedical and molecular biology domain

Outline: Three Questions
1. What do we want from NLP?
2. How can we get facts? What approaches are there? What are the requirements and costs?
3. What can you expect? How do we measure quality? Are there limits?

1. WHAT DO WE WANT FROM NLP? Do we know what we want?

What do we want from NLP?
Speed up biocuration for phenotypes. What is a document talking about?
– Named Entity Recognition: Prrx1 with GeneID:18933
– Fact extraction: A regulates B; inhibition of B leads to phenotype C
Automatic annotation
– Find all facts for phenotype annotation
– Only highlight the most relevant information

What do we want to annotate? Documents in the biomedical domain.
Publications
– Abstracts
– Full text (PDF/website): Results, Methods, image/table captions
– Supplemental material: tables
Free-form text
– E.g. existing databases such as OMIM
Non-electronic documents
– Books
– Scanned documents

2. HOW CAN WE GET FACTS? The long road of finding phenotypes in a text

How can we get facts? NLP is difficult because language is:
– Ambiguous: homonyms, acronyms, …
– Variable: spelling, synonyms, sentence structure, …
– Complex: multiple components, chains, options, …
BioNLP is multi-step and multi-algorithm; virtually every algorithm has been applied to BioNLP, and it remains an ongoing research area.

Preliminaries: Getting the Text
1. Select a corpus / prioritize documents
2. Get the document
– Repositories (e.g. PubMed Central)
– Local copy
– Scan and OCR (error rate?)
3. Extract text (PDF, HTML, …)
4. Language detection
5. Document segmentation: title, headers, captions, literature references
A minimal sketch of steps 3–5 is shown below.
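The following sketch covers steps 3–5 for a locally stored HTML article: extract the visible text, detect its language, and split it into coarse sections. The file name, the section-header keywords, and the use of the third-party langdetect package are illustrative assumptions, not part of the original pipeline.

```python
# Sketch: extract text from a local HTML article, detect its language,
# and segment it into coarse sections (assumptions noted in comments).
import re
from html.parser import HTMLParser

try:
    from langdetect import detect          # third-party; assumed available
except ImportError:
    detect = lambda text: "unknown"        # fall back if not installed

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style content."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        self._skip = tag in ("script", "style")
    def handle_endtag(self, tag):
        self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return re.sub(r"\s+", " ", " ".join(parser.parts)).strip()

def segment(text):
    """Very rough segmentation on assumed section-header keywords."""
    headers = ["Abstract", "Introduction", "Methods", "Results",
               "Discussion", "References"]
    parts = re.split(r"\b(" + "|".join(headers) + r")\b", text)
    sections, current = {}, "Front matter"
    for part in parts:
        if part in headers:
            current = part
        else:
            sections[current] = sections.get(current, "") + part
    return sections

if __name__ == "__main__":
    html = open("article.html", encoding="utf-8").read()   # hypothetical file
    text = extract_text(html)
    print("language:", detect(text))
    for name, body in segment(text).items():
        print(name, "-", len(body), "characters")
```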

Parsing
Goal: find sentences, phrases, and semantic units
1. Lexical analysis: define tokens/words
2. Find noun phrases, sentences, and other units
Example: Prrx1 knockout mice exhibit malformation of skeletal structures [49].
Heavyweight vs. lightweight approaches
– Heavy: grammars and parse trees (traditional NLP); computationally expensive and language dependent; can be high quality; problematic with text fragments and malformed text
– Light: rules and heuristics; chemical formulas and special names can break tokenizer assumptions
A lightweight chunking sketch is shown below.
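As a minimal sketch of the lightweight route, the example sentence can be split, tagged, and chunked with NLTK's off-the-shelf components; the noun-phrase chunk grammar here is an illustrative assumption, not the grammar of any particular BioNLP system.

```python
# Sketch: lightweight sentence splitting, tokenization, POS tagging, and
# noun-phrase chunking with NLTK (illustrative only).
# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger")
import nltk

text = "Prrx1 knockout mice exhibit malformation of skeletal structures [49]."

# Assumed chunk grammar: optional determiner, adjectives, then one or more nouns.
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")

for sentence in nltk.sent_tokenize(text):
    tokens = nltk.word_tokenize(sentence)      # lexical analysis
    tagged = nltk.pos_tag(tokens)              # part-of-speech tags
    tree = chunker.parse(tagged)
    for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
        print(" ".join(word for word, tag in subtree.leaves()))
```

On this sentence the chunker picks out noun phrases such as "Prrx1 knockout mice" and "skeletal structures", which the entity-recognition step below then tries to map to identifiers.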

Entity Recognition
Match text fragments to entities. Multiple approaches:
– Dictionaries of known entity names: proteins, genes (Prrx1), ontology terms such as skeleton (UBERON: ). Requires knowing synonyms a priori; cannot find new entities, i.e. new ontology term candidates
– Rules and patterns: match entities according to a set of rules, e.g. the mutation shorthand G121A. How do we create the rules?
– Machine learning (next slide)
A dictionary-and-pattern sketch is shown below.
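A minimal sketch of the first two approaches, combining a tiny hand-made synonym dictionary with a regular expression for protein mutation shorthand; the dictionary entries and the exact pattern are illustrative assumptions.

```python
# Sketch: dictionary- and pattern-based entity recognition (illustrative only).
import re

# Assumed miniature dictionary mapping known surface forms to identifiers.
GENE_DICT = {
    "prrx1": "GeneID:18933",
    "paired related homeobox 1": "GeneID:18933",
}

# Mutation shorthand such as G121A: reference residue, position, new residue.
MUTATION_RE = re.compile(r"\b([ACDEFGHIKLMNPQRSTVWY])(\d+)([ACDEFGHIKLMNPQRSTVWY])\b")

def recognize(sentence):
    hits = []
    for surface, identifier in GENE_DICT.items():
        for match in re.finditer(re.escape(surface), sentence, re.IGNORECASE):
            hits.append((match.group(0), identifier))
    for match in MUTATION_RE.finditer(sentence):
        hits.append((match.group(0), "mutation " + match.group(0)))
    return hits

print(recognize("Prrx1 carrying the G121A substitution was analyzed."))
```

The dictionary route only finds names it already knows, while the pattern finds any string that looks like a mutation, known or not, which mirrors the trade-off listed above.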

ER – Machine Learning
Transform the text into a feature vector:
F = {Prrx1_N, exhibit_V, knockout_A, knockout_mice_NP, malformation_N, mice_N, skeletal_A, skeletal_structure_NP, structure_N}
Supervised, unsupervised, and hybrid approaches; require a priori knowledge and/or training data
Problems
– Training data: there is never enough training data
– Overfitting: the model only learns to classify the training data and does not generalize to new documents
A small supervised sketch is shown below.
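A minimal supervised sketch, assuming a handful of hand-labelled tokens and scikit-learn; the toy training set, the feature choices, and the logistic-regression classifier are illustrative assumptions rather than the approach described on the slide.

```python
# Sketch: classify tokens as gene name vs. other from simple features,
# using scikit-learn on a toy training set (illustrative only).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(token, pos_tag):
    """Turn one token into a feature dictionary."""
    return {
        "lower": token.lower(),
        "pos": pos_tag,
        "has_digit": any(c.isdigit() for c in token),
        "initial_cap": token[:1].isupper(),
        "suffix3": token[-3:].lower(),
    }

# Assumed toy training set: (token, POS tag, label) with 1 = gene name.
train = [
    ("Prrx1", "NN", 1), ("Pax6", "NN", 1), ("Shh", "NN", 1),
    ("mice", "NNS", 0), ("exhibit", "VBP", 0), ("malformation", "NN", 0),
    ("skeletal", "JJ", 0), ("structures", "NNS", 0),
]
X = [features(token, pos) for token, pos, _ in train]
y = [label for _, _, label in train]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, y)

# Tokens not seen during training; with so little data, expect mistakes.
print(model.predict([features("Hoxa2", "NN"), features("kidney", "NN")]))
```

With eight training tokens the model will overfit badly, which is exactly the problem the slide warns about: without enough representative training data there is no generalization to new documents.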

From Text Matches to Entities
A text match is not yet a named (bio-)entity
– Require at least an identifier
– Try to find supporting evidence
Disambiguation
– Multiple candidates for one match: use context to filter. Prrx1 → 55 candidate genes; species context Mus musculus → PRRX1_MOUSE, GeneID:18933
– False-positive matches: common (English) words (HAS is a short name for 'Heme A synthase'); fruit fly gene/protein names (Ken and Barbie)
A context-filtering sketch is shown below.
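A minimal sketch of context-based disambiguation, assuming each candidate record carries a species field and that a species mention has already been found in the surrounding text; apart from GeneID:18933, which appears on the slide, the identifiers below are placeholders.

```python
# Sketch: filter candidate gene records by the species mentioned in context.
# Identifiers other than GeneID:18933 are placeholders, not real records.
from dataclasses import dataclass

@dataclass
class Candidate:
    identifier: str
    symbol: str
    species: str

# Assumed candidate list for the surface form "Prrx1".
CANDIDATES = [
    Candidate("GeneID:18933", "Prrx1", "Mus musculus"),
    Candidate("GeneID:XXXX1", "PRRX1", "Homo sapiens"),
    Candidate("GeneID:XXXX2", "Prrx1", "Rattus norvegicus"),
]

def disambiguate(candidates, context_species):
    """Keep only candidates whose species appears in the document context."""
    filtered = [c for c in candidates if c.species in context_species]
    return filtered or candidates   # fall back to all candidates if context is silent

print(disambiguate(CANDIDATES, {"Mus musculus"}))
```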

Finding Facts
Facts have multiple components. "Prrx1 knockout mice exhibit malformation of skeletal structures" yields:
→ PRRX1_MOUSE GeneID:18933
→ gene knock out (OBI)
→ Mus musculus (NCBITaxon:10090)
→ malformed (PATO)
→ skeleton (UBERON)
Use all the input from the previous steps:
– Named entities
– Assign relations
– Disambiguate
– Remove redundant or known relations
– Rank candidates
Result: gene_knock_out(PRRX1_MOUSE) has_phenotype malformed(skeleton)
A fact-assembly sketch is shown below.
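A minimal sketch of assembling one candidate fact once typed, disambiguated entities are available; the simple "all components co-occur in one sentence" rule is an illustrative stand-in for real relation extraction and ranking.

```python
# Sketch: assemble a candidate phenotype fact from typed entities found in
# one sentence (co-occurrence stand-in for relation extraction, illustrative).
from collections import namedtuple

Entity = namedtuple("Entity", "text type identifier")

# Assumed output of the earlier recognition and disambiguation steps.
entities = [
    Entity("Prrx1", "gene", "PRRX1_MOUSE GeneID:18933"),
    Entity("knockout", "intervention", "gene knock out (OBI)"),
    Entity("mice", "species", "Mus musculus NCBITaxon:10090"),
    Entity("malformation", "quality", "malformed (PATO)"),
    Entity("skeletal structures", "anatomy", "skeleton (UBERON)"),
]

def assemble_fact(ents):
    """Propose a fact if a gene, intervention, quality, and anatomy term co-occur."""
    by_type = {e.type: e for e in ents}
    if {"gene", "intervention", "quality", "anatomy"} <= by_type.keys():
        return ("gene_knock_out(" + by_type["gene"].identifier + ") "
                "has_phenotype " + by_type["quality"].identifier +
                "(" + by_type["anatomy"].identifier + ")")
    return None

print(assemble_fact(entities))
```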

3. WHAT CAN YOU EXPECT? Reality

What can you expect?
Every step in the BioNLP process may introduce errors → many steps → errors propagate
How do we measure quality? → Benchmarks
Ideal benchmark
– Large and representative test set of documents
– Pre-annotated by experts
Benchmarking with real-world problems
– BioCreAtIvE: A critical assessment of text mining methods in molecular biology (next talk)

Benchmarks
Common quality measures
– Precision: fraction of returned hits that are relevant
– Recall: fraction of all relevant items that are found
– F-score: harmonic mean of precision and recall
Is that sufficient?
– Factually correct, but irrelevant
– Partially correct: incomplete matches, overeager matches
– Ranking: are the best matches first?
The standard formulas are sketched below.
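A minimal sketch of the three measures in terms of true positives, false positives, and false negatives; the toy counts are illustrative.

```python
# Sketch: precision, recall, and F-score from true positives (tp),
# false positives (fp), and false negatives (fn).
def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def f_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

# Toy counts: 80 correct hits, 20 spurious hits, 40 missed items.
print(precision(80, 20), recall(80, 40), round(f_score(80, 20, 40), 3))
```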

What can you expect? Upper limits
Prrx1 knockout mice exhibit malformation of skeletal structures
→ PRRX1_MOUSE 0.95
→ gene knock out 0.80
→ Mus musculus 0.98
→ malformed 0.85
→ skeleton 0.95
0.95 × 0.80 × 0.98 × 0.85 × 0.95 ≈ 0.60
On average, 40 of 100 facts will be wrong or missed.

What are the costs? No out-of-the-box solution
– All approaches require some sort of customization, training data, or at least feedback
– Parsing: language, heuristics (stop words)
– Entity Recognition: dictionaries (names, synonyms, ontologies, DBs); rules (hand-curated, training sets); machine learning (converting text to features, training sets)
– Disambiguation: as much information as possible
– Facts: define the facts; different algorithms for different facts
It is a continuous cycle.

Summary
No magic bullet → many different approaches
BioNLP can be very good at specific tasks → next talks
Remember: errors propagate; the results are only as good as the input and feedback
– Abstracts vs. full text
– High-quality vs. high-quantity training data

THANK YOU.