BİL711 Natural Language Processing 1: Morphology
Morphology is the study of the way words are built from smaller meaningful units called morphemes. We can divide morphemes into two broad classes:
– Stems – the core meaningful units, the roots of words.
– Affixes – add additional meanings and grammatical functions to words.
Affixes are further divided into:
– Prefixes – precede the stem: do / undo
– Suffixes – follow the stem: eat / eats
– Infixes – are inserted inside the stem
– Circumfixes – precede and follow the stem
English does not stack many affixes on a single word, but Turkish words can carry a long sequence of suffixes. Languages such as Turkish that tend to string affixes together are called agglutinative languages.

BİL711 Natural Language Processing 2: Surface and Lexical Forms
The surface level of a word represents the actual spelling of that word.
– geliyorum, eats, cats, kitabım
The lexical level of a word represents a simple concatenation of the morphemes making up that word.
– gel +PROG +1SG
– eat +AOR
– cat +PLU
– kitap +P1SG
Morphological processors try to find correspondences between the lexical and surface forms of words.
– Morphological recognition – surface to lexical
– Morphological generation – lexical to surface

BİL711 Natural Language Processing 3: Inflectional and Derivational Morphology
There are two broad classes of morphology:
– Inflectional morphology
– Derivational morphology
After combination with an inflectional morpheme, the meaning and class of the actual stem usually do not change.
– eat / eats, pencil / pencils
– gel / geliyorum, masa / masam
After combination with a derivational morpheme, the meaning and the class of the actual stem usually change.
– compute / computer, do / undo, friend / friendly
– uygar / uygarlaş, kapı / kapıcı
Irregular changes may happen with derivational affixes.

BİL711 Natural Language Processing 4: English Inflectional Morphology
Nouns have simple inflectional morphology.
– plural -- cat / cats
– possessive -- John / John's
Verbs have slightly more complex, but still relatively simple, inflectional morphology.
– past form -- walk / walked
– past participle form -- walk / walked
– gerund -- walk / walking
– singular third person -- walk / walks
Verbs can be categorized as:
– main verbs
– modal verbs -- can, will, should
– primary verbs -- be, have, do
Regular and irregular verbs: walk / walked -- go / went

BİL711 Natural Language Processing 5: English Derivational Morphology
Some English derivational affixes:
– -ation : transport / transportation
– -er : kill / killer
– -ness : fuzzy / fuzziness
– -al : computation / computational
– -able : break / breakable
– -less : help / helpless
– un- : do / undo
– re- : try / retry

BİL711 Natural Language Processing 6: Turkish Inflectional Morphology
Some of the inflectional suffixes that Turkish nouns can take:
– singular/plural : masa / masalar
– possessive markers : masam / masan / masası / masamız / masanız / masaları
– case markers : ablative masadan, accusative masayı, dative masaya
Some of the inflectional suffixes that Turkish verbs can take:
– tense : gel / geldi / geliyor / gelmiş / gelecek
– second tense : geliyordu / gelmişti / gelecekti
– agreement marker : geldim / geldin / geldi / geldik / geldiniz / geldiler
There is an ordering among inflectional suffixes (morphotactics):
– masalarımdan -- masa +PLU +P1SG +ABL
– geliyordum -- gel +PROG +PAST +1SG

BİL711 Natural Language Processing 7: Turkish Derivational Morphology
Turkish derivational morphology is very rich. Some of the derivational suffixes in Turkish:
– -cı : kapı / kapıcı
– -laş : uygar / uygarlaş
– -mek : gel / gelmek
– -cik : mini / minicik
– -li : Ankara / Ankaralı

BİL711 Natural Language Processing 8: Morphological Parsing
Morphological parsing is finding the lexical form of a word from its surface form.
– cats -- cat +N +PLU
– cat -- cat +N +SG
– goose -- goose +N +SG or goose +V
– geese -- goose +N +PLU
– gooses -- goose +V +3SG
– catch -- catch +V
– caught -- catch +V +PAST or catch +V +PP
– geliyorum -- gel +V +PROG +1SG
– masalardan -- masa +N +PLU +ABL
There can be more than one lexical-level representation for a given word (ambiguity).
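As a toy illustration of this surface-to-lexical mapping (not the course's parser; the lookup table below is hand-written from the examples above), a parser can be thought of as a function from a surface form to a set of lexical analyses:

```python
# Toy illustration of morphological parsing as a surface -> lexical mapping.
# The table is hand-written for the slide's examples; a real parser would
# compute these analyses from a lexicon, morphotactics and spelling rules.
ANALYSES = {
    "cats":   ["cat +N +PLU"],
    "cat":    ["cat +N +SG"],
    "goose":  ["goose +N +SG", "goose +V"],        # ambiguous
    "geese":  ["goose +N +PLU"],
    "caught": ["catch +V +PAST", "catch +V +PP"],  # ambiguous
}

def parse(surface: str) -> list[str]:
    """Return all lexical-level analyses of a surface form (may be several)."""
    return ANALYSES.get(surface, [])

if __name__ == "__main__":
    for w in ("cats", "goose", "caught", "dog"):
        print(w, "->", parse(w) or "unknown")
```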

BİL711 Natural Language Processing 9: Parts of a Morphological Processor
For a morphological processor, we need at least the following:
Lexicon : the list of stems and affixes together with basic information about them, such as their main categories (noun, verb, adjective, …) and their sub-categories (regular noun, irregular noun, …).
Morphotactics : the model of morpheme ordering that explains which classes of morphemes can follow other classes of morphemes inside a word.
Orthographic Rules (Spelling Rules) : these rules model the spelling changes that occur in a word, normally when two morphemes combine.
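As a rough sketch, these three knowledge sources can be written down as plain data structures; the contents below are drawn from the following slides, and the categories and the single spelling rule shown are illustrative rather than exhaustive:

```python
# Lexicon: stems and affixes grouped by (sub-)category.
LEXICON = {
    "reg-noun": ["fox", "cat", "dog"],
    "irreg-sg-noun": ["goose", "sheep", "mouse"],
    "irreg-pl-noun": ["geese", "sheep", "mice"],
    "plural": ["-s"],
}

# Morphotactics: which morpheme classes may follow which (None = end of word).
MORPHOTACTICS = {
    "reg-noun": ["plural", None],
    "irreg-sg-noun": [None],
    "irreg-pl-noun": [None],
    "plural": [None],
}

# Orthographic rules, written informally as  a => b / c __ d.
SPELLING_RULES = [
    ("є => e / {x,s,z}^ __ s#", "e-insertion: fox^s# -> foxes"),
]
```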

BİL711 Natural Language Processing 10: Lexicon
A lexicon is a repository for words (stems). They are grouped according to their main categories:
– noun, verb, adjective, adverb, …
They may also be divided into sub-categories:
– regular nouns, irregular singular nouns, irregular plural nouns, …
The simplest way to create a morphological parser would be to put all possible words (together with their inflections) into the lexicon.
– We do not do this because the number of word forms is huge (theoretically, for Turkish, it is infinite).

BİL711 Natural Language Processing 11: Morphotactics
Morphotactics specifies which morphemes can follow which morphemes.
Lexicon:
– reg-noun: fox, cat, dog
– irreg-pl-noun: geese, sheep, mice
– irreg-sg-noun: goose, sheep, mouse
– plural: -s
Simple English nominal inflection (morphotactic rules): a regular noun may be followed by the plural suffix -s, while an irregular singular or irregular plural noun takes no further plural suffix. (Figure: an FSA with arcs labelled reg-noun, plural (-s), irreg-sg-noun and irreg-pl-noun.)
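A minimal sketch of this morphotactic FSA in Python, working over morpheme classes rather than letters (the state names q0/q1/q2 and the function name are my own, not the lecture's):

```python
# Sketch of the nominal-inflection FSA from the slide.
REG_NOUN = {"fox", "cat", "dog"}
IRREG_SG_NOUN = {"goose", "sheep", "mouse"}
IRREG_PL_NOUN = {"geese", "sheep", "mice"}

def accepts(morphemes: list[str]) -> bool:
    """Accept sequences allowed by the morphotactics:
    reg-noun (optionally followed by -s), irreg-sg-noun, or irreg-pl-noun."""
    state = "q0"
    for m in morphemes:
        if state == "q0" and m in REG_NOUN:
            state = "q1"            # regular stem seen, plural -s may follow
        elif state == "q0" and (m in IRREG_SG_NOUN or m in IRREG_PL_NOUN):
            state = "q2"            # irregular noun, nothing may follow
        elif state == "q1" and m == "-s":
            state = "q2"
        else:
            return False
    return state in {"q1", "q2"}    # both are accepting states

print(accepts(["fox", "-s"]))    # True
print(accepts(["goose", "-s"]))  # False
print(accepts(["geese"]))        # True
```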

BİL711 Natural Language Processing 12: Combine Lexicon and Morphotactics
(Figure: the morphotactic FSA expanded to the letter level, spelling out fox, cat, dog, sheep, goose/geese and mouse/mice character by character, with an -s arc for the regular plural.)
This automaton only says yes or no; it does not give the lexical representation. It also accepts a wrong word (foxs), because the e-insertion spelling rule is not handled at this level.

BİL711 Natural Language Processing 13: Two-Level Morphology
Two-level morphology represents the correspondence between the lexical and surface levels. We use a finite-state transducer (FST) to find the mapping between these two levels.
An FST is a two-tape automaton:
– It reads from one tape and writes to the other.
For morphological processing, one tape holds the lexical representation and the other holds the surface form of the word.
(Figure: lexical tape (upper tape) dog+N+PL paired with surface tape (lower tape) dogs.)

BİL711 Natural Language Processing 14: Formal Definition of an FST (Mealy Machine)
An FST is a 5-tuple (Q, Σ, q0, F, δ):
Q : a finite set of N states q0, q1, …, qN
Σ : a finite input alphabet of complex symbols.
– Each complex symbol is a pair of an input and an output symbol i:o,
– where i is a member of I (an input alphabet),
– and o is a member of O (an output alphabet).
– I and O may contain the empty string.
– So, Σ is a subset of I × O.
q0 : the start state
F : the set of final states -- F is a subset of Q
δ(q, i:o) : the transition function
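A minimal, deterministic simplification of this definition as code (illustrative only; the state names, transition table and the toy dog+N+PL example are assumptions, not the lecture's machine):

```python
# Sketch of an FST as a Mealy machine. States are strings; the transition
# table maps (state, input_symbol) -> (next_state, output_symbol);
# "" plays the role of the empty string.
class FST:
    def __init__(self, transitions, start, finals):
        self.transitions = transitions  # dict: (state, in_sym) -> (state, out_sym)
        self.start = start
        self.finals = finals

    def transduce(self, inputs):
        """Map an input sequence to an output sequence, or None if rejected."""
        state, output = self.start, []
        for sym in inputs:
            if (state, sym) not in self.transitions:
                return None
            state, out = self.transitions[(state, sym)]
            if out:                      # skip empty-string outputs
                output.append(out)
        return output if state in self.finals else None

# Toy example: map the lexical form "dog +N +PL" to a surface-like "dog s".
t = FST(
    transitions={
        ("q0", "dog"): ("q1", "dog"),
        ("q1", "+N"):  ("q2", ""),       # +N maps to the empty string
        ("q2", "+PL"): ("q3", "s"),
    },
    start="q0",
    finals={"q2", "q3"},
)
print(t.transduce(["dog", "+N", "+PL"]))  # ['dog', 's']
```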

BİL711 Natural Language Processing 15: FST (cont.)
Σ may not contain all possible pairs from I × O. For example:
– I = {a, b, c}, O = {a, b, c, є}
– Σ = {a:a, b:b, c:c, a:є, b:є, c:є}
– In two-level morphology terminology, the pairs in Σ are called feasible pairs.
– Instead of a:a we can use a single character a for this default pair.
FSAs are isomorphic to regular languages, and FSTs are isomorphic to regular relations (sets of pairs of strings of regular languages).

BİL711 Natural Language Processing 16: FST Properties
FSTs are closed under union, inversion, and composition.
union : the union of two regular relations is also a regular relation.
inversion : the inversion of an FST simply switches the input and output labels.
– This means that the same FST can be used for both directions of a morphological processor.
composition : if T1 is an FST from I1 to O1 and T2 is an FST from O1 to O2, then the composition T1 o T2 maps from I1 to O2.
We use these properties of FSTs in the creation of the FST for a morphological processor.
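A small sketch of inversion, assuming an FST is stored as a set of (state, input, output, next_state) arcs (this representation and the toy arcs are illustrative assumptions):

```python
# Inversion swaps input and output labels, so the same machine can run as a
# generator (lexical -> surface) or as an analyzer (surface -> lexical).
def invert(transitions):
    """Return the inverted transition set: every i:o arc becomes o:i."""
    return {(q, o, i, r) for (q, i, o, r) in transitions}

# Toy generator fragment: +PL (lexical) maps to s (surface).
generate = {("q0", "dog", "dog", "q1"), ("q1", "+PL", "s", "q2")}
analyze = invert(generate)
print(sorted(analyze))
# [('q0', 'dog', 'dog', 'q1'), ('q1', 's', '+PL', 'q2')]
```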

BİL711 Natural Language Processing 17: An FST for Simple English Nominals
(Figure: a transducer with stem-class arcs reg-noun, irreg-sg-noun and irreg-pl-noun, followed by +N:є, and then the number arcs +SG:# and +PL:^s# for regular nouns, +SG:# for irregular singulars, and +PL:# for irregular plurals; ^ marks a morpheme boundary and # a word boundary.)

BİL711 Natural Language Processing 18: FST for stems
An FST for stems maps roots to their root class:
– reg-noun: fox, cat, dog
– irreg-sg-noun: goose, sheep, mouse
– irreg-pl-noun: g o:e o:e s e (geese), sheep, m o:i u:є s:c e (mice)
Here fox stands for f:f o:o x:x, i.e. each letter maps to itself.
When these two transducers are composed, we obtain an FST which maps lexical forms to intermediate forms of words for simple English noun inflection. The next thing we should handle is to design the FSTs for the orthographic rules, and then combine all these transducers.

BİL711 Natural Language Processing 19: Multi-Level Multi-Tape Machines
A frequently used FST idiom, called a cascade, is to have the output of one FST read in as the input to a subsequent machine. So, to handle spelling we use three tapes:
– lexical, intermediate and surface
We need one transducer to work between the lexical and intermediate levels, and a second one (a bunch of FSTs) to work between the intermediate and surface levels to patch up the spelling.
(Figure: lexical tape dog+N+PL, intermediate tape dog^s#, surface tape dogs.)
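A minimal sketch of this cascade using plain string rewriting instead of real transducers (the function names and the regular expression used for e-insertion are illustrative assumptions, not the lecture's implementation):

```python
import re

def lexical_to_intermediate(lexical: str) -> str:
    """dog+N+PL -> dog^s# : drop +N, turn +PL into ^s, add the word boundary #."""
    s = lexical.replace("+N", "")
    s = s.replace("+PL", "^s")
    return s + "#"

def intermediate_to_surface(intermediate: str) -> str:
    """Apply spelling rules, then erase ^ and # : dog^s# -> dogs, fox^s# -> foxes."""
    s = re.sub(r"([xsz])\^s#", r"\1^es#", intermediate)   # e-insertion
    return s.replace("^", "").replace("#", "")

for lex in ("dog+N+PL", "fox+N+PL"):
    inter = lexical_to_intermediate(lex)
    print(lex, "->", inter, "->", intermediate_to_surface(inter))
# dog+N+PL -> dog^s# -> dogs
# fox+N+PL -> fox^s# -> foxes
```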

BİL711 Natural Language Processing 20: Lexical to Intermediate FST

BİL711 Natural Language Processing 21: Orthographic Rules
We need FSTs to map the intermediate level to the surface level. For each spelling rule we will have one FST, and these FSTs run in parallel.
Some English spelling rules:
– consonant doubling -- a 1-letter consonant is doubled before -ing/-ed -- beg / begging
– E deletion -- silent e is dropped before -ing and -ed -- make / making
– E insertion -- e is added after s, z, x, ch, sh before s -- watch / watches
– Y replacement -- y changes to ie before -s, and to i before -ed -- try / tries
– K insertion -- for verbs ending in vowel + c, we add k -- panic / panicked
We represent these rules using two-level morphology rules:
– a => b / c __ d : rewrite a as b when it occurs between c and d.
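A small sketch of the a => b / c __ d notation as a regex-based rewriter (the rewrite function and the context patterns are illustrative assumptions; in the lecture each rule is compiled into its own FST):

```python
import re

def rewrite(a: str, b: str, left: str, right: str, s: str) -> str:
    """Rewrite a as b when it occurs between contexts left and right."""
    return re.sub(f"({left}){a}({right})", rf"\1{b}\2", s)

# E-insertion as a rewrite rule: insert e between {x,s,z}^ and s#.
print(rewrite("", "e", r"[xsz]\^", r"s#", "fox^s#"))        # fox^es#
# Y replacement sketch: y -> ie between a consonant and ^s#.
print(rewrite("y", "ie", r"[^aeiou]", r"\^s#", "try^s#"))   # trie^s#
```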

BİL711 Natural Language Processing 22: FST for the E-Insertion Rule
E-insertion rule: є => e / {x, s, z} ^ __ s#
Here ^ (the morpheme boundary) stands for the pair ^:є, i.e. the boundary is deleted on the surface tape.

BİL711 Natural Language Processing 23: Generating or Parsing with FST Lexicon and Rules

BİL711 Natural Language Processing 24: Accepting Foxes

BİL711 Natural Language Processing 25: Intersection
We can intersect all the rule FSTs to create a single FST. The intersection algorithm just takes the Cartesian product of the states:
– For each state qi of the first machine and qj of the second machine, we create a new state qij.
– For an input symbol a, if the first machine would transition to state qn and the second machine would transition to qm, the new machine transitions to qnm.
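A sketch of this product construction for two deterministic machines that read the same symbols, as the parallel two-level rules do (the representation and the two toy machines are assumptions for illustration):

```python
# Machines are (transitions, start, finals) with transitions: (state, symbol) -> state.
def intersect(m1, m2):
    t1, s1, f1 = m1
    t2, s2, f2 = m2
    trans, start = {}, (s1, s2)
    for (q1, a), r1 in t1.items():
        for (q2, b), r2 in t2.items():
            if a == b:
                trans[((q1, q2), a)] = (r1, r2)   # new state qij on symbol a
    finals = {(x, y) for x in f1 for y in f2}
    return trans, start, finals

def accepts(machine, symbols):
    trans, state, finals = machine
    for a in symbols:
        if (state, a) not in trans:
            return False
        state = trans[(state, a)]
    return state in finals

# Tiny example: A accepts strings over {a, b} containing at least one 'a';
# B accepts strings ending in 'b'. Their intersection requires both.
A = ({("p0", "a"): "p1", ("p0", "b"): "p0",
      ("p1", "a"): "p1", ("p1", "b"): "p1"}, "p0", {"p1"})
B = ({("r0", "a"): "r0", ("r0", "b"): "r1",
      ("r1", "a"): "r0", ("r1", "b"): "r1"}, "r0", {"r1"})

AB = intersect(A, B)
print(accepts(AB, "ab"))   # True  (has an 'a', ends in 'b')
print(accepts(AB, "bb"))   # False (no 'a')
```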

BİL711 Natural Language Processing 26: Composition
Cascades can turn out to be somewhat of a pain:
– it is hard to manage all the tapes
– it fails to take advantage of the restricting power of the machines
So, it is better to compile the cascade into a single large machine.
Create a new state (x, y) for every pair of states x ∈ Q1 and y ∈ Q2. The transition function of the composition is defined as follows:
δ((x, y), i:o) = (v, z) if there exists c such that δ1(x, i:c) = v and δ2(y, c:o) = z
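A sketch of this composition rule, again assuming each FST is stored as a set of (state, input, output, next_state) arcs (the representation and the toy +PL example are illustrative):

```python
# Composition pairs up states and chains T1's output symbol c into T2's input.
def compose(t1, t2):
    """Return the transitions of T1 o T2 over paired states (x, y)."""
    composed = set()
    for (x, i, c, v) in t1:
        for (y, c2, o, z) in t2:
            if c == c2:                 # T1 writes c, T2 reads c
                composed.add(((x, y), i, o, (v, z)))
    return composed

# Toy example: T1 rewrites +PL to ^s, T2 rewrites ^s to es.
T1 = {("a0", "+PL", "^s", "a1")}
T2 = {("b0", "^s", "es", "b1")}
print(compose(T1, T2))
# {(('a0', 'b0'), '+PL', 'es', ('a1', 'b1'))}
```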

BİL711 Natural Language Processing 27: Intersect Rule FSTs
(Figure: the LEXICON-FST maps the lexical tape to the intermediate tape; the rule transducers FST1 … FSTn run in parallel between the intermediate tape and the surface tape.)
The parallel rule FSTs are intersected into a single rule transducer: FSTR = FST1 ^ … ^ FSTn.

BİL711 Natural Language Processing 28: Compose Lexicon and Rule FSTs
(Figure: the LEXICON-FST (lexical tape to intermediate tape) is composed with the intersected rule transducer FSTR = FST1 ^ … ^ FSTn (intermediate tape to surface tape), yielding a single transducer LEXICON-FST o FSTR that maps the lexical tape directly to the surface level.)

BİL711 Natural Language Processing 29: Porter Stemming
Some applications (for example, some information retrieval applications) do not need the whole morphological processor; they only need the stem of the word.
A stemming algorithm such as the Porter stemming algorithm is a lexicon-free FST. It is just a cascade of rewrite rules.
Stemming algorithms are efficient, but they may introduce errors because they do not use a lexicon.
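For example, a lexicon-free stemmer can be called directly from NLTK's implementation of the Porter algorithm (this assumes the nltk package is installed; the word list is illustrative):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["cats", "caresses", "walking", "computational", "organization"]:
    print(word, "->", stemmer.stem(word))
# Because no lexicon is consulted, some stems are not real words and
# unrelated words can collapse to the same stem -- the kind of errors
# mentioned above.
```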