
1 Named-Entity Recognition with Character-Level Models
Dan Klein, Joseph Smarr, Huy Nguyen, and Christopher D. Manning
Stanford University
CoNLL-2003: Seventh Conference on Natural Language Learning
klein@cs.stanford.edu, jsmarr@stanford.edu, htnguyen@stanford.edu, manning@cs.stanford.edu

2 Unknown Words are a Central Challenge for NER
- Recognizing known named entities (NEs) is relatively simple and accurate
- Recognizing novel NEs requires recognizing context and/or word-internal features
- External context and frequent internal words (e.g. Inc.) are the most commonly used features
- The internal composition of NEs alone provides surprisingly strong evidence for classification (Smarr & Manning, 2002)
- Examples: Staffordshire, Abdul-Karim al-Kabariti, CentrInvest

3 Are Names Self-Describing?
NO: names can be opaque/ambiguous
- Word-level: Washington occurs as LOC, PER, and ORG
- Char-level: -ville suggests LOC, but there are exceptions like Neville
YES: names can be highly distinctive/descriptive
- Word-level: National Bank is a bank (i.e. ORG)
- Char-level: Cotramoxazole is clearly a drug name
Question: Overall, how informative are names alone?

4 How Internally Descriptive are Isolated Named Entities?
- Classification accuracy of pre-segmented CoNLL NEs without context is ~90%
- Using character n-grams as features instead of words yields a 25% error reduction
- On single-word unknown NEs, the word model is at chance; the char n-gram model fixes 38% of errors
[Chart: NE classification accuracy (%); note this is not the CoNLL task]

5 Exploiting Word-Internal Features
- Many existing systems use some word-internal features (suffix, capitalization, punctuation, etc.), e.g. Mikheev 97, Wacholder et al. 97, Bikel et al. 97
- These features are usually language-dependent (e.g. morphology)
- Our approach: use char n-grams as the primary representation
- Use all substrings as classification features:
  - Char n-grams subsume word features
  - Features are language-independent (assuming the language is alphabetic)
- Similar in spirit to Cucerzan and Yarowsky (99), but uses ALL char n-grams vs. just prefixes/suffixes
- Example: the substrings of #Tom# are #Tom#, #Tom, Tom#, #To, Tom, om#, #T, To, om, m#, T, o, m
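As a concrete illustration of the all-substrings feature extraction described on this slide, here is a minimal Python sketch. The function name and the exact boundary-marker convention are our assumptions; the slide only specifies '#' as the word-boundary symbol.

```python
def char_ngrams(word):
    """All character substrings of a word, with '#' marking word boundaries.

    Boundary markers make prefixes and suffixes distinct features:
    '#To' and 'om#' differ from the word-internal 'To' and 'om'.
    """
    padded = "#" + word + "#"
    grams = set()
    for n in range(1, len(padded) + 1):
        for i in range(len(padded) - n + 1):
            gram = padded[i:i + n]
            if set(gram) != {"#"}:  # skip substrings that are only markers
                grams.add(gram)
    return grams

# Reproduces the slide's example for #Tom#:
# #Tom#, #Tom, Tom#, #To, Tom, om#, #T, To, om, m#, T, o, m
print(sorted(char_ngrams("Tom"), key=len, reverse=True))
```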

6 Character-Feature Based Classifier
Model I: independent classification at each word
- Maxent classifiers, trained using conjugate gradient
- Equal-scale Gaussian priors for smoothing
- Trained models with >800K features in ~2 hrs
- POS tags and contextual features complement n-grams

Description | Added Features | Overall F1 (English dev.)
Words | w0 | 52.29
Official Baseline | - | 71.18
Char N-Grams | n(w0) | 73.10
POS Tags | t0 | 74.17
Simple Context | w-1, w0, t-1, t1 | 82.39
More Context | ⟨w-1, w0⟩, ⟨w0, w1⟩, ⟨t-1, t0⟩, ⟨t0, w1⟩ | 83.09
(full per-category breakdown on slide 15)
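The slide's Model I is a conditional maxent classifier trained with conjugate gradient under Gaussian priors. As a rough, hedged analogue, the sketch below uses scikit-learn's L2-regularized logistic regression (the same model family; the L2 penalty plays the role of the equal-scale Gaussian prior) over character n-gram features. The toy words and labels are invented stand-ins, not the paper's training data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the real model is trained on CoNLL-2003 annotations.
words = ["Staffordshire", "Neville", "CentrInvest", "Cotramoxazole"]
labels = ["LOC", "PER", "ORG", "MISC"]

# analyzer='char_wb' pads each token with spaces, approximating the '#'
# boundary markers; ngram_range here covers substrings of length 1-6.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 6), binary=True),
    # L2 penalty ~ equal-scale Gaussian prior; lbfgs is a quasi-Newton
    # optimizer rather than the paper's conjugate gradient.
    LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000),
)
model.fit(words, labels)
print(model.predict(["Leicestershire"]))  # char features suggest LOC
```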

7 Character-Based CMM
Model II: joint classification along the sequence
- Previous classification decisions are clearly relevant: Grace Road is a single location, not a person + location
- Include neighboring classification decisions as features
- Perform joint inference across the chain of classifiers
- Conditional Markov Model (CMM, a.k.a. maxent Markov model): Borthwick 1999, McCallum et al. 2000
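To make "joint inference across a chain of classifiers" concrete, here is a small beam-search decoder sketch. It assumes some local classifier `score` exists that returns label log-probabilities given the words and the label history; the function names and beam width are our illustration, not the paper's implementation.

```python
from typing import Callable, Dict, List, Tuple

def beam_decode(
    words: List[str],
    score: Callable[[List[str], int, Tuple[str, ...]], Dict[str, float]],
    beam_size: int = 5,
) -> List[str]:
    """Beam-search inference over a chain of local classifiers (a CMM).

    `score(words, i, history)` returns log P(label | words, history) for
    each candidate label at position i, where `history` is the sequence
    of labels already hypothesized (so s-1, s-2 are usable as features).
    """
    beam: List[Tuple[Tuple[str, ...], float]] = [((), 0.0)]
    for i in range(len(words)):
        candidates = [
            (history + (label,), logp + lp)
            for history, logp in beam
            for label, lp in score(words, i, history).items()
        ]
        # Keep only the `beam_size` best partial label sequences.
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return list(max(beam, key=lambda c: c[1])[0])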

8 Character-Based CMM
Final extra features:
- Letter-type patterns for each word: United → Xx, 12-month → d-x, etc.
- Conjunction features, e.g. previous state and current signature
- Repeated last words of multi-word names, e.g. Jones after having seen Doug Jones
- ... and a few more

Description | Added Features | Overall F1 (English dev.)
More Context | ⟨w-1, w0⟩, ⟨w0, w1⟩, ⟨t-1, t0⟩, ⟨t0, w1⟩ | 83.09
Simple Sequence | s-1, ⟨s-1, t-1, t0⟩ | 85.44
More Sequence | ⟨s-2, s-1⟩, ⟨s-2, s-1, t-1, t0⟩ | 87.21
Final | misc. extra features | 92.27
(full per-category breakdown on slide 15)
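The letter-type pattern feature maps each character to its class and collapses repeats, so that United becomes Xx and 12-month becomes d-x. A minimal sketch of that mapping, with the function name and class symbols chosen to match the slide's examples:

```python
import re

def letter_pattern(word: str) -> str:
    """Collapse a word to its letter-type signature:
    'United' -> 'Xx', '12-month' -> 'd-x'."""
    mapped = "".join(
        "X" if c.isupper() else "x" if c.islower() else "d" if c.isdigit() else c
        for c in word
    )
    # Collapse runs of the same symbol: 'Xxxxxx' -> 'Xx', 'dd-xxxxx' -> 'd-x'.
    return re.sub(r"(.)\1+", r"\1", mapped)

assert letter_pattern("United") == "Xx"
assert letter_pattern("12-month") == "d-x"
```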

9 Final Results
- The drop from English dev to test is largely due to inconsistent labeling
- The lack of capitalization cues in German hurts recall more, because the maxent classifier is precision-biased when faced with weak evidence

10 Conclusions
- Character substrings are valuable and underexploited model features
- Named entities are internally quite descriptive: 25-30% error reduction vs. word-level models
- Discriminative maxent models allow productive feature engineering: 30% error reduction vs. the basic model
- What distinguishes our approach? More and better features
- Regularization is crucial for preventing overfitting

11 References
Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proceedings of ANLP-97, pages 194-201.
Andrew Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University.
Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In Joint SIGDAT Conference on EMNLP and VLC.
Shai Fine, Yoram Singer, and Naftali Tishby. 1998. The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 32:41-62.

12 References (cont.)
Andrew McCallum, Dayne Freitag, and Fernando Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In ICML-2000.
Andrei Mikheev. 1997. Automatic rule induction for unknown-word guessing. Computational Linguistics, 23(3):405-423.
Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP 1, pages 133-142.
Joseph Smarr and Christopher D. Manning. 2002. Classifying unknown proper noun phrases without context. Technical Report dbpubs/2002-46, Stanford University, Stanford, CA.
Nina Wacholder, Yael Ravin, and Misook Choi. 1997. Disambiguation of proper names in text. In ANLP 5, pages 202-208.

13 CoNLL Named Entity Recognition
Task: predict the semantic label of each word in text

Word | POS | Chunk | NE
Foreign | NNP | I-NP | ORG
Ministry | NNP | I-NP | ORG
spokesman | NN | I-NP | O
Shen | NNP | I-NP | PER
Guofang | NNP | I-NP | PER
told | VBD | I-VP | O
Reuters | NNP | I-NP | ORG
: | : | O | O
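For reference, a small sketch of reading this column format into per-sentence tuples. It assumes the four-column layout shown on the slide (word, POS, chunk, NE); real CoNLL-2003 files also contain -DOCSTART- markers, which the reader skips.

```python
def read_conll(lines):
    """Parse CoNLL-style 'word POS chunk NE' rows into sentences.

    Blank lines separate sentences; '-DOCSTART-' document markers
    are skipped. Returns a list of sentences, each a list of
    (word, pos, chunk, ne) tuples.
    """
    sentence, sentences = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("-DOCSTART-"):
            if sentence:
                sentences.append(sentence)
                sentence = []
            continue
        word, pos, chunk, ne = line.split()
        sentence.append((word, pos, chunk, ne))
    if sentence:
        sentences.append(sentence)
    return sentences

example = """Foreign NNP I-NP ORG
Ministry NNP I-NP ORG
spokesman NN I-NP O
Shen NNP I-NP PER
Guofang NNP I-NP PER
told VBD I-VP O
Reuters NNP I-NP ORG
: : O O"""
print(read_conll(example.splitlines()))
```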

14 Final Results: English Dev.
[Chart not preserved in the transcript; the underlying numbers appear in the tables on slides 15 and 16]

15 More Results

Description | Added Features | ALL | LOC | MISC | ORG | PER
… | … | … | … | … | … | …
More Context | ⟨w-1, w0⟩, ⟨w0, w1⟩, ⟨t-1, t0⟩, ⟨t0, w1⟩ | 83.09 | 89.13 | 83.51 | 71.31 | 85.89
Simple Sequence | s-1, ⟨s-1, t-1, t0⟩ | 85.44 | 90.09 | 80.95 | 76.4 | 89.66
More Sequence | ⟨s-2, s-1⟩, ⟨s-2, s-1, t-1, t0⟩ | 87.21 | 90.76 | 81.01 | 81.71 | 90.8
Final | misc. error-driven | 92.27 | 94.39 | 87.1 | 88.44 | 95.41

Description | Added Features | ALL | LOC | MISC | ORG | PER
Words | w0 | 52.29 | 41.03 | 70.18 | 60.43 | 60.14
Official Baseline | - | 71.18 | 80.52 | 83.52 | 66.43 | 55.2
N-Grams | n(w0) | 73.10 | 80.95 | 71.67 | 59.06 | 77.23
Tags | t0 | 74.17 | 81.27 | 74.46 | 59.61 | 78.73
Simple Context | w-1, w0, t-1, t1 | 82.39 | 87.77 | 82.91 | 70.62 | 85.77
More Context | ⟨w-1, w0⟩, ⟨w0, w1⟩, ⟨t-1, t0⟩, ⟨t0, w1⟩ | 83.09 | 89.13 | 83.51 | 71.31 | 85.89

16 Complete Final Results

English dev. | Precision | Recall | F1
LOC | 94.44 | 94.34 | 94.39
MISC | 90.62 | 83.84 | 87.10
ORG | 87.63 | 89.26 | 88.44
PER | 93.86 | 97.01 | 95.41
Overall | 92.15 | 92.39 | 92.27

English test | Precision | Recall | F1
LOC | 90.04 | 89.93 | 89.98
MISC | 83.49 | 77.07 | 78.85
ORG | 82.49 | 78.57 | 80.48
PER | 86.66 | 95.18 | 90.72
Overall | 86.12 | 86.49 | 86.31

German dev. | Precision | Recall | F1
LOC | 75.53 | 66.13 | 70.52
MISC | 78.71 | 47.23 | 59.03
ORG | 77.57 | 53.51 | 63.33
PER | 72.36 | 71.02 | 71.69
Overall | 75.36 | 60.36 | 67.03

German test | Precision | Recall | F1
LOC | 78.01 | 69.57 | 73.54
MISC | 75.90 | 47.01 | 58.06
ORG | 73.26 | 51.75 | 60.65
PER | 87.68 | 79.83 | 83.57
Overall | 80.38 | 65.04 | 71.90

