
Slide 1: SIMS 290-2: Applied Natural Language Processing
Marti Hearst, Sept 1, 2004

Slide 2: Today
How shall we transform a huge text collection?
Levels of Language
Intro to NLTK and Python

Slide 3: The Enron Email Archive
Background
Originally made public, and posted to the web by the Federal Energy Regulatory Commission during its investigation.
–~500,000 messages
–Salon article: http://www.salon.com/news/feature/2003/10/14/enron/index_np.html
Later purchased by Leslie Kaelbling at MIT.
People at SRI, notably Melinda Gervasio, cleaned it up:
–No attachments
–Some messages have been deleted "as part of a redaction effort due to requests from affected employees".
–Invalid email addresses were converted to user@enron.com
Posted online for research on email by William Cohen at:
–http://www-2.cs.cmu.edu/~enron/
Paper describing the dataset:
–The Enron Corpus: A New Dataset for Email Classification Research, Klimt and Yang, ECML 2004. http://www-2.cs.cmu.edu/~bklimt/papers/2004_klimt_ecml.pdf

Slide 4: The Enron Email Archive
A valuable resource
–No other large open email corpus for research
A sensitive resource
–We need to be respectful and careful about how we treat this information
We can add value
–Idea: this class produces something more valuable and interesting than what we started with. Researchers and practitioners will build on our results.

Slide 5: The Enron Email Archive
So … what's in there? 500,000 messages.
Let's search (on a subset of the collection): http://quasi.berkeley.edu/anlp/enron_search.html
Now … what more would we like to have?

Slide 6: Levels of Language (slide adapted from Robert Berwick's)
Sound Structure (Phonetics and Phonology)
–The sounds of speech and their production
–The systematic way that sounds are differently realized in different environments
Word Structure (Morphology)
–From morphos = shape (not transform, as in morph)
–Analyzes how words are formed from minimal units of meaning; also derivational rules
–dog + s = dogs; eat, eats, ate
Phrase Structure (Syntax)
–From the Greek syntaxis, arrange together
–Describes grammatical arrangements of words into hierarchical structure

Slide 7: Levels of Language (slide adapted from Robert Berwick's)
Thematic Structure
–Getting closer to meaning
–Who did what to whom
–Subject, object, predicate
Semantic Structure
–How the lower levels combine to convey meaning
Pragmatics and Discourse Structure
–How language is used across sentences

Slide 8: Parsing at Every Level (slide adapted from Robert Berwick's)
Transforming from a surface representation to an underlying representation.
It's not straightforward to do any of these mappings!
Ambiguity at every level:
–Word: is "saw" a verb or noun?
–Phrase: "I saw the guy on the hill with the telescope." → Who is on the hill?
–Semantic: which hill?

Slide 9: Python and NLTK
The following slides are from Diane Litman's lecture.

Slide 10: Python and Natural Language Processing (slide by Diane Litman)
Python is a great language for NLP:
Simple
Easy to debug:
–Exceptions
–Interpreted language
Easy to structure:
–Modules
–Object-oriented programming
Powerful string manipulation
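A few of these points in action, as a tiny sketch using only built-in string methods (the message text here is made up for illustration; nothing is NLTK-specific):
>>> msg = 'Subject: RE: gas trading'
>>> msg.lower()
'subject: re: gas trading'
>>> msg.split(': ')
['Subject', 'RE', 'gas trading']
>>> 're:' in msg.lower()
True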

Slide 11: Modules and Packages (slide by Diane Litman)
Python modules "package program code and data for reuse." (Lutz)
–Similar to library in C, package in Java.
Python packages are hierarchical modules (i.e., modules that contain other modules).
Three commands for accessing modules:
1. import
2. from…import
3. reload

Slide 12: Modules and Packages: import (slide by Diane Litman)
The import command loads a module:
# Load the regular expression module
>>> import re
To access the contents of a module, use dotted names:
# Use the search function from the re module
>>> s = 'an example string'   # define a string to search
>>> re.search('\w+', s)
To list the contents of a module, use dir:
>>> dir(re)
['DOTALL', 'I', 'IGNORECASE', …]

Slide 13: Modules and Packages: from…import (slide by Diane Litman)
The from…import command loads individual functions and objects from a module:
# Load the search function from the re module
>>> from re import search
Once an individual function or object is loaded with from…import, it can be used directly:
# Use the search function directly, without the re. prefix
>>> search('\w+', s)

Slide 14: import vs. from…import (slide by Diane Litman)
import
–Keeps module functions separate from user functions.
–Requires the use of dotted names.
–Works with reload.
from…import
–Puts module functions and user functions together.
–More convenient names.
–Does not work with reload.

Slide 15: Modules and Packages: reload (slide by Diane Litman)
If you edit a module, you must use the reload command before the changes become visible in Python:
>>> import mymodule
...
>>> reload(mymodule)
The reload command only affects modules that have been loaded with import; it does not update individual functions and objects loaded with from…import.

Slide 16: Introduction to NLTK (slide by Diane Litman)
The Natural Language Toolkit (NLTK) provides:
–Basic classes for representing data relevant to natural language processing.
–Standard interfaces for performing tasks such as tokenization, tagging, and parsing.
–Standard implementations of each task, which can be combined to solve complex problems.

Slide 17: NLTK: Example Modules (slide by Diane Litman)
nltk.token: processing individual elements of text, such as words or sentences.
nltk.probability: modeling frequency distributions and probabilistic systems.
nltk.tagger: tagging tokens with supplemental information, such as parts of speech or WordNet sense tags.
nltk.parser: high-level interface for parsing texts.
nltk.chartparser: a chart-based implementation of the parser interface.
nltk.chunkparser: a regular-expression-based surface parser.

Slide 18: NLTK: Top-Level Organization (slide by Diane Litman)
NLTK is organized as a flat hierarchy of packages and modules.
Each module provides the tools necessary to address a specific task.
Modules contain two types of classes:
–Data-oriented classes are used to represent information relevant to natural language processing.
–Task-oriented classes encapsulate the resources and methods needed to perform a specific task.

Slide 19: To the First Tutorials (slide by Diane Litman)
Tokens and Tokenization
Frequency Distributions

Slide 20: The Token Module (slide by Diane Litman)
It is often useful to think of a text in terms of smaller elements, such as words or sentences.
The nltk.token module defines classes for representing and processing these smaller elements.
What might be other useful smaller elements?

Slide 21: Tokens and Types (slide by Diane Litman)
The term word can be used in two different ways:
1. To refer to an individual occurrence of a word
2. To refer to an abstract vocabulary item
For example, the sentence "my dog likes his dog" contains five occurrences of words, but four vocabulary items.
To avoid confusion, use more precise terminology:
1. Word token: an occurrence of a word
2. Word type: a vocabulary item
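To make the distinction concrete, here is a quick check in plain Python (built-in strings and sets only, not the NLTK Token classes introduced on the next slide):
>>> sentence = 'my dog likes his dog'
>>> tokens = sentence.split()      # one entry per occurrence
>>> len(tokens)
5
>>> len(set(tokens))               # one entry per vocabulary item
4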

Slide 22: Tokens and Types (continued) (slide by Diane Litman)
In NLTK, tokens are constructed from their types using the Token constructor:
>>> from nltk.token import *
>>> my_word = 'dog'
>>> my_word_token = Token(TEXT=my_word)
>>> my_word_token
'dog'@[?]

Slide 23: Text Locations (slide by Diane Litman)
A text location @[s:e] specifies a region of a text:
–s is the start index
–e is the end index
The text location @[s:e] specifies the text beginning at s, and including everything up to (but not including) the text at e.
This definition is consistent with Python slicing.
Think of indices as appearing between elements:
| I | saw | a | man |
0   1     2   3     4
Shorthand notation when location width = 1.
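The same up-to-but-not-including convention can be seen with an ordinary Python slice over a list of words (plain Python here, not the NLTK location class):
>>> words = ['I', 'saw', 'a', 'man']
>>> words[1:3]        # start index 1, up to (but not including) index 3
['saw', 'a']
>>> words[2:3]        # width 1: just the element at index 2
['a']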

Slide 24: Text Locations (continued) (slide by Diane Litman)
Indices can be based on different units:
–character
–word
–sentence
Locations can be tagged with sources (files, other text locations – e.g., the first word of the first sentence in the file).
Location member functions:
–start
–end
–unit
–source

Slide 25: Tokenization (slide by Diane Litman)
The simplest way to represent a text is with a single string.
–Difficult to process text in this format.
Often, it is more convenient to work with a list of tokens.
The task of converting a text from a single string to a list of tokens is known as tokenization.

Slide 26: Tokenization (continued) (slide by Diane Litman)
Tokenization is harder than it seems:
–I'll see you in New York.
–The aluminum-export ban.
The simplest approach is to use "graphic words" (i.e., separate words using whitespace).
Another approach is to use regular expressions to specify which substrings are valid words.
NLTK provides a generic tokenization interface: TokenizerI.
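Both approaches can be sketched with the standard re module (plain Python for illustration, not NLTK's tokenizer classes; the regular expression shown is only one possible choice):
>>> import re
>>> text = "I'll see you in New York."
>>> text.split()                        # "graphic words": split on whitespace
["I'll", 'see', 'you', 'in', 'New', 'York.']
>>> re.findall(r"\w+(?:'\w+)?", text)   # regex: word characters, optional clitic
["I'll", 'see', 'you', 'in', 'New', 'York']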

Slide 27: TokenizerI (slide by Diane Litman)
Defines a single method, tokenize, which takes a string and returns a list of tokens.
tokenize is independent of the level of tokenization and the implementation algorithm.
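To illustrate the idea of one interface with interchangeable implementations, here is a small sketch of what such classes might look like (illustrative only; these class names and the constructor are not claimed to match the actual 2004 NLTK API):
>>> import re
>>> class WhitespaceTokenizer:
...     # Tokenize by splitting on whitespace ("graphic words")
...     def tokenize(self, text):
...         return text.split()
...
>>> class RegexpTokenizer:
...     # Tokenize by matching a regular expression for valid words
...     def __init__(self, pattern):
...         self.pattern = pattern
...     def tokenize(self, text):
...         return re.findall(self.pattern, text)
...
>>> WhitespaceTokenizer().tokenize('my dog likes his dog')
['my', 'dog', 'likes', 'his', 'dog']
>>> RegexpTokenizer(r'\w+').tokenize('my dog likes his dog')
['my', 'dog', 'likes', 'his', 'dog']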

Slide 28: For Next Week
Monday: holiday, no class
Kevin is still installing the software
–I will send email with details when ready
–Probably by the end of today
Sign up for the email list!
–Mail to: majordomo@sims.berkeley.edu
–Put in msg body: subscribe anlp
For Wed Sept 8:
–Do exercises 1-3 in Tutorial 2 (Tokenizing)
–http://nltk.sourceforge.net/tutorial/introduction/nochunks.html

