CS 182 Sections 101 & 102: slides created by Eva Mok, modified by JGM, March 15, 2006.

Presentation transcript:

CS 182 Sections 101 & 102, slides created by Eva Mok, modified by JGM, March 15, 2006.

Announcements
– a5 is due Friday night at 11:59pm
– a6 is out tomorrow (2nd coding assignment), due the Monday after spring break
– Midterm solution will be posted (soon)

Quick Recap
This week:
– you just had the midterm
– a bit more motor control
– some belief nets and feature structures
Coming up:
– Bailey's model of learning hand-action words

Your Task: As far as the brain / thought / language is concerned, what is the single biggest mystery to you at this point?

Remember Recruitment Learning? One-shot learning: the idea is that for things like words or grammar, kids learn at least something from a single input. Granted, they might not get it completely right on the first shot, but over time their knowledge slowly converges to the right answer (i.e. they build a model that fits the data).

Model Merging
Goal: learn a model given data.
The model should:
– explain the data well
– be "simple"
– be able to make generalizations

Naïve way to make a model
– create a special case for each piece of data
– of course, this gets the training data completely right
– but it cannot generalize at all when test data comes
How to fix this? Model Merging: "compact" the special cases into more descriptive rules without losing too much performance.

Basic idea of Model Merging
Start with the naïve model: one special case for each piece of data.
While performance increases:
– create a more general rule that explains some of the data
– discard the corresponding special cases
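To make that loop concrete, here is a minimal sketch in Python. It is not the VerbLearn or a6 code; make_naive_model, propose_merges, and score are hypothetical helpers, and score is assumed to be higher for models that explain the data well while staying simple.

```python
# Minimal sketch of the generic model-merging loop described above.
# make_naive_model, propose_merges, and score are hypothetical helpers.

def model_merge(data, make_naive_model, propose_merges, score):
    model = make_naive_model(data)          # one special case per data point
    best = score(model, data)
    improved = True
    while improved:                         # keep going while performance increases
        improved = False
        for candidate in propose_merges(model):
            s = score(candidate, data)      # candidate replaces special cases
            if s > best:                    # with a more general rule
                model, best, improved = candidate, s, True
                break
    return model
```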

2 examples of Model Merging
Bailey's VerbLearn system:
– a model that maps actions to verb labels
– performance: complexity of model + ability to explain data → MAP
Assignment 6, Grammar Induction:
– a model that maps sentences to grammar rules
– performance: size of grammar + derivation length of sentences → cost

Grammar
A grammar is a set of rules that governs which sentences are legal in a language, e.g. regular grammars, context-free grammars.
Production rules in a grammar have the form α → β.
Terminal symbols: a, b, c, etc.
Non-terminal symbols: S, A, B, X, etc.
Different classes of grammar restrict where these symbols can go; we'll see an example on the next slide.

Right-Regular Grammar
A right-regular grammar is a further restricted class of regular grammar: non-terminal symbols only appear at the right end of a rule, e.g.:
S → a b c X
X → d e
X → f
Valid sentences would be "abcde" and "abcf".
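As an illustration only (not part of a6), the example grammar above can be written as a small Python structure and expanded to check which sentences it generates; the dictionary representation here is an assumption, not something from the assignment.

```python
# The example right-regular grammar above, written as a Python dictionary.
# Each non-terminal maps to a list of right-hand sides (tuples of symbols);
# lowercase strings are terminals, uppercase strings are non-terminals.

grammar = {
    "S": [("a", "b", "c", "X")],
    "X": [("d", "e"), ("f",)],
}

def expand(symbols, grammar):
    """Yield every terminal string derivable from a sequence of symbols."""
    if not symbols:
        yield ""
        return
    first, rest = symbols[0], symbols[1:]
    if first in grammar:                       # non-terminal: try each rule
        for rhs in grammar[first]:
            yield from expand(rhs + rest, grammar)
    else:                                      # terminal: keep it
        for tail in expand(rest, grammar):
            yield first + tail

print(sorted(expand(("S",), grammar)))         # ['abcde', 'abcf']
```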

Grammar Induction
As input data (e.g. "abcde", "abcf") comes in, we'd like to build up a grammar that explains the data.
We can certainly have one rule for each sentence we see in the data → the naïve approach, with no generalization.
We'd rather "compact" the grammar. In a6, you have two ways of doing this compaction:
– prefix merge
– suffix merge

How do we find the model?
Prefix merge:
S → a b c d e
S → a b c f
becomes
S → a b c X
X → d e
X → f
Suffix merge:
S → a b c d e
S → f c d e
becomes
S → a b X
S → f X
X → c d e
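A rough sketch of the two merge operations, assuming rules are (lhs, rhs) pairs with rhs a tuple of symbols and the caller supplying the fresh non-terminal name (e.g. "X"); this mirrors the slide's examples but is not the a6 implementation.

```python
# Sketch of the two merge operations from the slide above.
# Rules are (lhs, rhs) pairs; rhs is a tuple of symbols.

def prefix_merge(rule1, rule2, length, new_nt):
    (lhs1, rhs1), (lhs2, rhs2) = rule1, rule2
    assert lhs1 == lhs2 and rhs1[:length] == rhs2[:length]
    shared = rhs1[:length]
    return [
        (lhs1, shared + (new_nt,)),          # S -> a b c X
        (new_nt, rhs1[length:]),             # X -> d e
        (new_nt, rhs2[length:]),             # X -> f
    ]

def suffix_merge(rule1, rule2, length, new_nt):
    (lhs1, rhs1), (lhs2, rhs2) = rule1, rule2
    assert rhs1[-length:] == rhs2[-length:]
    shared = rhs1[-length:]
    return [
        (lhs1, rhs1[:-length] + (new_nt,)),  # S -> a b X
        (lhs2, rhs2[:-length] + (new_nt,)),  # S -> f X
        (new_nt, shared),                    # X -> c d e
    ]

# e.g. prefix_merge(("S", ("a","b","c","d","e")), ("S", ("a","b","c","f")), 3, "X")
```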

Contrived Example
Suppose you have these 3 grammar rules:
r1: S → eat them here or there
r2: S → eat them anywhere
r3: S → like them anywhere or here or there
There are 5 merging options:
– prefix merge (r1, r2, 1)
– prefix merge (r1, r2, 2)
– suffix merge (r1, r3, 1)
– suffix merge (r1, r3, 2)
– suffix merge (r1, r3, 3)
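One way to see where those 5 options come from: for a pair of rules, every length of shared prefix (same left-hand side required) and every length of shared suffix gives one option. The helper below is a hypothetical sketch, not part of a6.

```python
# Hypothetical helper that lists every merge option between two rules,
# mirroring the five options enumerated above. r1, r2 are (lhs, rhs) pairs.

def merge_options(r1, r2):
    (lhs1, rhs1), (lhs2, rhs2) = r1, r2
    options = []
    # prefix merges need the same left-hand side; one option per shared-prefix length
    if lhs1 == lhs2:
        k = 0
        while k < min(len(rhs1), len(rhs2)) and rhs1[k] == rhs2[k]:
            k += 1
            options.append(("prefix", k))
    # suffix merges: one option per shared-suffix length
    k = 0
    while k < min(len(rhs1), len(rhs2)) and rhs1[-1 - k] == rhs2[-1 - k]:
        k += 1
        options.append(("suffix", k))
    return options
```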

Computationally
Kids aren't presented all the data at once. Instead they'll hear these sentences one by one:
1. eat them here or there
2. eat them anywhere
3. like them anywhere or here or there
As each sentence (i.e. each piece of data) comes in, you create one rule for it, e.g. S → eat them here or there. Then you look for ways to merge as more sentences come in.

Example 1: just prefix merge
After the first two sentences are presented, we can already do a prefix merge of length 2:
r1: S → eat them here or there
r2: S → eat them anywhere
become
r3: S → eat them X1
r4: X1 → here or there
r5: X1 → anywhere

Example 2: just suffix merge
After the first three sentences are presented, we can do a suffix merge of length 3:
r1: S → eat them here or there
r2: S → eat them anywhere
r3: S → like them anywhere or here or there
r1 and r3 become
r4: S → eat them X2
r5: S → like them anywhere or X2
r6: X2 → here or there

Your Task in a6
– pull in sentences one by one
– monitor your sentences
– do either a prefix merge or a suffix merge as soon as it's "good" to do so
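Putting the pieces together, a hypothetical version of this loop might look like the sketch below; cost() and apply_best_merge() are assumed helpers (the cost function is spelled out on the next slide), and this is not the actual a6 skeleton.

```python
# Sketch of the incremental loop: add one rule per incoming sentence,
# then greedily apply prefix/suffix merges while they lower the cost.
# cost() and apply_best_merge() are hypothetical helpers.

def induce_grammar(sentences, cost, apply_best_merge):
    grammar, data = [], []
    for sentence in sentences:
        data.append(sentence)
        grammar.append(("S", tuple(sentence.split())))   # one rule per sentence
        while True:                                       # merge while it's "good"
            merged = apply_best_merge(grammar)
            if merged is None or cost(merged, data) >= cost(grammar, data):
                break
            grammar = merged
    return grammar
```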

How do we know if a model is good?
We want a small grammar, but we also want it to explain the data well.
Minimize the cost along the way:
c(G) = α · s(G) + d(G, D)
where s(G) is the size of the grammar, d(G, D) is the total derivation length of the sentences, and α is a learning factor to play with.
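A sketch of this cost in Python, using the same (lhs, rhs) rule representation as the earlier sketches; derivation_length is an assumed helper returning the number of rule applications needed to derive one sentence, and alpha corresponds to α above.

```python
# Sketch of the cost c(G) = alpha * s(G) + d(G, D).
# s(G): total number of right-hand-side symbols in the grammar.
# d(G, D): total number of rule applications needed to derive the data.

def grammar_size(grammar):
    return sum(len(rhs) for _, rhs in grammar)

def cost(grammar, data, alpha, derivation_length):
    s = grammar_size(grammar)
    d = sum(derivation_length(grammar, sentence) for sentence in data)
    return alpha * s + d
```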

Back to Example 2
Your original grammar:
r1: S → eat them here or there
r2: S → eat them anywhere
r3: S → like them anywhere or here or there
Remember your data is:
1. eat them here or there
2. eat them anywhere
3. like them anywhere or here or there
size of grammar: s(G) = 5 + 3 + 7 = 15
derivation length of sentences: d(G, D) = 1 + 1 + 1 = 3
c(G) = α · s(G) + d(G, D) = 15α + 3

Back to Example 2
Your new grammar:
r2: S → eat them anywhere
r4: S → eat them X2
r5: S → like them anywhere or X2
r6: X2 → here or there
Remember your data is:
1. eat them here or there
2. eat them anywhere
3. like them anywhere or here or there
size of grammar: s(G) = 3 + 3 + 5 + 3 = 14
derivation length of sentences: d(G, D) = 2 + 1 + 2 = 5
c(G) = α · s(G) + d(G, D) = 14α + 5
Merging only pays off when 14α + 5 < 15α + 3, i.e. when α > 2, so in fact you SHOULDN'T merge if α ≤ 2.
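A quick numeric check of that α = 2 threshold, using the size and derivation totals from the two slides above:

```python
# Compare the old cost (15*alpha + 3) with the new cost (14*alpha + 5)
# for a few values of alpha; merging only pays off once alpha > 2.
for alpha in (1, 2, 3, 4):
    old_cost = 15 * alpha + 3   # un-merged grammar
    new_cost = 14 * alpha + 5   # merged grammar
    print(alpha, old_cost, new_cost, "merge" if new_cost < old_cost else "don't merge")
```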