
Textual Entailment Using Univariate Density Model and Maximizing Discriminant Function “Third Recognizing Textual Entailment Challenge 2007 Submission”


1 Textual Entailment Using Univariate Density Model and Maximizing Discriminant Function “Third Recognizing Textual Entailment Challenge 2007 Submission” Scott Settembre University at Buffalo, SNePS Research Group ss424@cse.buffalo.edu

2 Third Recognizing Textual Entailment Challenge (RTE3) The task is to develop a system that determines whether, in a given pair of sentences, the first sentence “entails” the second The pair of sentences is called the Text-Hypothesis pair (or T-H pair) Participants are provided with 800 sample T-H pairs annotated with the correct entailment answers The final testing set consists of 800 non-annotated samples

3 Development set examples Example of a YES result Text: “As much as 200 mm of rain have been recorded in portions of British Columbia, on the west coast of Canada, since Monday.” Hypothesis: “British Columbia is located in Canada.” Example of a NO result Text: “Blue Mountain Lumber is a subsidiary of Malaysian forestry transnational corporation, Ernslaw One.” Hypothesis: “Blue Mountain Lumber owns Ernslaw One.”

4 Entailment Task Types There are 4 different entailment tasks: –“IE” or Information Extraction Text: “An Afghan interpreter, employed by the United States, was also wounded.” Hypothesis: “An interpreter worked for Afghanistan.” –“IR” or Information Retrieval Text: “Catastrophic floods in Europe endanger lives and cause human tragedy as well as heavy economic losses” Hypothesis: “Flooding in Europe causes major economic losses.”

5 Entailment Task Types - continued The two remaining entailment tasks are: –“SUM” or Multi-document summarization Text: “Sheriff's officials said a robot could be put to use in Ventura County, where the bomb squad has responded to more than 40 calls this year.” Hypothesis: “Police use robots for bomb-handling.” –“QA” or Question Answering Text: “Israel's prime Minister, Ariel Sharon, visited Prague.” Hypothesis: “Ariel Sharon is the Israeli Prime Minister.”

6 Submission Results The two runs submitted this year (2007) scored: –62.62% (501 correct out of 800) –61.00% (488 correct out of 800) In the 2nd RTE Challenge (2006), a score of 62.62% would have tied for 4th out of 23 teams. –Top scores were 75%, 73%, 64%, and 62.62%. –Median: 58.3% –Range: 50.88% to 75.38%.

7 Main Focuses Create a process to pool expertise of our research group in addressing entailment –Development of specification for metrics –Import of metric vectors generated from other programs Design a visual environment to manage this process and manage development data set –Ability to select metric vectors and classifier to use –Randomization of off-training sets to prevent overfitting Provide a baseline to evaluate and compare different metrics and classification strategies

8 Development Environment RTE Development Environment –Display and examine the development data set

9 Development Environment - continued –Select off-training set from development data

10 Development Environment - continued –Select metric to use for classification

11 Metrics Metric specification –Continuous value, normalized between 0 and 1 (inclusive) Allows future use of nearest-neighbor classification techniques Prevents scaling issues –Preferably in a Gaussian distribution (bell curve) Metrics developed for our submission –Lexical similarity ratio (word bag) –Average matched-word displacement –Lexical similarity with synonym and antonym replacement

12 Metric - example Lexical similarity ratio (word bag ratio) –# of matches between text and hypothesis / # of words in hypothesis Works for: A bus collision with a truck in Uganda has resulted in at least 30 fatalities and has left a further 21 injured. 30 die in a bus collision in Uganda. Wordbag ratio = 7 / 8 Fails for: Blue Mountain Lumber is a subsidiary of Malaysian forestry transnational corporation, Ernslaw One. Blue Mountain Lumber owns Ernslaw One. Wordbag ratio = 5 / 6 –Weakness: does not consider semantic information
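The word-bag ratio described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual code; the function name and tokenization choices are assumptions:

```python
import re

def wordbag_ratio(text, hypothesis):
    """Lexical similarity: matched hypothesis words / total hypothesis words."""
    # Lowercase word bags; the text is a set, the hypothesis keeps duplicates
    text_words = set(re.findall(r"\w+", text.lower()))
    hyp_words = re.findall(r"\w+", hypothesis.lower())
    # Count each hypothesis token that also appears anywhere in the text
    matches = sum(1 for w in hyp_words if w in text_words)
    return matches / len(hyp_words)

text = ("A bus collision with a truck in Uganda has resulted in at least "
        "30 fatalities and has left a further 21 injured.")
hyp = "30 die in a bus collision in Uganda."
print(wordbag_ratio(text, hyp))  # 7 of 8 hypothesis words match: 0.875
```

Run on the failing example from the slide, the same function returns 5/6, illustrating why a purely lexical metric cannot distinguish “is a subsidiary of” from “owns”.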

13 Development Environment - continued –Classify testing data using Univariate normal model

14 Classifiers Two classification techniques were used –Univariate normal model (Gaussian density) –Linear discriminant function Univariate normal model –One classifier for each entailment type and value –8 classifiers are developed –Results from the “YES” and “NO” classifiers are compared Linear discriminant function –One classifier for each entailment type –4 classifiers are developed –Result based on which side of the boundary the metric is on

15 Classifiers - Univariate Each curve represents a probability density function –Calculated from the mean and variance of the “YES” and “NO” metrics from the training set To evaluate, calculate a metric’s position on either curve –Use the Gaussian density function –Classify to the category with the largest p(x) [Figure: two Gaussian density curves p(x) over metric value x, one for the “No” category and one for the “Yes” category]
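A minimal sketch of this univariate Gaussian decision rule in Python. The function names and the example means and variances are illustrative assumptions, not values from the submission:

```python
import math

def gaussian_pdf(x, mean, var):
    # Univariate normal density p(x) for the given mean and variance
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x, yes_stats, no_stats):
    # yes_stats / no_stats: (mean, variance) of the metric over the
    # "YES" and "NO" training examples; pick the larger density
    p_yes = gaussian_pdf(x, *yes_stats)
    p_no = gaussian_pdf(x, *no_stats)
    return "YES" if p_yes >= p_no else "NO"

# Illustrative statistics: YES pairs cluster near 0.8, NO pairs near 0.4
print(classify(0.9, (0.8, 0.01), (0.4, 0.04)))  # YES
```

With one classifier per entailment type and value, eight such (mean, variance) pairs would be estimated from the training set, as the slide describes.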

16 Classifiers - Simple Linear Discriminant Find a boundary that maximizes the result –Very simple for a single metric –A brute-force search can be used for a good approximation [Figure: metric values along the x-axis with a decision boundary]
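The brute-force boundary search for a single metric could look like the following sketch. The names and the grid resolution are assumptions; the slides do not describe the actual implementation:

```python
def best_boundary(values, labels, steps=1000):
    # Scan candidate boundaries on a grid over [0, 1]; classify "YES"
    # when the metric value exceeds the boundary, keep the best accuracy.
    best_t, best_acc = 0.0, -1.0
    for i in range(steps + 1):
        t = i / steps
        correct = sum((v > t) == (lab == "YES") for v, lab in zip(values, labels))
        acc = correct / len(values)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy training metrics: low values for NO pairs, high values for YES pairs
t, acc = best_boundary([0.2, 0.3, 0.8, 0.9], ["NO", "NO", "YES", "YES"])
print(t, acc)  # a boundary between the classes, accuracy 1.0
```

Because the metric is one-dimensional and bounded in [0, 1], an exhaustive grid scan is cheap and gets arbitrarily close to the optimal boundary.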

17 Classifiers - Weaknesses Univariate normal weakness –Useless when there is a high overlap of metric values for each category (when the means are very close) –Or when metrics are not distributed on a Gaussian “bell” curve Simple linear discriminant weaknesses –Processes only one metric of the training vector –Places a constraint on metric values (0 for no entailment, 1 for maximum entailment) [Figures: overlapping distributions; a non-Gaussian distribution]

18 Development Environment - continued –Examine results and compare various metrics

19 Results Combined each classification technique with each metric –Based on training results, the classifier/metric combination was selected for use in the challenge submission

Training results:

          Wordbag +    Syn/Anto +   Wordbag +    Syn/Anto +
          Univariate   Univariate   Linear Dis   Linear Dis
Overall   0.669        0.675        0.717        0.644
IE        0.425        0.575        0.625        0.600
IR        0.688        0.667        0.688        0.646
QA        0.811        0.784        0.811        0.784
SUM       0.771        0.686        0.775        0.543

Final results from competition set:

          Wordbag +    Syn/Anto +   Wordbag +    Syn/Anto +
          Univariate   Univariate   Linear Dis   Linear Dis
Overall   0.615        0.626        0.610        0.629
IE        0.475        0.510        0.495        0.505
IR        0.665        0.630        0.635        0.640
QA        0.735        0.750
SUM       0.585        0.615        0.560        0.620

20 Future Enhancements Use of a multivariate model to process the metric vector –Ability to use more than one metric at a time to classify Add more metrics that consider semantics –Examination of incorrect answers shows that a modest effort to process semantic information would yield better results –Current metrics only use lexical similarity Increase the tool’s ability to interface in other ways –Currently we can process metrics from Matlab, COM and .NET objects, and pre-processed metric vector files

21 RTE Challenge - Final Notes See our progress at: http://www.cse.buffalo.edu/~ss424/rte3_challenge.html RTE Web Site: http://www.pascal-network.org/Challenges/RTE3/ Textual Entailment resource pool: http://aclweb.org/aclwiki/index.php?title=Textual_Entailment_Resource_Pool Actual ranking released in June 2007 at: http://www.pascal-network.org/Challenges/RTE3/Results/ April 13, 2007 | CSEGSA Conference | Scott Settembre

