Model Classification Model by Barry Hennessy

Model Basis My model was based on the Prototype model: that previously seen examples are combined into a single example that characterises the set. It seemed more plausible than the exemplar model: in particular, it doesn't require a person to memorise everything they have ever seen. The exemplar model also seems to ignore the idea of chunking, which has always appealed to me and is a cornerstone of much of cognitive psychology.
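The prototype idea can be sketched in a few lines. This is an illustrative Python sketch, not the author's C# (the name build_prototype and the patient data are mine): previously seen examples are collapsed into per-dimension attribute counts that characterise the category.

```python
from collections import Counter

def build_prototype(training_items):
    """Summarise a category's training items as one 'average' example:
    per-dimension counts of how often each attribute value occurs."""
    dims = len(training_items[0])
    return [Counter(item[d] for item in training_items) for d in range(dims)]

# Three previously seen patients, each described on two dimensions
patients = [("fever", "rash"), ("fever", "cough"), ("fever", "rash")]
proto = build_prototype(patients)
print(proto[0].most_common(1)[0][0])  # -> fever (the characteristic value)
```

The individual patients can then be discarded; only the summary counts are kept, which is why this model needs far less memory than an exemplar model.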

That being said, prototype theory doesn't seem completely plausible either: there is simply too much variation in the things that make up categories for there to be simple, strict divisions between them, although this is somewhat alleviated by conjunctive classification. So I chose the Prototype model as the foundation of my model.

Irregular Conjunction I chose not to base my model's conjunction on any of the given mathematical methods (average, multiplicative, etc.), as none of these seemed particularly plausible.

Conjunction I chose instead to base my conjunction on the differences between category weights. If a category's weight is within a certain range (the confidence parameter) of the maximum, then the item is classified as a conjunction. Basis: if an item is definitely of one category and probably of another, then it is more accurate, and safer, to classify it as both. I also felt that one possible classification shouldn't impinge on the possibilities of others.
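As a hedged sketch of this rule (in Python rather than the author's C#; the function name, weights, and the 0.15 default are illustrative): any category whose weight falls within the confidence range of the maximum is joined into a conjunction.

```python
def classify(weights_by_cat, confidence=0.15):
    """Return the best category, or a conjunction 'X&Y' when another
    category's weight is within `confidence` of the maximum."""
    main_cat = max(weights_by_cat, key=weights_by_cat.get)
    max_weight = weights_by_cat[main_cat]
    category = main_cat
    for cat, weight in weights_by_cat.items():
        # A close runner-up is not discarded but joined as a conjunction
        if cat != main_cat and abs(max_weight - weight) < confidence:
            category = main_cat + "&" + cat
    return category

print(classify({"A": 0.9, "B": 0.2, "C": 0.8}))  # -> A&C
print(classify({"A": 0.9, "B": 0.2, "C": 0.5}))  # -> A
```

Note that one close competitor does not reduce the winner's claim to the item; both categories are simply reported.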

Performance My model classified everything perfectly! (confidence parameter: 0.15)

Output            Expected
category A        category A
category C: A&C   category C
category B:       category B
category B:       category B

but

Problem My model didn't output any weights for the conjunction, so this leaves me with only 5 datapoints to test against. So despite the fact that my model classifies everything correctly, I can't really say how good it is without more test data.

Implementation I implemented my model in C#. It was very object-oriented and quite (over) complicated.

Conjunction

//Regular prototype happens up here; mainCategory and mainCatWeight
//hold the best-scoring category and its weight
foreach (string thisCategory in allPatients.listCategories())
{
    if (mainCategory != thisCategory)
    {
        //test the categories' proximity; testCatWeight is this
        //category's weight from diagnoseReturningWeights
        if (Math.Abs(mainCatWeight - testCatWeight) < confusionParam)
        {
            //set category to conjunction
            category = mainCategory + "&" + thisCategory;
        }
    }
}

Prototype

//find weights for all categories from diagnoseReturningWeights
Dictionary<string, double> weightsByCat = diagnoseReturningWeights(pat);

//go through each category and select the maximum
foreach (string thisCategory in allPatients.listCategories())
{
    double thisCategoryWeight = weightsByCat[thisCategory];
    if (thisCategoryWeight > max)
    {
        max = thisCategoryWeight;
        mainCategory = thisCategory;
    }
}

diagnoseReturningWeights basically just runs categoryWeight() for each category. categoryWeight() basically just adds up the weights of the patient's attributes on each dimension for the category in question; see getWeightByDimensionAndAttrAndCategory().

getWeightByDimensionAndAttrAndCategory()

//these functions get the number of times all attributes occur in the
//training set
attrCount = allPatients.countAttributesByDimensionAndCategory(dimension, category);
allAttrCount = allPatients.countAttributesByDimension(dimension);

//these ints are the actual counts for the attribute we want
int attributeCount = attrCount.GetValue(attribute);
int allAttributeCount = allAttrCount.GetValue(attribute);

//this is the weight for the category on the dimension
//(cast to double so integer division doesn't truncate the fraction)
weight = (double)attributeCount / allAttributeCount;
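Putting the last two pieces together, here is a hedged Python sketch (not the author's C#; the function name and the counts structure are illustrative) of how the per-attribute weights combine into a category weight: for each dimension, take the fraction of that attribute's occurrences that fall in the category, and sum across dimensions.

```python
def category_weight(patient, category, counts):
    """counts[dim][attr][cat] = times attr was seen on dim in category cat.
    Sums, over the patient's dimensions, the fraction of each attribute's
    occurrences that belong to the category."""
    total = 0.0
    for dim, attr in enumerate(patient):
        by_cat = counts[dim][attr]
        total += by_cat.get(category, 0) / sum(by_cat.values())
    return total

counts = [
    {"fever": {"A": 2, "B": 1}},   # dimension 0: symptom counts per category
    {"rash":  {"A": 1, "B": 3}},   # dimension 1
]
print(category_weight(("fever", "rash"), "B", counts))  # -> 1/3 + 3/4
```

The floating-point division here is the same fix as the (double) cast above: with integer division the fractions would silently truncate to 0 or 1.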