Instance Based Learning

Instance Based Learning (based on Raymond J. Mooney’s slides, University of Texas at Austin)

Instance-Based Learning
Unlike other learning algorithms, instance-based learning does not construct an explicit abstract generalization; instead, it classifies new instances by direct comparison and similarity to known training instances. Training can be very easy, amounting to little more than memorizing the training instances, while testing can be very expensive, requiring detailed comparison to all stored training instances. Also known as case-based, exemplar-based, nearest-neighbor, memory-based, or lazy learning.

Similarity/Distance Metrics
Instance-based methods assume a function for determining the similarity or distance between any two instances. For continuous feature vectors, Euclidean distance is the generic choice:

d(x_i, x_j) = \sqrt{\sum_{p=1}^{n} \left(a_p(x_i) - a_p(x_j)\right)^2}

where a_p(x) is the value of the p-th feature of instance x. For discrete features, assume the distance between two values is 0 if they are the same and 1 if they are different (e.g. Hamming distance for bit vectors). To compensate for differences in units across features, scale all continuous values to the interval [0, 1].
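To make this concrete, here is a minimal sketch in Python/NumPy of the scaled Euclidean distance described above; the array `X` and the helper names are illustrative, not part of the original slides.

```python
import numpy as np

def min_max_scale(X):
    """Rescale each continuous feature column of X to the interval [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant features
    return (X - lo) / span

def euclidean_distance(x_i, x_j):
    """d(x_i, x_j) = sqrt(sum_p (a_p(x_i) - a_p(x_j))^2)."""
    return np.sqrt(np.sum((x_i - x_j) ** 2))

# Toy example: three instances with three continuous features on very different scales.
X = np.array([[1.0, 200.0, 0.5],
              [3.0, 100.0, 0.9],
              [2.0, 150.0, 0.1]])
X_scaled = min_max_scale(X)                  # compensates for unit differences
print(euclidean_distance(X_scaled[0], X_scaled[1]))
```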

Other Distance Metrics
Mahalanobis distance: a scale-invariant metric that normalizes for variance.
Cosine similarity: the cosine of the angle between the two vectors; used for text and other high-dimensional data.
Pearson correlation: the standard statistical correlation coefficient; used for bioinformatics data.
Edit distance: measures the distance between strings of unbounded length; used for text and bioinformatics.
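As a rough illustration, two of these metrics can be sketched in NumPy as follows; the random data set `X` is a placeholder, and the Mahalanobis sketch assumes the sample covariance matrix is invertible.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def mahalanobis_distance(u, v, X):
    """Distance between u and v, normalized by the covariance of the data set X."""
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = u - v
    return np.sqrt(diff @ cov_inv @ diff)

# Toy data set: 50 instances with 3 continuous features.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
u, v = X[0], X[1]
print(cosine_similarity(u, v))
print(mahalanobis_distance(u, v, X))
```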

K-Nearest Neighbor
Calculate the distance between a test point and every training instance. Pick the k closest training examples and assign the test instance to the most common category among these nearest neighbors. Voting over multiple neighbors helps decrease susceptibility to noise. Usually an odd value of k is used to avoid ties.
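A minimal sketch of this voting procedure in plain NumPy follows; the training arrays and the test point are made-up toy data.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_test, k=5):
    """Assign x_test the most common label among its k nearest training points."""
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))  # Euclidean distances
    nearest = np.argsort(dists)[:k]                          # indices of k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy example: two classes in 2-D.
X_train = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])
print(knn_classify(X_train, y_train, np.array([0.5, 0.5]), k=3))  # -> "A"
```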

5-Nearest Neighbor Example

Implicit Classification Function
Although it is not necessary to calculate it explicitly, the learned classification rule is based on the regions of the feature space closest to each training example. For 1-nearest neighbor with Euclidean distance, the Voronoi diagram gives the complex polyhedra that segment the space into the regions closest to each point.
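For illustration only, SciPy's `scipy.spatial.Voronoi` can compute these regions for a small set of training points; the points below are arbitrary placeholders.

```python
import numpy as np
from scipy.spatial import Voronoi

# Each training point "owns" one Voronoi cell: for 1-NN with Euclidean distance,
# any test point falling inside that cell is assigned that training point's class.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0],
                   [2.0, 1.5], [2.5, 0.2], [1.5, 2.0]])
vor = Voronoi(points)
print(vor.vertices)   # corners of the polygonal regions
print(vor.regions)    # indices of the vertices bounding each region
```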

Efficient Indexing
Linear search to find the nearest neighbors is not efficient for large training sets. Indexing structures can be built to speed up testing. For Euclidean distance, a kd-tree can be built that reduces the expected time to find the nearest neighbor to O(log n) in the number of training examples. Nodes branch on threshold tests on individual features, and leaves terminate at nearest neighbors. Other indexing structures are possible for other metrics or for string data, e.g. an inverted index for text retrieval.

kd-tree
The kd-tree is a binary tree in which every node is a k-dimensional point. Every non-leaf node generates a splitting hyperplane that divides the space into two subspaces. Points on the left of the hyperplane are represented by the left subtree of that node, and points on the right of the hyperplane by the right subtree. The hyperplane direction is chosen as follows: every node that splits into subtrees is associated with one of the k dimensions, such that the hyperplane is perpendicular to that dimension's axis.
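A brief sketch of using a kd-tree index for nearest-neighbor queries via SciPy's `scipy.spatial.KDTree`; the data here are random placeholders.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))          # 1000 training points in 4-D

tree = KDTree(X_train)                   # build the kd-tree index once
x_test = rng.random(4)

# Query the 5 nearest neighbors; expected cost is roughly O(log n) per query.
distances, indices = tree.query(x_test, k=5)
print(indices, distances)
```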

Nearest Neighbor Variations Can be used to estimate the value of a real-valued function (regression) by taking the average function value of the k nearest neighbors to an input point. All training examples can be used to help classify a test instance by giving every training example a vote that is weighted by the inverse square of its distance from the test instance.
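Both variations can be sketched briefly in NumPy; the function and array names are illustrative assumptions, not code from the slides.

```python
import numpy as np

def knn_regress(X_train, y_train, x_test, k=5):
    """Predict a real value as the mean target value of the k nearest neighbors."""
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

def distance_weighted_classify(X_train, y_train, x_test, eps=1e-12):
    """Every training example votes for its class with weight 1 / d^2."""
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    weights = 1.0 / (dists ** 2 + eps)        # eps guards against zero distance
    classes = np.unique(y_train)
    scores = [weights[y_train == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```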

Feature Relevance and Weighting
Standard distance metrics weight each feature equally when determining similarity. This is problematic if many features are irrelevant, since similarity along many irrelevant dimensions can mislead the classification. Features can be weighted by some measure that indicates their ability to discriminate the category of an example, such as information gain. Overall, instance-based methods favor global similarity over concept simplicity.
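One hedged way to realize this idea is to weight each feature by an estimated relevance score, here mutual information with the class label from scikit-learn as a stand-in for the information-gain measure mentioned above; the data and names are illustrative only.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def weighted_euclidean(x_i, x_j, w):
    """Euclidean distance with a per-feature relevance weight w_p."""
    return np.sqrt(np.sum(w * (x_i - x_j) ** 2))

# Toy data: feature 0 tracks the class label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([y + 0.1 * rng.standard_normal(200),
                     rng.standard_normal(200)])
w = mutual_info_classif(X, y)      # larger weight for the informative feature
print(w)
print(weighted_euclidean(X[0], X[1], w))
```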

Rules and Instances in Human Learning Biases
Psychological experiments show that people from different cultures exhibit distinct categorization biases. “Western” subjects favor simple rules (straight stem) and classify the target object in group 2, while “Asian” subjects favor global similarity and classify the target object in group 1.

Other Issues
Storage of training instances can be reduced to a small set of representative examples; support vectors in an SVM are somewhat analogous. Instance-based methods can be hybridized with rule-based or neural-network methods; radial basis functions in neural nets and Gaussian kernels in SVMs are similar in spirit. They can be applied to more complex relational or graph data, although similarity computation then becomes expensive since it involves some form of graph isomorphism. They can also be used in problems other than classification, such as case-based planning and case-based reasoning in law and business.

Conclusions
IBL methods classify test instances based on similarity to specific training instances rather than forming explicit generalizations. They typically trade decreased training time for increased testing time.