Linear Classifiers (Perceptrons)
- Inputs are feature values
- Each feature has a weight
- The sum is the activation
- If the activation is:
  - Positive: output +1
  - Negative: output -1
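A minimal sketch of this decision rule, assuming weights and features are dicts mapping feature names to values (an illustrative representation, not from the slides):

```python
def classify_binary(w, f):
    """Binary linear classifier: return the sign of the activation w . f."""
    activation = sum(w.get(feat, 0.0) * value for feat, value in f.items())
    return +1 if activation >= 0 else -1   # ties at exactly 0 broken toward +1 here
```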
Classification: Weights
- Binary case: compare features to a weight vector
- Learning: figure out the weight vector from examples
Binary Decision Rule
- In the space of feature vectors:
  - Examples are points
  - Any weight vector is a hyperplane
  - One side corresponds to Y = +1
  - The other corresponds to Y = -1
Learning: Binary Perceptron
- Start with weights = 0
- For each training instance:
  - Classify with current weights
  - If correct (i.e., y = y*), no change!
  - If wrong: adjust the weight vector by adding or subtracting the feature vector. Subtract if y* is -1.
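A minimal sketch of this loop, assuming features and weights are dicts mapping feature names to values (a representation choice, not from the slides):

```python
def train_binary_perceptron(examples, num_passes=5):
    """examples: list of (f, y_star) pairs with y_star in {+1, -1}."""
    w = {}
    for _ in range(num_passes):
        for f, y_star in examples:
            activation = sum(w.get(k, 0.0) * v for k, v in f.items())
            y = +1 if activation >= 0 else -1
            if y != y_star:                                   # mistake-driven: update only on errors
                for k, v in f.items():
                    w[k] = w.get(k, 0.0) + y_star * v         # add f if y* = +1, subtract if y* = -1
    return w
```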
Multiclass Decision Rule
- If we have multiple classes:
  - A weight vector w_y for each class y
  - Score (activation) of a class y: w_y · f(x)
  - Prediction: the highest score wins, y = argmax_y w_y · f(x)
- Binary = multiclass where the negative class has weight zero
Learning: Multiclass Perceptron
- Start with all weights = 0
- Pick up training examples one by one
- Predict with current weights
- If correct, no change!
- If wrong: lower the score of the wrong answer, raise the score of the right answer
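A minimal sketch of this loop, assuming per-class weight dicts (helper names like dot are illustrative, not from the slides):

```python
def dot(w, f):
    """Dot product of a weight dict and a feature dict."""
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def train_multiclass_perceptron(examples, classes, num_passes=5):
    """examples: list of (f, y_star); classes: list of possible labels."""
    weights = {c: {} for c in classes}
    for _ in range(num_passes):
        for f, y_star in examples:
            y = max(classes, key=lambda c: dot(weights[c], f))      # highest score wins
            if y != y_star:
                for k, v in f.items():
                    weights[y_star][k] = weights[y_star].get(k, 0.0) + v   # raise right answer
                    weights[y][k] = weights[y].get(k, 0.0) - v             # lower wrong answer
    return weights
```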
Today
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)
Properties of Perceptrons
- Separability: some setting of the parameters gets the training set perfectly correct
- Convergence: if the training data are separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability
Problems with the Perceptron
- Noise: if the data isn't separable, the weights might thrash
  - Averaging weight vectors over time can help (averaged perceptron)
- Mediocre generalization: finds a "barely" separating solution
- Overtraining: test / held-out accuracy usually rises, then falls
  - Overtraining is a kind of overfitting
Fixing the Perceptron
- Idea: adjust the weight update to mitigate these effects
- MIRA*: choose an update size that fixes the current mistake...
  - ...but minimizes the change to w
  - The +1 in the margin constraint helps to generalize
* Margin Infused Relaxed Algorithm
Minimum Correcting Update
- Choose the smallest change to the weights that corrects the mistake:
  minimize ||w' - w||^2 subject to w'_{y*} · f(x) >= w'_y · f(x) + 1
- The update adds τ f(x) to w_{y*} and subtracts τ f(x) from w_y
- The minimum is not at τ = 0, or we would not have made an error, so the minimum is where the constraint holds with equality:
  τ = ((w_y - w_{y*}) · f(x) + 1) / (2 f(x) · f(x))
Maximum Step Size
- In practice, it's also bad to make updates that are too large
  - The example may be labeled incorrectly
  - You may not have enough features
- Solution: cap the maximum possible value of τ with some constant C
  - Corresponds to an optimization that assumes non-separable data
  - Usually converges faster than the perceptron
  - Usually better, especially on noisy data
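A minimal sketch of one capped MIRA update under the same dict-based representation (it reuses the dot helper from the multiclass sketch above; the default value of C is illustrative, not from the slides):

```python
def mira_update(weights, f, y_star, y_pred, C=0.01):
    """Apply the minimum correcting update, capped at C, after a mistake."""
    if y_pred == y_star:
        return                                   # correct prediction: no change
    f_dot_f = sum(v * v for v in f.values())
    if f_dot_f == 0:
        return                                   # empty feature vector: nothing to do
    # tau from the equality case of the margin constraint, then capped at C
    tau = (dot(weights[y_pred], f) - dot(weights[y_star], f) + 1.0) / (2.0 * f_dot_f)
    tau = min(tau, C)
    for k, v in f.items():
        weights[y_star][k] = weights[y_star].get(k, 0.0) + tau * v
        weights[y_pred][k] = weights[y_pred].get(k, 0.0) - tau * v
```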
Outline
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)
Linear Separators
- Which of these linear separators is optimal?
Support Vector Machines
- Maximizing the margin: good according to intuition, theory, and practice
- Only the support vectors matter; other training examples are ignorable
- Support vector machines (SVMs) find the separator with the maximum margin
- Basically, SVMs are MIRA where you optimize over all examples at once
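As a practical aside (not from the slides), scikit-learn's SVC can fit a max-margin linear separator on a toy dataset:

```python
from sklearn.svm import SVC

X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]]   # toy 2-D points
y = [-1, -1, +1, +1]                                    # binary labels

clf = SVC(kernel="linear", C=1.0)    # C limits the influence of any one example, as in MIRA
clf.fit(X, y)
print(clf.support_vectors_)          # only these points determine the separator
print(clf.predict([[2.5, 2.0]]))
```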
Classification: Comparison
- Naive Bayes:
  - Builds a model of the training data
  - Gives prediction probabilities
  - Strong assumptions about feature independence
  - One pass through the data (counting)
- Perceptrons / MIRA:
  - Make fewer assumptions about the data
  - Mistake-driven learning
  - Multiple passes through the data (prediction)
  - Often more accurate
Extension: Web Search
- Information retrieval: given information needs, produce information
  - Includes, e.g., web search, question answering, and classic IR
- Web search: not exactly classification, but rather ranking
  - Example query: x = "Apple Computers"
Pacman Apprenticeship!
- Examples are states s
- Candidates are pairs (s, a)
- "Correct" actions: those taken by the expert
- Features defined over (s, a) pairs: f(s, a)
- Score of a q-state (s, a) given by: w · f(s, a)
- How is this VERY different from reinforcement learning?
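A hedged sketch of a perceptron-style apprenticeship update, assuming a hypothetical features(s, a) function and reusing the dot helper from earlier (these names are illustrative, not from the slides):

```python
def apprenticeship_update(w, s, expert_action, legal_actions, features):
    """One perceptron update toward the expert's action in state s."""
    scores = {a: dot(w, features(s, a)) for a in legal_actions}
    predicted = max(scores, key=scores.get)
    if predicted != expert_action:                   # only update on a mistake
        for k, v in features(s, expert_action).items():
            w[k] = w.get(k, 0.0) + v                 # raise the expert action's score
        for k, v in features(s, predicted).items():
            w[k] = w.get(k, 0.0) - v                 # lower the predicted action's score
    return w
```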
Outline
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)
Case-Based Reasoning
- Similarity for classification
- Case-based reasoning: predict an instance's label using similar instances
- Nearest-neighbor classification:
  - 1-NN: copy the label of the most similar data point
  - k-NN: let the k nearest neighbors vote (have to devise a weighting scheme)
- Key issue: how to define similarity
- Trade-off:
  - Small k gives relevant neighbors
  - Large k gives smoother functions
(Figure: generated data; 1-NN)
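A minimal k-NN sketch using a plain dot-product similarity as the default (one simple choice among the similarities the slides discuss):

```python
from collections import Counter

def knn_classify(query, examples, k=3, similarity=None):
    """examples: list of (vector, label) pairs; vectors are lists of numbers."""
    if similarity is None:
        similarity = lambda a, b: sum(x * y for x, y in zip(a, b))
    # keep the k most similar training points, then let them vote
    neighbors = sorted(examples, key=lambda ex: similarity(query, ex[0]), reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```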
Parametric / Non-Parametric
- Parametric models:
  - Fixed set of parameters
  - More data means better settings
- Non-parametric models:
  - Complexity of the classifier increases with the data
  - Better in the limit, often worse in the non-limit
- (k)NN is non-parametric
Nearest-Neighbor Classification
- Nearest neighbor for digits:
  - Take a new image
  - Compare to all training images
  - Assign based on the closest example
- Encoding: an image is a vector of pixel intensities
- What's the similarity function?
  - Dot product of two image vectors?
  - Usually normalize vectors so ||x|| = 1
  - min = 0 (when?), max = 1 (when?)
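A minimal sketch of the normalized dot-product similarity (names are illustrative); with nonnegative intensities it ranges from 0 (no overlapping "on" pixels) to 1 (vectors pointing in the same direction):

```python
import math

def cosine_similarity(x, y):
    """Dot product of x and y after normalizing both to unit length."""
    dot_xy = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    if norm_x == 0 or norm_y == 0:
        return 0.0
    return dot_xy / (norm_x * norm_y)
```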
Basic Similarity
- Many similarities are based on feature dot products: K(x, x') = f(x) · f(x') = Σ_i f_i(x) f_i(x')
- If the features are just the pixels: K(x, x') = x · x' = Σ_i x_i x'_i
- Note: not all similarities are of this form
Invariant Metrics
- Better distances use knowledge about vision
- Invariant metrics: similarities are invariant under certain transformations
  - Rotation, scaling, translation, stroke thickness, ...
- E.g.: a 16x16 image is 256 pixels, a point in 256-dimensional space
  - Small similarity in R^256 (why?)
- How to incorporate invariance into similarities?
(This and the next few slides adapted from Xiao Hu, UIUC)
Invariant Metrics
- Each example is now a curve in R^256
- Rotation-invariant similarity: s' = max s(r(x), r(x')), maximizing over rotations r of each image
- E.g., the highest similarity between the images' rotation lines
Invariant Metrics
- Problems with s':
  - Hard to compute
  - Allows large transformations (e.g., 6 → 9)
- Tangent distance:
  - 1st-order approximation at the original points
  - Easy to compute
  - Models small rotations
Template Deformation
- Deformable templates:
  - An "ideal" version of each category
  - Best-fit to the image using min variance
  - Cost for high distortion of the template
  - Cost for image points being far from the distorted template
- Used in many commercial digit recognizers
- Examples from [Hastie 94]
A Tale of Two Approaches...
- Nearest-neighbor-like approaches:
  - Can use fancy similarity functions
  - Don't actually get to do explicit learning
- Perceptron-like approaches:
  - Explicit training to reduce empirical error
  - Can't use fancy similarity, only linear
  - Or can they? Let's find out!
Perceptron Weights
- What is the final value of a weight vector w_y of a perceptron?
- Can it be any real vector?
  - No! It's built by adding up inputs
- Can reconstruct the weight vectors (the primal representation) from the update counts (the dual representation):
  w_y = Σ_i α_{i,y} f(x_i)
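One way to sketch that reconstruction in code, assuming alpha[i][y] stores the (signed) update count of example i for class y (names are illustrative, not from the slides):

```python
def reconstruct_weights(alpha, features_list, classes):
    """features_list[i] is the feature dict f(x_i); returns per-class weight dicts."""
    weights = {c: {} for c in classes}
    for i, f in enumerate(features_list):
        for c in classes:
            a = alpha[i].get(c, 0.0)
            if a:
                for k, v in f.items():
                    weights[c][k] = weights[c].get(k, 0.0) + a * v   # w_y = sum_i alpha_{i,y} f(x_i)
    return weights
```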
Dual Perceptron
- How to classify a new example x?
  score(y, x) = Σ_i α_{i,y} f(x_i) · f(x) = Σ_i α_{i,y} K(x_i, x)
- If someone tells us the value of K for each pair of examples, we never need to build the weight vectors!
Dual Perceptron
- Start with zero counts (alpha)
- Pick up training instances one by one
- Try to classify x_n
- If correct, no change!
- If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance)
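A minimal sketch of this dual training loop, assuming the examples, their labels, and a kernel function K are given (names are illustrative, not from the slides):

```python
def train_dual_perceptron(xs, ys, classes, K, num_passes=5):
    """xs: list of examples; ys: their labels; K: kernel function on raw examples."""
    n = len(xs)
    alpha = [{c: 0.0 for c in classes} for _ in range(n)]
    for _ in range(num_passes):
        for j in range(n):
            # score each class from counts and kernel values, never building explicit weights
            scores = {c: sum(alpha[i][c] * K(xs[i], xs[j]) for i in range(n)) for c in classes}
            y = max(scores, key=scores.get)
            if y != ys[j]:
                alpha[j][ys[j]] += 1.0   # raise count of the right class for this instance
                alpha[j][y] -= 1.0       # lower count of the wrong class for this instance
    return alpha
```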
Kernelized Perceptron
- If we had a black box (kernel) that told us the dot product of two examples x and y:
  - Could work entirely with the dual representation
  - No need to ever take dot products ("kernel trick")
- Like nearest neighbor: work with black-box similarities
- Downside: slow if many examples get nonzero alpha
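For illustration (not from the slides), here are two such black-box kernels that could be plugged into the train_dual_perceptron sketch above, assuming examples are plain lists of numbers:

```python
import math

def linear_kernel(x, y):
    """Ordinary dot product: recovers the standard (linear) perceptron."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian similarity: near 1 when x and y are close, near 0 when far apart."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```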