Unsupervised Learning of Visual Representations and Their Use in Object & Face Recognition
Gary Cottrell, Chris Kanan, Honghao Shan, Lingyun Zhang, Matthew Tong, Tim Marks
OSHER, 3/22/2017

Collaborators: Chris Kanan, Honghao Shan

Collaborators: Lingyun Zhang, Matt Tong, Tim Marks

Efficient Encoding of the World
Sparse Principal Components Analysis (SPCA): a model of unsupervised learning for early perceptual processing (Honghao Shan). The model embodies three constraints: keep as much information as possible, while trying to equalize the neural responses and minimize the connections.

Efficient encoding of the world leads to magno- and parvo-cellular response properties.
[Figure: filters learned from grayscale images, color images, and video cubes, arranged by spatial and temporal extent: persistent, spatially small filters (midget cells?) versus transient, spatially large filters (parasol cells?).]
This suggests that these cell types exist because they are useful for efficiently encoding the temporal dynamics of the world.

Efficient encoding of the world leads to gammatone filters, as in auditory nerve fibers, using exactly the same algorithm applied to speech, environmental sounds, etc.

Efficient Encoding of the World
A single unsupervised learning algorithm leads to:
model cells with properties similar to those found in the retina, when applied to natural videos, and
model cells with properties similar to those found in the auditory nerve, when applied to natural sounds.
One small step towards a unified theory of temporal processing.

Unsupervised Learning of Hierarchical Representations (RICA 2.0; cf. Shan et al., NIPS 19)
Recursive ICA (RICA 1.0; Shan et al., 2008): alternately compress and expand the representation using PCA and ICA. ICA is modified by a component-wise nonlinearity, and receptive fields expand at each ICA layer.

ICA is modified by a component-wise nonlinearity. Think of ICA as a generative model: the pixels are the sum of many independent random variables, and hence approximately Gaussian, so ICA prefers its inputs to be Gaussian-distributed. We apply an inverse cumulative Gaussian to the absolute value of the ICA components to “gaussianize” them.

Strong responses, either positive or negative, are mapped to the positive tail of the Gaussian; weak ones to the negative tail; ambiguous ones to the center.
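
A minimal Python sketch of this gaussianization step (the empirical CDF of |s| is an assumption; the original model may fit a parametric density to the absolute responses instead):

import numpy as np
from scipy.stats import norm

def gaussianize(ica_responses):
    """Component-wise gaussianization of ICA outputs (sketch).

    Strong responses (large |s|) map to the positive tail of the Gaussian,
    weak responses to the negative tail, intermediate ones near the center.
    The CDF of |s| is estimated empirically here.
    """
    a = np.abs(ica_responses)
    n = a.shape[0]
    ranks = np.argsort(np.argsort(a, axis=0), axis=0) + 1   # per-component ranks of |s|
    u = ranks / (n + 1.0)                                   # empirical CDF values in (0, 1)
    return norm.ppf(u)                                      # inverse cumulative Gaussian

# Example: 1000 samples of 64 ICA components (Laplace-like, as ICA outputs tend to be)
z = gaussianize(np.random.laplace(size=(1000, 64)))         # roughly standard normal per component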

RICA 2.0: replace PCA with SPCA.

RICA 2.0 results: a multiple-layer system with
center-surround receptive fields at the first layer,
simple edge filters at the second (ICA) layer,
spatial pooling of orientations at the third (SPCA) layer, and
V2-like response properties at the fourth (ICA) layer.
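
A sketch of the alternating compress/expand pipeline in Python. This is a simplified stand-in: scikit-learn's SparsePCA and FastICA replace the paper's SPCA and its expanding ICA stage, and the layer sizes are made up for illustration:

import numpy as np
from scipy.stats import norm
from sklearn.decomposition import FastICA, SparsePCA

def gaussianize(s):
    """Empirical-CDF gaussianization of |s|, as in the earlier sketch."""
    r = np.argsort(np.argsort(np.abs(s), axis=0), axis=0) + 1
    return norm.ppf(r / (s.shape[0] + 1.0))

def rica2(patches, dims=(36, 36, 16, 16)):
    """Alternate compression (SPCA-like) and expansion (ICA) layers.

    Layer 1: SPCA (center-surround-like), layer 2: ICA (edge-like),
    layer 3: SPCA (pooling), layer 4: ICA (V2-like). Here the ICA layers
    keep the dimensionality of their input; the full model expands it.
    """
    x, reps = patches, []
    for i, d in enumerate(dims):
        if i % 2 == 0:      # compression layer (SPCA stand-in)
            x = SparsePCA(n_components=d, alpha=1.0, max_iter=200,
                          random_state=0).fit_transform(x)
        else:               # expansion layer (ICA) followed by the gaussianizing nonlinearity
            x = gaussianize(FastICA(n_components=d, random_state=0,
                                    max_iter=500).fit_transform(x))
        reps.append(x)
    return reps

# Toy example on random "patches"; in practice, whitened natural-image patches.
layers = rica2(np.random.randn(500, 100))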

V2-like response properties at the fourth (ICA) layer. These maps show the strengths of connections to layer-1 ICA filters: warm and cold colors are strong positive/negative connections, gray is weak connections, and orientation corresponds to the layer-1 orientation. The left-most column displays two model neurons that show uniform orientation preference for layer-1 ICA features. The middle column displays model neurons that have non-uniform, varying orientation preference for layer-1 ICA features. The right column displays two model neurons that have location preference, but no orientation preference, for layer-1 ICA features. The left two columns are consistent with Anzai, Peng, & Van Essen (2007); the right-hand column is a prediction.

Dimensionality reduction and expansion might be a general strategy of information processing in the brain: the first step removes noise and reduces complexity, while the second step captures the statistical structure. We showed that retinal ganglion cells and V1 complex cells may be derived from the same learning algorithm, applied to pixels in one case and to V1 simple cell outputs in the other. This highly simplified model of early vision is the first that learns the receptive fields of all early visual layers using a single consistent theory, the efficient coding theory. We believe it could serve as a basis for more sophisticated models of early vision. An obvious next step is to train higher layers and thus make predictions about them.

Nice, but is it useful?
We showed in Shan & Cottrell (CVPR 2008) that we could achieve state-of-the-art face recognition with the non-linear ICA features and a simple softmax output. We showed in Kanan & Cottrell (CVPR 2010) that we could achieve state-of-the-art face and object recognition with a system that used an ICA-based salience map, simulated fixations, non-linear ICA features, and a kernel-density memory. Here I briefly describe the latter.

One reason why this might be a good idea: our attention is automatically drawn to interesting regions in images, and our salience algorithm is drawn to the same kinds of regions. These are useful locations for discriminating one object (face, butterfly) from another.

Main Idea: Training Phase (learning object appearances)
Use the salience map to decide where to look (we use the ICA salience map).
Memorize these samples of the image, with labels (Bob, Carol, Ted, or Alice); we store the (compressed) ICA feature values.

Main Idea: Testing Phase (recognizing objects we have learned)
Given a new face, use the salience map to decide where to look.
Compare the new image samples to the stored ones; the closest ones in memory get to vote for their label.

[Figure: stored memories of Bob, stored memories of Alice, and new fragments from the test image.] Result: 7 votes for Alice, only 3 for Bob. It's Alice!

Voting
The voting process is based on Bayesian updating (with Naïve Bayes). The size of a vote depends on the distance from the stored sample, using kernel density estimation. Hence NIMBLE: NIM with Bayesian Likelihood Estimation.
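
A minimal sketch of this memory-and-voting scheme in Python. The Gaussian kernel, its bandwidth, and the uniform prior are assumptions; NIMBLE's actual kernel-density estimator and feature compression are not reproduced here:

import numpy as np

def nimble_classify(train_feats, train_labels, test_fixations, sigma=1.0):
    """Kernel-density, Naive-Bayes voting over fixations (sketch).

    Each stored fixation is an ICA feature vector with a class label.
    Each test fixation's likelihood under class k is a kernel-density
    estimate over that class's stored fixations; fixations are combined
    with Bayesian (naive-Bayes) updating of the class posterior.
    """
    classes = np.unique(train_labels)
    log_post = np.log(np.full(len(classes), 1.0 / len(classes)))  # uniform prior (assumption)
    for f in test_fixations:
        for i, k in enumerate(classes):
            mem = train_feats[train_labels == k]                  # stored memories of class k
            d2 = np.sum((mem - f) ** 2, axis=1)                   # squared distances to memories
            lik = np.mean(np.exp(-d2 / (2.0 * sigma ** 2))) + 1e-12  # Gaussian-kernel density
            log_post[i] += np.log(lik)                            # naive-Bayes update
    log_post -= np.max(log_post)
    post = np.exp(log_post)
    post /= post.sum()
    return classes[np.argmax(post)], post

# Toy example: 2 classes, 20 stored fixations each, 10 test fixations, 50-D features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (20, 50)), rng.normal(0.5, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)
label, posterior = nimble_classify(X, y, rng.normal(0.5, 1, (10, 50)))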

Overview of the System
The ICA features do double duty:
they are combined to make the salience map, which is used to decide where to look, and
they are stored to represent the object at that location.
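
One way to build a salience map from the same ICA responses is to score each location by how rare its filter responses are. The sketch below assumes roughly Laplace-distributed responses, so rarity reduces to summed absolute response; the exact salience computation used in NIMBLE may differ:

import numpy as np

def ica_salience(responses):
    """Salience as feature rarity (sketch).

    `responses` has shape (H, W, K): K ICA filter responses at each location.
    Under a Laplace assumption, -log p(response) grows with |response|,
    so summed absolute response serves as a rarity (salience) score.
    """
    return np.sum(np.abs(responses), axis=-1)   # higher value = more salient

# Example: pick the 10 most salient locations. (NIMBLE samples fixations from
# the salience map rather than always taking the top locations.)
sal = ica_salience(np.random.laplace(size=(32, 32, 64)))
idx = np.argsort(sal, axis=None)[::-1][:10]
rows, cols = np.unravel_index(idx, sal.shape)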

NIMBLE vs. Computer Vision
Compare this to (most, not all!) computer vision systems, which make one pass over the image and use global features: Image → Global Features → Global Classifier → Decision. This is in stark contrast to the predominant methods used in computer vision, and even to many models in computational neuroscience. The one-pass pipeline above is a one-shot system; NIMBLE is an active-vision system, which is primate-like (although pretty dumbed down). Most of the details are omitted here.

Humans make ~170,000 saccades each day.

NIMBLE uses the saliency map to acquire information, and as it serially acquires more information over time it becomes more confident about the correct category. [Figure: belief after 1 fixation vs. belief after 10 fixations.]

Robust Vision
Human vision works in multiple environments; our basic features (neurons!) don't change from one problem to the next. We tune our parameters so that the system works well on the Bird and Butterfly datasets, and then apply the system unchanged to faces, flowers, and objects. This is very different from standard computer vision systems, which are (usually) tuned to a particular domain.

Caltech 101: 101 different categories. AR dataset: 120 different people, with different lighting, expression, and accessories.

Flowers: 102 different flower species.

Averaged over 10 cross-validation runs, ~7 fixations are required to achieve at least 90% of maximum performance, and ~64 fixations to achieve 99% of maximum accuracy.

So, we created a simple cognitive model that uses simulated fixations to recognize things. But it isn't that complicated. How does it compare to approaches in computer vision?

Caveats: these comparisons are as of mid-2010 and against single-feature-type approaches only (no “multiple kernel learning” (MKL) approaches). NIMBLE is still superior to MKL with very few training examples per category.

[Figure: percent improvement of NIMBLE over the best result using a single feature type, for 1, 5, 15, and 30 training examples per category.] The comparison is against the best results using a single feature type and shows relative (not absolute) improvement in performance, computed from the ratio of NIMBLE's performance to the best single-descriptor performance.

[Figure: performance for 1, 2, 3, 6, and 8 training examples per person.] Note again that NIMBLE performs very well using few training images, even when dealing with disguises.

Again, NIMBLE is the best among single-feature-type systems, and with 1 training instance it is better than all systems.

Future directions:
more neurally and behaviorally relevant gaze control and fixation integration (people don't randomly sample images),
a foveated retina, and
comparison with human eye-movement data during recognition/classification of faces, objects, etc.

A biologically inspired, fixation-based approach can work well for image classification. Fixation-based models can match, and even exceed, some of the best models in computer vision, especially when you don't have a lot of training images.

Software and paper available at www.chriskanan.com. For more details, email ckanan@ucsd.edu. We showed that NIMBLE is not a toy cognitive model, but one with real-world applicability. This work was supported by the NSF (grant #SBE-0542013) to the Temporal Dynamics of Learning Center, G.W. Cottrell, PI.

Thanks!

Sparse Principal Components Analysis
We minimize a cost that keeps as much information as possible while penalizing the number of connections, subject to a constraint that equalizes the neural responses (the objective and constraint are sketched below).
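
The objective and constraint are not reproduced in this transcript. One plausible way to write down the three constraints from the earlier slide (a hedged reconstruction, not necessarily the paper's exact formulation; λ, σ, and the L1 form of the penalty are assumptions):

\min_A \; \mathbb{E}\,\lVert x - A u \rVert^2 \;+\; \lambda \sum_{ij} \lvert A_{ij} \rvert, \qquad u = A^{\mathsf T} x, \qquad \text{s.t. } \operatorname{Var}(u_i) = \sigma^2 \;\; \forall i.

Here the reconstruction term keeps as much information as possible, the penalty on the entries of A minimizes the connections, and the equal-variance constraint equalizes the neural responses.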

The SPCA model as a neural net: it is A^T, the matrix of connection weights, that is mostly zero.

[Figure: results suggesting that the 1/f power spectrum of natural images is where this is coming from.]

Results: the role of the sparseness parameter λ. Recall that this parameter reduces the number of connections.

Results: the role of λ, continued. Higher λ means fewer connections, which alters the contrast sensitivity function (CSF). This matches recent data on malnourished children and their CSFs: lower sensitivity at low spatial frequencies, but slightly better sensitivity at high spatial frequencies than normal controls.

NIMBLE represents its beliefs using probability distributions. Simple nearest-neighbor density estimation is used to estimate P(fixation_t | Category = k), and fixations are combined over time using Bayesian updating.
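
In equation form (the uniform-prior choice and the specific kernel are assumptions; the transcript only states that the likelihood is a kernel density estimate and that fixations are combined by Bayesian updating):

P(C = k \mid f_{1:T}) \;\propto\; P(C = k) \prod_{t=1}^{T} P(f_t \mid C = k), \qquad P(f_t \mid C = k) \;\approx\; \frac{1}{\lvert M_k \rvert} \sum_{m \in M_k} K_h\!\left(\lVert f_t - m \rVert\right),

where M_k is the set of stored fixation features for category k and K_h is a kernel (e.g., Gaussian) with bandwidth h. The Naïve Bayes step is the assumption that fixations are conditionally independent given the category.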
