Transfer Learning for Image Classification. Group No.: 15. Group members: Feng Cai, Sauptik Dhar, Jingying Lin.


Transfer Learning for Image Classification. Group No.: 15. Group members: Feng Cai, Sauptik Dhar, Jingying Lin. Group Project for EE, Spring.

BRIEF OUTLINE
CURRENT STATE OF THE ART (SELF-TAUGHT LEARNING with SPARSE CODING)
OUR METHODS (UNSUPERVISED TRANSFER LEARNING)
OUR METHODS (SUPERVISED TRANSFER LEARNING)
EXPERIMENTAL SETUP/DATASET
RESULTS
CONCLUSION

SELF-TAUGHT LEARNING
WHAT IS SPARSE CODING? Sparse coding is the representation of items by the strong activation of a relatively small set of basis elements.
WHAT IS SELF-TAUGHT LEARNING? [1] Unlike semi-supervised classification, there is no assumption that the unlabeled data follow the same class labels or generative distribution as the labeled data.
WHAT IS TRANSFER LEARNING? [2] It involves two interrelated learning problems, with the goal of using knowledge about one set of tasks to improve performance on a related task.
BASIC FORMULATION [Details: an extra normalization constraint on b_j is required.]
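The basic formulation on the slide was an equation image that did not survive the transcript. As a reconstruction, the standard sparse-coding objective of [1], over unlabeled inputs x^(i), basis vectors b_j, and sparse activations a^(i), is:

```latex
\min_{b,\,a}\ \sum_{i} \Bigl\| x^{(i)} - \sum_{j} a^{(i)}_{j}\, b_{j} \Bigr\|_2^2
\;+\; \beta \sum_{i} \bigl\| a^{(i)} \bigr\|_1
\qquad \text{s.t. } \|b_j\|_2 \le 1 \ \ \forall j
```

The constraint on the right is the normalization constraint on b_j that the slide's bracketed note refers to; without it, the L1 penalty could be driven to zero by rescaling the bases.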

UNSUPERVISED TRANSFER LEARNING
STEP 1: Use the self-taught learning approach to obtain the basis vectors. [1]
STEP 2: Find the coefficient matrix C. Its estimate minimizes the reconstruction error under a row-sparsity penalty, where the penalty is a pseudo-norm that counts the number of non-zero rows in C. The coefficient for example i in group k can then be computed from this estimate.
STEP 3: The coefficients are used as new features, and we train SVM classifiers in each group.
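The three steps above can be sketched with scikit-learn. This is a minimal illustration, not the authors' exact solver: the row-counting pseudo-norm is replaced by scikit-learn's standard L1 (lasso) sparse coding, and the data arrays are random stand-ins with hypothetical shapes loosely matching the scaled-down experiment (25 bases, 56 labeled samples, 3 groups).

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-ins for the real face data (dim = 256 pixels per image).
X_unlabeled = rng.standard_normal((60, 256))   # unlabeled images
X_labeled = rng.standard_normal((56, 256))     # labeled training images
y = np.where(np.arange(56) % 2 == 0, 1, -1)    # class labels: happy(+1)/sad(-1)
groups = np.arange(56) % 3                     # group labels: person id (3 tasks)

# STEP 1: learn 25 basis vectors from the unlabeled data (self-taught learning).
dico = DictionaryLearning(n_components=25, max_iter=20,
                          transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
dico.fit(X_unlabeled)

# STEP 2: sparse coefficients of the labeled data in the learned basis.
# (Plain L1 coding here; the slides use a row-sparsity pseudo-norm on C.)
C = dico.transform(X_labeled)

# STEP 3: the coefficients become features; train one linear SVM per group.
classifiers = {k: LinearSVC(dual=False).fit(C[groups == k], y[groups == k])
               for k in np.unique(groups)}
```

One SVM per group mirrors the slide's setup, where each group (person) is treated as its own classification task over the shared learned features.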

SUPERVISED TRANSFER LEARNING
STEP 1: Use the self-taught learning approach to obtain the basis vectors. [1]
STEP 2: Map the labeled training data into the basis space.
STEP 3: Perform supervised transfer learning with sparse coding. [2]
STEP 4: Compute the relevant prototype representation.
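The equations for steps 3 and 4 were lost with the slide images. As a rough stand-in for the joint regularization of [2] (which uses an l1,∞ penalty), the sketch below uses scikit-learn's MultiTaskLasso, whose l1/l2 penalty likewise zeroes entire coefficient rows so that all tasks agree on which basis dimensions to drop; the surviving dimensions play the role of the reduced representation (dim = 13 in the slides). All data here are hypothetical random stand-ins.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)

# Toy labeled data already mapped into the 25-dimensional basis space (step 2).
C = rng.standard_normal((56, 25))
# One +/-1 target column per task (3 tasks / person ids in the slides).
Y = np.where(rng.random((56, 3)) > 0.5, 1.0, -1.0)

# STEP 3 (stand-in): joint regularization couples the tasks; MultiTaskLasso
# zeroes whole rows of the coefficient matrix, i.e. discards a basis
# dimension for *all* tasks at once rather than per task.
mtl = MultiTaskLasso(alpha=0.1).fit(C, Y)

# STEP 4 (stand-in): keep only the dimensions some task actually uses; the
# reduced feature matrix plays the role of the prototype representation.
selected = np.any(mtl.coef_ != 0, axis=0)
C_reduced = C[:, selected]
print(C_reduced.shape[1], "dimensions retained out of 25")
```

The design point this illustrates is the one the slides rely on: a joint sparsity penalty performs feature selection shared across tasks, which is what lowers the problem dimensionality.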

EXPERIMENTAL SETUP/DATASET
UNLABELED DATASET (The Yale Face Database B): contains 5760 single-light-source images of 10 subjects, each seen under 576 viewing conditions.
LABELED DATASET (CMU Face Images Data Set): consists of 640 gray-level face images of people taken with varying pose and expression.
EXPERIMENTAL SETUP: classification of FACIAL EXPRESSION using TRANSFER LEARNING. CLASS LABELS = Happy (+1) or Sad (-1). GROUP LABELS = PERSON ID.
SCALED-DOWN PROBLEM:
Number of unlabeled samples = 15
Number of bases used = 25
Number of tasks = 3
Number of training samples (labeled) = 56
Number of test samples (labeled) = 19

RESULTS
TABLE 1. PREDICTION ERROR for LINEAR SVM (for different methods); training set = 56 samples, test set = 19 samples.
METHOD USED | PREDICTION ERROR
RAW DATA (dim=256) |
SELF-LEARNING (dim=25) (1) |
SUPERVISED TRANSFER LEARNING (dim=13) |
TABLE 2. PREDICTION ERROR for LINEAR SVM (for different methods); double resampling, 56 samples.
METHOD USED | PREDICTION ERROR [5 5] | PREDICTION ERROR [10 10]
RAW DATA (dim=256) | |
SELF-LEARNING (dim=25) (1) | |
SUPERVISED TRANSFER LEARNING (dim=13) | |
(1) There is a caveat involved in obtaining the results for this method.

REFERENCES
[1] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, Andrew Y. Ng. Self-taught learning: transfer learning from unlabeled data. 24th International Conference on Machine Learning (ICML), 2007.
[2] Ariadna Quattoni, Michael Collins, Trevor Darrell. Transfer learning for image classification with sparse prototype representations. IEEE CVPR, 2008.
CONCLUSION
1. The feature-selection methodology conserves the discriminative patterns, with the added advantage of a lower problem dimensionality.
2. The new transfer learning methodology provides better results than the self-learning approach (at least for the current case).