
1
Learning Visual Representations for Unfamiliar Environments. Kate Saenko, Brian Kulis, Trevor Darrell (UC Berkeley EECS & ICSI)

2
The challenge of large-scale visual interaction. The last decade has proven the superiority of models learned from data over hand-engineered structures!

3
Large-scale learning. Unsupervised: learn models from found data; often exploit multiple modalities (text + image). [Example of found web text: "The Tote is the perfect example of two handbag design principles that... The lines of this tote are incredibly sleek, but... The semi buckles that form the handle attachments are..."]

4
E.g., finding visual senses. Artifact sense: telephone. DICTIONARY: 1. (n) telephone, phone, telephone set (electronic equipment that converts sound into electrical signals that can be transmitted over distances and then converts received signals back into sounds). 2. (n) telephone, telephony (transmitting speech at a distance). [Saenko and Darrell 09]

5
Large-scale learning. Unsupervised: learn models from found data; often exploit multiple modalities (text + image). Supervised: crowdsource labels (e.g., ImageNet). [Example of found web text: "The Tote is the perfect example of two handbag design principles that... The lines of this tote are incredibly sleek, but... The semi buckles that form the handle attachments are..."]

6
Yet… even the best collection of images from the web and strong machine learning methods can often yield poor classifiers on in-situ data! Supervised learning assumption: training distribution == test distribution. Unsupervised learning assumption: the joint distribution is stationary between the online world and the real world. Almost never true!

7
What You Saw Is Not What You Get: the models fail due to domain shift (SVM: 54% → 20%; NBNN: 61% → 19%).

8
Examples of visual domain shifts: close-up vs. far-away; amazon.com vs. consumer images (Flickr); CCTV; digital SLR vs. webcam.

9
Examples of domain shift: change in camera, feature type, and dimension. E.g., digital SLR vs. webcam; SURF vector-quantized to 300 words vs. SIFT vector-quantized to 1000 words: different feature dimensions.

10
Solutions? Do nothing (poor performance). Collect all types of data (impossible). Find out what changed (impractical). Learn what changed.

11
Prior Work on Domain Adaptation. Pre-process the data [Daumé 07]: replicate features to create general, source-specific, and target-specific versions; re-train the learner on the new features. SVM-based methods [Yang 07], [Jiang 08], [Duan 09], [Duan 10]: adapt SVM parameters. Kernel mean matching [Gretton 09]: re-weight the training data to match the test data distribution.

12
Our paradigm: transform-based domain adaptation. Drawbacks of previous methods: they cannot transfer the learned shift to new categories, and they cannot handle new features. We can do both by learning a domain transformation W (example: mapping between the green and blue domains).* *Saenko, Kulis, Fritz, and Darrell. Adapting visual category models to new domains. ECCV, 2010.

13
Limitations of symmetric transforms: Saenko et al. (ECCV 10) used metric learning, i.e., symmetric transforms over the same features. The symmetric assumption fails! How do we learn more general shifts?

14
Latest approach*: asymmetric transforms (e.g., a rotation). The metric learning model is no longer applicable. We propose to learn asymmetric transforms that map from the target to the source domain and handle different feature dimensions. *Kulis, Saenko, and Darrell. What You Saw Is Not What You Get: Domain Adaptation Using Asymmetric Kernel Transforms. CVPR 2011.
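As a concrete illustration (all shapes and values here are hypothetical stand-ins, not the authors' learned transform), an asymmetric W is simply a rectangular matrix, so it can relate domains whose feature dimensions differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 300-dim source features, 1000-dim target features.
d_src, d_tgt = 300, 1000
W = rng.standard_normal((d_src, d_tgt)) * 0.01  # stand-in for a learned transform

x = rng.standard_normal(d_src)   # a source-domain point
y = rng.standard_normal(d_tgt)   # a target-domain point

# W maps the target point into the source feature space...
y_mapped = W @ y                 # shape (300,)

# ...and the cross-domain similarity is the inner product x^T W y.
sim = x @ W @ y
print(y_mapped.shape, float(sim))
```

A symmetric metric would require W to be square (and positive semi-definite) over a shared feature space, which is exactly what fails when source and target features differ.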

16
Model Details. Learn a linear transformation W to map points from one domain to another. Given a matrix of source points X and a matrix of target points Y, the transform scores a source point x against a target point y via the inner product x^T W y.

17
Loss Functions. Choose a point x from the source and y from the target, and consider the inner product x^T W y: it should be large for similar objects and small for dissimilar objects.

18
Loss Functions. The input to the problem includes a collection of m loss functions c_1, …, c_m. General assumption: the loss functions depend on the data only through the inner product matrix X^T W Y.

19
Regularized Objective Function. Minimize a linear combination of the sum of the loss functions and a regularizer: min_W r(W) + λ Σ_i c_i(X^T W Y). We use the squared Frobenius norm r(W) = ‖W‖_F² as the regularizer, but are not restricted to this choice.
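To make the objective concrete, here is a toy instance (my own construction, not the paper's solver): squared losses (x_i^T W y_i − s_i)² on a handful of cross-domain constraints, plus the squared-Frobenius regularizer. Because x^T W y is linear in vec(W), this particular choice of loss admits a closed-form ridge-regression solution:

```python
import numpy as np

rng = np.random.default_rng(1)
d_src, d_tgt, lam = 5, 7, 0.01

# Hypothetical constraints: similar pairs should score +1, dissimilar -1.
pairs = [(rng.standard_normal(d_src), rng.standard_normal(d_tgt),
          float(rng.choice([-1.0, 1.0]))) for _ in range(20)]

# x^T W y = vec(x y^T) . vec(W), so stack one row per constraint.
A = np.stack([np.outer(x, y).ravel() for x, y, _ in pairs])
s = np.array([t for _, _, t in pairs])

# Minimize sum_i (x_i^T W y_i - s_i)^2 + lam * ||W||_F^2 in closed form.
n = d_src * d_tgt
w = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ s)
W = w.reshape(d_src, d_tgt)

residual = np.abs(A @ w - s).max()  # constraints are fit nearly exactly
print(W.shape, residual)
```

With other losses (e.g., hinge-style margin constraints, as in the paper), the same objective would be minimized iteratively rather than in closed form.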

20
The Model Has Drawbacks. A linear transformation may be insufficient, and the cost of optimization grows as the product of the dimensionalities of the source and target data. What to do?

21
Kernelization. Main idea: run in kernel space. Use a non-linear kernel function (e.g., the RBF kernel) to learn non-linear transformations in input space. The resulting optimization is independent of the input dimensionality. One additional assumption is necessary: the regularizer must be a spectral function.
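A small sketch of why kernelization removes the dependence on input dimensionality (the RBF kernel and the sizes here are arbitrary choices for illustration): the kernel matrices grow with the number of points, not with the feature dimension, so the two domains can even have different dimensionalities.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(2)
Xs = rng.standard_normal((8, 300))    # 8 source points in 300 dimensions
Xt = rng.standard_normal((6, 1000))   # 6 target points in 1000 dimensions

Ks = rbf_kernel(Xs, Xs)   # 8 x 8, regardless of the 300-dim input
Kt = rbf_kernel(Xt, Xt)   # 6 x 6, regardless of the 1000-dim input
print(Ks.shape, Kt.shape)
```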

22
Kernelization. The original transformation learning problem can be rewritten in terms of the kernel matrices of the source and target data; at optimality, the solution of the new kernel problem recovers the solution of the original problem.

23
Summary of approach: 1. Collect multi-domain data in input space. 2. Generate constraints and learn W. 3. Map test points via W. 4. Apply to new categories.

24
Multi-domain dataset

25
Experimental Setup. Utilized a standard bag-of-words model. Also utilized different features in the target domain: SURF vs. SIFT, and different visual word dictionaries. Baseline for comparing such data: KCCA.
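The bag-of-words model mentioned above can be sketched as follows (the codebook size and descriptor dimension are placeholders; a real system would train the codebook with k-means on SURF or SIFT descriptors):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    # Assign each local descriptor to its nearest visual word, then histogram.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()   # L1-normalized bag-of-words vector

rng = np.random.default_rng(3)
codebook = rng.standard_normal((300, 64))   # e.g. 300 visual words, 64-dim SURF
descs = rng.standard_normal((120, 64))      # descriptors from one image
h = bow_histogram(descs, codebook)
print(h.shape, round(float(h.sum()), 6))
```

Changing the codebook size (300 vs. 1000) or the descriptor type (SURF vs. SIFT) changes the histogram dimension, which is precisely the cross-domain mismatch the transform has to bridge.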

26
Same-Category Results. Baselines (kNN, SVM, metric learning) are explained in the paper.

27
Novel-class experiments. Test the method's ability to transfer the learned domain shift to unseen classes: train the transform on half of the classes, test on the other half.

28
Extreme shift example. For a query from the target domain, compare nearest neighbors in the source found using the learned transformation vs. using KCCA + kNN.
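The nearest-neighbor comparison on this slide can be sketched as ranking source images by the cross-domain similarity x^T W y (here W is a random stand-in for the learned transform, and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d_src, d_tgt = 300, 1000
W = rng.standard_normal((d_src, d_tgt)) * 0.01   # stand-in for the learned W

source_feats = rng.standard_normal((50, d_src))  # 50 candidate source images
query = rng.standard_normal(d_tgt)               # one target-domain query

scores = source_feats @ (W @ query)              # x_i^T W y for every source i
ranking = np.argsort(-scores)                    # nearest neighbors first
print(ranking[:5])
```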

29
Conclusion. We should not rely on hand-engineered features any more than we rely on hand-engineered models! Learn feature transformations across domains. We developed a domain adaptation method based on regularized non-linear transforms; the asymmetric transform achieves the best results on the more extreme shifts. See Saenko et al., ECCV 2010, and Kulis et al., CVPR 2011; journal version forthcoming.
