
Large Image Databases and Small Codes for Object Recognition Rob Fergus (NYU) Antonio Torralba (MIT) Yair Weiss (Hebrew U.) William T. Freeman (MIT)




1 Large Image Databases and Small Codes for Object Recognition Rob Fergus (NYU) Antonio Torralba (MIT) Yair Weiss (Hebrew U.) William T. Freeman (MIT)

2 Object Recognition: pixels → description of scene contents. (Image: Banksy, 2006.)

3 Large image datasets: the Internet contains billions of images. An amazing resource. Maybe we can use it for recognition?

4 Web image dataset: 79.3 million images at 32x32 resolution, collected using image search engines from a list of nouns taken from WordNet. First 10 nouns: a-bomb, a-horizon, a._conan_doyle, a._e._burnside, a._e._housman, a._e._kennelly, a.e., a_battery, a_cappella_singing, a_horizon. See the "80 Million Tiny Images" TR.

5 Noisy Output from Image Search Engines

6 Nearest-Neighbor methods for recognition. [Plot: recognition behavior as the number of images grows, from 10^5 to 10^6 to 10^8.]

7 Issues: Need lots of data. How to find neighbors quickly? What distance metric to use? How to transfer labels from neighbors to the query image? What happens if labels are unreliable?

8 Overview 1.Fast retrieval using compact codes 2.Recognition using neighbors with unreliable labels

9 Overview 1.Fast retrieval using compact codes 2.Recognition using neighbors with unreliable labels

10 Binary codes for images. Want images with similar content to have similar binary codes. Use Hamming distance between codes, i.e. the number of bit flips: Ham_Dist(10001010, 10001110) = 1; Ham_Dist(10001010, 11101110) = 3. Cf. Semantic Hashing [Salakhutdinov & Hinton, 2007], applied there to text documents.
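The Hamming distances quoted above are just the popcount of the XOR of the two codes; a minimal sketch (function name is illustrative):

```python
def hamming_dist(a: int, b: int) -> int:
    """Hamming distance = number of differing bits = popcount of a XOR b."""
    return bin(a ^ b).count("1")

print(hamming_dist(0b10001010, 0b10001110))  # 1
print(hamming_dist(0b10001010, 0b11101110))  # 3
```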

11 Binary codes for images. Permits fast lookup via hashing: each code is a memory address, and neighbors are found by exploring a Hamming ball around the query address. Lookup time depends on the radius of the ball, NOT on the number of data points. [Figure: query image → semantic hash function → query address; semantically similar images lie at nearby addresses. Adapted from Salakhutdinov & Hinton '07.]
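The lookup described above can be sketched as enumerating every address within a small Hamming radius of the query code and probing a table keyed by code; the toy table and names below are illustrative:

```python
from itertools import combinations

def hamming_ball(code: int, nbits: int, radius: int):
    """Yield all addresses within the given Hamming radius of `code`."""
    yield code
    for r in range(1, radius + 1):
        for flips in combinations(range(nbits), r):
            addr = code
            for bit in flips:
                addr ^= 1 << bit          # flip one chosen bit
            yield addr

# Probe a hash table keyed by binary code; the cost depends only on the
# volume of the ball, not on how many images are stored.
table = {0b1010: ["img1"], 0b1000: ["img2"], 0b0101: ["img3"]}
hits = [img for a in hamming_ball(0b1010, nbits=4, radius=1)
        for img in table.get(a, [])]
print(hits)  # ['img1', 'img2']
```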

12 Compact Binary Codes. Google has a few billion images (~10^9). A big PC has ~10 GBytes of memory (~10^11 bits). Codes must fit in memory (disk is too slow) → budget of ~10^2 bits/image. A 1-megapixel image is ~10^7 bits; a 32x32 color image is ~10^4 bits → the semantic hash function must also reduce dimensionality.
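The budget above is order-of-magnitude arithmetic; a quick check with the slide's numbers:

```python
images = 10**9                      # ~a billion images (Google scale)
ram_bits = 10 * 8 * 10**9           # ~10 GBytes of RAM, in bits (~10^11)
bits_per_image = ram_bits // images
print(bits_per_image)               # 80, i.e. on the order of 10^2 bits/image
```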

13 Code requirements: preserves the neighborhood structure of the input space; very compact (<10^2 bits/image); fast to compute. Three approaches: 1. Locality Sensitive Hashing (LSH); 2. Boosting; 3. Restricted Boltzmann Machines (RBMs).

14 Image representation: Gist vectors. Pixels are not a convenient representation; use the GIST descriptor instead (512 dimensions/image). L2 distance between Gist vectors is not a bad substitute for human perceptual distance. Oliva & Torralba, IJCV 2001.

15 1. Locality Sensitive Hashing. Gionis, Indyk & Motwani (1999). Take random projections of the data and quantize each projection with a few bits. For our N-bit code: compute the first N PCA components of the data; each random projection must be a linear combination of those N PCA components.
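A minimal random-projection LSH sketch. Two simplifying assumptions: each projection is quantized to a single bit at its median (the talk allows a few bits per projection), and the raw vectors are projected directly rather than through their PCA components:

```python
import numpy as np

def lsh_code(X, n_bits, rng):
    """Project onto random directions; threshold each projection to one bit."""
    R = rng.standard_normal((X.shape[1], n_bits))   # random directions
    proj = X @ R
    # Median threshold per direction balances 0s and 1s across the dataset.
    return (proj > np.median(proj, axis=0)).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 512))   # stand-in for 512-d Gist vectors
codes = lsh_code(X, 32, rng)          # shape (100, 32), values in {0, 1}
```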

16 2. Boosting. Modified form of BoostSSC [Shakhnarovich, Viola & Darrell, 2003]: GentleBoost with an exponential loss. Positive examples are pairs of neighbors; negative examples are pairs of unrelated images. Each binary regression stump examines a single coordinate of the input pair, comparing it to a learnt threshold to check whether the two images agree.
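One boosted bit can be pictured as a stump on a single Gist coordinate; the coordinate and threshold below are hand-picked for illustration rather than learnt by GentleBoost:

```python
import numpy as np

def stump_bit(x, coord, thresh):
    """One binary regression stump: does coordinate `coord` exceed `thresh`?"""
    return int(x[coord] > thresh)

def pair_agrees(x1, x2, coord, thresh):
    """Neighbor pairs should agree on each bit; unrelated pairs should not."""
    return stump_bit(x1, coord, thresh) == stump_bit(x2, coord, thresh)

x1 = np.array([0.2, 0.9])   # toy "Gist" vectors for a neighbor pair
x2 = np.array([0.3, 0.8])
print(pair_agrees(x1, x2, coord=1, thresh=0.5))  # True: both above 0.5
```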

17 3. Restricted Boltzmann Machine (RBM). A network of binary stochastic units: visible units v, hidden units h, connected by symmetric weights w. Parameters: weights w, biases b. Hinton & Salakhutdinov, Science 2006.

18 RBM architecture. A network of binary stochastic units (visible units v, hidden units h, symmetric weights w; parameters: weights w, biases b). Learn the weights and biases using Contrastive Divergence. Convenient conditional distributions: p(h_j = 1 | v) = σ(b_j + Σ_i w_ij v_i) and p(v_i = 1 | h) = σ(b_i + Σ_j w_ij h_j), where σ is the logistic sigmoid. Hinton & Salakhutdinov, Science 2006.
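A minimal sketch of one Contrastive Divergence (CD-1) update for a binary RBM, using the conditional distributions above; the learning rate and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One CD-1 update. v0: batch of binary visible vectors, shape (n, d_v)."""
    # Up: infer and sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down: reconstruct the visibles, then re-infer the hiddens.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Positive-phase minus negative-phase statistics.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    return W, b_h, b_v

W = 0.01 * rng.standard_normal((512, 256))
b_h, b_v = np.zeros(256), np.zeros(512)
v0 = (rng.random((10, 512)) < 0.5).astype(float)   # toy binary "data"
W, b_h, b_v = cd1_step(v0, W, b_h, b_v)
```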

19 Multi-Layer RBM: non-linear dimensionality reduction. Input: Gist vector (512 dimensions). Layer 1: 512 → 512 (weights w1). Layer 2: 512 → 256 (weights w2). Layer 3: 256 → N (weights w3). Output: binary code (N dimensions).
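After training, the code for an image is a deterministic forward pass through the stack; the layer sizes follow the slide, while thresholding the top-layer activations at 0.5 to obtain bits is an assumption of this sketch:

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def encode(gist, weights, biases):
    """Forward pass 512 -> 512 -> 256 -> N, thresholding the top layer."""
    x = gist
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return (x > 0.5).astype(np.uint8)

rng = np.random.default_rng(0)
sizes = [512, 512, 256, 32]           # N = 32-bit code for this example
weights = [0.01 * rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
code = encode(rng.standard_normal(512), weights, biases)   # 32 bits
```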

20 Training RBM models: two phases. 1. Pre-training: unsupervised; use Contrastive Divergence to learn the weights and biases; gets the parameters to the right ballpark. 2. Fine-tuning: can make use of label information; units are no longer stochastic; back-propagate error to update the parameters; moves the parameters to a local minimum.

21 Greedy pre-training (unsupervised). Layer 1: input Gist vector (512 dimensions) → 512 hidden units (weights w1).

22 Greedy pre-training (unsupervised). Layer 2: activations of the layer-1 hidden units (512 dimensions) → 256 hidden units (weights w2).

23 Greedy pre-training (unsupervised). Layer 3: activations of the layer-2 hidden units (256 dimensions) → N hidden units (weights w3).

24 Backpropagation using the Neighborhood Components Analysis objective, through the full stack: input Gist vector (512 dimensions) → Layer 1 (512, w1) → Layer 2 (256, w2) → Layer 3 (N, w3) → output binary code (N dimensions).

25 Neighborhood Components Analysis. Goldberger, Roweis, Salakhutdinov & Hinton, NIPS 2004. Labels are defined in the input space (high-dimensional); the points live in the output space (low-dimensional).

26 Neighborhood Components Analysis. Goldberger, Roweis, Salakhutdinov & Hinton, NIPS 2004. The mapped point is the output of the RBM; W are the RBM weights.

27 Neighborhood Components Analysis. Goldberger, Roweis, Salakhutdinov & Hinton, NIPS 2004. Pulls nearby points OF THE SAME CLASS closer; pushes nearby points OF A DIFFERENT CLASS away.

28 Neighborhood Components Analysis. Goldberger, Roweis, Salakhutdinov & Hinton, NIPS 2004. Pulls nearby points OF THE SAME CLASS closer; pushes nearby points OF A DIFFERENT CLASS away. This preserves the neighborhood structure of the original, high-dimensional space.
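For reference, the NCA objective from Goldberger et al. (2004), with f(x; W) denoting the low-dimensional RBM output as on the previous slides: each point i softly selects a neighbor j with probability p_ij, and the expected number of same-class neighbors is maximized over W by backpropagation:

```latex
p_{ij} = \frac{\exp\!\left(-\lVert f(x_i; W) - f(x_j; W)\rVert^2\right)}
              {\sum_{k \neq i} \exp\!\left(-\lVert f(x_i; W) - f(x_k; W)\rVert^2\right)},
\qquad
\mathcal{O}(W) = \sum_i \; \sum_{j \,:\, c_j = c_i} p_{ij}
```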

29 Two test datasets. 1. LabelMe: 22,000 images (20,000 train | 2,000 test); ground-truth segmentations for all, so a ground-truth distance between images can be defined from the segmentations; per-pixel labels [SUPERVISED]. 2. Web data: 79.3 million images collected from the Internet; no labels, so L2 distance between GIST vectors serves as the ground-truth distance [UNSUPERVISED]; noisy image labels only.

30 Retrieval Experiments

31 Examples of LabelMe retrieval 12 closest neighbors under different distance metrics

32 LabelMe Retrieval. [Plot: % of 50 true neighbors in the retrieval set vs. size of the retrieval set (0 to 20,000).]

33 LabelMe Retrieval. [Plots: % of 50 true neighbors in the retrieval set vs. size of the retrieval set (0 to 20,000); % of 50 true neighbors in the first 500 retrieved vs. number of bits.]

34 Web images retrieval. [Plot: % of 50 true neighbors in the retrieval set vs. size of the retrieval set.]

35 Web images retrieval. [Plots: % of 50 true neighbors in the retrieval set vs. size of the retrieval set.]

36 Examples of Web retrieval 12 neighbors using different distance metrics

37 Retrieval Timings

38 LabelMe Recognition examples

39 LabelMe Recognition

40 Overview 1.Fast retrieval using compact codes 2.Recognition using neighbors with unreliable labels

41 Label Assignment. Retrieval gives a set of nearby images; how do we compute a label from them? Issues: labeling noise; keywords can be very specific, e.g. "yellowfin tuna". [Figure: a query image with neighbor keywords such as Grover Cleveland, Linnet, Birdcage, Chiefs, Casing.]

42 Wordnet – a Lexical Dictionary Synonyms/Hypernyms (Ordered by Estimated Frequency) of noun aardvark Sense 1 aardvark, ant bear, anteater, Orycteropus afer => placental, placental mammal, eutherian, eutherian mammal => mammal => vertebrate, craniate => chordate => animal, animate being, beast, brute, creature => organism, being => living thing, animate thing => object, physical object => entity http://wordnet.princeton.edu/

43 Wordnet Hierarchy. Using the same hypernym chain for "aardvark" as above: convert the graph structure into a tree by taking the most common meaning of each word.

44

45 Wordnet Voting Scheme. [Figure: votes compared against ground truth; one image, one vote.]

46 Classification at Multiple Semantic Levels. Votes at a fine level: Animal 6, Person 33, Plant 5, Device 3, Administrative 4, Others 22. Votes at a coarser level: Living 44, Artifact 9, Land 3, Region 7, Others 10.
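The voting scheme above can be sketched as each retrieved neighbor casting one vote at every node on its hypernym path, so counts accumulate at all semantic levels; the paths below are illustrative toys, not real WordNet data:

```python
from collections import Counter

# Toy hypernym paths (leaf -> ... -> root); real paths come from WordNet.
paths = {
    "person": ["person", "living", "entity"],
    "dog":    ["dog", "animal", "living", "entity"],
    "car":    ["car", "artifact", "entity"],
}

def vote(neighbor_labels):
    """One image, one vote at each level of its hypernym path."""
    votes = Counter()
    for label in neighbor_labels:
        for node in paths.get(label, [label]):
            votes[node] += 1
    return votes

v = vote(["person", "person", "dog", "car"])
print(v["living"], v["entity"])  # 3 4
```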

47 Wordnet Voting Scheme

48 Person Recognition. 23% of all images in the dataset contain people, in a wide range of poses: not just frontal faces.

49 Person Recognition – Test Set. 1,016 images from Altavista using the "person" query; high-resolution and 32x32 versions available; disjoint from the 79 million tiny images.

50 Person Recognition. Task: is there a person in the image or not? 64-bit RBM code trained on web data (weak image labels only), compared against Viola-Jones.

51 Object Classification Hand-designed metric

52 Distribution of objects in the world. [Plot: LabelMe statistics, number of labeled samples vs. object rank, log-log axes from 1 to 10,000.] 10% of the classes account for 93% of the labels! 1,500 classes have fewer than 100 samples. LabelMe dataset.

53 Conclusions. It is possible to build compact codes for retrieval: the number of bits seems to depend on the dataset, and there is much room for improvement (e.g. use JPEG coefficients as input to the RBM). We can do interesting things with lots of data: what would happen with Google's ~2 billion images? Video data.

54




