
1 Watch Listen & Learn: Co-training on Captioned Images and Videos
Sonal Gupta, Joohyun Kim, Kristen Grauman and Raymond Mooney

2 Their endeavour: Recognize scene categories in natural images that have captions. Recognize human actions in sports videos accompanied by commentary.

3 How do they go about it?
Their model learns to classify images and videos from labelled and unlabelled multi-modal examples, a semi-supervised approach.
It uses the image or video content together with its textual annotation (captions or commentary) to learn scene and action categories.

4 Features
Visual features: static image features, motion descriptors from videos.
Textual features.

5 Histogram of Oriented Gradients (HOG)
Divide the image into small connected regions, called cells.
For each cell, compile a histogram of gradient directions or edge orientations for the pixels within the cell.
The combination of these histograms forms the descriptor.
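The slides do not say which HOG implementation was used; as an illustration, here is a minimal sketch using scikit-image's hog (the test image and parameters are placeholder assumptions):

```python
# Minimal HOG sketch with scikit-image; image and parameters are placeholders,
# not the setup from the paper.
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale image

# Divide the image into cells, build a gradient-orientation histogram per cell,
# block-normalise, and concatenate the histograms into one descriptor.
descriptor = hog(image,
                 orientations=9,           # orientation-histogram bins
                 pixels_per_cell=(8, 8),   # cell size
                 cells_per_block=(2, 2),   # cells per normalisation block
                 feature_vector=True)
print(descriptor.shape)                    # one flat HOG descriptor for the image
```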

6 Gabor Filter
A sinusoidal carrier modulated by a Gaussian envelope; used for texture detection.
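A small sketch of Gabor-based texture measurement with scikit-image (the frequency, orientations, and test image are assumptions made for illustration):

```python
# Gabor texture sketch with scikit-image; frequency and orientations are assumptions.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gabor

image = img_as_float(data.camera())        # grayscale test image

# A Gabor filter is a sinusoidal carrier under a Gaussian envelope; the filter's
# response energy at several orientations characterises local texture.
energies = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    energies.append(float(np.sqrt(real ** 2 + imag ** 2).mean()))
print(energies)                            # one texture statistic per orientation
```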

7 LAB color space
L is lightness; a and b are color-opponent dimensions.
It covers all perceivable colors (a gamut exceeding RGB and CMYK), is device independent, and is designed to approximate human vision.
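For illustration, converting an RGB image to Lab with scikit-image (the test image is a placeholder):

```python
# RGB -> Lab conversion sketch with scikit-image.
from skimage import data, color

rgb = data.astronaut()                 # uint8 RGB image
lab = color.rgb2lab(rgb)               # float array of shape (H, W, 3)

L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
print(L.min(), L.max())                # lightness, roughly in [0, 100]
print(a.mean(), b.mean())              # colour-opponent channels
```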

8 Static Image Features
Each image is divided into a 4x6 grid of regions.
Texture features are computed for each region using Gabor filters.
The mean, standard deviation, and skewness of the per-channel RGB and Lab color pixel values are recorded.
This yields a 30-dimensional feature vector for each of the 24 regions.
The region vectors are clustered with k-means, and each region of each image is assigned to the closest cluster centroid.
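A simplified sketch of the per-region statistics and k-means step. The Gabor texture part is omitted, so the vectors here are 9-dimensional rather than 30; the cluster count and test images are assumptions:

```python
# Per-region colour statistics + k-means sketch; texture features omitted and the
# vocabulary size is an assumption, so this is not the full 30-dimensional setup.
import numpy as np
from scipy.stats import skew
from sklearn.cluster import KMeans
from skimage import data, color

def region_features(image_lab, rows=4, cols=6):
    """Split an image into a rows x cols grid; return one stats vector per region."""
    h, w, _ = image_lab.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            patch = image_lab[i * h // rows:(i + 1) * h // rows,
                              j * w // cols:(j + 1) * w // cols]
            px = patch.reshape(-1, 3)
            # Mean, std deviation and skewness per colour channel (9 values);
            # Gabor texture features would be appended here to reach 30 dimensions.
            feats.append(np.concatenate([px.mean(0), px.std(0), skew(px, axis=0)]))
    return np.array(feats)                                        # (24, 9) per image

images = [color.rgb2lab(data.astronaut()), color.rgb2lab(data.coffee())]
all_regions = np.vstack([region_features(im) for im in images])   # N regions x features

kmeans = KMeans(n_clusters=25, n_init=10, random_state=0).fit(all_regions)
region_words = kmeans.predict(region_features(images[0]))         # closest centroid per region
print(region_words)
```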

9 [Diagram: the region features form an N x 30 matrix (N regions, 30 dimensions each)]

10 Bag of visual words
The final bag of visual words for each image is a vector of k values: the i-th element is the number of regions in the image that belong to the i-th cluster.
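A minimal sketch of building this vector from the regions' cluster assignments (k and the assignments below are made-up examples):

```python
# Bag-of-visual-words sketch; vocabulary size and assignments are made-up examples.
import numpy as np

k = 25                                            # vocabulary size (assumed)
region_words = np.array([3, 3, 7, 0, 3, 12, 7])   # cluster index of each region in one image

bag = np.bincount(region_words, minlength=k)      # i-th entry = #regions in cluster i
print(bag)                                        # the image's k-dimensional representation
```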

11 Motion descriptors
Laptev spatio-temporal motion descriptors.
At each feature point, the patch is divided into 3x3x2 spatio-temporal blocks.
A 4-bin HOG descriptor is calculated for each block, giving a 72-element feature vector.
Motion descriptors from all videos in the training set are clustered to form a vocabulary.
A video clip is then represented as a histogram over this vocabulary.
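A shape-only sketch of how one 72-element descriptor is assembled (the histogram values are placeholders, not real gradient data):

```python
# Shape-only sketch of one Laptev-style descriptor; the values are placeholders.
import numpy as np

# One 4-bin gradient-orientation histogram per 3 x 3 x 2 spatio-temporal block.
per_block_hists = np.random.rand(3, 3, 2, 4)
descriptor = per_block_hists.reshape(-1)      # 3 * 3 * 2 * 4 = 72-element feature vector
assert descriptor.shape == (72,)

# After clustering all training descriptors into a vocabulary, a clip is represented
# as a histogram of its descriptors' nearest cluster indices, just like the image
# bag of visual words above.
```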

12 [Diagram: the motion descriptors form an N x 72 matrix (N descriptors, 72 dimensions each)]

13 Textual features
Image captions or transcribed video commentary.
Preprocessing: remove stop words and stem the remaining words; the frequencies of the resulting word stems form the feature set.
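An illustrative preprocessing sketch with NLTK (the stop-word list, stemmer choice, and example caption are assumptions; NLTK's stopword corpus must be downloaded once):

```python
# Text preprocessing sketch with NLTK; run nltk.download('stopwords') once beforehand.
from collections import Counter
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

caption = "Bedouin leads his donkey that carries a load of straw"   # example caption
stop = set(stopwords.words("english"))
stemmer = PorterStemmer()

stems = [stemmer.stem(w) for w in caption.lower().split() if w not in stop]
features = Counter(stems)     # word-stem frequencies form the textual feature vector
print(features)
```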

14 Co-training
What is it? A semi-supervised learning paradigm that exploits two mutually independent views of the data.
The independent views in this case are the text view and the visual view, each with its own classifier (a text classifier and a visual classifier).

15 [Diagram: the initially labelled instances, each with a text view and a visual view (+/-), alongside the text classifier and the visual classifier]

16 Supervised learning
The classifiers learn from the labelled instances. [Diagram: the text classifier and the visual classifier trained on the text and visual views of the initially labelled instances]

17 Co-train
The classifiers trained in the previous step are used to label the unlabelled instances. [Diagram: the text and visual classifiers applied to the text and visual views of the unlabelled instances]

18 Supervised learning
Each classifier classifies the unlabelled instances and its most confident predictions are selected. [Diagram: the text and visual views of the partially labelled instances]

19 Supervised learning
All views of the selected instances are labelled. [Diagram: classifier-labelled instances, each with a text view and a visual view (+/-)]

20 Retrain Classifier
The classifiers are retrained using the new labels. [Diagram: the text and visual classifiers retrained on the text and visual views (+/-)]

21 Classify a new instance
[Diagram: new instances, each with a text view and a visual view, classified by the trained classifiers]

22 Ready for co-training
System input: a set of labelled and unlabelled examples, each with two sets of features (one for each view).
System output: two classifiers whose predictions can be combined to classify new test instances.
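Pulling slides 14-22 together, here is a minimal sketch of the co-training loop. scikit-learn SVMs stand in for the Weka SMO classifier used in the paper, the thresholds and batch size follow slide 29, and the function and argument names are illustrative (y is a NumPy label array with placeholder values for the unlabelled entries):

```python
# Minimal co-training loop sketch; scikit-learn SVMs stand in for Weka's SMO, and
# thresholds/batch size follow slide 29. Names and defaults are illustrative.
import numpy as np
from sklearn.svm import SVC

def co_train(X_text, X_vis, y, labeled_idx, unlabeled_idx,
             thr_text=0.98, thr_vis=0.65, batch=5, max_iters=20):
    labeled, unlabeled = list(labeled_idx), list(unlabeled_idx)
    labels = y.copy()
    clf_text = clf_vis = None
    for _ in range(max_iters):
        # Train one classifier per view on the currently labelled pool.
        clf_text = SVC(kernel="rbf", gamma=0.01, probability=True).fit(
            X_text[labeled], labels[labeled])
        clf_vis = SVC(kernel="rbf", gamma=0.01, probability=True).fit(
            X_vis[labeled], labels[labeled])
        if not unlabeled:
            break
        # Each classifier labels the unlabelled pool; keep its most confident
        # predictions (up to `batch` of them) that clear the view's threshold.
        newly = {}
        for clf, X, thr in ((clf_text, X_text, thr_text), (clf_vis, X_vis, thr_vis)):
            proba = clf.predict_proba(X[unlabeled])
            conf = proba.max(axis=1)
            for pos in np.argsort(conf)[::-1][:batch]:
                if conf[pos] >= thr:
                    newly[unlabeled[pos]] = clf.classes_[proba[pos].argmax()]
        if not newly:
            break
        # The confident label is applied to the instance (both views), which moves
        # from the unlabelled to the labelled pool; the classifiers then retrain.
        for idx, lab in newly.items():
            labels[idx] = lab
        labeled += list(newly)
        unlabeled = [i for i in unlabeled if i not in newly]
    return clf_text, clf_vis
```

Each iteration trains one classifier per view, lets each classifier label its most confident unlabelled examples, applies those labels to both views, and retrains.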

23 Early and Late Fusion: Early Fusion
[Diagram: the visual and textual features are concatenated into a fused vector, a single classifier is trained on it, and the combined result is used for labelling test instances]
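A minimal early-fusion sketch (placeholder feature matrices and labels; the SVM parameters mirror slide 29):

```python
# Early fusion sketch: concatenate the two views and train a single classifier.
import numpy as np
from sklearn.svm import SVC

X_visual = np.random.rand(100, 30)          # placeholder visual features
X_text = np.random.rand(100, 50)            # placeholder textual features
y = np.random.randint(0, 2, 100)            # placeholder labels

X_fused = np.hstack([X_visual, X_text])     # one fused vector per instance
clf = SVC(kernel="rbf", gamma=0.01).fit(X_fused, y)
print(clf.predict(X_fused[:5]))             # the combined result labels test instances
```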

24 Early and Late Fusion: Late Fusion
[Diagram: a visual classifier is trained on the visual features and a text classifier on the textual features; their combined result is used for labelling test instances]
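A minimal late-fusion sketch with the same kind of placeholder data; averaging the two posteriors is one simple combination rule, chosen here for illustration (the slides do not specify the rule):

```python
# Late fusion sketch: one classifier per view, then combine their class probabilities.
import numpy as np
from sklearn.svm import SVC

X_visual = np.random.rand(100, 30)          # placeholder visual features
X_text = np.random.rand(100, 50)            # placeholder textual features
y = np.random.randint(0, 2, 100)            # placeholder labels

clf_vis = SVC(kernel="rbf", gamma=0.01, probability=True).fit(X_visual, y)
clf_text = SVC(kernel="rbf", gamma=0.01, probability=True).fit(X_text, y)

# Average the two posteriors (an assumed combination rule, not necessarily the paper's).
proba = (clf_vis.predict_proba(X_visual) + clf_text.predict_proba(X_text)) / 2
labels = clf_vis.classes_[proba.argmax(axis=1)]
print(labels[:5])
```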

25 Semi-supervised EM and Transductive SVMs
Semi-supervised Expectation Maximization:
Learns a probabilistic classifier from the labelled training data, then performs EM iterations.
E-step: uses the currently trained classifier to probabilistically label the unlabelled training examples.
M-step: retrains the classifier on the union of the labelled data and the probabilistically labelled examples.
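A sketch of this EM loop with a Naive Bayes base classifier (an illustrative choice; the slide does not name the probabilistic classifier, and confidence-weighted hard labels are used here as a simplification of full soft EM):

```python
# Semi-supervised EM sketch with a Gaussian Naive Bayes base classifier; the base
# classifier choice and confidence-weighted hard labels are simplifying assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def semi_supervised_em(X_lab, y_lab, X_unlab, iters=10):
    clf = GaussianNB().fit(X_lab, y_lab)              # initial classifier: labelled data only
    for _ in range(iters):
        # E-step: probabilistically label the unlabelled examples.
        proba = clf.predict_proba(X_unlab)
        soft_labels = clf.classes_[proba.argmax(axis=1)]
        weights = proba.max(axis=1)                   # confidence of each soft label
        # M-step: retrain on the union of labelled data and soft-labelled data.
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, soft_labels])
        w_all = np.concatenate([np.ones(len(y_lab)), weights])
        clf = GaussianNB().fit(X_all, y_all, sample_weight=w_all)
    return clf
```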

26 Transductive SVMs
A method for improving the generalization accuracy of SVMs by using unlabelled data.
It finds the labelling of the test examples that results in the maximum-margin hyperplane separating the positive and negative examples of both the training and the test data.
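One standard way to write this objective (a Joachims-style formulation; the notation is ours, not from the slides), where the labelled training pairs are $(\mathbf{x}_i, y_i)$ and the unknown test labels $y^*_j$ are themselves optimisation variables:

$$
\min_{\mathbf{w},\, b,\; y^*_1,\dots,y^*_k,\;\boldsymbol{\xi},\,\boldsymbol{\xi}^*}\;
\tfrac{1}{2}\lVert\mathbf{w}\rVert^2 + C\sum_{i=1}^{n}\xi_i + C^*\sum_{j=1}^{k}\xi^*_j
$$

subject to $y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i$ for the training examples, $y^*_j(\mathbf{w}\cdot\mathbf{x}^*_j + b) \ge 1 - \xi^*_j$ for the test examples, and $\xi_i, \xi^*_j \ge 0$.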

27 Transductive SVMs (contd.)
Transductive SVMs are designed to improve performance on the test data by exploiting its availability during training.
They can also be used directly in the semi-supervised setting, where the unlabelled data available during training comes from the same distribution as the test data.

28 Methodology
For co-training, a Support Vector Machine is the base classifier for both the image and the text views.
The Weka implementation of sequential minimal optimization (SMO) is used; SMO is an algorithm for efficiently solving the optimization problem that arises when training SVMs.

29 Methodology (continued)
Parameters used in SMO: RBF kernel (γ = 0.01); batch size: 5.
Confidence thresholds:
                     Static images   Videos
  Image/Video view   0.65            0.6
  Text view          0.98            0.9
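A rough scikit-learn stand-in for this configuration (the paper uses Weka's SMO; the mapping below is an assumption made for illustration):

```python
# Rough scikit-learn equivalent of the slide's SMO setup; an assumption, not the
# authors' actual Weka configuration.
from sklearn.svm import SVC

def base_classifier():
    # RBF kernel with gamma = 0.01; probability estimates are enabled so the
    # co-training loop can compare prediction confidences against the thresholds.
    return SVC(kernel="rbf", gamma=0.01, probability=True)

BATCH_SIZE = 5    # most-confident examples moved per co-training iteration
CONFIDENCE_THRESHOLDS = {
    "static_images": {"image_view": 0.65, "text_view": 0.98},
    "videos":        {"video_view": 0.60, "text_view": 0.90},
}
```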

30 Methodology (continued)
Ten iterations of ten-fold cross validation are performed to obtain smoother, more reliable results.
The test set is disjoint from both the labelled and the unlabelled training data.
Learning curves are used to evaluate accuracy.

31 Learning Curves
A learning curve shows, at a glance, the initial difficulty of learning something and, to an extent, how much there is to learn after initial familiarity.
The curves are generated by labelling some fraction of the training data at each point and using the remainder as unlabelled training data.
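A sketch of generating such a curve: at each point a fraction of each training fold is treated as labelled and a purely supervised SVM baseline is scored (the semi-supervised learners would additionally use the remaining, unlabelled fraction; the function name and fractions are assumptions):

```python
# Learning-curve sketch: vary the labelled fraction of each training fold and record
# ten-fold cross-validated accuracy of a supervised SVM baseline.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def learning_curve_points(X, y, fractions=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0),
                          folds=10, seed=0):
    rng = np.random.default_rng(seed)
    points = []
    for frac in fractions:
        accs = []
        cv = StratifiedKFold(folds, shuffle=True, random_state=seed)
        for train_idx, test_idx in cv.split(X, y):
            # Label `frac` of the training fold (sampled per class so every class
            # appears); the rest would be the unlabelled pool for co-training/EM/TSVM.
            lab = np.concatenate([
                rng.choice(train_idx[y[train_idx] == c],
                           size=max(1, int(frac * np.sum(y[train_idx] == c))),
                           replace=False)
                for c in np.unique(y)
            ])
            clf = SVC(kernel="rbf", gamma=0.01).fit(X[lab], y[lab])
            accs.append(clf.score(X[test_idx], y[test_idx]))
        points.append((frac, float(np.mean(accs))))
    return points   # (labelled fraction, mean accuracy) pairs to plot
```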

32 Results: classifying captioned static images
Image dataset: the Israel dataset; images have short text captions.
Two classes: 'Desert' and 'Trees'.

33 Examples of images
Desert: 'Ibex in Judean Desert', 'Bedouin Leads His Donkey That Carries Load Of Straw'
Trees: 'Ibex Eating In The Nature', 'Entrance To Mikveh Israel Agricultural School'

34 Co-training Vs Supervised Classifiers

35 Co-training Vs. Semi-Supervised EM
Co-training Vs. Transductive SVM

36 Results (contd.): recognizing actions in commented videos
Video clips of soccer and ice skating, resized to 240x360 resolution and divided manually into short clips of 20 to 120 frames.
Four categories: kicking, dribbling, spinning, dancing.

37 Examples of Videos

38 Examples of Videos(contd.)

39 Co-training Vs. supervised learning on commented video dataset

40 Co-training Vs. supervised learning when text commentary is not available

41 Limitations of the approach
The dataset used is small and requires only binary classification.
Only images that have explicit captions are used.

42 QUESTIONS ?????

43 THANK YOU Joydeep Sinha Anuhya Koripella Akshitha Muthireddy

