Presentation on theme: "Ray Mooney Department of Computer Science"— Presentation transcript:
1 Generating Natural-Language Video Descriptions Using Text-Mined Knowledge Ray Mooney, Department of Computer Science, University of Texas at Austin. Joint work with Niveda Krishnamoorthy, Girish Malkarmenkar, Tanvi Motwani, Kate Saenko, and Sergio Guadarrama.
2 Integrating Language and Vision Integrating natural language processing and computer vision is an important aspect of language grounding and has many applications. See the NIPS-2011 Workshop on Integrating Language and Vision, the NAACL-2013 Workshop on Vision and Language, and the CVPR-2013 Workshop on Language for Vision.
3 Video Description Dataset (Chen & Dolan, ACL 2011) 2,089 YouTube videos with 122K multi-lingual descriptions. Originally collected for paraphrase and machine-translation examples. Available at:
5 Sample M-Turk Human Descriptions (average ~50 per video)
A MAN PLAYING WITH TWO DOGS
A man takes a walk in a field with his dogs.
A man training the dogs in a big field.
A person is walking his dogs.
A woman is walking her dogs.
A woman is walking with dogs in a field.
A woman is walking with four dogs outside.
A woman walks across a field with several dogs.
All dogs are going along with the woman.
dogs are playing
Dogs follow a man.
Several dogs follow a person.
some dog playing each other
Someone walking in a field with dogs.
very cute dogs
A MAN IS GOING WITH A DOG.
four dogs are walking with woman in field
the man and dogs walking the forest
Dogs are Walking with a Man.
The woman is walking her dogs.
A person is walking some dogs.
A man walks with his dogs in the field.
A man is walking dogs.
a dogs are running
A guy is training his dogs
A man is walking with dogs.
a men and some dog are running
A men walking with dogs.
A person is walking with dogs.
A woman is walking her dogs.
Somebody walking with his/her pets.
the man is playing with the dogs.
A guy training his dogs.
A lady is roaming in the field with his dogs.
A lady playing with her dogs.
A man and 4 dogs are walking through a field.
A man in a field playing with dogs.
A man is playing with dogs.
6 Our Video Description Task Generate a short, declarative sentence describing a video in this corpus. First generate a subject (S), verb (V), object (O) triplet for describing the video: <cat, play, ball>. Next generate a grammatical sentence from this triplet: "A cat is playing with a ball."
7 This can be done by correctly identifying the subject, the verb, and the object in the video: SUBJECT = person, VERB = ride, OBJECT = motorbike. "A person is riding a motorbike."
8 OBJECT DETECTIONS We start by running object detectors on each video frame: motorbike 0.51, person 0.42, car 0.29, train 0.17, dog 0.15, cow 0.11, table 0.07, aeroplane 0.05.
9 SORTED OBJECT DETECTIONS These detections are then sorted by their confidence scores: motorbike 0.51, person 0.42, car 0.29, …, aeroplane 0.05.
10 VERB DETECTIONS Similarly, we run verb detectors trained on the spatio-temporal interest points in the video. These points, indicated by yellow circles, show us where interesting events occur: move 0.34, hold 0.23, ride 0.19, climb 0.17, slice 0.13, drink 0.11, shoot 0.07, dance 0.05.
11 SORTED VERB DETECTIONS The verb detections are also sorted by their confidence scores: move 0.34, hold 0.23, ride 0.19, …, dance 0.05.
12 SORTED OBJECT AND VERB DETECTIONS Now that we have the sorted object detections (motorbike 0.51, person 0.42, car 0.29, …, aeroplane 0.05) and the sorted verb detections (move 0.34, hold 0.23, ride 0.19, …, dance 0.05), …
13 EXPAND VERBS We expand each verb with its most similar verbs to generate a larger set of potential words for describing the action, e.g. move 1.0, walk 0.8, pass 0.8, ride 0.8.
17 Web-Scale Text Corpora GigaWord, BNC, ukWaC, WaCkypedia, Google Ngrams. Since vision detections can be noisy, especially when dealing with challenging YouTube videos, we need to incorporate real-world knowledge that reflects the likelihood of various entities being the subject or object of a given activity. This can be learned from web-scale text corpora by mining subject-verb-object triplets via dependency parses. For example, "A man rides a horse" parses as det(man-2, A-1), nsubj(rides-3, man-2), root(ROOT-0, rides-3), det(horse-5, a-4), dobj(rides-3, horse-5), yielding the triplet <person, ride, horse>.
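The triplet-mining step above can be sketched in a few lines. This is a minimal illustration that consumes Stanford-style typed-dependency strings like those on the slide; the `extract_svo` helper and the hard-coded parse are illustrative stand-ins, not the system's actual mining code (which parses millions of corpus sentences).

```python
# Minimal sketch: pull an SVO triplet out of Stanford-style typed
# dependencies. The parse tuples are assumed inputs, not produced
# by a real parser here.
import re

def extract_svo(dependencies):
    """dependencies: strings like 'nsubj(rides-3, man-2)'."""
    pat = re.compile(r"(\w+)\((\S+)-\d+, (\S+)-\d+\)")
    subj = verb = obj = None
    for dep in dependencies:
        m = pat.match(dep)
        if not m:
            continue
        rel, head, child = m.groups()
        if rel == "nsubj":          # head verb and its subject
            verb, subj = head, child
        elif rel == "dobj":         # direct object of the verb
            obj = child
    return (subj, verb, obj) if subj and verb and obj else None

parse = [
    "det(man-2, A-1)",
    "nsubj(rides-3, man-2)",
    "root(ROOT-0, rides-3)",
    "det(horse-5, a-4)",
    "dobj(rides-3, horse-5)",
]
print(extract_svo(parse))  # ('man', 'rides', 'horse')
```

Lemmatizing and mapping "man" to the category "person" (as on the slide) would be an additional normalization step.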
18 We then train a language model on these subject-verb-object triplets (<person, ride, horse>, <person, walk, dog>, <person, hit, ball>, …) using Kneser-Ney smoothing. A language model assigns a probability reflecting how likely a person would be to say such a sentence, so this SVO language model can rank triplets by their real-world likelihood.
19 We also train a regular language model on the Google Ngram corpus.
20 The best triplet can now be selected by combining the vision detection scores with the score from the SVO language model. CONTENT PLANNING: <person, ride, motorbike>.
21 In the final step, we generate a set of candidate descriptions using a template-based approach on the subject-verb-object triplet, and the regular language model picks the best description for the video from these candidates. CONTENT PLANNING: <person, ride, motorbike>. SURFACE REALIZATION: "A person is riding a motorbike."
22 Object Detection Used Felzenszwalb et al.'s (2008) pretrained deformable part models, covering 20 PASCAL VOC object categories: aeroplanes, bicycles, birds, boats, bottles, buses, cars, cats, chairs, cows, dining tables, dogs, horses, motorbikes, people, potted plants, sheep, sofas, trains, and TV/monitors.
23 Activity Detection Process Parse video descriptions to find the majority verb stem describing each training video. Automatically create activity classes from the video training corpus by clustering these verbs. Train a supervised activity classifier to recognize the discovered activity classes.
24 Automatically Discovering Activity Classes Video clips paired with NL descriptions, e.g.:
- A puppy is playing in a tub of water. / A dog is playing with water in a small tub. / A dog is sitting in a basin of water and playing with the water. / A dog sits and plays in a tub of water.
- A girl is dancing. / A young woman is dancing ritualistically. / Indian women are dancing in traditional costumes. / Indian women dancing for a crowd. / The ladies are dancing outside.
- A man is cutting a piece of paper in half lengthwise using scissors. / A man cuts a piece of paper. / A man is cutting a piece of paper. / A man is cutting a paper by scissor. / A guy cuts paper. / A person doing something.
The ~314 verb labels (play, throw, hit, dance, jump, cut, chop, slice, …) are grouped by hierarchical clustering into activity classes such as {cut, chop, slice}, {throw, hit}, and {dance, jump}.
25 Automatically Discovering Activity Classes Hierarchical agglomerative clustering using the "res" (Resnik) metric from WordNet::Similarity (Pedersen et al.). We cut the resulting hierarchy to obtain 58 activity clusters.
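The cluster-discovery step above can be sketched with SciPy's agglomerative clustering. Everything here is illustrative: the verb list is tiny, the similarity matrix is a hand-made stand-in for WordNet "res" similarities, and the cut threshold is arbitrary rather than the one used in the talk.

```python
# Sketch: agglomerative clustering over verbs, cutting the hierarchy
# at a fixed distance threshold to yield activity clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

verbs = ["cut", "chop", "slice", "dance", "jump", "throw", "hit"]
# Hypothetical pairwise similarities (1.0 = identical meaning).
sim = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.1, 0.2, 0.2],
    [0.9, 1.0, 0.8, 0.1, 0.1, 0.2, 0.2],
    [0.8, 0.8, 1.0, 0.1, 0.1, 0.2, 0.2],
    [0.1, 0.1, 0.1, 1.0, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.7, 1.0, 0.1, 0.1],
    [0.2, 0.2, 0.2, 0.1, 0.1, 1.0, 0.8],
    [0.2, 0.2, 0.2, 0.1, 0.1, 0.8, 1.0],
])
dist = 1.0 - sim                                   # similarity -> distance
Z = linkage(squareform(dist), method="average")    # build the hierarchy
labels = fcluster(Z, t=0.5, criterion="distance")  # cut at threshold 0.5

clusters = {}
for verb, lab in zip(verbs, labels):
    clusters.setdefault(lab, []).append(verb)
print(list(clusters.values()))
```

With these made-up similarities the cut yields three clusters: {cut, chop, slice}, {dance, jump}, and {throw, hit}.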
26 Creating Labeled Activity Data Each training video's descriptions are mapped to a discovered activity cluster, e.g.:
- "A girl is dancing." / "A young woman is dancing ritualistically." → {dance, jump}
- "A man is cutting a piece of paper in half lengthwise using scissors." / "A man cuts a piece of paper." → {cut, chop, slice}
- "A woman is riding horse on a trail." / "A woman is riding on a horse." → {ride, walk, run, move, race}
- "A group of young girls are dancing on stage." / "A group of girls perform a dance onstage." → {dance, jump}
- "A woman is riding a horse on the beach." / "A woman is riding a horse." → {ride, walk, run, move, race}
Other discovered clusters include {climb, fly}, {play}, and {throw, hit}.
27 Supervised Activity Recognition Extract video features at Spatio-Temporal Interest Points (STIPs) (Laptev et al., CVPR-2008): Histograms of Oriented Gradients (HoG) and Histograms of Optical Flow (HoF). Use the extracted features to train a Support Vector Machine (SVM) to classify videos.
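The training step above can be sketched with scikit-learn. The feature vectors below are random stand-ins for real HoG/HoF histograms, and the codebook size and class count are made-up; only the overall shape (histograms in, cluster labels out, SVM in between) follows the slide.

```python
# Sketch: an SVM activity classifier over bag-of-words histograms of
# STIP descriptors, with random data standing in for real videos.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_videos, n_bins = 60, 100             # e.g. a 100-word descriptor codebook
X = rng.random((n_videos, n_bins))     # one histogram per training video
y = rng.integers(0, 3, size=n_videos)  # activity-cluster labels (3 classes)

clf = SVC(kernel="rbf", probability=True)  # probabilities enable later fusion
clf.fit(X, y)

test_hist = rng.random((1, n_bins))
print(clf.predict(test_hist), clf.predict_proba(test_hist))
```

Enabling `probability=True` matters for the pipeline: the later content-planning step combines activity confidences with language-model scores, which needs calibrated per-class probabilities rather than hard labels.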
28 Activity Recognizer Using Video Features An SVM is trained on STIP features and activity-cluster labels. Training video → STIP features; NL descriptions ("A woman is riding horse in a beach.", "A woman is riding on a horse.") → discovered activity label {ride, walk, run, move, race}.
29 Selecting SVO Just Using Vision (Baseline) Top object detection from vision = SubjectNext highest object detection = ObjectTop activity detection = Verb
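The vision-only baseline above is simple enough to state directly in code; the detection scores below are the example numbers from the earlier slides, used only for illustration.

```python
# The vision-only baseline: top two object detections become subject
# and object; the top activity detection becomes the verb.
def baseline_svo(object_scores, verb_scores):
    objs = sorted(object_scores, key=object_scores.get, reverse=True)
    verb = max(verb_scores, key=verb_scores.get)
    return (objs[0], verb, objs[1])

objects = {"motorbike": 0.51, "person": 0.42, "car": 0.29}
verbs = {"move": 0.34, "hold": 0.23, "ride": 0.19}
print(baseline_svo(objects, verbs))  # ('motorbike', 'move', 'person')
```

Note how the baseline happily produces the implausible triplet (motorbike, move, person), which is exactly the failure mode the text-mined SVO model is meant to fix.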
31 Evaluating SVO Triples A ground-truth SVO for a test video is determined by picking the most common S, V, and O used to describe this video (as determined by dependency parsing).Predicted S, V, and O are compared to ground-truth using two metrics:Binary: 1 or 0 for exact match or notWUP: Compare predicted word to ground truth using WUP semantic word similarity score from WordNet Similarity (0≤WUP≤1)
32 Experiment Design Selected 235 potential test videos that contain VOC objects, based on object names (or synonyms) appearing in their descriptions. Used the remaining 1,735 videos to discover activity clusters, keeping clusters with at least 9 videos. Kept training and test videos whose verb is in the 58 discovered clusters: 1,596 training videos and 185 test videos.
34 Vision Detections Are Faulty! Top object detections: motorbike 0.67, person 0.56, dog 0.11. Top activity detections: go_run_bowl_move 0.41, ride 0.32, lift 0.23. Vision triplet: (motorbike, go_run_bowl_move, person).
35 Using Text Mining to Determine SVO Plausibility Build a probabilistic model to predict the real-world likelihood of a given SVO: P(person, ride, motorbike) > P(motorbike, run, person). Run the Stanford dependency parser on a large text corpus and extract the S, V, and O of each sentence. Train a trigram language model on this SVO data, using Kneser-Ney smoothing to back off to SV and VO bigrams.
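The trigram SVO model above can be sketched with NLTK's language-model module. The six hand-made triplets below stand in for the millions mined from parsed corpora, so the probabilities are toy numbers; only the modeling recipe (Kneser-Ney-smoothed trigrams over S-V-O sequences) follows the slide.

```python
# Toy sketch: a Kneser-Ney trigram model trained on S-V-O triplets.
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

triplets = (
    [["person", "ride", "horse"]] * 3
    + [["person", "walk", "dog"]] * 2
    + [["person", "hit", "ball"]]
)
train, vocab = padded_everygram_pipeline(3, triplets)
lm = KneserNeyInterpolated(3)
lm.fit(train, vocab)

# A seen continuation outscores an unseen but in-vocabulary one.
print(lm.logscore("horse", ["person", "ride"]))
print(lm.logscore("ball", ["person", "ride"]))
```

Because smoothing backs off toward lower-order statistics, the unseen triplet <person, ride, ball> still gets a non-zero score, which is exactly what lets the model rank noisy vision hypotheses it has never observed verbatim.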
36 Text Corpora
- British National Corpus (BNC): 100M words
- GigaWord: 1B words
- ukWaC: 2B words
- WaCkypedia_EN: 800M words
- Google Ngrams: 10^12 words
Stanford dependency parses from the first four corpora were used to build the SVO language model. The full language model used for surface realization was trained on Google Ngrams using BerkeleyLM.
37 SVO Language Model Candidate triplets (<person, park, bat>, <person, ride, motorcycle>, <person, walk, dog>, <car, move, bag>, <car, move, motorcycle>, <person, hit, ball>) ranked by SVO language-model log probability: person hit ball -1.17; person ride motorcycle -1.30; person walk dog -2.18; person park bat -4.76; car move bag -5.47; car move motorcycle -5.52.
38 Verb ExpansionGiven the poor performance of activity recognition, it is helpful to “expand” the set of verbs considered beyond those actually in the predicted activity clusters.We also consider all verbs with a high WordNet WUP similarity (>0.5) to a word in the predicted clusters.
39 Sample Verb Expansion Using WUP Expansions of "move": go 1.0, walk 0.8, pass 0.8, follow 0.8, fly 0.8, fall 0.8, come 0.8, ride 0.8, run 0.67, chase 0.67, approach 0.67.
40 Integrated Scoring of SVOs Consider the top n=5 detected objects and the top k=10 verb detections (plus their verb expansions) for a given test video.Construct all possible SVO triples from these nouns and verbs.Pick the best overall SVO using a metric that combines evidence from both vision and language.
41 Combining SVO Scores Linearly interpolate the vision and language-model scores. The SVO vision score is computed assuming independence of the components and taking into account the similarity of the expanded verbs.
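The interpolation can be sketched as follows. The weight and the example numbers are illustrative only (the vision confidences echo the earlier faulty-detection slide and the log probabilities echo the SVO-LM slide); the actual weighting used in the paper is not reproduced here.

```python
# Sketch: a linear interpolation of the (log) vision confidence and
# the SVO language-model log probability.
import math

def svo_score(vision_conf, lm_logprob, w=0.5):
    """w weights vision evidence; (1 - w) weights the text-mined prior."""
    return w * math.log(vision_conf) + (1 - w) * lm_logprob

# Vision alone prefers an implausible triplet; the language model
# strongly prefers the plausible one, flipping the combined ranking.
print(svo_score(0.41, -5.52))  # e.g. (motorbike, move, person)
print(svo_score(0.12, -1.30))  # e.g. (person, ride, motorbike)
```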
45 Surface Realization: Template + Language Model Input: the best SVO triplet from the content-planning stage, plus the best-fitting preposition connecting the verb and object (mined from text corpora). Template: Determiner + Subject + (conjugated Verb) + Preposition (optional) + Determiner + Object. Generate all sentences fitting this template and rank them using a language model trained on Google Ngrams.
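The template expansion and ranking can be sketched as below. The determiner and preposition lists are illustrative, the verb is passed in already conjugated, and `toy_lm_score` is a stand-in for querying the Google-Ngram language model.

```python
# Sketch: expand an SVO triplet into template sentences and pick the
# best one with a (toy) language-model scoring function.
import itertools

def candidates(subject, verb_ing, obj, prepositions=("", "on", "with")):
    """Determiner + Subject + 'is' + V-ing + [Prep] + Determiner + Object."""
    for d1, prep, d2 in itertools.product(("A", "The"), prepositions, ("a", "the")):
        verb_part = f"is {verb_ing} {prep}" if prep else f"is {verb_ing}"
        yield f"{d1} {subject} {verb_part} {d2} {obj}."

def toy_lm_score(sentence):
    # Stand-in: a real system scores every candidate with an n-gram LM.
    return 1.0 if sentence == "A person is riding a motorbike." else 0.0

sents = list(candidates("person", "riding", "motorbike"))
best = max(sents, key=toy_lm_score)
print(best)  # A person is riding a motorbike.
```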
46 Automatic Evaluation of Sentence Quality Evaluate generated sentences using standard machine translation (MT) metrics, treating all human-provided descriptions as "reference translations."
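As a concrete instance of this setup, sentence-level BLEU with multiple references can be computed with NLTK; the reference sentences below are drawn from the sample M-Turk descriptions earlier in the talk, and the hypothesis and smoothing choice are illustrative.

```python
# Sketch: score a generated description against several human
# references with smoothed sentence-level BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a person is walking his dogs".split(),
    "a man is walking with dogs".split(),
    "a woman is walking her dogs".split(),
]
hypothesis = "a man is walking his dogs".split()
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```

Having ~50 references per video is unusually generous by MT standards, which makes n-gram overlap metrics more reliable here than in typical translation evaluation.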
47 Human Evaluation of Descriptions Asked 9 unique MTurk workers to evaluate the descriptions of each test video, choosing between the vision-baseline sentence, the SVO-LM (VE) sentence, or "neither." A gold-standard item was included in each HIT to exclude unreliable workers. When a preference was expressed, 61.04% preferred the SVO-LM (VE) sentence. For the 84 videos where a majority of judges had a clear preference, 65.48% preferred the SVO-LM (VE) sentence.
50 Discussion Points Human judges seem to care more about correct objects than correct verbs, which helps explain why their preferences are not as pronounced as the differences in SVO scores. The novelty of many YouTube videos (e.g., someone dragging a cat on the floor) mutes the impact of an SVO model learned from ordinary text.
51 Future WorkLarger scale experiment using bigger sets of objects (ImageNet) and activities.Ability to generate more complex sentences with adjectives, adverbs, multiple objects, and scenes.Ability to generate multi-sentential descriptions of longer videos with multiple events.
52 Conclusions Grounding language in vision is a fundamental problem with many applications. We have developed a preliminary broad-scale video description system. Mining common-sense knowledge (e.g., an SVO model) from large-scale parsed text improves performance across multiple evaluations. There are many directions for improving the complexity and coverage of both the language and vision components.