Human Computation and Computer Vision CS143 Computer Vision James Hays, Brown University.

24 hours of Photo Sharing installation by Erik Kessels

And sometimes Internet photos have useful labels (im2gps, Hays and Efros, CVPR 2008). But what if we want more?

Image Categorization: Training. Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier.

Image Categorization: Testing. Test Image → Image Features → Trained Classifier → Prediction ("Outdoor"). Training, as before: Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier.
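As a toy illustration of this train/test pipeline, here is a minimal sketch in NumPy. The feature extractor and nearest-mean "classifier" are deliberately simplistic stand-ins (a real system would use features like GIST or HoG and a discriminative classifier), and all names and data are invented for illustration:

```python
import numpy as np

def extract_features(images):
    # Stand-in for a real feature extractor (e.g., HoG or GIST):
    # here we just flatten each image into a vector.
    return images.reshape(len(images), -1).astype(float)

def train_nearest_mean(features, labels):
    # The trained "classifier" is one mean feature vector per class.
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    return classes, means

def predict(classifier, features):
    classes, means = classifier
    # Assign each test image to the class with the closest mean.
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Toy data: "outdoor" images are bright, "indoor" images are dark.
rng = np.random.default_rng(0)
outdoor = rng.uniform(0.7, 1.0, size=(20, 8, 8))
indoor = rng.uniform(0.0, 0.3, size=(20, 8, 8))
train_images = np.concatenate([outdoor, indoor])
train_labels = np.array(["outdoor"] * 20 + ["indoor"] * 20)

clf = train_nearest_mean(extract_features(train_images), train_labels)
test_image = rng.uniform(0.7, 1.0, size=(1, 8, 8))  # a bright test image
print(predict(clf, extract_features(test_image)))   # -> ['outdoor']
```

The point is only the data flow: labels and images go in at training time, and at test time only the image features and the trained model are needed.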

Human Computation for Annotation. Unlabeled Images → show images, collect and filter labels → Training Labels. The rest of the pipeline is unchanged: Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier.

Active Learning. As above, but the system chooses what to label: find unlabeled images near the decision boundary → show those images, collect and filter labels → Training Labels → Image Features → Classifier Training → Trained Classifier.
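One common way to "find images near the decision boundary" is uncertainty sampling: query the unlabeled images whose predicted class probabilities are least confident. A minimal sketch, where the classifier outputs are hypothetical numbers:

```python
import numpy as np

def select_queries(probs, k=2):
    """Pick the k unlabeled images whose predicted class probabilities
    are least confident, i.e., closest to the decision boundary."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Hypothetical classifier outputs for 5 unlabeled images (2 classes).
probs = np.array([
    [0.99, 0.01],   # confident
    [0.52, 0.48],   # near the boundary -> worth asking a human
    [0.90, 0.10],
    [0.45, 0.55],   # near the boundary -> worth asking a human
    [0.85, 0.15],
])
print(sorted(select_queries(probs, k=2).tolist()))  # -> [1, 3]
```

Images 1 and 3 are sent to human annotators; the confident ones are left alone, so labeling effort goes where it helps the classifier most.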

Human-in-the-loop Recognition. Testing: Test Image → Image Features + Human Observer → Trained Classifier + Human Estimate → Prediction ("Outdoor").

Outline Human Computation for Annotation – ESP Game – Mechanical Turk Human-in-the-loop Recognition – Visipedia

Luis von Ahn and Laura Dabbish. Labeling Images with a Computer Game. ACM Conf. on Human Factors in Computing Systems (CHI), 2004.

Utility data annotation via Amazon Mechanical Turk. Alexander Sorokin, David Forsyth, University of Illinois at Urbana-Champaign. Slides by Alexander Sorokin.

Amazon Mechanical Turk. A requester posts a task to the broker ("Is this a dog? o Yes o No", pay: $0.01); workers complete it (answer: "Yes") and receive the $0.01 payment.

Annotation protocols: type keywords; select relevant images; click on landmarks; outline something; detect features; … anything else.

Type keywords ($0.01)

Select examples. Joint work with Tamara and Alex Berg.

Select examples ($0.02)

Click on landmarks ($0.01)

Outline something ($). Data from Ramanan, NIPS 2006.

Motivation: X = $5000. Custom annotations, large scale, low price.

Issues. Quality? – How good is it? – How can we be sure? Price? – How should we price it?

Annotation quality. Annotators typically agree within 5–10 pixels on a 500×500 screen, but there are bad annotations too.
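Agreement like this can be checked directly: given two annotators' clicks on the same landmarks, measure the fraction that fall within a pixel tolerance. A small sketch with made-up coordinates:

```python
import numpy as np

def agreement_rate(clicks_a, clicks_b, tol=10.0):
    """Fraction of corresponding landmark clicks from two annotators
    that agree within `tol` pixels (Euclidean distance)."""
    d = np.linalg.norm(np.asarray(clicks_a) - np.asarray(clicks_b), axis=1)
    return (d <= tol).mean()

a = [(100, 100), (250, 240), (400, 30)]
b = [(104, 103), (252, 238), (460, 90)]   # last click is way off
print(agreement_rate(a, b))  # 2 of 3 clicks agree within 10 px
```

Pairs whose rate falls well below the typical 5–10 pixel agreement are candidates for the "bad ones" and can be flagged for review.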

How do we get quality annotations?

Ensuring Annotation Quality
– Consensus / multiple annotation / "wisdom of the crowds"
– Gold standard / sentinel tasks (special case: a qualification exam)
– Grading tasks: a second tier of workers who grade the others
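The first two strategies are easy to sketch: majority vote over redundant labels, and filtering workers by sentinel ("gold standard") items with known answers. All names and labels below are hypothetical:

```python
from collections import Counter

def consensus(labels):
    """Majority vote over redundant labels for one item."""
    return Counter(labels).most_common(1)[0][0]

def passes_gold(worker_answers, gold):
    """Keep a worker only if they get every sentinel item right."""
    return all(worker_answers[item] == truth for item, truth in gold.items())

# Three workers label the same image; one disagrees.
votes = ["dog", "dog", "cat"]
print(consensus(votes))  # -> dog

# Sentinel items with known answers, mixed into the task stream.
gold = {"img_17": "dog", "img_42": "cat"}
good_worker = {"img_17": "dog", "img_42": "cat", "img_99": "dog"}
scammer = {"img_17": "cat", "img_42": "cat", "img_99": "dog"}
print(passes_gold(good_worker, gold), passes_gold(scammer, gold))  # -> True False
```

In practice the two are combined: sentinels weed out scammers, and consensus over the remaining workers smooths honest disagreement.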

Pricing. There is a trade-off between throughput and cost, and higher pay can actually attract scammers.

Visual Recognition with Humans in the Loop. Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder, Pietro Perona, Serge Belongie. Part of the Visipedia project. Slides from Brian O'Neil.

Introduction. Computers are starting to get good at easy recognition tasks, but semantic feature extraction remains difficult for them, and if a problem is hard even for humans, it is probably too hard for computers alone. The idea is to combine the strengths of both to solve such problems.

The Approach: What is progress? Typically, progress is viewed as handling increasingly difficult data while maintaining full autonomy. Here, the authors instead view progress as reducing human effort on difficult data: supplement visual recognition with the human capacity for visual feature extraction to tackle difficult (fine-grained) recognition problems.

The Approach: 20 Questions Ask the user a series of discriminative visual questions to make the classification.

Which 20 questions? At each step, exploit the image itself and the user response history to select the most informative question to ask next. [Flowchart: given image x, ask the user a question; if the stop criterion is not met, ask another; otherwise output the class.]

Some definitions: the set of possible questions Q = {q_1, …, q_n}; the possible answers A_i to question q_i; the confidence in answer i (Guessing, Probably, Definitely); the user response u_i, an answer together with its confidence; and the history of user responses at time t, U^{t-1} = {u_1, …, u_{t-1}}.

Question selection. Seek the question that gives the maximum expected information gain (entropy reduction) given the image and the set of previous user responses:

j* = argmax_j I(C; u_j | x, U^{t-1})
   = argmax_j [ H(C | x, U^{t-1}) − Σ_{u_j} p(u_j | x, U^{t-1}) · H(C | x, U^{t-1} ∪ {u_j}) ]

where p(u_j | x, U^{t-1}) is the probability of obtaining response u_j given the image and response history, H(C | x, U^{t-1} ∪ {u_j}) is the class entropy once that response is added to the history, and H(C | x, U^{t-1}) is the entropy before the response is added.
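The criterion can be sketched numerically for a single question: the expected information gain is the prior class entropy minus the expected posterior entropy over possible answers. The response models p(u | c) below are invented for illustration:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def expected_info_gain(p_class, p_answer_given_class):
    """I(C; u) = H(C) - sum_u p(u) H(C | u), given the current class
    posterior p(c) and a response model p(u | c) for one question."""
    h_before = entropy(p_class)
    p_u = p_answer_given_class.T @ p_class  # p(u) = sum_c p(u|c) p(c)
    gain = h_before
    for u, pu in enumerate(p_u):
        if pu > 0:
            # Posterior over classes if the user were to answer u.
            post = p_answer_given_class[:, u] * p_class / pu
            gain -= pu * entropy(post)
    return gain

p_class = np.array([0.5, 0.5])
# Question A perfectly splits the two classes; question B is uninformative.
q_a = np.array([[1.0, 0.0], [0.0, 1.0]])   # rows: p(answer | class)
q_b = np.array([[0.5, 0.5], [0.5, 0.5]])
print(expected_info_gain(p_class, q_a))  # -> 1.0 (one full bit)
print(expected_info_gain(p_class, q_b))  # -> 0.0
```

The system would compute this gain for every remaining question and ask the argmax, exactly as in the formula above.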

Incorporating vision. By Bayes' rule, p(c | x, U^{t-1}) ∝ p(U^{t-1} | c, x) · p(c | x). A visual recognition algorithm outputs a probability distribution across all classes, p(c | x), which is used as the prior; the posterior is then computed from the probability of obtaining the observed response history under each class.

Modeling user responses. Assume the questions are answered independently given the class, so p(U^{t-1} | c, x) = Π_i p(u_i | c). This factorization is required for the posterior computation, and marginalizing it over classes gives p(u_j | x, U^{t-1}), which is required for the information-gain computation.
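Under this independence assumption, the posterior update is just the vision prior multiplied by the per-response likelihoods and renormalized. A small sketch with made-up numbers:

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """p(c | x, U) ∝ p(c | x) * prod_i p(u_i | c), assuming the
    responses in the history U are conditionally independent given c."""
    post = prior * np.prod(likelihoods, axis=0)
    return post / post.sum()

# Vision gives a prior over 3 hypothetical bird species for image x.
prior = np.array([0.5, 0.3, 0.2])
# Likelihood p(u_i | c) of each of two observed answers under each class.
history = np.array([
    [0.9, 0.2, 0.1],   # response to question 1
    [0.8, 0.3, 0.4],   # response to question 2
])
post = update_posterior(prior, history)
print(post.round(3))  # -> [0.933 0.047 0.021]
```

Each answer sharpens the distribution that vision started with, which is why a few questions suffice when the prior is already reasonable.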

The Dataset: Birds-200, with images of 200 bird species.

Implementation. Assembled 25 visual questions encompassing 288 visual attributes. Mechanical Turk users were asked to answer the questions and provide confidence scores.

User Responses.

Visual recognition. Any vision system that can output a probability distribution across classes will work. The authors used Andrea Vedaldi's code: color/gray SIFT, vector-quantized geometric blur, and a one-vs-all SVM. They added full-image color histograms and vector-quantized color histograms.

Experiments. Two stopping criteria: (1) a fixed number of questions, evaluating accuracy; (2) the user stops when the bird is identified, measuring the number of questions required.

Results. The average number of questions needed to make an identification was reduced from roughly 11 to 6.43. The method lets computer vision handle the easy cases, consulting users only on the more difficult ones.

Results. Vision makes the questions salient right away, and user input can correct gross errors from computer vision.

Key Observations Visual recognition reduces labor over a pure “20 Q” approach. Visual recognition improves performance over a pure “20 Q” approach. (69% vs 66%) User input dramatically improves recognition results. (66% vs 19%)

Strengths and weaknesses. Strengths: handles very difficult data and yields excellent results; plug-and-play with many recognition algorithms. Weaknesses: requires significant user assistance; reported results assume humans are perfect verifiers; and is the reduction from 11 questions to 6 really that significant?