Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison  Authors: Florian Schroff, Tali Treibitz, David Kriegman, Serge Belongie

Presentation transcript:

Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison  Authors: Florian Schroff, Tali Treibitz, David Kriegman, Serge Belongie  Speaker: Xin Liu

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Authors(1/4)  Florian Schroff :

Authors (2/4)  Tali Treibitz:  Background: Ph.D. student in the Dept. of Electrical Engineering, Technion  Publications: three CVPR papers and one PAMI paper

Authors (3/4)  David J. Kriegman:  Background: UCSD Professor of Computer Science & Engineering. UIUC Adjunct Professor of Computer Science and Beckman Institute. Editor-in-Chief, IEEE Transactions on Pattern Analysis & Machine Intelligence.

Authors(4/4)  Serge J. Belongie:  Background: Professor Computer Science and Engineering University of California, San Diego

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Paper Information  Venue: ICCV 2011  Related work: Chunhui Zhu, Fang Wen, Jian Sun. A Rank-Order Distance Based Clustering Algorithm for Face Tagging, CVPR 2011. Lior Wolf, Tal Hassner, Yaniv Taigman. The One-Shot Similarity Kernel, ICCV 2009. Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, Shree K. Nayar. Attribute and Simile Classifiers for Face Verification, ICCV 2009.

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Abstract(1/2)  Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression.  To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures.

Abstract(2/2)  Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and expression combinations, serves as the Library.  We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Motivation  Learn a new distance metric D′ over Doppelganger lists, rather than comparing images directly.
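In symbols (my notation, not the slide's), the new measure compares the ranked lists induced by two probe images rather than the images themselves:

```latex
D'(I_1, I_2) \;=\; \mathrm{sim}\bigl(L(I_1),\, L(I_2)\bigr),
\qquad L(I) = \text{Library identities ranked by their similarity to } I .
```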

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Methods—Overview

Methods - Assumption  This approach stems from the observation that ranked Doppelganger lists are similar for similar people, even under different imaging conditions.

Methods - Setting Up the Face Library  Using Multi-PIE as the Face Library:

Methods - Finding Lookalikes  Calculating the ranked Doppelganger list for a probe image:

Methods - Comparing Lists  Calculating the similarity between two Doppelganger lists (see the sketch below):
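The figures for these two steps are not reproduced in the transcript. As a rough illustration only, the sketch below builds a ranked Doppelganger list by sorting Library identities by feature-space distance to the probe, and compares two probe images via the rank correlation of their lists; the feature descriptor, the use of Kendall's tau, and all function names are placeholder assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.stats import kendalltau  # rank correlation; a placeholder list-comparison measure


def doppelganger_list(probe_feat, library_feats, library_ids):
    """Rank Library identities by their feature-space distance to the probe.

    probe_feat:    (d,)   descriptor of the probe face (e.g. an LBP/FPLBP histogram)
    library_feats: (n, d) descriptors of the Library images
    library_ids:   (n,)   identity label of each Library image
    Returns identities ordered from most to least similar to the probe.
    """
    dists = np.linalg.norm(library_feats - probe_feat, axis=1)
    # keep the best-matching image per identity, then sort identities by that distance
    best = {}
    for ident, d in zip(library_ids, dists):
        if ident not in best or d < best[ident]:
            best[ident] = d
    return sorted(best, key=best.get)  # the ranked Doppelganger list


def list_similarity(list_a, list_b):
    """Similarity of two face images = agreement of their Doppelganger lists."""
    pos_b = {ident: rank for rank, ident in enumerate(list_b)}
    ranks_a = np.arange(len(list_a))
    ranks_b = np.array([pos_b[ident] for ident in list_a])
    tau, _ = kendalltau(ranks_a, ranks_b)  # +1: identical ordering, -1: reversed
    return tau
```

A verification decision would then amount to thresholding list_similarity on a pair of probe images, with the threshold chosen on held-out data.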

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Experiment on FacePix (across pose)

Experiment- Verification Across Large Variations of Pose

Experiment on Multi-PIE  The classification performance using ten-fold cross-validation is 76.6% ± 2.0% (both FPLBP and SSIM on direct image comparison perform near chance). To the best of our knowledge these are the first results reported across all pose, illumination and expression conditions on Multi-PIE.

Experiment on LFW (1/2)  LFW experimental results

Experiment on LFW ( 2/2)

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Conclusion(1/2)  To the best of our knowledge, we have shown the first verification results for face-similarity measures under truly unconstrained expression, illumination and pose, including full profile, on both Multi-PIE and FacePix.  The advantages of the ranked Doppelganger lists become apparent when the two probe images depict faces in very different poses. Our method does not require explicit training and is able to cope with large pose ranges.  It is straightforward to generalize our method to an even larger variety of imaging conditions by adding further examples to the Library. No change in our algorithm is required, as its only assumption is that the relevant imaging conditions are represented in the Library.

Conclusion(2/2)  We expect that a great deal of improvement can be achieved by using this powerful comparison method as an additional feature in a complete verification or recognition pipeline where it can add the robustness that is required for face recognition across large pose ranges. Furthermore, we are currently exploring the use of ranked lists of identities in other classification domains.

Thanks for listening Xin Liu

Relative Attributes  Authors: Devi Parikh, Kristen Grauman  Speaker: Xin Liu

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Authors (1/2)  Devi Parikh:  Background: Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC)  Publications: L. Zitnick and D. Parikh. The Role of Image Understanding in Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012 (to appear). D. Parikh and L. Zitnick. Exploring Tiny Images: The Roles of Appearance and Contextual Information for Machine and Human Object Recognition. Pattern Analysis and Machine Intelligence (PAMI), 2012 (to appear). Plus many more papers in top conferences and journals.

Authors (2/2)  Kristen Grauman:  Background: Clare Boothe Luce Assistant Professor, Microsoft Research New Faculty Fellow, Department of Computer Science, University of Texas at Austin  Publications: many CVPR and ICCV papers, among others.

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Paper Information  Venue: ICCV 2011 (Oral)  Award: Marr Prize!

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Abstract(1/2)  Human-nameable visual “attributes” can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is ‘smiling’ or not, a scene is ‘dry’ or not), and thus fail to capture more general semantic relationships.  We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions predict the relative strength of each property in novel images.

Abstract(2/2)  We then build a generative model over the joint space of attribute ranking outputs, and propose a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, 'bears are furrier than giraffes').  We further show how the proposed relative attributes enable richer textual descriptions for new images, which in practice are more precise for human interpretation. We demonstrate the approach on datasets of faces and natural scenes, and show its clear advantages over traditional binary attribute prediction for these new tasks.

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Motivation  Why model relative attributes? For a large variety of attributes, the binary setting is not only restrictive, it is also unnatural.

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Methods—Formulation (1/3)  Ranking functions: for each attribute a_m (e.g. 'open'), learn a ranking function r_m(x_i) = w_m^T x_i from relative supervision, i.e. ordered pairs O_m (image i has the attribute more than image j) and similar pairs S_m (the two images have it to a similar extent).

Methods—Formulation (2/3)  Objective function:  Comparison with the standard SVM:
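The objective itself appears only as an image in the original slides. The formulation below is my reconstruction of the large-margin ranking objective used in Relative Attributes (a Ranking-SVM with slack variables on both the ordered and the similar pairs); treat the exact form of the slack penalty as a best-effort reconstruction rather than a verbatim copy of the slide.

```latex
\min_{\mathbf{w}_m}\;
  \tfrac{1}{2}\lVert \mathbf{w}_m \rVert_2^2
  + C\Bigl(\sum \xi_{ij}^2 + \sum \gamma_{ij}^2\Bigr)
\quad \text{s.t.} \quad
\begin{cases}
\mathbf{w}_m^{\top}(\mathbf{x}_i - \mathbf{x}_j) \ \ge\ 1 - \xi_{ij}, & \forall (i,j) \in O_m,\\[2pt]
\bigl|\mathbf{w}_m^{\top}(\mathbf{x}_i - \mathbf{x}_j)\bigr| \ \le\ \gamma_{ij}, & \forall (i,j) \in S_m,\\[2pt]
\xi_{ij} \ge 0,\quad \gamma_{ij} \ge 0.
\end{cases}
```

Compared with a standard binary SVM, which maximizes the margin between two classes, this objective maximizes the margin between the projections w_m^T x of the closest pair of differently-ranked training images, so it orders instances rather than separating them.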

Methods—Formulation (3/3)  Margin and support vectors: separating hyperplane w_m^T x + b = 0; geometric margin 2 / ||w_m||.

Methods - Zero-Shot Learning From Relationships (1/3)  Overview:

Methods - Zero-Shot Learning From Relationships (2/3)  Image representation:

Methods - Zero-Shot Learning From Relationships (3/3)  Generative model (a sketch follows below):
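The generative model is shown only as a figure in the deck. Under my reading of the paper, images are represented by their vector of attribute ranking scores, each seen category is modeled as a Gaussian in that space, an unseen category's parameters are inferred from the relative relationships the supervisor states with respect to seen categories, and a test image is assigned to the most likely category. The sketch below illustrates this with one simple choice (placing the unseen mean midway between the two related seen categories); all names and that interpolation rule are my assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal

# ranks[i]  = vector of attribute ranking scores (r_1(x_i), ..., r_M(x_i)) for image i
# labels[i] = seen-category label of image i


def fit_seen_categories(ranks, labels):
    """Fit one Gaussian per seen category in attribute-rank space."""
    models = {}
    for c in np.unique(labels):
        pts = ranks[labels == c]
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(pts.shape[1])  # regularized covariance
        models[c] = (pts.mean(axis=0), cov)
    return models


def add_unseen_category(models, name, relations):
    """relations[m] = (lower_class, upper_class): the unseen category is said to lie
    between these two seen categories on attribute m. Its mean is placed midway
    (one simple choice); its covariance is the average of the seen covariances."""
    mu = np.array([0.5 * (models[lo][0][m] + models[hi][0][m])
                   for m, (lo, hi) in enumerate(relations)])
    cov = np.mean([c for _, c in models.values()], axis=0)
    models[name] = (mu, cov)


def classify(models, rank_vec):
    """Assign an image (its attribute-rank vector) to the most likely category."""
    return max(models, key=lambda c: multivariate_normal.logpdf(rank_vec, *models[c]))
```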

Methods - Describing Images in Relative Terms (1/2)  How to describe an image?

Methods - Describing Images in Relative Terms (2/2)  E.g. Relative (ours): more natural than tallbuilding; less natural than forest; more open than tallbuilding; less open than coast; has more perspective than tallbuilding. Binary (existing): not natural; not open; has perspective.
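A minimal sketch of how such a description could be generated, assuming we already have per-attribute ranking scores for the probe image and mean scores for a set of reference categories; the function name and data layout are mine, not the paper's:

```python
def describe_relative(probe_scores, category_scores, attribute_names):
    """Describe the probe relative to reference categories, one statement per attribute.

    probe_scores:    {attribute: ranking score of the probe image}
    category_scores: {category: {attribute: mean ranking score of that category}}
    """
    sentences = []
    for attr in attribute_names:
        p = probe_scores[attr]
        below = [(s[attr], c) for c, s in category_scores.items() if s[attr] < p]
        above = [(s[attr], c) for c, s in category_scores.items() if s[attr] > p]
        if below:
            sentences.append(f"more {attr} than {max(below)[1]}")  # closest category below
        if above:
            sentences.append(f"less {attr} than {min(above)[1]}")  # closest category above
    return sentences
```

Unlike a binary attribute predictor, which can only emit 'open' or 'not open', this produces statements of the form shown on the slide ('more open than tallbuilding, less open than coast').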

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Experiment-Overview(1/2)  OSR and PubFig:

Experiment-Overview(2/2)  Baseline:

Experiment - Relative Zero-Shot Learning (1/4)  [Plot comparing the proposed relative attributes with binary attributes and raw classifier scores.] How does performance vary with more unseen categories? With no unseen categories this reduces to the classical recognition problem, where binary and relative supervision perform comparably.

Experiment - Relative Zero-Shot Learning (2/4)  [Plot: accuracy versus the amount of relative supervision; the annotated case is when the supervision can give a unique ordering on all classes.]

Experiment - Relative Zero-Shot Learning (3/4)

Experiment - Relative Zero-Shot Learning (4/4)  Relative attributes jointly carve out the space for an unseen category.

Experiment - Human Study (2/2)  18 subjects. Test cases: 10 OSR, 20 PubFig.

Outline  Authors  Paper Information  Abstract  Motivation  Methods  Experiment  Conclusion

Conclusion  We introduced relative attributes, which allow for a richer language of supervision and description than the commonly used categorical (binary) attributes. We presented two novel applications: zero-shot learning based on relationships and describing images relative to other images or categories. Through extensive experiments as well as a human subject study, we clearly demonstrated the advantages of our idea. Future work includes exploring more novel applications of relative attributes, such as guided search or interactive learning, and automatic discovery of relative attributes.

Thanks for listening Xin Liu