FERA2011: The First Facial Expression Recognition and Analysis Challenge
FG'11, March 2011
Michel Valstar, Marc Méhu, Marcello Mortillaro, Maja Pantic, Klaus Scherer
© Imperial College London
Participation overview
- Data downloaded by 20 teams
- 15 submissions, 11 accepted papers
- 13 teams in the Emotion Sub-Challenge, 5 teams in the AU Sub-Challenge
- Institutes from 6 countries
- 53 researchers, median of 6 per paper
- 5 entries were multi-institute endeavours
Trends
Machine learning trends:
- 13/15 teams used SVMs
- Three teams used multiple kernel SVMs, including the AU winner
- Only 1 team modelled time
- Only 1 team used probabilistic graphical models
Feature trends:
- 4 teams encode appearance dynamics
- 4 teams use both appearance and geometric features (including the AU winner)
- Only 1 team infers 3D, but appears successful! (AU winner)
- Only 1 team uses geometric features only, ranked 11th
Baseline System – LBP-based Expression Recognition
Local Binary Pattern appearance descriptors are applied to the face region to detect AUs and discrete emotions:
1. The face is registered using the detected eye positions.
2. Uniform Local Binary Pattern (LBP) features are computed at every pixel.
3. The face is divided into a 10x10 grid of blocks; in each block, a 256-bin histogram of the LBP codes is generated.
4. For every AU, a GentleBoost-SVM is learned. Upper-face AUs use the concatenated histograms of the top five rows of blocks; lower-face AUs use those of the bottom five rows.
5. For every emotion, a GentleBoost-SVM is learned using all rows.
6. SVM predictions are made per frame; the video-level decision is made by voting.
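The block-histogram feature extraction in steps 2–3 can be sketched as follows. This is a minimal illustration, not the challenge's baseline code: `lbp_image` and `block_histograms` are hypothetical names, the basic 8-neighbour LBP here produces 256 codes (matching the slide's 256-bin histograms) but omits the uniform-pattern mapping, and the eye-based registration step is assumed to have already happened.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for every interior pixel.
    gray: 2-D array (registered face crop). Returns codes in [0, 255]."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def block_histograms(codes, grid=(10, 10), bins=256):
    """Split the LBP code image into a grid of blocks (the slide's 10x10
    layout) and return the concatenated per-block histograms."""
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hist)
    return np.concatenate(hists)
```

With this layout, the upper-face AU classifiers would consume the first 50 block histograms (top five rows) and the lower-face classifiers the last 50, while the emotion classifiers use all 100.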
Baseline Overview (LAUD)
B. Jiang, M.F. Valstar, and M. Pantic, "Action Unit detection using sparse appearance descriptors in space-time video volumes", FG'11.
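The baseline's final step fuses per-frame SVM outputs into one video-level label by voting. A minimal sketch, with `video_decision` as a hypothetical helper name:

```python
from collections import Counter

def video_decision(frame_predictions):
    """Majority vote over per-frame classifier outputs; ties are broken
    in favour of the label that first reached the winning count."""
    return Counter(frame_predictions).most_common(1)[0][0]
```

For example, `video_decision(["anger", "anger", "fear", "anger"])` returns `"anger"`.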
Winners of the Emotion Detection sub-challenge
1. University of California, Riverside: Songfan Yang, Bir Bhanu
2. UIUC-UMC: Usman Tariq, Xi Zhou, Kai-Hsiang Lin, Zhen Li, Zhaowen Wang, Vuong Le, Thomas Huang, Tony Han, Xutao Lv
3. Karlsruhe Institute of Technology: Tobias Gehrig, Hazim Ekenel
Ranking – Emotion Sub-challenge
Person-independent/specific emotion detection
Emotion secondary test results
Winners of the Action Unit Detection sub-challenge
1. University of the French West Indies & Guiana: Lionel Prevost, Thibaud Senechal, Vincent Rapp, Hanan Salam, Renaud Seguier, Kevin Bailly
2. University of California San Diego: Nicholas Butko, Javier Movellan, Tingfan Wu, Paul Ruvolo, Jacob Whitehill, Marian Bartlett
3. Karlsruhe Institute of Technology: Tobias Gehrig, Hazim Ekenel
Ranking – Action Unit Sub-challenge
Person-independent/specific AU detection
Conclusion and new goals
Conclusions:
- Person-dependent discrete emotion detection is incredibly successful
- Dynamic appearance is very successful
- Combined appearance/geometric approaches seem to be the way forward
- AU detection is far from solved
New avenues:
- Given the high success of discrete emotion detection, dimensional affect may be a new goal to pursue
- Explicitly detecting temporal segments of facial expressions
- Analysing the sensitivity of approaches to AU intensities
- Leveraging person-specific approaches for AU detection
- Detecting AU intensity levels