Presentation transcript:

Face Recognition Committee Machine
Ho-Man Tang, Michael R. Lyu and Irwin King
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR

Introduction

In recent years the committee machine, an ensemble of estimators, has been shown to give more accurate results than a single predictor. Two types of structure exist:

Static structure: This is generally known as an ensemble method. The input is not involved in combining the experts.
Dynamic structure: The input is directly involved in the combining mechanism; an integrating unit adjusts the weight of each expert according to the input.

This poster describes the design of the Face Recognition Committee Machine (FRCM). It is composed of five state-of-the-art face recognition algorithms: (1) Eigenface, (2) Fisherface, (3) Elastic Graph Matching (EGM), (4) Support Vector Machine (SVM) and (5) Neural Networks. We propose a Static (SFRCM) and a Dynamic (DFRCM) structure for the FRCM and compare their performance against the five individual algorithms on the ORL and Yale face databases to show the improvement.

Static Structure

Figure 1: SFRCM overview — the input image is processed by the five experts (Eigenface, Fisherface, EGM, SVM, Neural Network); their results (r), confidences (c) and weights (w) feed a voting machine that outputs the recognized class.

SFRCM adopts the static committee-machine structure. Each expert sends its result r and the confidence c of that result to the voting machine. Together with the weight w of each expert, the recognized class is the class with the highest score s among the J classes, where the score of class j accumulates w_i · c_i over every expert i whose result r_i is class j.

Result & Confidence

We introduce the confidence c as a weighted vote in the voting machine so that a low-confidence result r from an individual expert does not distort the final result. Different approaches are used to obtain the result and confidence of each expert.

Eigenface, Fisherface and EGM: We employ a K-nearest-neighbor classifier with K = 5, i.e. the five training images nearest to the test image are selected. The result of expert i is the class j with the highest number of votes v_j among the J classes over these five neighbors, and its confidence is the number of votes of the winning class divided by K: c_i = v_{r_i} / K.

SVM: To recognize an image among J classes, C(J, 2) = J(J-1)/2 pairwise SVMs are constructed. The image is tested against every SVM, and the class j with the most votes over all SVMs is selected as the recognition result r_i. The confidence is the number of votes of the winning class divided by J - 1.

Neural Networks: We use a binary target vector of size J, with the entry of the target class set to 1 and all other entries set to 0. The class j whose output value is closest to 1 is chosen as the result, and that output value is taken as the confidence.

Weight

The weight w is derived from the average performance of each algorithm in the ORL and Yale tests. The performance of each expert is normalized by an exponential mapping function of p_i, the average performance of expert i, so that its weight is positive and lies within the range [0, 1]. The weight further prevents a high-confidence result from a poorly performing expert from affecting the ensemble result significantly.
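The following is a minimal Python sketch, not the authors' implementation, of how a result, its confidence and the weighted vote described above can be combined. It assumes the score of class j is the sum of w_i · c_i over the experts whose result is j, as the text describes; the function names and example values are illustrative only.

```python
from collections import defaultdict

def knn_result_and_confidence(neighbor_labels):
    """Result and confidence for a K-NN expert (Eigenface, Fisherface, EGM):
    the class with the most votes among the K nearest neighbors, and its vote share."""
    votes = defaultdict(int)
    for label in neighbor_labels:               # e.g. the K = 5 nearest training images
        votes[label] += 1
    result = max(votes, key=votes.get)          # class with the highest number of votes
    confidence = votes[result] / len(neighbor_labels)  # votes of winning class / K
    return result, confidence

def committee_vote(results, confidences, weights):
    """Voting machine: each expert i contributes w_i * c_i to the score of the
    class it voted for (r_i); the class with the highest score is returned."""
    scores = defaultdict(float)
    for r_i, c_i, w_i in zip(results, confidences, weights):
        scores[r_i] += w_i * c_i
    return max(scores, key=scores.get)

# Toy usage with five hypothetical experts; all labels and numbers are made up.
r, c = knn_result_and_confidence(["alice", "alice", "bob", "alice", "carol"])
recognized = committee_vote(
    results=[r, "alice", "bob", "alice", "bob"],
    confidences=[c, 0.80, 0.90, 0.70, 0.40],
    weights=[0.90, 0.95, 0.80, 0.85, 0.70],
)
print(r, c, recognized)                         # -> alice 0.6 alice
```

A low-confidence or low-weight expert contributes little to any class score, which is the intended effect of using confidence as a weighted vote.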

SFRCM Drawbacks

In SFRCM, the input is not involved in determining the weights. This leads to two major drawbacks:

Fixed weights under all situations: Experts may perform differently under different situations, so fixed weights for faces under all situations are undesirable.
No update mechanism for the weights: The weights of the experts cannot be updated once the system has been trained.

Dynamic Structure

To overcome the first problem, DFRCM adds a gating network: a neural network that accepts the input image and assigns a specific weight to each individual expert.

Figure 2: DFRCM overview — the input image is fed to the gating network and to the five experts (Eigenface, Fisherface, EGM, SVM, Neural Network); their results and confidences (r1,c1 … r5,c5) are combined by the voting machine using the gated weights (w1 … w5) to produce the recognized class.

In DFRCM, each expert is trained independently on a different face database. An expert's performance is then determined in the testing phase as p_{i,j} = n_{i,j} / t_{i,j}, where n_{i,j} is the total number of correct recognitions and t_{i,j} is the total number of trials for expert i on face database j.

Feedback Mechanism

To solve the second problem, we propose a feedback mechanism that updates the weights of the experts continuously:

1. Initialize n_{i,j} and t_{i,j} to 0
2. Train each expert i on a different database j
3. While TESTING
   a) Determine j for each test image
   b) Recognize the image with each expert i
   c) If t_{i,j} != 0 then calculate p_{i,j}
   d) Else set p_{i,j} = 0
   e) Calculate w_{i,j}
   f) Determine the ensemble result
   g) If FEEDBACK then update n_{i,j} and t_{i,j}
4. End while

Experimental Results

We evaluate the performance of DFRCM, SFRCM and the individual experts on the ORL and Yale face databases, using leave-one-out testing for SFRCM and a cross-validation partition for DFRCM. The results are reported in Table 1 (DFRCM, ORL), Table 2 (SFRCM, ORL), Table 3 (DFRCM, Yale) and Table 4 (SFRCM, Yale).
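Below is a minimal Python sketch of the feedback loop above. It assumes the weight w_{i,j} is taken directly from the running performance p_{i,j}; that mapping, along with all class and variable names, is an illustrative assumption rather than the poster's exact formulation.

```python
class FeedbackWeights:
    """Running per-(expert, database) accuracy p[i][j] = n[i][j] / t[i][j],
    updated continuously as feedback arrives (step 3 of the loop above)."""

    def __init__(self, num_experts, num_databases):
        # n: correct recognitions, t: trials, per expert i and database j.
        self.n = [[0] * num_databases for _ in range(num_experts)]
        self.t = [[0] * num_databases for _ in range(num_experts)]

    def performance(self, i, j):
        # p_{i,j} = n_{i,j} / t_{i,j}; defined as 0 before any trial is seen.
        return self.n[i][j] / self.t[i][j] if self.t[i][j] != 0 else 0.0

    def weight(self, i, j):
        # Assumption: use the running performance directly as the weight w_{i,j}.
        return self.performance(i, j)

    def feedback(self, i, j, correct):
        # Update the counts once the true label of a test image is known.
        self.t[i][j] += 1
        self.n[i][j] += int(correct)

# Illustrative usage: two experts, one database; weights adapt as feedback arrives.
fw = FeedbackWeights(num_experts=2, num_databases=1)
fw.feedback(i=0, j=0, correct=True)
fw.feedback(i=1, j=0, correct=False)
print(fw.weight(0, 0), fw.weight(1, 0))   # -> 1.0 0.0
```

Because the counts keep accumulating during testing, an expert that performs well on a given database gradually receives a larger weight for images from that database, which is the behavior the feedback mechanism is designed to provide.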