Face Recognition Committee Machine (FRCM)
Ho-Man Tang, Michael R. Lyu and Irwin King
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR. {hmtang,lyu,king}@cse.cuhk.edu.hk

Introduction
In recent years, the committee machine, an ensemble of estimators, has proven to give more accurate results than a single predictor. There exist two types of structure:
- Static structure: generally known as an ensemble method. The input is not involved in combining the experts.
- Dynamic structure: the input is directly involved in the combining mechanism. An integrating unit adjusts the weight of each expert according to the input.
This poster describes the design of the Face Recognition Committee Machine (FRCM). It is composed of five state-of-the-art face recognition algorithms: (1) Eigenface, (2) Fisherface, (3) Elastic Graph Matching (EGM), (4) Support Vector Machine (SVM) and (5) Neural Networks. We propose a Static (SFRCM) and a Dynamic (DFRCM) structure for the FRCM, and compare their performance with that of the five individual algorithms on the ORL and Yale face databases to show the improvement.

Static Structure
[Figure 1: SFRCM Overview — the input image feeds the five experts (Eigenface, Fisherface, EGM, SVM, Neural Network); their Results (r), Confidences (c) and Weights (w) feed the voting machine, which outputs the recognized class.]
SFRCM adopts a static committee-machine structure. Each expert passes its result r and its confidence c in that result to the voting machine. Together with the weight w of each expert, the recognized class is the one with the highest score s among the J classes, where the score of class j is the sum of w_i * c_i over all experts i whose result r_i equals j:
  s_j = sum_{i : r_i = j} w_i * c_i

Result & Confidence
We introduce the confidence c as a weighted vote in the voting machine, so that a low-confidence result r from an individual expert does not sway the final result. We adopt a different approach to obtain the result and confidence of each expert:
- Eigenface, Fisherface and EGM: We employ K-nearest-neighbor classifiers with K = 5, i.e. the five training-set images nearest to the test image are chosen. The final result for expert i is the class j with the most votes v_j among the J classes over these five neighbors, and its confidence is the number of votes of the result class divided by K:
  r_i = argmax_j v_j,  c_i = v_{r_i} / K
- SVM: To recognize an image among J different classes, C(J, 2) pairwise SVMs are constructed. The image is tested against every SVM, and the class j with the most votes over all SVMs is selected as the recognition result r_i. The confidence is the number of votes of the result class divided by J - 1 (the number of SVMs each class takes part in).
- Neural Networks: We use a binary target vector of size J: the entry of the target class is set to 1 and the others to 0. The class j whose output value is closest to 1 is chosen as the result, and that output value is taken as the confidence.

Weight
The weight w of each expert is derived from the average performance of its algorithm on the ORL and Yale tests. The performance is normalized by an exponential mapping function, where p_i is the average performance of expert i, to ensure that the weight is positive and within the range [0, 1]. The weight further prevents a high-confidence result from a poorly performing expert from affecting the ensemble result significantly.
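The scoring rule of the voting machine can be sketched in Python as follows; the expert outputs, weights and class count are illustrative values I made up for the example, not results from the actual system:

```python
def vote(expert_outputs, weights, num_classes):
    """Return the class with the highest score s_j, where
    s_j = sum of w_i * c_i over experts i whose result r_i == j."""
    scores = [0.0] * num_classes
    for (r, c), w in zip(expert_outputs, weights):
        scores[r] += w * c  # each expert casts one confidence-weighted vote
    return max(range(num_classes), key=lambda j: scores[j])

# Five experts (Eigenface, Fisherface, EGM, SVM, Neural Network):
# (result r_i, confidence c_i) pairs and weights w_i (illustrative).
outputs = [(2, 0.8), (2, 0.6), (1, 0.9), (2, 0.4), (1, 0.3)]
weights = [0.9, 0.85, 0.8, 0.95, 0.7]
print(vote(outputs, weights, num_classes=3))  # → 2
```

Here class 2 wins with score 0.9*0.8 + 0.85*0.6 + 0.95*0.4 = 1.61 against 0.93 for class 1, even though one expert voted for class 1 with high confidence.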

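The result-and-confidence rule for the K-nearest-neighbor experts (Eigenface, Fisherface and EGM) can be sketched as follows; the neighbor labels are illustrative, and the distance computation that selects the five nearest training images is omitted:

```python
from collections import Counter

def knn_result_confidence(neighbor_labels, K=5):
    """Result r = majority class among the K nearest neighbors;
    confidence c = votes for the result class divided by K."""
    votes = Counter(neighbor_labels)
    r, v = votes.most_common(1)[0]  # class with the most votes
    return r, v / K

# Labels of the 5 training images nearest to the test image (illustrative).
r, c = knn_result_confidence(["j3", "j1", "j3", "j3", "j2"])
print(r, c)  # → j3 0.6
```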
SFRCM Drawbacks
In SFRCM, the input is not involved in determining the weights. This leads to two major drawbacks:
1. Fixed weights under all situations: experts may perform differently under different conditions, so fixed weights for faces under all situations are undesirable.
2. No update mechanism for the weights: the weights of the experts cannot be updated once the system is trained.

Dynamic Structure
[Figure 2: DFRCM Overview — the input image feeds both the five experts (Eigenface, Fisherface, EGM, SVM, Neural Network), which emit (r1, c1) ... (r5, c5), and a gating network, which emits the weights w1 ... w5; the voting machine combines them into the recognized class.]
To overcome the first problem, DFRCM adds a gating network: a neural network that accepts the input image and assigns a specific weight to each individual expert. In DFRCM, each expert is trained independently on a different face database. An expert's performance is then determined in the testing phase, defined as:
  p_{i,j} = n_{i,j} / t_{i,j}
where n_{i,j} is the total number of correct recognitions and t_{i,j} is the total number of trials for expert i on face database j.

Feedback Mechanism
To solve the second problem, we propose a feedback mechanism that updates the weights of the experts continuously:
1. Initialize n_{i,j} and t_{i,j} to 0.
2. Train each expert i on a different database j.
3. While TESTING:
   a) Determine j for each test image.
   b) Recognize the image with each expert i.
   c) If t_{i,j} != 0, calculate p_{i,j}.
   d) Else, set p_{i,j} = 0.
   e) Calculate w_{i,j}.
   f) Determine the ensemble result.
   g) If FEEDBACK, update n_{i,j} and t_{i,j}.
4. End while.

Experimental Results
We evaluate the performance of DFRCM, SFRCM and the individual experts on the ORL and Yale face databases, using leave-one-out for SFRCM and a cross-validation partition for DFRCM. The results are shown in Table 1 (DFRCM, ORL), Table 2 (SFRCM, ORL), Table 3 (DFRCM, Yale) and Table 4 (SFRCM, Yale).
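A minimal sketch of the counter update behind this feedback loop, assuming per-expert, per-database counters n[i][j] (correct recognitions) and t[i][j] (trials) and a ground-truth signal is_correct — all names here are hypothetical, not from the original system:

```python
def feedback_update(n, t, i, j, is_correct):
    """Step g) of the loop above: update the trial and correct-recognition
    counters for expert i on database j, then return the performance
    p_{i,j} = n_{i,j} / t_{i,j} (steps c/d: 0 if never tested)."""
    t[i][j] += 1
    if is_correct:
        n[i][j] += 1
    return n[i][j] / t[i][j] if t[i][j] != 0 else 0.0

n = [[0, 0]]  # correct recognitions: one expert, two databases
t = [[0, 0]]  # trials
print(feedback_update(n, t, 0, 0, True))   # → 1.0  (1 correct in 1 trial)
print(feedback_update(n, t, 0, 0, False))  # → 0.5  (1 correct in 2 trials)
```

Because p_{i,j} (and hence w_{i,j}) is recomputed from the counters on every test image, the experts' weights keep adapting after the initial training, which is exactly the update ability SFRCM lacks.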

