
1 CSE 473 Ensemble Learning

2 Ensemble Learning
Each learning technique may yield a different hypothesis (or function), but no single hypothesis is perfect. Could we combine several imperfect hypotheses to get a better one?

3 Example
A single line is one simple classifier: everything to its left is + and everything to its right is -. Combining 3 linear classifiers → a more complex classifier.
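To make the picture concrete, here is a minimal Python sketch of the idea; the three decision boundaries (h1, h2, h3) are hypothetical, chosen so that their majority vote carves out a triangular + region that no single line could represent:

```python
# Three hypothetical linear classifiers; each labels one half-plane +
# and the rest -. Their majority vote yields a triangular + region.
def h1(x, y): return 1 if x + y < 2 else -1   # + below the line x + y = 2
def h2(x, y): return 1 if x > 0 else -1       # + right of the y-axis
def h3(x, y): return 1 if y > 0 else -1       # + above the x-axis

def majority(x, y):
    return 1 if h1(x, y) + h2(x, y) + h3(x, y) > 0 else -1

print(majority(0.5, 0.5))    # +1: inside all three half-planes
print(majority(-1.0, -1.0))  # -1: outside two of the three half-planes
```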

4 Ensemble Learning: Motivation
Analogies:
- Elections combine voters' choices to pick a good candidate (hopefully)
- Committees combine experts' opinions to make better decisions
- Students working together on an Othello project
Intuitions:
- Individuals make mistakes, but the "majority" may be less likely to
- Individuals often have partial knowledge; a committee can pool expertise to make better decisions

5 Technique 1: Bagging
Combine hypotheses via majority voting.
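The slide names the technique; the following is a minimal sketch of bagging, assuming a generic `learn(X, y)` routine that returns a callable hypothesis mapping inputs to -1/+1 (the function names are illustrative, not from the slides):

```python
import numpy as np

def bag(learn, X, y, n_models=5, seed=0):
    """Train n_models hypotheses, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        models.append(learn(X[idx], y[idx]))
    return models

def predict_majority(models, X):
    """Each hypothesis votes -1/+1; the ensemble returns the majority."""
    votes = np.sum([m(X) for m in models], axis=0)
    return np.where(votes > 0, 1, -1)
```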

6 Bagging: Analysis
Error probability went down from 0.1 to 0.01!
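This drop makes sense under the classic independence assumption: the majority of n independent classifiers is wrong only if more than half of them err at once. Assuming the usual textbook setting of 5 classifiers, each with error 0.1, a quick check reproduces the slide's number:

```python
from math import comb

def majority_error(n, p):
    """P(majority of n independent classifiers errs), each with error p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_error(5, 0.1))   # ~0.0086, i.e. roughly 0.01
```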

7 Weighted Majority Voting
In practice, hypotheses are rarely independent. Some hypotheses make fewer errors than others, so all votes should not be equal! Idea: take a weighted majority.
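A weighted vote is a one-line change from plain majority voting; a minimal sketch (names illustrative):

```python
import numpy as np

def weighted_majority(hypotheses, weights, X):
    """Hypotheses vote -1/+1; more accurate ones carry larger weights."""
    votes = sum(w * h(X) for h, w in zip(hypotheses, weights))
    return np.where(votes > 0, 1, -1)
```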

8 Technique 2: Boosting
- Most popular ensemble learning technique
- Computes a weighted majority of hypotheses
- Can "boost" the performance of a "weak learner"
- Operates on a weighted training set: each training example (instance) has a "weight", and the learning algorithm takes the weight of an input into account
- Idea: when an input is misclassified by a hypothesis, increase its weight so that the next hypothesis is more likely to classify it correctly

9 Boosting Example with Decision Trees (DTs)
[Figure annotations: a training case that is correctly classified; a training case that has a large weight in this round; a DT that has a strong vote.]
The output of h_final is the weighted majority of the outputs of h_1, …, h_4.

10 AdaBoost Algorithm
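The slide presents the algorithm itself; below is a minimal Python sketch consistent with the notation on the following slides (z_t for the hypothesis weights), assuming labels in {-1, +1} and a weak learner `weak_learn(X, y, w)` that accepts example weights (the interface is illustrative):

```python
import numpy as np

def adaboost(X, y, weak_learn, T):
    """AdaBoost sketch: y has labels in {-1, +1}; weak_learn(X, y, w)
    returns a hypothesis h with h(X) in {-1, +1}, trained on weights w."""
    n = len(X)
    w = np.full(n, 1.0 / n)                  # D_1: equal weights on all inputs
    hypotheses, zs = [], []
    for t in range(T):
        h = weak_learn(X, y, w)
        eps = np.sum(w[h(X) != y])           # weighted training error
        if eps == 0 or eps >= 0.5:           # perfect, or no better than chance
            break
        z = 0.5 * np.log((1 - eps) / eps)    # hypothesis weight (z_t on the slides)
        w *= np.exp(-z * y * h(X))           # shrink correct, grow misclassified
        w /= w.sum()                         # renormalize to a distribution
        hypotheses.append(h)
        zs.append(z)

    def h_final(X):
        # Weighted majority of the weak hypotheses
        return np.sign(sum(z * h(X) for h, z in zip(hypotheses, zs)))
    return h_final
```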

11 AdaBoost Example
Original training set D_1: equal weights on all training inputs.
Goal: in round t, learn a classifier h_t that minimizes error with respect to the weighted training set; h_t maps each input to True (+1) or False (-1).
(Taken from "A Tutorial on Boosting" by Yoav Freund and Rob Schapire.)

12 AdaBoost Example, Round 1
Misclassified points get increased weights. z_1 = 0.42

13 AdaBoost Example, Round 2
z_2 = 0.65

14 AdaBoost Example, Round 3
z_3 = 0.92
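These z_t values follow from the AdaBoost hypothesis weight z_t = ½ ln((1 − ε_t)/ε_t). Assuming the weighted errors from the Freund-Schapire tutorial example (ε_1 = 0.30, ε_2 = 0.21, ε_3 = 0.14; the ε values do not appear on these slides), the numbers check out:

```latex
z_t = \tfrac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t},\qquad
z_1 = \tfrac{1}{2}\ln\frac{0.70}{0.30} \approx 0.42,\quad
z_2 = \tfrac{1}{2}\ln\frac{0.79}{0.21} \approx 0.65,\quad
z_3 = \tfrac{1}{2}\ln\frac{0.86}{0.14} \approx 0.92
```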

15 AdaBoost Example
The final hypothesis is h_final(x) = sign(z_1 h_1(x) + z_2 h_2(x) + z_3 h_3(x)), where sign(x) = +1 if x > 0 and -1 otherwise.

16 Example 1: Semantic Mapping

17 Motivation
Human-robot interaction. User: "Go to the corridor"
[Map legend: Room, Corridor, Doorway]

18 Shape
[Figure legend: Room]

19 Observations
[Figure legend: Room, Doorway]

20 Observations
[Figure legend: Room, Corridor, Doorway]

21 Simple Features
Features f computed from the laser range scan (beam lengths d_i):
- Gap: a beam with d > θ; f = number of gaps
- f = minimum of the d_i
- f = area enclosed by the scan
- f = perimeter of the scan
- f = average of the beam lengths (Σ d_i / n)
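A minimal sketch of how such scan features might be computed from the beam lengths; the function name, the gap threshold θ, and the exact feature set are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def scan_features(d, theta=5.0):
    """Illustrative geometric features of a 360-degree range scan d (beam lengths)."""
    n = len(d)
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = d * np.cos(angles), d * np.sin(angles)
    # Shoelace formula for the area of the scan polygon
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Sum of edge lengths around the (closed) scan polygon
    perimeter = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
    return {
        "num_gaps": int(np.sum(d > theta)),   # beams longer than the gap threshold
        "min_dist": float(d.min()),
        "mean_dist": float(d.mean()),
        "area": float(area),
        "perimeter": float(perimeter),
    }
```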

22 Experiments
Building 079, Uni. Freiburg. Training set (top): 16045 examples. Test set (bottom): 18726 examples. Classification accuracy: 93.94%.
[Map legend: Room, Corridor, Doorway]

23 Application to a New Environment
Training map: Intel Research Lab in Seattle

24 Application to a New Environment
Training map: Intel Research Lab in Seattle
[Map legend: Room, Corridor, Doorway]

25 Example 2: Wearable Multi-Sensor Unit
[Hardware labels: battery, camera (on ribbon cable), GPS receiver, iMote2 + two sensor boards, microphone, light sensors, 2 GB SD card, indicator LEDs]
Records 4 hours of audio, images (1/sec), GPS, and sensor data (accelerometer, barometric pressure, light intensity, gyroscope, magnetometer).

26 Data Stream
(Courtesy of G. Borriello)

27 Activity Recognition Model
Boosted classifiers [Lester, Choudhury, et al.: IJCAI-05]; virtual evidence boosting [Liao, Choudhury, Fox, Kautz: IJCAI-07]. Accuracy: 88% on activities, 93% on environment.

28 Boosting
- Extremely flexible framework
- Handles high-dimensional continuous data
- Easy to implement
- Limitation: only models local classification problems

