Lecture 08 Classification-based Learning

1 Lecture 08 Classification-based Learning
Topics: Basics, Decision Trees, Multi-Layered Perceptrons, Applications

2 Basics
Machine learning: inductive learning, deductive learning, abductive learning (reasoning), reinforcement learning, collaborative learning. Classification: given a set of examples, each labelled with a specific class, learn how to classify examples of that kind. This is supervised learning.

3 Basics
Decision trees: a symbolic representation of a reasoning process; a tree-like structure that describes a data set. Multi-Layered Perceptrons (MLP): a subsymbolic architecture with multiple layers of perceptrons; inference is parallel, and learning generalizes patterns to approximate functions.

4 Decision Trees Example

5 Decision Trees Data representation: Attribute-based language
Attribute list and two examples:
A-list: [Gender Age Blood Smoking Caffeine HT?]
Ex1: [Male high ≥1pack ≥3cups high]
Ex2: [Female low ≥1pack ≥3cups normal]
where Blood is blood pressure, Caffeine is caffeine intake, and HT (hypertension) is the class label.

6 Decision Trees Knowledge representation: Tree-like structure
The tree always starts from the root node and grows down by splitting the data at each level into new nodes according to some predictor (attribute, feature). The root node contains the entire data set (all data records), and child nodes hold respective subsets of that set. A split in a decision tree corresponds to the predictor with the maximum separating power. The best split does the best job of creating nodes where a single class dominates. Two of the best-known methods of measuring a predictor's separating power are the Gini coefficient and entropy.

7 Decision Trees
The Gini coefficient is calculated as the area between the Lorenz curve and the diagonal divided by the area below the diagonal, i.e., (A − B)/A, where A is the area below the diagonal and B is the area below the Lorenz curve. The Gini coefficient ranges from 0 (perfect equality) to 1 (perfect inequality). The split that yields the largest Gini coefficient is preferred.
Brown formula: G = |1 − Σk=1..n (Xk − Xk−1) × (Yk + Yk−1)|, where Xk is the cumulative share of the total population, Yk is the cumulative share of the class, and X0 = Y0 = 0.

8 Decision Trees Selecting an optimal tree with Gini splitting

9 Decision Trees Calculation of Gini coefficient for Class A (with 7 leaf nodes forming a Lorenz curve)
(Non-increasingly) sorted instances in Class A: 59, 23, 11, 4, 2, 1, 0
Corresponding cumulative percentages for Class A: 59/100, 82/100, 93/100, 97/100, 99/100, 100/100, 100/100
Corresponding cumulative percentages for the total population: 60/150, 86/150, 97/150, 102/150, 105/150, 114/150, 150/150
G(A) = |1 − [60/150 × 59/100 + (86−60)/150 × (59+82)/100 + (97−86)/150 × (82+93)/100 + (102−97)/150 × (93+97)/100 + (105−102)/150 × (97+99)/100 + (114−105)/150 × (99+100)/100 + (150−114)/150 × (100+100)/100]| = 0.311
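A small Python sketch of this calculation using the Brown formula above; the per-leaf population totals (60, 26, 11, 5, 3, 9, 36) are derived from the cumulative percentages, and all names are my own:

    # Per-leaf Class A counts from the slide; per-leaf totals derived from the
    # cumulative population percentages above.
    class_a = [59, 23, 11, 4, 2, 1, 0]
    totals = [60, 26, 11, 5, 3, 9, 36]

    def gini_brown(class_counts, total_counts):
        """Gini coefficient via the Brown formula over cumulative shares."""
        a_sum, t_sum = sum(class_counts), sum(total_counts)
        x_prev = y_prev = 0.0          # the Lorenz curve starts at the origin
        cum_a = cum_t = 0
        area = 0.0
        for a, t in zip(class_counts, total_counts):
            cum_a += a
            cum_t += t
            x, y = cum_t / t_sum, cum_a / a_sum
            area += (x - x_prev) * (y + y_prev)   # one trapezoid term of the Brown sum
            x_prev, y_prev = x, y
        return abs(1.0 - area)

    print(round(gini_brown(class_a, totals), 3))   # -> 0.311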

10 Decision Trees Gain chart of Class A

11 Decision Trees Selecting an optimal tree with random splitting

12 Decision Trees Extracting rules from decision trees
The path from the root node to a leaf yields a decision rule. For example, the rule associated with the bottom-right leaf of the Gini-split tree in the figure can be written as:
if (Predictor 1 = no) and (Predictor 4 = no) and (Predictor 6 = no) then class = Class A
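As a sketch of how such rules can be read off mechanically, the snippet below walks every root-to-leaf path of a small dict-based tree shaped like the one in the figure; the tree literal and the class labels on the other leaves are assumptions:

    # Hypothetical dict-based tree shaped like the Gini-split tree in the figure;
    # the class labels on the other leaves are assumptions.
    tree = {
        "Predictor 1": {
            "yes": "Class B",
            "no": {
                "Predictor 4": {
                    "yes": "Class B",
                    "no": {"Predictor 6": {"yes": "Class B", "no": "Class A"}},
                }
            },
        }
    }

    def extract_rules(node, conditions=()):
        """Collect one 'if ... then class = ...' rule per root-to-leaf path."""
        if isinstance(node, str):                  # a leaf holds a class label
            yield " and ".join(conditions), node
            return
        (predictor, branches), = node.items()
        for value, child in branches.items():
            yield from extract_rules(child, conditions + (f"({predictor} = {value})",))

    for cond, label in extract_rules(tree):
        print(f"if {cond} then class = {label}")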

13 Decision Trees Entropy
Node A contains n classes ci, i = 1, …, n, each with probability p(ci).
Entropy of node A: H(A) = − Σi=1..n p(ci) × log2 p(ci)
Entropy of the child nodes of A produced by some split (CNi: child node i; p(CNi): probability of CNi): Hsplit(A) = Σi p(CNi) × H(CNi)
Gain of the split at node A: Gain = H(A) − Hsplit(A)
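A minimal Python sketch of these three formulas, treating a node simply as a list of class labels; the example node and split are hypothetical:

    import math

    def entropy(node):
        """H(A) = -sum_i p(c_i) * log2 p(c_i); a node is just a list of class labels."""
        n = len(node)
        return -sum((node.count(c) / n) * math.log2(node.count(c) / n) for c in set(node))

    def gain(parent, children):
        """Gain = H(parent) - sum_i p(CN_i) * H(CN_i)."""
        n = len(parent)
        return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

    parent = ["A"] * 6 + ["B"] * 6                      # hypothetical node, two classes
    split = [["A"] * 5 + ["B"], ["A"] + ["B"] * 5]      # a candidate split of that node
    print(round(entropy(parent), 3), round(gain(parent, split), 3))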

14 Decision Trees Select the split with the largest gain.
When to stop splitting? Calculate the deviation D of the split from random splitting:
D = Σi [ (pi − p'i)² / p'i + (ni − n'i)² / n'i ]
where p'i and n'i are the numbers of instances of classes p and n expected in child node i if the instances were randomly distributed.
Null hypothesis: under random splitting, the deviation D is distributed according to a χ² distribution with n − 1 degrees of freedom.

15 Decision Trees χ² distribution (pdf)
Critical values of the χ² distribution (r: degrees of freedom; columns: p value):
p value:  0.25   0.20   0.15   0.10   0.05   0.025  0.02   0.01   0.005  0.0025  0.001  0.0005
r = 1     1.32   1.64   2.07   2.71   3.84   5.02   5.41   6.63   7.88   9.14    10.83  12.12
r = 2     2.77   3.22   3.79   4.61   5.99   7.38   7.82   9.21   10.60  11.98   13.82  15.20
r = 3     4.11   4.64   5.32   6.25   7.81   9.35   9.84   11.34  12.84  14.32   16.27  17.73
Stop splitting if D ≤ the χ² critical value at the chosen p value with r = n − 1 degrees of freedom.
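A small sketch of this stopping test; the split counts are hypothetical, and the critical values are taken from the p = 0.05 column of the table above:

    def deviation(p_counts, n_counts):
        """D = sum_i ((p_i - p'_i)^2 / p'_i + (n_i - n'_i)^2 / n'_i)."""
        p_total, n_total = sum(p_counts), sum(n_counts)
        total = p_total + n_total
        d = 0.0
        for p_i, n_i in zip(p_counts, n_counts):
            size = p_i + n_i
            p_exp = p_total * size / total     # expected positives under random splitting
            n_exp = n_total * size / total     # expected negatives under random splitting
            d += (p_i - p_exp) ** 2 / p_exp + (n_i - n_exp) ** 2 / n_exp
        return d

    critical_p005 = {1: 3.84, 2: 5.99, 3: 7.81}    # p = 0.05 column of the table above

    p_counts, n_counts = [8, 2], [1, 9]            # hypothetical two-way split
    d = deviation(p_counts, n_counts)
    r = len(p_counts) - 1                          # degrees of freedom
    print(round(d, 2), "stop" if d <= critical_p005[r] else "keep splitting")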

16 Decision Trees The main advantage of the decision-tree approach to classification is that it visualises the solution; it is easy to follow any path through the tree. Relationships learned by a decision tree can be expressed as a set of rules, which can then be used in developing an intelligent system.

17 Decision Trees Data preprocessing
Continuous data, such as age or income, have to be grouped into ranges, which can unwittingly hide important patterns. Missing or inconsistent data have to be restored or resolved. Decision trees can examine only one variable at a time, which confines them to problems that can be solved by dividing the solution space into successive rectangles.

18 Multi-Layered Perceptrons
Example

19 Multi-Layered Perceptrons
Emulating a biological neural network

20 Multi-Layered Perceptrons
Data representation: coded attribute-based language. Each input node takes one attribute (feature).
Coded attribute list and two examples:
A-list: [Gender Age Blood Smoking Caffeine HT?]
Ex1: [ ]
Ex2: [ ]
Gender (categorical): 0: male; 1: female
Age (continuous): age/100 (or 0.1: <10 yrs; 0.2: 20~29; …; 0.9: 90~99; 1.0: >99)
Blood (blood pressure): 0: low; 1: high
Smoking: cigarettes/40 (or 0: 0 cigarettes; 0.1: <10 cigarettes; 0.5: 10~19; 1: ≥1 pack)
Caffeine (caffeine intake, continuous): cups/3 (or 0: 0 cups; 0.5: 1~2 cups; 1.0: ≥3 cups)
HT (hypertension, class label): 0: normal; 1: high
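A small sketch of this coding scheme; the helper function and the example record (including an age of 55, which the examples above do not specify) are hypothetical:

    def encode(gender, age_years, blood, cigarettes_per_day, cups_per_day, ht):
        """Return the coded [Gender, Age, Blood, Smoking, Caffeine, HT] vector."""
        return [
            0.0 if gender == "male" else 1.0,        # Gender: 0 male, 1 female
            min(age_years / 100.0, 1.0),             # Age: age / 100
            0.0 if blood == "low" else 1.0,          # Blood pressure: 0 low, 1 high
            min(cigarettes_per_day / 40.0, 1.0),     # Smoking: cigarettes / 40, capped at 1.0
            min(cups_per_day / 3.0, 1.0),            # Caffeine: cups / 3, capped at 1.0
            0.0 if ht == "normal" else 1.0,          # HT (class label): 0 normal, 1 high
        ]

    # A hypothetical 55-year-old male matching Ex1's other attribute values:
    print(encode("male", 55, "high", 40, 3, "high"))   # -> [0.0, 0.55, 1.0, 1.0, 1.0, 1.0]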

21 Multi-Layered Perceptrons
Knowledge representation: weight, neuron, and network structure

22 Multi-Layered Perceptrons
Neuron structure: a neuron computes the weighted sum of its input signals and compares the result with a threshold value θ. If the net input is less than the threshold, the neuron output is −1; if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains the value +1. The neuron uses the following transfer (activation) function:
X = Σi=1..n xi × wi;  Y = +1 if X ≥ θ, Y = −1 if X < θ
Y(X) is called a sign function, Ysign.
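A minimal sketch of this neuron model; the inputs, weights, and threshold in the example calls are arbitrary:

    def sign_neuron(inputs, weights, theta):
        """Y = +1 if sum_i x_i * w_i >= theta, else -1 (the sign activation)."""
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net >= theta else -1

    print(sign_neuron([1, 0], [0.5, 0.5], 0.3))   # net = 0.5 >= 0.3 -> +1
    print(sign_neuron([0, 0], [0.5, 0.5], 0.3))   # net = 0.0 <  0.3 -> -1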

23 Multi-Layered Perceptrons
Sample activation functions

24 MLP - Perceptrons Network structure: Perceptrons
A perceptron is the simplest form of a neural network, consisting of a single neuron with adjustable synaptic weights and a hard limiter. The figure shows a single-layer, two-input perceptron.

25 MLP - Perceptrons The aim of the perceptron is to classify inputs x1, x2, …, xn into one of two classes, say Y = A1 or Y = A2. In the case of an elementary perceptron, the n-dimensional input space is divided by a hyperplane into two decision regions. The hyperplane is defined by the linearly separable function:
Σi=1..n xi × wi − θ = 0

26 MLP - Perceptrons In two dimensions the decision boundary x1w1 + x2w2 − θ = 0 is a straight line (cf. ax + by = c); in three dimensions it is a plane (cf. ax + by + cz = d).

27 MLP - Perceptrons A perceptron learns its classification by making small adjustments to the weights to reduce the difference (error) between the desired and actual outputs of the perceptron. The initial weights are randomly assigned, usually in a small range, and then updated to obtain an output consistent with the training examples. If the error is positive, we need to increase the perceptron output; if it is negative, we need to decrease it.

28 MLP - Perceptron learning algorithm
Step 1: Initialization. Set initial weights w1, w2, …, wn and threshold θ to random numbers in the range [−0.5, 0.5].

29 MLP - Perceptron learning algorithm
Step 2: Activation. (a) Activate the perceptron by applying inputs x1(p), x2(p), …, xn(p) and the desired output Yd(p). Calculate the actual output at iteration p = 1:
Y(p) = step[ Σi=1..n xi(p) × wi(p) − θ ]
where n is the number of perceptron inputs and step is the step activation function.
(b) Calculate the output error: e(p) = Yd(p) − Y(p)

30 MLP - Perceptron learning algorithm
Step 3: Weight training. Calculate the weight correction at iteration p, Δwi(p), using the delta rule:
Δwi(p) = α × xi(p) × e(p)
where α is the learning rate. Update the weights of the perceptron:
wi(p+1) = wi(p) + Δwi(p)
Step 4: Iteration. Increase iteration p by one, go back to Step 2 and repeat the process until convergence.
Epoch: one epoch is a complete pass of weight adjustment over the whole set of training examples.
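A compact sketch of Steps 1 to 4 for a two-input perceptron; the AND training set and the learning rate are illustrative choices, and training the threshold as a bias (an input fixed at −1) is a common variant rather than something the slides spell out:

    import random

    def train_perceptron(examples, alpha=0.1, max_epochs=1000):
        # Step 1: initialise weights and threshold in [-0.5, 0.5]
        w1, w2 = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
        theta = random.uniform(-0.5, 0.5)
        for _ in range(max_epochs):                      # one full pass = one epoch
            errors = 0
            for (x1, x2), y_d in examples:
                # Step 2: activation Y = step(x1*w1 + x2*w2 - theta), error e = Yd - Y
                y = 1 if x1 * w1 + x2 * w2 - theta >= 0 else 0
                e = y_d - y
                if e != 0:
                    errors += 1
                    # Step 3: delta rule, delta_w_i = alpha * x_i * e
                    w1 += alpha * x1 * e
                    w2 += alpha * x2 * e
                    theta -= alpha * e                   # threshold trained as a bias (assumption)
            if errors == 0:                              # Step 4: stop once an epoch is error-free
                break
        return (w1, w2), theta

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
    print(train_perceptron(examples))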

31 Multi-Layered Perceptrons
A single perceptron cannot solve linearly inseparable problems. Increasing the number of layers takes us from perceptrons to MLPs. An MLP is a feedforward neural network with one or more hidden layers. The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis.

32 Multi-Layered Perceptrons
MLP with two hidden layers

33 Multi-Layered Perceptrons
Learning in an MLP proceeds the same way as for a perceptron. First, a training input pattern is presented to the network input layer. The network propagates the input pattern from layer to layer until the output pattern is generated by the output layer. If this pattern is different from the desired output, an error is calculated and then propagated backwards through the network from the output layer to the input layer. The weights are modified as the error is propagated. This is the back-propagation (BP) learning algorithm.

34 Multi-Layered Perceptrons

35 MLP - BP Learning Algorithm
Step 1: Initialization. Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range, (−2.4/Fi, +2.4/Fi), where Fi is the total number of inputs of neuron i in the network. The weight initialization is done on a neuron-by-neuron basis.

36 MLP - BP Learning Algorithm
Step 2: Activation. Activate the MLP by applying inputs x1(p), x2(p), …, xn(p) and desired outputs yd,1(p), yd,2(p), …, yd,l(p).
(a) Calculate the actual outputs of the neurons in the hidden layer:
yj(p) = sigmoid[ Σi=1..n xi(p) × wij(p) − θj ],  j = 1, …, m
where n is the number of inputs of neuron j in the hidden layer and sigmoid is the sigmoid activation function.

37 MLP - BP Learning Algorithm
Step 2: Activation (continued)
(b) Calculate the actual outputs of the neurons in the output layer:
yk(p) = sigmoid[ Σj=1..m yj(p) × wjk(p) − θk ],  k = 1, …, l
where m is the number of inputs of neuron k in the output layer.
(c) Calculate the output errors of the neurons in the output layer: ek(p) = yd,k(p) − yk(p)

38 MLP - BP Learning Algorithm
Step 3: Weight training
(a) Calculate the error gradient for the neurons in the output layer:
δk(p) = yk(p) × [1 − yk(p)] × ek(p)
Calculate the weight corrections by the delta rule:
Δwjk(p) = α × yj(p) × δk(p)
Update the weights at the output neurons:
wjk(p+1) = wjk(p) + Δwjk(p)

39 MLP - BP Learning Algorithm
Step 3: Weight training (continued)
(b) Calculate the errors propagated back to the neurons in the hidden layer:
ej(p) = Σk=1..l δk(p) × wjk(p)
Calculate the error gradient for the neurons in the hidden layer:
δj(p) = yj(p) × [1 − yj(p)] × ej(p)
Calculate the weight corrections by the delta rule:
Δwij(p) = α × xi(p) × δj(p)
Update the weights at the hidden neurons:
wij(p+1) = wij(p) + Δwij(p)

40 MLP - BP Learning Algorithm
Step 4: Iteration Increase iteration p by one, go back to Step 2 and repeat the process until the selected error criterion is satisfied.
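A compact sketch of the four BP steps for a small network trained on XOR; the layer sizes, learning rate, and error criterion are illustrative, and convergence depends on the random initial weights:

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    n_in, n_hid, n_out = 2, 3, 1      # layer sizes (illustrative)
    alpha = 0.5                       # learning rate (illustrative)

    # Step 1: initialise weights and thresholds to small random numbers
    w_ih = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in)]
    w_ho = [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_hid)]
    th_h = [random.uniform(-1, 1) for _ in range(n_hid)]
    th_o = [random.uniform(-1, 1) for _ in range(n_out)]

    data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]   # XOR

    for epoch in range(20000):
        sse = 0.0
        for x, y_d in data:
            # Step 2: forward pass through the hidden and output layers
            y_h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(n_in)) - th_h[j])
                   for j in range(n_hid)]
            y_o = [sigmoid(sum(y_h[j] * w_ho[j][k] for j in range(n_hid)) - th_o[k])
                   for k in range(n_out)]
            e = [y_d[k] - y_o[k] for k in range(n_out)]
            sse += sum(err ** 2 for err in e)

            # Step 3: error gradients (output layer, then hidden layer) and updates
            d_o = [y_o[k] * (1 - y_o[k]) * e[k] for k in range(n_out)]
            d_h = [y_h[j] * (1 - y_h[j]) * sum(d_o[k] * w_ho[j][k] for k in range(n_out))
                   for j in range(n_hid)]
            for j in range(n_hid):
                for k in range(n_out):
                    w_ho[j][k] += alpha * y_h[j] * d_o[k]
            for k in range(n_out):
                th_o[k] += alpha * (-1) * d_o[k]     # threshold treated as a weight on input -1
            for i in range(n_in):
                for j in range(n_hid):
                    w_ih[i][j] += alpha * x[i] * d_h[j]
            for j in range(n_hid):
                th_h[j] += alpha * (-1) * d_h[j]

        # Step 4: iterate until the sum of squared errors is small enough
        if sse < 0.001:
            break

    print("epochs:", epoch + 1, "sse:", round(sse, 5))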

41 Multi-Layered Perceptrons
We can accelerate training by including a momentum term in the delta rule, turning it into the generalized delta rule:
Δwjk(p) = β × Δwjk(p−1) + α × yj(p) × δk(p)
where β is a positive number (0 ≤ β < 1) called the momentum constant. Typically, the momentum constant is set to 0.95.
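As a sketch, the weight correction under the generalized delta rule simply adds a fraction β of the previous correction; the names and numbers below are my own:

    alpha, beta = 0.1, 0.95

    def weight_correction(prev_delta_w, y_j, delta_k):
        """delta_w_jk(p) = beta * delta_w_jk(p-1) + alpha * y_j(p) * grad_k(p)."""
        return beta * prev_delta_w + alpha * y_j * delta_k

    print(weight_correction(0.02, 0.8, 0.1))   # 0.95*0.02 + 0.1*0.8*0.1 = 0.027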

42 Multi-Layered Perceptrons
To accelerate convergence and yet avoid the danger of instability, we can apply two heuristics:
Heuristic 1: If the change of the sum of squared errors has the same algebraic sign for several consecutive epochs, then the learning rate parameter α should be increased.
The sum of squared errors (the network performance measure) over an epoch: E = Σ over all training examples of Σk=1..l ek(p)²
Heuristic 2: If the algebraic sign of the change of the sum of squared errors alternates for several consecutive epochs, then the learning rate parameter α should be decreased.

43 Multi-Layered Perceptrons
Adapting the learning rate If the sum of squared errors at the current epoch exceeds the previous value by more than a predefined ratio (typically 1.04), the learning rate parameter is decreased (typically by multiplying by 0.7) and new weights and thresholds are calculated. If the error is less than the previous one, the learning rate is increased (typically by multiplying by 1.05).
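A minimal sketch of this rule; only the learning-rate adjustment is shown (the recalculation of weights and thresholds after a rejected step is omitted), and the function name is my own:

    def adapt_learning_rate(alpha, sse_current, sse_previous,
                            ratio=1.04, down=0.7, up=1.05):
        """Shrink alpha after a significant error increase, grow it after a decrease."""
        if sse_current > ratio * sse_previous:
            return alpha * down        # error grew by more than the ratio: decrease alpha
        if sse_current < sse_previous:
            return alpha * up          # error fell: gently increase alpha
        return alpha

    print(adapt_learning_rate(0.1, 1.30, 1.00))   # ~0.07
    print(adapt_learning_rate(0.1, 0.90, 1.00))   # ~0.105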

44 Applications
Decision trees: data mining, churn model construction.
Multi-Layered Perceptrons: function approximation, pattern recognition, handwriting recognition, case-based retrieval.

