
1
BOAI: Fast Alternating Decision Tree Induction based on Bottom-up Evaluation Bishan Yang, Tengjiao Wang, Dongqing Yang, and Lei Chang School of EECS, Peking University

2
Outline Motivation Related work Preliminaries Our Approach Experimental Results Summary

3
Motivation
Alternating Decision Tree (ADTree) is an effective decision tree algorithm based on AdaBoost.
–Highly accurate classifier
–Small tree size and easy to interpret
–Provides measures of prediction confidence
Wide range of applications
–Customer churn prediction
–Fraud detection
–Disease trait modeling
–…

4
Limitation of existing work
Very expensive to apply to large sets of training data
–Takes hours to train on a large number of examples and attributes
–Training time grows exponentially with increasing data size

5
Related work
Several techniques have been developed to tackle the efficiency problem for traditional decision trees (ID3, C4.5)
–SLIQ (EDBT, Mehta et al.): introduces data structures called the attribute list and the class list
–SPRINT (VLDB, J. Shafer et al.): constructs only the attribute list
–PUBLIC (VLDB, Rastogi & Shim): integrates MDL pruning into the tree-building process
–RainForest (VLDB, Gehrke, Ramakrishnan & Ganti): introduces AVC-groups, which are sufficient for split evaluation
–BOAT (PODS, Gehrke, Ganti, Ramakrishnan & Loh): builds the tree from a subset of the data using bootstrapping
These cannot be applied directly to ADTree, which evaluates splits based on information at the current node

6
Related work
Several optimizing methods exist for ADTree
–PAKDD, Pfahringer et al.: Zpure cut-off, merging, and three heuristic mechanisms
 Cons: little improvement until reaching a large number of iterations; cannot guarantee the model quality
–ILP, Vanassche et al.: caching optimization
 Cons: the additional memory consumption grows fast with an increasing number of boosting rounds

7
Preliminaries
Alternating Decision Tree (ADTree) (ICML)
Classification:
–the sign of the sum of the prediction values along the paths defined by the instance
–E.g. for instance (age, income) = (35, 1300): sign(f(x)) = sign(0.7) = +1
[Figure: example ADTree with decision nodes such as Age < 40 and Income > …; rectangles are decision nodes, ellipses are prediction nodes]
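The sum-of-predictions rule above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the class names and the tiny example tree are assumptions.

```python
# Minimal sketch of ADTree classification: an instance follows every path
# whose decision conditions it satisfies and sums the prediction values.

class PredictionNode:
    def __init__(self, value, splitters=None):
        self.value = value                # real-valued contribution to f(x)
        self.splitters = splitters or []  # child decision nodes

class DecisionNode:
    def __init__(self, condition, yes, no):
        self.condition = condition        # function: instance -> bool
        self.yes, self.no = yes, no       # prediction-node children

def score(node, x):
    """Sum the prediction values along all paths defined by instance x."""
    s = node.value
    for d in node.splitters:
        s += score(d.yes if d.condition(x) else d.no, x)
    return s

def classify(root, x):
    """ADTree output is the sign of the summed prediction values."""
    return 1 if score(root, x) >= 0 else -1

# Tiny illustrative tree: root value 0.5, one split on age < 40.
root = PredictionNode(0.5, [DecisionNode(lambda x: x["age"] < 40,
                                         PredictionNode(0.2),
                                         PredictionNode(-0.9))])
print(classify(root, {"age": 35}))  # 0.5 + 0.2 is positive -> 1
```

The recursion mirrors the tree's alternating structure: every prediction node contributes its value, and every decision node routes the instance to exactly one of its two children.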

8
Preliminaries
Algorithm ADTree Induction
Input: training set; number of boosting rounds T
For t = 1 to T do
 /* Evaluation phase */
 for all c1 ∈ P_t and c2 ∈ C, calculate
  Z(c1, c2) = 2(√(W+(c1∧c2)·W−(c1∧c2)) + √(W+(c1∧¬c2)·W−(c1∧¬c2))) + W(¬c1)
 Select the (c1, c2) which minimizes Z(c1, c2)
 /* Partition phase */
 Add the new decision node and its two prediction values; update weights: w_{t+1}(x) = w_t(x)·e^{−r_t(x)·y}, where r_t(x) is the prediction value
P_t is the set of preconditions; C is the set of base conditions (c = (A < (v_i + v_{i+1})/2) for numeric attributes, or c = (A = v_i) for categorical ones)
W+(c) (resp. W−(c)) is the total weight of the positive (resp. negative) instances satisfying condition c
Complexity mainly lies in the evaluation phase!
Weights are increased for misclassified instances and decreased for correctly classified instances
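The evaluation-phase arithmetic can be sketched directly from the weight sums. This follows the Z-value and smoothed prediction-value formulas of the original ADTree paper; the function and argument names here are illustrative.

```python
import math

def z_value(wp_c, wn_c, wp_nc, wn_nc, w_rest):
    """Z(c1, c2) for a candidate split, from the weight distribution:
    wp_c, wn_c   -- W+ and W- of instances satisfying c1 AND c2
    wp_nc, wn_nc -- W+ and W- of instances satisfying c1 AND NOT c2
    w_rest       -- total weight of instances not satisfying c1."""
    return 2 * (math.sqrt(wp_c * wn_c) + math.sqrt(wp_nc * wn_nc)) + w_rest

def prediction_value(wp, wn):
    """Smoothed prediction value for a new prediction node."""
    return 0.5 * math.log((wp + 1.0) / (wn + 1.0))

# A pure split (each side holds only one class) gives the lowest Z:
print(z_value(4.0, 0.0, 0.0, 4.0, 0.0))  # 0.0
print(z_value(4.0, 1.0, 1.0, 4.0, 0.0))  # 8.0
```

The minimizing (c1, c2) pair becomes the new decision node, which is why the evaluation phase, repeated over all preconditions and base conditions, dominates the cost.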

9
Our approach – BOAI
Evaluation phase in top-down evaluation:
–instances need to be sorted on numeric attributes at each prediction node
–the weight distribution for all possible splits must be calculated by scanning the instances at each prediction node
–a great deal of the sorting and computing overlaps across nodes
BOAI (Bottom-up evaluation for ADTree Induction)
–Presorting technique: reduces the sorting cost
–Bottom-up evaluation: evaluates splits from the leaf nodes to the root node, avoiding much redundant computing and sorting cost while obtaining exactly the same evaluation results as top-down evaluation

10
Pre-sorting technique
Preprocessing step
–Sort the values of each numeric attribute, map the sorted value space x_0, x_1, …, x_{m−1} to 0, 1, …, m−1, and then replace the attribute values with their sorted indexes
–Use the sorted indexes to speed up sorting during the split evaluation phase
E.g. sorting space 1500, 1600, 1800 → sorted indexes 0, 1, 2; the original values in the data are replaced with these indexes
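A sketch of this preprocessing step, matching the slide's example (the function name is an assumption):

```python
def presort(values):
    """Map each numeric value to its index in the sorted space of
    distinct values, so later per-node work uses small integers."""
    space = sorted(set(values))                 # e.g. [1500, 1600, 1800]
    rank = {v: i for i, v in enumerate(space)}  # value -> sorted index
    return [rank[v] for v in values], space

indexes, space = presort([1800, 1500, 1600, 1500])
print(indexes)  # [2, 0, 1, 0]
print(space)    # [1500, 1600, 1800]
```

Because indexes run over 0..m−1, later per-node "sorts" can become linear bucket passes instead of comparison sorts.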

11
VW-set (Attribute-Value, Class-Weight)
Only the weight distribution (W+, W−) over the distinct attribute values is needed for split evaluation
–Just keep the necessary information!
VW-set of attribute A at node p
–stores the weight distribution of each class for each distinct value of A in F(p) (F(p) denotes the instances projected onto p)
–If A is a numeric attribute, the distinct values in the VW-set must be sorted
VW-group of node p
–the set of all VW-sets at node p
Each prediction node can be evaluated based on its VW-group
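A VW-set for a categorical attribute can be sketched as a hash map from distinct value to per-class weight totals. This is an illustration of the data structure; the record layout (`label`, `weight` fields) is an assumption.

```python
from collections import defaultdict

def build_vw_set(instances, attr):
    """VW-set of `attr` over F(p): distinct value -> [W+, W-]."""
    vw = defaultdict(lambda: [0.0, 0.0])
    for inst in instances:
        # accumulate the instance weight under its class
        vw[inst[attr]][0 if inst["label"] == 1 else 1] += inst["weight"]
    return dict(vw)

F_p = [{"dept": 2, "label": 1,  "weight": 0.4},
       {"dept": 2, "label": -1, "weight": 0.1},
       {"dept": 5, "label": 1,  "weight": 0.5}]
print(build_vw_set(F_p, "dept"))  # {2: [0.4, 0.1], 5: [0.5, 0.0]}
```

Note that the structure's size depends only on the number of distinct values, not on |F(p)|, which is the point of the next slide.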

12
VW-set (Attribute-Value, Class-Weight)
The size of a VW-set is determined by the distinct attribute values appearing in F(p), and is not proportional to the size of F(p)
[Table: example VW-set entries such as W+(Dept.=2) and W−(Dept.=2)]

13
Bottom-up evaluation
The main idea:
–evaluate splits from the leaf nodes to the root node
–reuse already-computed statistics to evaluate parent nodes
–much computing and sorting redundancy can be avoided
Evaluation is based on VW-groups

14
Bottom-up evaluation (Cont.)
For leaf nodes
–directly construct the VW-group by scanning the instances at the node
–VW-set on a categorical attribute: use a hash table to index the distinct values and collect their weights
–VW-set on a numeric attribute: map weights to the corresponding indexes in the value space and compress them into a VW-set; sorting then takes linear time, O(n+m)
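With pre-sorted indexes, the numeric case reduces to a bucket pass: one scan over the n instances, one scan over the m buckets, hence O(n+m) with no comparison sort. A sketch under the same assumed record layout as before:

```python
def numeric_vw_set(instances, attr, m):
    """Build a sorted numeric VW-set in O(n + m), assuming the values of
    `attr` were already replaced by their sorted indexes 0..m-1."""
    buckets = [[0.0, 0.0] for _ in range(m)]  # index -> [W+, W-]
    for inst in instances:
        buckets[inst[attr]][0 if inst["label"] == 1 else 1] += inst["weight"]
    # Compress: keep only the indexes that occur; order is already sorted.
    return [(i, wp, wn) for i, (wp, wn) in enumerate(buckets) if wp or wn]

F_p = [{"income": 2, "label": 1,  "weight": 0.25},
       {"income": 0, "label": -1, "weight": 0.2},
       {"income": 2, "label": 1,  "weight": 0.25}]
print(numeric_vw_set(F_p, "income", 3))  # [(0, 0.0, 0.2), (2, 0.5, 0.0)]
```

The compression step is what keeps the VW-set proportional to the distinct values rather than to m or n.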

15
Bottom-up evaluation (Cont.)
For internal nodes
–construct the VW-group by merging the VW-groups of its two children (prediction nodes)
–Sort cost: O(V1 + V2)
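Merging two already-sorted VW-sets is a single linear pass over both, which is where the O(V1 + V2) bound comes from. A sketch, assuming the (value, W+, W−) tuple representation used above:

```python
def merge_vw_sets(a, b):
    """Merge two sorted VW-sets of (value, W+, W-) tuples in O(V1 + V2)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i][0] < b[j][0]:
            out.append(a[i]); i += 1
        elif a[i][0] > b[j][0]:
            out.append(b[j]); j += 1
        else:  # same value in both children: add the class weights
            out.append((a[i][0], a[i][1] + b[j][1], a[i][2] + b[j][2]))
            i += 1; j += 1
    out.extend(a[i:]); out.extend(b[j:])  # leftover tail of either input
    return out

left  = [(0, 1.0, 0.0), (2, 0.5, 0.5)]
right = [(1, 0.0, 1.0), (2, 0.5, 0.0)]
print(merge_vw_sets(left, right))
# [(0, 1.0, 0.0), (1, 0.0, 1.0), (2, 1.0, 0.5)]
```

This is the merge step of merge sort applied to weight statistics: the parent inherits sorted order for free, so no instance at the internal node is ever re-scanned or re-sorted.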

16
Evaluation Algorithm
[Figure: flowchart of the evaluation algorithm — leaf nodes use direct construction of the VW-group, internal nodes use merge construction; categorical and numeric splits are evaluated, then the algorithm proceeds to the other children]

17
Computation analysis
For a prediction node p with |F(p)| = n:
Top-down evaluation:
–sorting cost for each numeric attribute: O(n log n)
–Z-value calculation cost for each attribute: O(n)
Bottom-up evaluation:
–sorting cost for each numeric attribute:
 Leaf node: in most cases O(n+m)
 Internal node: sort through merging, O(V1+V2), where V1 and V2 are the numbers of distinct values in the two merged VW-groups; these are usually much smaller than n
–Z-value calculation cost for each attribute: O(V), where V is the number of distinct values in the VW-group, usually much smaller than n

18
Experiments
Data sets
–Synthetic data sets: IBM Quest data mining group, up to 500,000 instances
–Real data set: China Mobile Communication Company, 290,000 subscribers covering 92 variables
Environment
–AMD CPU running Windows XP with 768MB main memory

19
Experimental results (Synthetic data)

20
Experimental results (real data)

21
Apply to churn prediction
–Calibration set: 20,083 instances; validation set: 5,062 instances
–Imbalance problem: about 2.1% churn rate
–Re-balancing strategy: multiply the weight of each instance in the minority class by W_maj/W_min (W_maj (resp. W_min) is the total weight of the majority (resp. minority) class instances)
 Little information loss, and on average it does not require more computing power
[Table: models compared — ADT (w/o re-balancing), Random Forests, TreeNet, BOAI — on F-measure, G-mean, W-accuracy, and modeling time (sec)]
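The re-balancing strategy above can be sketched in a few lines (the function name is illustrative):

```python
def rebalance(weights, labels, minority=1):
    """Multiply each minority-class weight by W_maj / W_min so that both
    classes end up carrying equal total weight."""
    w_min = sum(w for w, y in zip(weights, labels) if y == minority)
    w_maj = sum(w for w, y in zip(weights, labels) if y != minority)
    ratio = w_maj / w_min
    return [w * ratio if y == minority else w
            for w, y in zip(weights, labels)]

# One churner vs. three non-churners, all starting at weight 1.0:
w = rebalance([1.0, 1.0, 1.0, 1.0], [1, -1, -1, -1])
print(w)  # [3.0, 1.0, 1.0, 1.0]
```

Since only existing weights are scaled, no instances are dropped or duplicated, which matches the slide's claim of little information loss and no extra computing cost.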

22
Summary
We developed a novel approach for ADTree induction that speeds up training on large data sets
Key insight:
–eliminate the great redundancy of sorting and computation in tree induction by using a bottom-up evaluation approach based on VW-groups
Experiments on both synthetic and real data sets show that BOAI offers significant performance improvement while constructing exactly the same model
It's an attractive algorithm for modeling on large data sets!

23
Thanks!
