
Using decision trees and their ensembles for analysis of NIR spectroscopic data. S. Kucheryavskiy. WSC-11, Saint Petersburg, 2018. In the light of the morning session on superresolution.

Outline

Why decision trees? What decision trees are? Decision trees ensembles. Cases: Tecator, Olives. Conclusions.

Why decision trees? Why not?

But why decision trees? Kaggle CEO and founder Anthony Goldbloom: "...in the history of Kaggle competitions, there are only two machine learning approaches that win competitions: handcrafted and neural networks." "...It used to be random forest that was the big winner, but over the last six months a new algorithm called XGBoost has cropped up, and it's winning practically every competition in the structured data category."

Why NIR spectroscopic data? When can linear regression be better than decision tree methods? When the relationship between X and y is fully linear; when there is a very large number of features with a low S/N ratio; when covariate shift is likely.

What decision trees are? [Tree diagram: yes/no splits on "Drinks beer?", "Knows statistics?" and "Steals ideas from statisticians?", with "Chemometrician" / "Not chemometrician" leaves.]
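The exact branch layout of this joke tree is not fully recoverable from the slide, so the following is only an illustration of the idea that a decision tree is nothing but nested yes/no rules; plain Python, no library required:

```python
def is_chemometrician(drinks_beer: bool, knows_statistics: bool,
                      steals_ideas_from_statisticians: bool) -> bool:
    # Each internal node asks one yes/no question; each leaf carries a class label.
    # The branching below is illustrative, not the exact tree from the slide.
    if drinks_beer:
        if knows_statistics:
            return steals_ideas_from_statisticians
        return True
    return False

print(is_chemometrician(True, False, False))  # True, i.e. "chemometrician"
```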

Decision trees for numeric variables

Decision trees for numeric variables. Where are the other variables? At every split the best variable is used; the number of splits (the tree depth) is limited; the efficiency of a split is the reduction of the misclassification error it achieves.
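As a small numeric illustration of "efficiency of a split" (the function names and the toy labels below are made up for illustration, not taken from the talk): the gain of a split is the misclassification error of the parent node minus the size-weighted error of the two child nodes.

```python
import numpy as np

def misclassification_error(y):
    """Fraction of objects not belonging to the majority class of the node."""
    _, counts = np.unique(y, return_counts=True)
    return 1.0 - counts.max() / len(y)

def split_gain(y_parent, y_left, y_right):
    """Reduction of misclassification error achieved by one split."""
    n = len(y_parent)
    weighted_child = (len(y_left) / n) * misclassification_error(y_left) \
                   + (len(y_right) / n) * misclassification_error(y_right)
    return misclassification_error(y_parent) - weighted_child

# Toy example: a split that isolates most of class "B"
y_parent = np.array(list("AAAABBBB"))
y_left, y_right = y_parent[:5], y_parent[5:]
print(split_gain(y_parent, y_left, y_right))  # 0.375
```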

Decision trees for numeric variables. How many splits? Limit the minimum number of objects in each bucket; limit the maximum tree size (depth / number of splits); or grow a big tree and prune all inefficient splits. Use cross-validation to calculate the errors. [Tree figure: each split annotated with its error reduction (−50%, −44%, −2%) and a node value (50%, 6%, 4%).]
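A hedged scikit-learn sketch of these options (all data and parameter values are arbitrary placeholders, not settings from the talk): restrict the tree while it is grown, or grow a large tree and prune it back with cost-complexity pruning, choosing the pruning strength by cross-validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=1)

# Options 1 and 2: restrict the tree while growing it
small_tree = DecisionTreeClassifier(min_samples_leaf=5, max_depth=4).fit(X, y)

# Option 3: grow a big tree, then prune inefficient splits;
# ccp_alpha controls the pruning strength and is chosen by cross-validation
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=1),
    param_grid={"ccp_alpha": np.linspace(0.0, 0.05, 11)},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```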

Decision trees regression. Variable importance: is calculated for each variable individually; takes into account the role of a variable in different splits; is accumulated across all splits and normalized.

Decision trees regression. The response variable is split into several bins; splits are chosen to minimize the variance within each node.
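A minimal sketch of both points on synthetic data (the data and settings are placeholders): scikit-learn's DecisionTreeRegressor splits so that the squared error, i.e. the variance of the response within each node, is minimized, and feature_importances_ returns the importance accumulated over all splits and normalized to sum to one.

```python
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=10, n_informative=3, random_state=0)

# Splits are chosen to minimize the variance of y within the resulting nodes
tree = DecisionTreeRegressor(criterion="squared_error", max_depth=4, random_state=0).fit(X, y)

# Importance of each variable, accumulated across all splits and normalized
for i, importance in enumerate(tree.feature_importances_):
    print(f"x{i}: {importance:.3f}")
```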

Decision trees ensembles. Ensemble learning — combine several models together. A group of weak learners can perform better together: an ensemble decreases variance and makes predictions more stable and reliable.

Decision trees ensembles. Bagging: create N random subsets (sampling with replacement); train a model on every subset (in parallel); use a simple average for prediction → random forest. Boosting: train a model on a random subset; build N improved models using new subsets drawn randomly with replacement (sequentially); use a weighted average for prediction → gradient boosting.
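A sketch of the two ensemble families in scikit-learn (data, numbers of trees and learning rate are placeholders, not settings from the talk); XGBoost follows the same boosting idea but is a separate library.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=50, noise=5.0, random_state=0)

# Bagging: many trees trained in parallel on bootstrap samples, predictions averaged
rf = RandomForestRegressor(n_estimators=500, random_state=0)

# Boosting: trees trained sequentially, each correcting the current ensemble,
# combined with weights (controlled by the learning rate)
gb = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R2 = {scores.mean():.2f}")
```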

Tecator: prediction of fat content in chopped meat samples from NIR spectra. http://lib.stat.cmu.edu/datasets/tecator 100 predictors (NIR spectra acquired with a Tecator Infratec Food and Feed Analyzer, 850–1050 nm), 215 measurements (172 for calibration and 43 for test).
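A sketch of how the calibration/test split above could be set up with scikit-learn. Only the dataset description (100 predictors, 215 samples, 172/43 split) comes from the slide; the file name tecator.csv, the column names and the use of a random forest here are assumptions for illustration (the data at the URL above are distributed as a plain-text file in a different layout).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Hypothetical layout: 100 absorbance columns plus a "fat" response column
data = pd.read_csv("tecator.csv")
X = data.drop(columns="fat").to_numpy()
y = data["fat"].to_numpy()

# 172 calibration and 43 test samples, as on the slide
X_cal, X_test = X[:172], X[172:215]
y_cal, y_test = y[:172], y[172:215]

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)
rmsep = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSEP = {rmsep:.2f}")
```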

Single tree — predictions

Single tree — the tree

Single tree — variable importance

Single tree — variable selection

Single tree — variable selection
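One way the single-tree variable selection could be expressed in code, continuing the Tecator sketch above (X_cal, y_cal); the assumption that selection means keeping only the wavelengths with non-zero importance, i.e. those actually used in a split, is mine, not necessarily what was done in the talk.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_cal, y_cal)

# Keep only the variables that were actually used in a split (non-zero importance)
selected = np.flatnonzero(tree.feature_importances_ > 0)
print("selected variables:", selected)

# Refit using the selected wavelengths only
tree_selected = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_cal[:, selected], y_cal)
```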

Random forest — predictions

Random forests — importance of variables

Random forests — variable selection
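The forest analogue, again continuing the Tecator sketch; here scikit-learn's SelectFromModel keeps the wavelengths whose forest importance exceeds the mean importance (the threshold is an arbitrary choice for illustration).

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)

# Keep variables whose importance is above the mean importance across all variables
selector = SelectFromModel(forest, threshold="mean", prefit=True)
X_cal_selected = selector.transform(X_cal)
print("kept", X_cal_selected.shape[1], "of", X_cal.shape[1], "variables")
```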

Olives

Single tree — the tree and splits

Single tree — classification

Random forest — classification
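For the olive classification the forest is used in the same way; a sketch with stand-in data, since the olive spectra are not reproduced here (X would be the NIR spectra, y the class labels).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# Stand-in data with three classes; replace with the olive spectra and labels
X, y = make_classification(n_samples=120, n_features=100, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
y_pred = cross_val_predict(forest, X, y, cv=5)
print(confusion_matrix(y, y_pred))
```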

Variable importance

Conclusions. "The bottom line is: you can spend 3 hours playing with the data, generating features and interaction variables and get a 77% r-squared; and I can 'from sklearn.ensemble import RandomForestRegressor' and in 3 minutes get an 82% r-squared."

IASIM-2016

IASIM-2018, June 17–20, 2018, Seattle, WA, USA. Deadlines: 12 March for student scholarships, 5 April for abstracts. www.iasim18.iasim.net