
Playing with features for learning and prediction Jongmin Kim Seoul National University

Problem statement: predicting the outcome of surgery.

An ideal approach? Learn directly from the training data to predict the outcome of surgery.

Predicting outcome of surgery. Initial approach: predict partial features. But which features should we predict?

Predicting outcome of surgery. Surgery: DHL+RFT+TAL+FDO. Outcome features:
- flexion of the knee (min/max)
- dorsiflexion of the ankle (min)
- rotation of the foot (min/max)

Predicting outcome of surgery. Are these good features? Number of training samples:
- DHL+RFT+TAL: 35
- FDO+DHL+TAL+RFT: 33

Machine learning and features: Data → Feature representation → Learning algorithm.
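This pipeline can be made concrete with a small scikit-learn sketch; the data below is synthetic, and all shapes and names are illustrative:

```python
# Data -> feature representation -> learning algorithm, as a sketch.
# A hand-chosen representation (scaling + PCA) feeds a simple learner.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 48))             # e.g. motion features per patient
y = 2.0 * X[:, 0] + rng.normal(size=100)   # synthetic outcome

model = Pipeline([
    ("scale", StandardScaler()),    # feature representation...
    ("pca", PCA(n_components=10)),  # ...hand-designed here, not learned
    ("reg", Ridge(alpha=1.0)),      # learning algorithm
])
model.fit(X, y)
print(model.predict(X[:3]))
```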

Features in motion data: joint positions/angles, velocities/accelerations, distances between body parts, contact status, …

Features in computer vision: SIFT, spin images, HoG, RIFT, textons, GLOH.

Machine learning and features

Outline
- Feature selection
  - Feature ranking
  - Subset selection: wrapper, filter, embedded
  - Recursive Feature Elimination
  - Combining weak learners (boosting): AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression)
- Prediction results with feature selection
- Feature learning?

Feature selection:
- alleviates the effects of the curse of dimensionality
- improves prediction performance
- makes learning faster and more cost-effective
- provides a better understanding of the data

Subset selection: wrapper, filter, and embedded methods.
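A small scikit-learn sketch of all three families on synthetic data (the dataset, k, and alpha are illustrative); each method should recover roughly the two informative features:

```python
# Filter (univariate score), wrapper (RFE), embedded (L1 penalty).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression, RFE
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(scale=0.1, size=60)

# Filter: rank features independently of any learning model.
filt = SelectKBest(f_regression, k=2).fit(X, y)

# Wrapper: Recursive Feature Elimination around a model.
wrap = RFE(LinearRegression(), n_features_to_select=2).fit(X, y)

# Embedded: selection happens inside training (L1 drives weights to 0).
emb = Lasso(alpha=0.1).fit(X, y)

print("filter  :", np.flatnonzero(filt.get_support()))
print("wrapper :", np.flatnonzero(wrap.support_))
print("embedded:", np.flatnonzero(emb.coef_ != 0))
```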

Feature learning? Can we automatically learn a good feature representation? This is known as unsupervised feature learning, feature learning, deep learning, representation learning, etc. Hand-designed features (by humans) need expert knowledge and require time-consuming hand-tuning. When it is unclear how to hand-design features, features can be learned automatically (by the machine).

Learning feature representations. Key idea: learn the statistical structure or correlations of the data from unlabeled data; the learned representations can then be used as features in supervised and semi-supervised settings.

Learning feature representations. An encoder maps the input (image/features) to output features along the feed-forward / bottom-up path; a decoder maps the features back to the input along the feed-back / generative / top-down path.

Learning feature representations, e.g. Predictive Sparse Decomposition [Kavukcuoglu et al., '09]: an input patch x is mapped to sparse features z through encoder filters W and a sigmoid function σ(·), z = σ(Wx); decoder filters D reconstruct the input as Dz, with an L1 sparsity penalty on z.
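A simplified numpy sketch of this encoder/decoder pair. Note that real PSD infers the sparse code z in a separate optimization and trains the encoder to predict it; for brevity this sketch takes z = σ(Wx) directly and descends the reconstruction-plus-sparsity loss, so it behaves like a sparse autoencoder. All sizes and step sizes are arbitrary.

```python
# z = sigmoid(W x), x_hat = D z, loss = 0.5*||x - D z||^2 + lam*||z||_1
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, lam, lr = 64, 32, 0.1, 0.01
W = rng.normal(scale=0.1, size=(n_hid, n_in))  # encoder filters W
D = rng.normal(scale=0.1, size=(n_in, n_hid))  # decoder filters D

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = rng.normal(size=n_in)                      # one input patch
for _ in range(200):
    z = sigmoid(W @ x)                         # sparse features
    x_hat = D @ z                              # reconstruction
    err = x_hat - x
    dz = D.T @ err + lam * np.sign(z)          # grad of loss w.r.t. z
    da = dz * z * (1.0 - z)                    # back through the sigmoid
    W -= lr * np.outer(da, x)
    D -= lr * np.outer(err, z)
print("squared reconstruction error:", float(err @ err))
```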

Stacked auto-encoders [Hinton & Salakhutdinov, Science '06]: encoder/decoder pairs are stacked, with each encoder's features serving as the input on which the next pair is trained; the top-level features connect to the class label.
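A numpy sketch of the greedy layer-wise scheme: each (tied-weight) autoencoder is trained on the features produced by the one below it. The layer sizes, data, and plain gradient descent are all placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_autoencoder(X, n_hid, lr=0.1, epochs=100):
    """One tied-weight autoencoder: encode z = sigmoid(X W), decode z W^T."""
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hid))
    for _ in range(epochs):
        Z = sigmoid(X @ W)                       # encode
        err = Z @ W.T - X                        # decode + residual
        dZ = (err @ W) * Z * (1.0 - Z)
        W -= lr * (X.T @ dZ + err.T @ Z) / len(X)
    return W

X = rng.normal(size=(200, 64))                   # unlabeled data (synthetic)
features, weights = X, []
for n_hid in (32, 16, 8):                        # stack three encoders
    W = train_autoencoder(features, n_hid)
    weights.append(W)
    features = sigmoid(features @ W)             # input to the next layer
print("top-level feature shape:", features.shape)
```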

At test time [Hinton & Salakhutdinov, Science '06]: remove the decoders and use only the feed-forward path, from input image through the stacked encoders to the class label. This gives a standard (convolutional) neural network, which can be fine-tuned with backprop.
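Continuing the sketch above (reusing its X, weights, and sigmoid): discard the decoders, keep the feed-forward encoder stack, and train a supervised model on the top-level features. The labels y here are hypothetical stand-ins.

```python
from sklearn.linear_model import LogisticRegression

def encode(x, weights):
    for W in weights:
        x = sigmoid(x @ W)       # feed-forward / bottom-up path only
    return x

y = (X[:, 0] > 0).astype(int)    # stand-in class labels, for illustration
clf = LogisticRegression().fit(encode(X, weights), y)
print("training accuracy:", clf.score(encode(X, weights), y))
# Fine-tuning with backprop would further adjust `weights` jointly with clf.
```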

Status & plan. Understanding the data / surveying learning techniques… Plan: finish experiments in November, write the paper in December, submit to SIGGRAPH in January, present in the US in August. But before all of that….

Deep neural nets vs. boosting
Deep nets:
- a single, highly non-linear system
- a "deep" stack of simpler modules
- all parameters are subject to learning
Boosting & forests:
- a sequence of "weak" (simple) classifiers that are linearly combined to produce a powerful classifier
- subsequent classifiers do not exploit the representations of earlier classifiers; the result is a "shallow" linear mixture
- features are typically not learned
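The boosting side of the comparison, sketched with scikit-learn's gradient boosting for regression (data and hyperparameters are illustrative): a sequence of deliberately shallow trees, each fit to the residuals of the ensemble so far and added with a small weight, with the input features used as given rather than learned.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

gbr = GradientBoostingRegressor(
    n_estimators=200,   # length of the sequence of weak learners
    max_depth=2,        # each learner is deliberately "shallow"
    learning_rate=0.1,  # weight of each learner in the linear combination
).fit(X, y)
print("training R^2:", gbr.score(X, y))
```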

Deep neural net vs. boosting

Feature learning for motion data. Learning representations of temporal data: model complex, nonlinear dynamics such as style. Restricted Boltzmann machines: I do not fully understand the concept yet, and the results so far are not impressive.

Restricted Boltzmann machines model complex, nonlinear dynamics, and allow the latent binary states to be inferred easily and exactly given the observations.
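A minimal numpy sketch of a binary RBM trained with one step of contrastive divergence (CD-1). The "easy and exact" inference is the factorized posterior p(h_j = 1 | v) = σ(Wv + b)_j; sizes, learning rate, and the toy data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 20, 10, 0.05
W = rng.normal(scale=0.1, size=(n_hid, n_vis))
b, c = np.zeros(n_hid), np.zeros(n_vis)       # hidden / visible biases

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

V = (rng.uniform(size=(100, n_vis)) < 0.3).astype(float)  # toy binary data
for _ in range(100):
    # Exact inference of the latent binary states given the observations:
    ph = sigmoid(V @ W.T + b)
    h = (rng.uniform(size=ph.shape) < ph).astype(float)
    # One Gibbs step back down and up again (the "negative" phase):
    pv = sigmoid(h @ W + c)
    ph2 = sigmoid(pv @ W.T + b)
    # CD-1 update: positive minus negative statistics.
    W += lr * (ph.T @ V - ph2.T @ pv) / len(V)
    b += lr * (ph - ph2).mean(axis=0)
    c += lr * (V - pv).mean(axis=0)
print("mean reconstruction error:", float(np.abs(V - pv).mean()))
```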