Transfer Learning: Motivation and Types / Functional Transfer Learning / Representational Transfer Learning / References

Transfer Learning: Motivation and Types / Functional Transfer Learning / Representational Transfer Learning / References

Transfer Learning: The goal is to transfer knowledge gathered from previous experience to new tasks. Also called inductive transfer or learning to learn. Example: transformations that remain invariant across tasks.

Motivation for Transfer Learning: Similar to self-adaptation, once a predictive model is built there are reasons to believe it will cease to be valid at some point in time. The difference is that the source and target domains can now be completely different.

Traditional Approach to Classification: [Diagram: each database DB1, DB2, ..., DBn feeds its own learning system, with no knowledge shared between them.]

Transfer Learning: [Diagram: knowledge extracted by a learning system from the source-domain databases DB1 and DB2 is transferred to help learn from the new target-domain database DBnew.]

Transfer Learning Scenarios: 1. Labeling in a new domain is costly. Example: DB1 (labeled, classification of Cepheids) is used to help with DB2 (unlabeled, classification of long-period variables, LPVs).

Transfer Learning Scenarios: 2. Data is outdated. A model was created with one survey, but a new survey is now available: can the learning system trained on Survey 1 be applied to Survey 2?

Types of Transfer Learning: figure obtained from Brazdil et al., Metalearning: Applications to Data Mining, Chapter 7, Springer, 2009.

Transfer Learning: Motivation and Types / Functional Transfer Learning / Representational Transfer Learning / References

Functional Transfer: Multitask Learning. [Diagram: a neural network with input nodes, internal nodes, and three task-specific output nodes: Left, Straight, Right.]

Functional Transfer in Neural Networks: Given an example x, compute the output of every node (applying the sigmoid function at the internal and output nodes) until the output nodes are reached.
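The forward pass above can be sketched as follows. This is a minimal illustration with hypothetical sizes (4 inputs, 3 shared internal nodes, one output node per task: Left, Straight, Right) and random weights; it is not tied to any specific network from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 4 inputs, 3 shared internal nodes, 3 task outputs.
rng = np.random.default_rng(0)
W_internal = rng.normal(size=(3, 4))   # input -> internal (shared by tasks)
W_output = rng.normal(size=(3, 3))     # internal -> one output per task

def forward(x):
    h = sigmoid(W_internal @ x)        # internal-node activations
    return sigmoid(W_output @ h)       # per-task sigmoid outputs

outputs = forward(np.array([0.2, -0.1, 0.5, 0.3]))
print(outputs)                         # three values in (0, 1)
```

Because the internal nodes are shared, training all tasks together shapes a common internal representation, which is the essence of functional transfer.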

Train in Parallel with a Combined Architecture: figure obtained from Brazdil et al., Metalearning: Applications to Data Mining, Chapter 7, Springer, 2009.

Transfer Learning: Motivation and Types / Functional Transfer Learning / Representational Transfer Learning / References

Knowledge of Parameters. Source domain: assume a prior distribution over the parameters, learn the parameters, and adjust the prior accordingly. Target domain: learn the parameters using the prior distribution obtained from the source.

Assume Parameter Similarity: P(y|x) = P(x|y) P(y) / P(x). Task A yields parameter A; Task B yields parameter B similar to A. Assume both parameters are drawn from a hyper-distribution with low variance.

Knowledge of Parameters: find coefficients w_s using SVMs on the source domain, then find coefficients w_T using SVMs on the target domain, initializing the search with w_s.
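A minimal sketch of this warm-start idea, assuming synthetic source/target data and a hand-rolled linear SVM trained by subgradient descent on the hinge loss (the data, hyperparameters, and solver here are illustrative, not from the slides):

```python
import numpy as np

def train_linear_svm(X, y, w0=None, lam=0.01, lr=0.1, epochs=100):
    """Subgradient descent on the regularized hinge loss; y in {-1, +1}.
    w0 allows warm-starting the search (e.g. from source weights w_s)."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(epochs):
        margins = y * (X @ w)
        # Subgradient: regularizer minus mean of margin-violating examples.
        grad = lam * w - (X * y[:, None])[margins < 1].sum(axis=0) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# Hypothetical source (abundant) and target (scarce) samples sharing a concept.
X_s = rng.normal(size=(200, 5))
y_s = np.sign(X_s[:, 0] + 0.1 * rng.normal(size=200))
X_t = rng.normal(loc=0.3, size=(20, 5))
y_t = np.sign(X_t[:, 0] + 0.1 * rng.normal(size=20))

w_s = train_linear_svm(X_s, y_s)           # source coefficients
w_T = train_linear_svm(X_t, y_t, w0=w_s)   # target search starts at w_s
acc = np.mean(np.sign(X_t @ w_T) == y_t)
print(acc)
```

With few target examples, starting the optimization at w_s biases the solution toward the source model, which is exactly the parameter-transfer assumption.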

Feature Transfer: learn a representation shared across the source and target domains by minimizing the loss function Loss(y, f(x)); the minimization is done jointly over multiple tasks (e.g., multiple regions on Mars).
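A sketch of this joint minimization on synthetic data (illustrative only: a linear shared map S with a small per-task head, trained by gradient descent on the squared loss summed over tasks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical tasks (standing in for two regions on Mars) whose targets
# depend on the same underlying features, so a shared representation helps.
tasks = []
for _ in range(2):
    X = rng.normal(size=(100, 6))
    y = X[:, 0] + X[:, 1] + 0.05 * rng.normal(size=100)
    tasks.append((X, y))

S = rng.normal(scale=0.3, size=(6, 3))                  # shared map: 6 -> 3
heads = [rng.normal(scale=0.3, size=3) for _ in tasks]  # per-task weights

lr = 0.02
for _ in range(1000):
    for t, (X, y) in enumerate(tasks):
        H = X @ S                              # shared features; f(x) = H @ head
        err = H @ heads[t] - y                 # residual of Loss(y, f(x))
        heads[t] -= lr * (H.T @ err) / len(y)  # task-specific update
        S -= lr * (X.T @ np.outer(err, heads[t])) / len(y)  # shared update

mse = np.mean([np.mean((X @ S @ heads[t] - y) ** 2)
               for t, (X, y) in enumerate(tasks)])
print(mse)
```

The key point is that S receives gradients from every task's loss, so it is pushed toward features useful across tasks, not just one.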

Feature Transfer: identify features common to all tasks.

Meta-Searching for Problem Solvers: code is divided into pieces; a new solution can either be started from scratch or built by adding pieces of code from previous tasks.

Exploitation (maximize immediate reward) vs. exploration (maximize long-term success).

Transfer Learning in Robotics: first task: learn to keep the ball away from the opponent; second task: learn to score against the opponent.

Instance Transfer: the learning system filters samples from the source domain and adds them to the target data, yielding a larger target dataset. An algorithm called TrAdaBoost implements this idea.

Instance Transfer Learning: the main idea behind TrAdaBoost is a methodology for dealing with a changing distribution. Source-domain examples that appear to belong to a different distribution are discarded, while source-domain examples that look similar to the target domain are added to the training set.

Boosting: instances incorrectly classified by the current hypothesis get their weight increased.

Boosting: combine all hypotheses to produce the final weighted function: w1 f1 + w2 f2 + ... + wn fn.
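The two boosting steps above (reweight mistakes, then combine hypotheses with weights) can be sketched with a minimal AdaBoost on decision stumps; the synthetic data and stump learner are illustrative:

```python
import numpy as np

def adaboost(X, y, rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps; y in {-1, +1}.
    The final classifier is sign(w1 f1(x) + w2 f2(x) + ... + wn fn(x))."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):            # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sgn in (1, -1):
                    pred = sgn * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sgn)
        err, j, thr, sgn = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = sgn * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # mistakes get larger weight
        w /= w.sum()
        stumps.append((j, thr, sgn))
        alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for (j, t, s), a in zip(stumps, alphas))
    return np.sign(score)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost(X, y, rounds=20)
acc = np.mean(predict(X, stumps, alphas) == y)
print(acc)
```

Each alpha is the weight wi of hypothesis fi in the final combination, and the exponential reweighting is what concentrates later rounds on hard instances.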

Automatic Instance Transfer: run boosting on the combined source and target data. Incorrectly classified source-domain instances get their weight decreased (they likely follow a different distribution), while incorrectly classified target-domain instances get their weight increased, as in standard boosting. (Boosting for Transfer Learning, Wenyuan Dai et al., ICML 2007.)
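One round of this asymmetric reweighting can be sketched as follows. The update factors follow the TrAdaBoost scheme of Dai et al. (ICML 2007); the mistake indicators, error rate, and weight vectors here are illustrative toy values.

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, miss_src, miss_tgt, err_t, n_rounds):
    """One TrAdaBoost reweighting step; miss_* are 0/1 mistake indicators
    and err_t is the weighted error of the hypothesis on the target data."""
    # Fixed factor < 1 for source mistakes (depends on source size, rounds).
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / n_rounds))
    # AdaBoost-style factor for target mistakes.
    beta_tgt = err_t / (1.0 - err_t)
    w_src = w_src * beta_src ** miss_src    # source mistakes: weight down
    w_tgt = w_tgt * beta_tgt ** (-miss_tgt) # target mistakes: weight up
    total = w_src.sum() + w_tgt.sum()
    return w_src / total, w_tgt / total

w_s = np.full(4, 0.125)                     # toy source-instance weights
w_t = np.full(4, 0.125)                     # toy target-instance weights
miss_s = np.array([1, 0, 0, 0])             # source instance 0 misclassified
miss_t = np.array([0, 1, 0, 0])             # target instance 1 misclassified
w_s, w_t = tradaboost_reweight(w_s, w_t, miss_s, miss_t,
                               err_t=0.25, n_rounds=10)
print(w_s, w_t)
```

After the update, the misclassified source instance has lost weight (it is treated as off-distribution) while the misclassified target instance has gained weight (it is a hard but relevant case).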

Transfer Learning: Motivation and Types / Functional Transfer Learning / Representational Transfer Learning / References

References:
Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, October 2010.
Brazdil, P., et al. Metalearning: Applications to Data Mining. Springer, 2009.
Dai, W., et al. Boosting for Transfer Learning. Proceedings of ICML 2007.
Video on transfer learning.

Videos: robot learns to flip pancakes; robot learns to stack pancakes.