Introduction to Machine Learning: Chapter 1 (cont.)


Review: Learning Definition
Well-posed learning problem:
– Improve at task T, with respect to performance measure P, based on experience E.
The central challenge of learning:
– Inducing a general function from specific training examples.

Examples
T: Playing checkers. P: Percentage of games won against an arbitrary opponent. E: Playing practice games against itself.
T: Recognizing handwritten words. P: Percentage of words correctly classified. E: Database of human-labeled images of handwritten words.
T: Driving on four-lane highways using vision sensors. P: Average distance traveled before a human-judged error. E: A sequence of images and steering commands recorded while observing a human driver.
T: Categorizing email messages as spam or legitimate. P: Percentage of messages correctly classified. E: Database of emails, some with human-given labels.

Designing a Learning System
1. Choosing the training experience (data set)
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a function approximation algorithm
5. The final design

Choosing the Training Experience
– Sometimes straightforward: text classification, disease diagnosis
– Sometimes not so straightforward: chess, checkers (only indirect information is available)

Training Experience Attributes
Is the training experience controlled by the learner?
– Is it provided by a human or a process outside the learner's control?
– Does the learner collect training examples by autonomously exploring its environment?
How well does it represent the distribution of examples over which the final system will be evaluated?
– Playing checkers: practice games against itself may not match the positions that arise against human opponents.

Designing a Learning System
1. Choosing the training experience (data set)
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a function approximation algorithm
5. The final design

Choosing the Target Function
For checkers:
– Could learn a function ChooseMove(board, legal-moves) → best-move
– Or could learn an evaluation function V(board) → R, where R is a real value representing how favorable the board is
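The two options above can be contrasted in code. The following is a minimal sketch, not the chapter's actual checkers program: the board encoding, `apply_move`, and the material-count evaluation are all toy assumptions chosen only to show how an evaluation function V reduces move choice to a one-step search.

```python
from typing import List, Tuple

Board = Tuple[int, ...]  # toy encoding: +1 black piece, -1 red piece, 0 empty

def v(board: Board) -> float:
    """Toy evaluation function V: material balance from black's point of view."""
    return float(sum(board))

def apply_move(board: Board, move: Tuple[int, int]) -> Board:
    """Toy move: the piece at src slides to dst, capturing whatever was there."""
    src, dst = move
    b = list(board)
    b[dst], b[src] = b[src], 0
    return tuple(b)

def choose_move_via_v(board: Board, legal_moves: List[Tuple[int, int]]) -> Tuple[int, int]:
    # With an evaluation function, ChooseMove reduces to picking the move
    # whose resulting board maximizes V.
    return max(legal_moves, key=lambda m: v(apply_move(board, m)))
```

Learning V directly is usually easier than learning ChooseMove, because a single real-valued score per board is a simpler target than a mapping over the whole move set.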

Ideal Definition of V(b)
If b is a final winning board, then V(b) = 100
If b is a final losing board, then V(b) = –100
If b is a final drawn board, then V(b) = 0
Otherwise, V(b) = V(b′), where b′ is the best final board position reachable from b when both sides play optimally to the end of the game
This definition is non-operational, because evaluating it requires searching to the end of the game ⇒ we need an operational approximation of the ideal function
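To see why the ideal definition is non-operational, it helps to write it out: evaluating V(b) means recursing over the entire game tree below b. The sketch below is a hypothetical illustration over a hand-built toy tree (terminal nodes are the strings "win", "loss", "draw" from black's point of view; internal nodes are lists of successors), not a real checkers engine.

```python
def ideal_v(node, black_to_move=True):
    """Ideal but non-operational V: evaluates a position by searching
    all the way to the end of the game."""
    if node == "win":
        return 100.0
    if node == "loss":
        return -100.0
    if node == "draw":
        return 0.0
    # Optimal play on both sides: black maximizes V, red minimizes it.
    values = [ideal_v(child, not black_to_move) for child in node]
    return max(values) if black_to_move else min(values)
```

For a game like checkers this recursion is hopelessly expensive, which is exactly why the next slides approximate V with a simple learned function instead.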

A Linear Function for Representing V(b)
V̂(b) = w0 + w1·bp(b) + w2·rp(b) + w3·bk(b) + w4·rk(b) + w5·bt(b) + w6·rt(b), where:
– bp(b): number of black pieces on board b
– rp(b): number of red pieces on board b
– bk(b): number of black kings on board b
– rk(b): number of red kings on board b
– bt(b): number of black pieces threatened (i.e., that red can capture on its next turn)
– rt(b): number of red pieces threatened
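The linear representation is simple enough to sketch directly. In this hypothetical sketch a board is represented by its feature vector (bp, rp, bk, rk, bt, rt), since extracting those features from a real board is game-specific detail the slide omits.

```python
def v_hat(weights, features):
    """Linear evaluation V_hat(b) = w0 + w1*bp + w2*rp + w3*bk + w4*rk + w5*bt + w6*rt.
    `weights` is (w0, w1, ..., w6); `features` is (bp, rp, bk, rk, bt, rt)."""
    w0, *w = weights
    return w0 + sum(wi * xi for wi, xi in zip(w, features))
```

The learning problem now reduces to choosing the seven weights w0 … w6 from training data.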

A Winning Board Example (win for black)
Training examples pair the feature values of a board with a target value, e.g. ⟨⟨bp(b)=3, rp(b)=0, bk(b)=1, rk(b)=0, bt(b)=0, rt(b)=0⟩, V_train(b) = +100⟩

Designing a Learning System
1. Choosing the training experience (data set)
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a function approximation algorithm
5. The final design

Examples of Value Functions
Linear regression
– Input: feature vector x
– Output: ŷ = wᵀx + w0, a real value
Logistic regression
– Input: feature vector x
– Output: P(y = 1 | x) = 1 / (1 + e^−(wᵀx + w0)), a probability in (0, 1)
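The two value functions differ only in the final squashing step, which a short sketch makes concrete (function names here are illustrative, not from a library):

```python
import math

def linear_regression_output(w, x, b=0.0):
    """Linear regression: real-valued output w.x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def logistic_regression_output(w, x, b=0.0):
    """Logistic regression: squashes w.x + b into (0, 1) with the sigmoid,
    so the output can be read as a probability."""
    z = linear_regression_output(w, x, b)
    return 1.0 / (1.0 + math.exp(-z))
```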

Examples of Classifiers
Linear classifier
– Input: feature vector x
– Output: class label sign(wᵀx + w0) ∈ {+1, −1}
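A linear classifier is the same linear score thresholded at zero; a minimal sketch (the sign convention for a score of exactly zero is an assumption here):

```python
def linear_classifier(w, x, b=0.0):
    """Linear classifier: predicts +1 or -1 from the sign of w.x + b.
    Ties at exactly zero are assigned to class +1 by convention."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1
```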

Examples of Classifiers
Rule classifiers
– Decision tree: a tree whose internal nodes test conditions and whose leaves assign classes
– Decision list: if condition1 then class1, else if condition2 then class2, else if …
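A decision list is just an ordered sequence of (condition, class) rules where the first matching rule wins. The sketch below is hypothetical; the spam-flavored rules and feature names are made up for illustration, echoing the spam example from the earlier slide.

```python
def decision_list_classify(rules, default, example):
    """Return the class of the first rule whose condition matches,
    or `default` if no rule fires."""
    for condition, label in rules:
        if condition(example):
            return label
    return default

# Illustrative rules over a dict of message features (names are assumptions).
rules = [
    (lambda m: m["contains_link"] and m["unknown_sender"], "spam"),
    (lambda m: m["all_caps_subject"], "spam"),
]
```

Because rule order matters, a decision list behaves like a chain of if/elif branches rather than an unordered rule set.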

Designing a Learning System
1. Choosing the training experience (data set)
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a function approximation algorithm
5. The final design

Learning
Learning = approximating the weights w0 … w6 from the training data set

Least Mean Squares and Gradient Descent
MSE (mean squared error) over the N training examples: E = (1/N) Σ_b (V_train(b) − V̂(b))²

Gradient Descent
For each training example, adjust every weight a small step in the direction that reduces the error: wi ← wi + η (V_train(b) − V̂(b)) xi, where η is a small learning rate and xi is the i-th feature value of board b
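One LMS update step can be sketched directly from that rule. This is a minimal illustration, assuming the linear V̂ from the earlier slide with a bias weight w0 (implemented by prepending a constant feature of 1); the learning-rate value is arbitrary.

```python
def lms_update(weights, features, v_train, lr=0.01):
    """One LMS / stochastic gradient descent step on the squared error
    (v_train - v_hat)**2 for a single training example.
    `weights` is [w0, w1, ...]; `features` omits the constant term."""
    x = (1.0,) + tuple(features)          # prepend 1 so w0 acts as the bias
    v_hat = sum(w * xi for w, xi in zip(weights, x))
    error = v_train - v_hat
    # w_i <- w_i + lr * (v_train - v_hat) * x_i
    return [w + lr * error * xi for w, xi in zip(weights, x)]
```

Repeated over many examples, these small corrections drive V̂ toward the training values; too large a learning rate makes the updates overshoot and diverge.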

Designing a Learning System
1. Choosing the training experience (data set)
2. Choosing the target function
3. Choosing a representation for the target function
4. Choosing a function approximation algorithm
5. The final design

The Final Design
Assemble the choices above (training experience, target function, representation, and learning algorithm) into a complete learning system