1 Machine Learning: Lecture 1 Overview of Machine Learning (Based on Chapter 1 of Mitchell, T., Machine Learning, 1997)

2 Machine Learning: A Definition Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

3 Examples of Successful Applications of Machine Learning §Learning to recognize spoken words (Lee, 1989; Waibel, 1989). §Learning to drive an autonomous vehicle (Pomerleau, 1989). §Learning to classify new astronomical structures (Fayyad et al., 1995). §Learning to play world-class backgammon (Tesauro, 1992, 1995).

4 Why is Machine Learning Important? §Some tasks cannot be defined well, except by examples (e.g., recognizing people). §Relationships and correlations can be hidden within large amounts of data. Machine Learning/Data Mining may be able to find these relationships. §Human designers often produce machines that do not work as well as desired in the environments in which they are used.

5 Why is Machine Learning Important? (Cont'd) §The amount of knowledge available about certain tasks might be too large for explicit encoding by humans (e.g., medical diagnosis). §Environments change over time. §New knowledge about tasks is constantly being discovered by humans. It may be difficult to continuously re-design systems by hand.

6 Areas of Influence for Machine Learning §Statistics: How best to use samples drawn from unknown probability distributions to help decide from which distribution some new sample is drawn? §Brain Models: Non-linear elements with weighted inputs (Artificial Neural Networks) have been suggested as simple models of biological neurons. §Adaptive Control Theory: How to deal with controlling a process having unknown parameters that must be estimated during operation?

7 Areas of Influence for Machine Learning (Cont'd) §Psychology: How to model human performance on various learning tasks? §Artificial Intelligence: How to write algorithms to acquire the knowledge humans are able to acquire, at least as well as humans? §Evolutionary Models: How to model certain aspects of biological evolution to improve the performance of computer programs?

8 Designing a Learning System: An Example 1. Problem Description 2. Choosing the Training Experience 3. Choosing the Target Function 4. Choosing a Representation for the Target Function 5. Choosing a Function Approximation Algorithm 6. Final Design

9 1. Problem Description: A Checkers Learning Problem §Task T: Playing checkers §Performance Measure P: Percent of games won against opponents §Training Experience E: To be selected ==> Games played against itself
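The (T, P, E) triple above can be made concrete in code. A minimal sketch, assuming an illustrative record type of my own (the class and field names are not from the slides or from Mitchell's text):

```python
from dataclasses import dataclass

# Hypothetical container for Mitchell's (T, P, E) formulation of a
# well-posed learning problem; the field names are illustrative.
@dataclass
class LearningProblem:
    task: str                 # T: the task to be learned
    performance_measure: str  # P: how success is judged
    training_experience: str  # E: the experience the learner sees

# The checkers problem from this slide, written as such a record.
checkers = LearningProblem(
    task="Playing checkers",
    performance_measure="Percent of games won against opponents",
    training_experience="Games played against itself",
)
```

Writing the triple out this way makes it easy to check that all three features of a well-posed learning problem have actually been identified before any design work starts.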

10 2. Choosing the Training Experience §Direct versus Indirect Experience [Indirect Experience gives rise to the credit assignment problem and is thus more difficult] §Teacher versus Learner Controlled Experience [the teacher might provide training examples; the learner might suggest interesting examples and ask the teacher for their outcome; or the learner can be completely on its own with no access to correct outcomes] §How Representative is the Experience? [Is the training experience representative of the task the system will actually have to solve? It is best if it is, but such a situation cannot systematically be achieved]

11 3. Choosing the Target Function §Given a set of legal moves, we want to learn how to choose the best move [since the best move is not necessarily known, this is an optimization problem] §ChooseMove: B --> M is called a Target Function [ChooseMove, however, is difficult to learn. An easier, related target function to learn is V: B --> R, which assigns a numerical score to each board. The better the board, the higher the score.] §Operational versus Non-Operational Description of a Target Function [An operational description must be given] §Function Approximation [The actual function often cannot be learned exactly and must be approximated]
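The relationship between the two candidate target functions can be sketched in a few lines: once V: B --> R is learned, ChooseMove: B --> M falls out of it. A minimal illustration, assuming hypothetical `apply_move` and `v` callables (none of these names come from the slides):

```python
# Sketch: derive ChooseMove from the scoring function V by picking
# the legal move whose successor board scores highest under V.
# board, legal_moves, apply_move, and v are all illustrative stand-ins.
def choose_move(board, legal_moves, apply_move, v):
    return max(legal_moves, key=lambda m: v(apply_move(board, m)))

# Toy usage: boards are ints, a move adds its value to the board,
# and V prefers boards close to 2 -- so move 2 is chosen.
best = choose_move(0, [1, 2, 3],
                   apply_move=lambda b, m: b + m,
                   v=lambda b: -abs(b - 2))
print(best)  # prints 2
```

This is why learning V is "easier and related": it turns move selection into a one-line maximization rather than a function that must be learned directly.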

12 4. Choosing a Representation for the Target Function §Expressiveness versus Training Set Size [The more expressive the representation of the target function, the closer to the truth we can get. However, the more expressive the representation, the more training examples are necessary to choose among the large number of representable possibilities.] §Example of a representation: l x1/x2 = # of black/red pieces on the board l x3/x4 = # of black/red kings on the board l x5/x6 = # of black/red pieces threatened by red/black V̂(b) = w0 + w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 + w6·x6, where the wi are adjustable (learnable) coefficients
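The linear representation above translates directly into code. A minimal sketch; the concrete feature values and weights below are made-up illustrative numbers, not from the slides:

```python
# V_hat(b) = w0 + w1*x1 + ... + w6*x6, the slide's linear
# evaluation function over the six board features x1..x6.
def evaluate(board_features, weights):
    """board_features: [x1..x6]; weights: [w0..w6]."""
    score = weights[0]                       # the bias term w0
    for xi, wi in zip(board_features, weights[1:]):
        score += wi * xi
    return score

features = [3, 0, 1, 0, 0, 0]                       # x1..x6 of some board b
weights = [0.0, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]    # w0..w6 (illustrative)
print(evaluate(features, weights))  # prints 5.0  (1*3 + 2*1)
```

Note how little expressiveness this buys: only functions linear in the six chosen features are representable, which is exactly the trade-off the slide describes.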

13 5. Choosing a Function Approximation Algorithm §Generating Training Examples of the form <b, Vtrain(b)> [e.g. <<x1=3, x2=0, x3=1, x4=0, x5=0, x6=0>, +100> (= black won)] l Useful and Easy Approach: Vtrain(b) <- V̂(Successor(b)) §Training the System l Defining a criterion for success [What is the error that needs to be minimized?] l Choosing an algorithm capable of finding the weights of a linear function that minimize that error [e.g. the Least Mean Squares (LMS) training rule].
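The LMS training rule for the linear V̂ above can be sketched as follows. This is a minimal illustration under my own assumptions: the learning rate and the example values are made up, and the update shown is the standard LMS form wi <- wi + lr·(Vtrain(b) - V̂(b))·xi, with x0 = 1 for the bias:

```python
# The linear evaluation function from the previous slide.
def v_hat(weights, features):
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

# One LMS update on a single training example <b, Vtrain(b)>:
# each weight moves proportionally to the error times its feature.
def lms_update(weights, features, v_train, lr=0.01):
    error = v_train - v_hat(weights, features)
    return ([weights[0] + lr * error] +                       # w0 uses x0 = 1
            [w + lr * error * x for w, x in zip(weights[1:], features)])

# A single step shrinks the error on the example it was trained on.
w = [0.0] * 7
b = [3, 0, 1, 0, 0, 0]          # features of some board b
w = lms_update(w, b, v_train=100)
```

Repeating this update over many <b, Vtrain(b)> pairs drives the squared error down, which is exactly the "criterion for success" the slide asks for.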

14 6. Final Design for Checkers Learning §The Performance System: Takes as input a new problem (an initial board state) and outputs a trace of the game it played against itself. §The Critic: Takes as input the trace of a game and outputs a set of training examples of the target function. §The Generalizer: Takes as input training examples and outputs a hypothesis that estimates the target function. Good generalization to new cases is crucial. §The Experiment Generator: Takes as input the current hypothesis (the currently learned function) and outputs a new problem (an initial board state) for the performance system to explore. In this course, we are mostly concerned with the generalizer.
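How the four modules fit together can be sketched as a loop. This is a hedged outline, not Mitchell's code: every callable passed in is a hypothetical stand-in for the corresponding module on the slide:

```python
# Sketch of the final-design loop: experiment generator -> performance
# system -> critic -> generalizer, repeated for n_games iterations.
# All four module arguments are illustrative stand-ins.
def checkers_learner(hypothesis, n_games,
                     experiment_generator, performance_system,
                     critic, generalizer):
    for _ in range(n_games):
        problem = experiment_generator(hypothesis)       # new initial board
        trace = performance_system(problem, hypothesis)  # self-play game trace
        examples = critic(trace)                         # <b, Vtrain(b)> pairs
        hypothesis = generalizer(examples, hypothesis)   # updated hypothesis
    return hypothesis
```

The loop makes the slide's point concrete: only the generalizer actually changes the hypothesis, which is why it is the module this course concentrates on.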

15 Issues in Machine Learning (i.e., Generalization) §What algorithms are available for learning a concept? How well do they perform? §How much training data is sufficient to learn a concept with high confidence? §When is it useful to use prior knowledge? §Are some training examples more useful than others? §What are the best tasks for a system to learn? §What is the best way for a system to represent its knowledge?