Presentation on theme: "Machine Learning 참고 자료 (Reference Material)" — Presentation transcript:

1

2 Machine Learning 참고 자료 (Reference Material)

3 Learning Definition: Learning is the improvement of performance in some environment through the acquisition of knowledge resulting from experience in that environment.

4 Machine Learning: Tasks. Supervised learning: learn f_w from a training set D = {(x, y)}; classification when y is discrete, regression when y is continuous. Unsupervised learning: learn f_w from D = {x}; density estimation, compression, clustering.

5 Machine Learning: Methods. Symbolic learning: version space learning. Neural learning: multilayer perceptrons (MLPs). Evolutionary learning: genetic algorithms. Probabilistic learning: Bayesian networks (BNs). Other machine learning methods: decision trees (DTs).

6 Applications of Machine Learning. Driving an autonomous vehicle: also applied to driverless car driving and sensor-based control. Classifying new astronomical structures: classifying celestial objects using decision tree learning. Playing world-class backgammon: the strategy is learned by playing real games, applicable to problems with very large search spaces.

7 A Definition of Learning: Well-Posed Learning Problems. Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. Components: a class of tasks T, experience E, a performance measure P.

8 Checkers Problem (1/2). Pieces may move only diagonally. Until a piece reaches the opposite end of the board, it may move only forward. When an opponent's piece lies on the diagonal, that piece can be jumped and removed. The game ends when one side has lost all of its pieces.

9 Checkers Problem (2/2). Reference sites: http://www.geocities.com/Heartland/7134/Green/grprechecker.htm and http://www.acfcheckers.com

10 A Checkers Learning Problem. Three features define the learning problem: the class of tasks, the measure of performance to be improved, and the source of experience. Example: task T: playing checkers; performance measure P: percent of games won against opponents; training experience E: playing practice games against itself.

11 Designing a Learning System: choosing the training experience, choosing the target function, choosing a representation for the target function, and choosing a function approximation algorithm.

12 Choosing the Training Experience (1/2). Key attributes: direct vs. indirect feedback. Direct feedback: individual checkers board states together with the correct move for each. Indirect feedback: move sequences and final game outcomes. Degree of control over the sequence of training examples: the extent to which the learner relies on a teacher when obtaining training information.

13 Choosing the Training Experience (2/2). Distribution of examples: the training examples should reflect the distribution of examples over which the system's performance will later be measured.

14 Choosing the Target Function (1/2). One candidate is a function that chooses the best move M for any board state B, ChooseMove: B → M, but this is difficult to learn from the available training experience. It is useful to reduce the problem of improving performance P at task T to the problem of learning some particular target function.

15 Choosing the Target Function (2/2). An easier alternative is an evaluation function that assigns a numerical score to any board state B: V: B → R.

16 Target Function for the Checkers Problem. If b is a final board state that is won, then V(b) = 100; if b is a final state that is lost, then V(b) = -100; if b is a final state that is drawn, then V(b) = 0; if b is not a final state, then V(b) = V(b'), where b' is the best final board state that can be reached from b when both sides play optimally.
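
The recursive case makes this definition non-operational: evaluating a non-final state requires searching the game tree all the way to its final states. A minimal sketch of the idea, assuming hypothetical helpers is_final, outcome_for_black, and successors that the slides do not define:

```python
# Sketch only: is_final, outcome_for_black, and successors are hypothetical
# helpers that a real checkers engine would have to provide.

def V(b, black_to_move=True):
    """Ideal board value from black's point of view."""
    if is_final(b):
        return {"won": 100, "lost": -100, "drawn": 0}[outcome_for_black(b)]
    # Value of the best final state reachable from b under optimal play by
    # both sides; this full game-tree recursion is why the ideal V is not
    # efficiently computable and must be approximated.
    child_values = [V(b2, not black_to_move) for b2 in successors(b)]
    return max(child_values) if black_to_move else min(child_values)
```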

17 Choosing a Representation for the Target Function. Ways of describing the function: tables, rules, polynomial functions, neural nets. Trade-off in the choice: the more expressive the representation, the more training data it requires.

18 Linear Combination as Representation: V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6, where x1 is the number of black pieces on the board, x2 the number of red pieces, x3 the number of black kings, x4 the number of red kings, x5 the number of black pieces threatened by red, x6 the number of red pieces threatened by black, and w0 through w6 are the weights to be learned.
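
As a concrete illustration, a minimal sketch of how this representation scores a board, assuming a hypothetical board_features(b) that returns the six counts x1 through x6:

```python
# board_features(b) is a hypothetical helper returning (x1, ..., x6) as
# defined on the slide above.

def v_hat(b, w):
    """Approximate board value: w0 + w1*x1 + ... + w6*x6."""
    x = board_features(b)
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
```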

19 Partial Design of a Checkers Learning Program. Task T: playing checkers. Performance measure P: percent of games won in the world tournament. Training experience E: games played against itself. Target function: V: Board → R. Target function representation: V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6.

20 Choosing a Function Approximation Algorithm. A training example is represented as an ordered pair ⟨b, V_train(b)⟩, where b is a board state and V_train(b) is the training value for b. For instance, a board state in which black has won the game (indicated by x2 = 0, no red pieces remaining) is paired with the training value +100.

21 Choosing a Function Approximation Algorithm. Estimating training values for intermediate board states: V_train(b) ← V̂(Successor(b)), where V̂ is the current approximation to V and Successor(b) is the next board state following b at which it is again the program's turn to move.
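
A minimal sketch of this bootstrapping scheme for one self-play game, reusing the hypothetical v_hat above; game_states and final_score are illustrative stand-ins for the recorded game:

```python
# game_states: the board states at which it was the program's turn to move,
# in order; final_score: +100, -100, or 0 taken from the final position.

def training_examples(game_states, final_score, w):
    examples = []
    for i, b in enumerate(game_states):
        if i + 1 < len(game_states):
            v_train = v_hat(game_states[i + 1], w)   # V_train(b) <- V_hat(Successor(b))
        else:
            v_train = final_score                    # final states get the true value
        examples.append((b, v_train))
    return examples
```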

22 Adjusting the Weights (1/2). Choose the weights wi to best fit the training examples, i.e., minimize the squared error between the training values and the values predicted by the current hypothesis V̂ (written out below).
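
The error being minimized is the standard squared-error criterion (the formula on the original slide appeared as an image):

```latex
E \;\equiv \sum_{\langle b,\, V_{train}(b)\rangle \,\in\, \text{training examples}} \bigl(V_{train}(b) - \hat{V}(b)\bigr)^{2}
```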

23 Adjusting the Weights (2/2). LMS weight update rule: for each training example ⟨b, V_train(b)⟩, (1) use the current weights to calculate V̂(b); (2) update each weight wi as wi ← wi + η (V_train(b) − V̂(b)) xi, where η is a small constant (the learning rate).
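
A minimal sketch of this update for a single training example, again reusing the hypothetical v_hat and board_features; the learning rate value is illustrative:

```python
ETA = 0.1   # learning rate (a small constant)

def lms_update(w, b, v_train):
    """One LMS step: w_i <- w_i + ETA * (V_train(b) - V_hat(b)) * x_i."""
    error = v_train - v_hat(b, w)
    x = (1.0,) + tuple(board_features(b))      # x0 = 1 so the same rule updates w0
    return [wi + ETA * error * xi for wi, xi in zip(w, x)]
```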

24 Sequence of Design Choices. Determine the type of training experience (table of correct moves, games against experts, games against self); determine the target function (Board → move, Board → value); determine the representation of the learned function (polynomial, linear function of six features, artificial neural network); determine the learning algorithm (gradient descent, linear programming); the result is the completed design.

25 Perspectives in ML: "learning as search in a space of possible hypotheses." Representations for hypotheses: linear functions, logical descriptions, decision trees, neural networks.

26 Perspectives in ML: learning methods are characterized by their search strategies and by the underlying structure of their search spaces.

27 Summary. Machine learning has great practical value in many application areas: problems of discovering regularities in large amounts of data (data mining); problem domains that are too poorly understood to hand-design effective algorithms (e.g., human face recognition); problem domains where the program must adapt dynamically to a changing environment (e.g., manufacturing process control).

28 Summary. Machine learning is closely related to many other disciplines: artificial intelligence, probability and statistics, information theory, theory of computation, psychology, neuroscience, control theory, and philosophy. A well-defined learning problem requires a clear specification of the task, a performance evaluation criterion, and a source of training experience.

29 Summary. Designing a machine learning system involves the following choices: the type of training experience, the target function to be learned, a representation for the target function, and an algorithm for learning the target function from training examples.

30 Summary. Learning can be viewed as searching a space of possible hypotheses for the one hypothesis that best fits the given training examples and other prior knowledge. Different learning methods are characterized by the form of their hypothesis spaces and by the strategies used to search within those spaces.

31 Neural Networks

32 Biological Motivation. A neuron receives signals from other neurons through its dendrites and transmits the signals generated by its cell body along the axon; many such neurons form a network of neurons.

33 Neural Network Representations. The primitive unit (e.g., the perceptron): n input signals → weighted sum → threshold function → output. A learning process in the ANN involves choosing values for the weights w0, ..., wn. Learning rules specify how the network weights are updated.
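
A minimal sketch of such a unit (names are illustrative only):

```python
def perceptron(x, w):
    """Primitive unit: weighted sum of the inputs followed by a hard threshold.

    x: input signals (x1, ..., xn); w: weights (w0, ..., wn), w0 being the bias.
    """
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))   # weighted sum
    return 1 if s > 0 else -1                              # threshold function
```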

34 Gradient Descent and the Delta Rule. The delta rule applies to a linear unit whose output is o = w0 + w1x1 + ... + wnxn. A measure for the training error of a hypothesis is E(w) = ½ Σ_{d∈D} (t_d − o_d)², where D is the set of training examples, t_d is the target output for training example d, and o_d is the output of the linear unit for training example d. We can thus characterize E as a function of the weight vector w.

35 Gradient descent and the delta rule (figure: the error surface E over the weight space).

36 Gradient Descent and the Delta Rule. Derivation of the gradient descent rule: to find the direction of steepest descent along the error surface, take the derivative of E with respect to each component of the weight vector w, giving the gradient ∇E(w); the negative of this vector gives the direction of steepest decrease.
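
Written out for the linear unit (the derivation on the slide appeared as an image; this is the standard one):

```latex
\nabla E(\vec{w}) \equiv \left[ \frac{\partial E}{\partial w_0},\; \frac{\partial E}{\partial w_1},\; \dots,\; \frac{\partial E}{\partial w_n} \right],
\qquad
\frac{\partial E}{\partial w_i}
 = \frac{\partial}{\partial w_i}\, \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2
 = \sum_{d \in D} (t_d - o_d)(-x_{id})
```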

37 Gradient Descent and the Delta Rule. Training rule for gradient descent: wi ← wi + Δwi, where Δwi = −η ∂E/∂wi. Using the gradient components derived above, this gives an efficient way of calculating the update: Δwi = η Σ_{d∈D} (t_d − o_d) x_id.
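
A minimal sketch of one batch gradient-descent pass over a set of (input, target) pairs, implementing the rule above; the data layout and learning rate are illustrative assumptions:

```python
def linear_output(x, w):
    """Output of the linear unit: w0 + w1*x1 + ... + wn*xn."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def gradient_descent_epoch(data, w, eta=0.05):
    """One epoch of batch gradient descent; data is a list of (x, t) pairs."""
    delta = [0.0] * len(w)
    for x, t in data:
        err = t - linear_output(x, w)
        xs = (1.0,) + tuple(x)                 # x0 = 1 for the bias weight
        for i, xi in enumerate(xs):
            delta[i] += eta * err * xi         # accumulates eta * (t_d - o_d) * x_id
    return [wi + dwi for wi, dwi in zip(w, delta)]
```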

38 Gradient Descent and the Delta Rule. If the learning rate η is too large, the gradient descent search runs the risk of overstepping the minimum; the usual remedy is to gradually reduce the value of η.

39 Multilayer Networks. Why a multilayer network? Single perceptrons can only express linear decision surfaces, so an extra (hidden) layer is added between the inputs and outputs; e.g., the speech recognition task requires highly nonlinear decision surfaces.

40 Multilayer Networks. The units use the sigmoid function σ(y) = 1 / (1 + e^(−y)), a smooth, differentiable threshold function.

41 Error Function for BP. E is defined as the sum of the squared errors over all the output units k and all the training examples d: E(w) = ½ Σ_{d∈D} Σ_{k∈outputs} (t_kd − o_kd)².

42 BP Algorithm.
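
The algorithm itself appeared as a figure on this slide. Below is a hedged sketch of stochastic backpropagation for a network with one hidden layer of sigmoid units, consistent with the error function above; the layer sizes, data layout, and learning rate are assumptions rather than details taken from the slide:

```python
import math

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def backprop_epoch(data, W_hidden, W_out, eta=0.3):
    """One pass of stochastic backpropagation.

    data: list of (x, t) pairs, x an input vector and t a target vector in (0, 1).
    W_hidden[j]: weights of hidden unit j (index 0 is its bias weight).
    W_out[k]: weights of output unit k (index 0 is its bias weight).
    """
    for x, t in data:
        # forward pass
        xb = [1.0] + list(x)
        h = [sigmoid(sum(w * xi for w, xi in zip(row, xb))) for row in W_hidden]
        hb = [1.0] + h
        o = [sigmoid(sum(w * hi for w, hi in zip(row, hb))) for row in W_out]
        # error terms: delta_k = o_k(1 - o_k)(t_k - o_k) for output units,
        # delta_j = h_j(1 - h_j) * sum_k w_kj * delta_k for hidden units
        d_out = [ok * (1 - ok) * (tk - ok) for ok, tk in zip(o, t)]
        d_hid = [h[j] * (1 - h[j]) *
                 sum(W_out[k][j + 1] * d_out[k] for k in range(len(W_out)))
                 for j in range(len(h))]
        # weight updates: w <- w + eta * delta * input
        for k, row in enumerate(W_out):
            for j in range(len(row)):
                row[j] += eta * d_out[k] * hb[j]
        for j, row in enumerate(W_hidden):
            for i in range(len(row)):
                row[i] += eta * d_hid[j] * xb[i]
```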

43 Learning Until...: training continues until a fixed number of iterations (epochs) has been run, until the error falls below some threshold, or until the error on a separate validation set meets some criterion (a sketch combining these follows).
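
A small sketch of a training loop combining the three stopping criteria; backprop_epoch is the sketch above, while training_error and validation_error are assumed helpers that average the error E over the given data set, and the thresholds are illustrative:

```python
def train(data, val_data, W_hidden, W_out,
          max_epochs=1000, err_threshold=0.01, patience=10):
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):                    # (1) fixed number of epochs
        backprop_epoch(data, W_hidden, W_out)
        if training_error(data, W_hidden, W_out) < err_threshold:
            break                                      # (2) error below a threshold
        val_err = validation_error(val_data, W_hidden, W_out)
        if val_err < best_val:
            best_val, bad_epochs = val_err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break                                  # (3) validation-set criterion
    return W_hidden, W_out
```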

44 Self Organizing Map

45 Introduction. The SOM (Self-Organizing Map) is an unsupervised learning method used for visualization and abstraction of data.

46 SOM Structure. A SOM consists of an input layer, an output layer (the map), and a neighborhood relation defined among the output units.
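
A hedged sketch of the core SOM training loop that produces maps like the ones on the following slides; the grid size, learning-rate schedule, and neighborhood schedule are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0):
    """data: array of shape (n_samples, n_features)."""
    rng = np.random.default_rng(0)
    w = rng.random((grid[0], grid[1], data.shape[1]))   # weight vector of each map unit
    coords = np.indices(grid).transpose(1, 2, 0)        # (row, col) position of each unit
    for t in range(iters):
        x = data[rng.integers(len(data))]               # pick a random input
        lr = lr0 * (1 - t / iters)                      # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5          # shrinking neighborhood radius
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), grid)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1) # squared map distance to the BMU
        nbh = np.exp(-dist2 / (2 * sigma ** 2))         # Gaussian neighborhood function
        w += lr * nbh[..., None] * (x - w)              # pull units toward the input
    return w
```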

47 Data to be clustered

48 The map after 100 iterations

49 After 500 iterations

50 After 2000 iterations

51 After 10000 iterations

