
1 Machine Learning
陈昱 (Chen Yu)
Institute of Computer Science and Technology, Peking University
Information Security Engineering Research Center

2 Concept Learning
Reference: Ch. 2 in Mitchell's book
1. Concepts:
- Inductive learning hypothesis
- General-to-specific ordering of hypotheses
- Consistent hypothesis
- Version space
- Inductive bias, e.g. what is the inductive bias of the Candidate-Elimination algorithm?
2. Algorithms (a Find-S sketch follows this slide):
- Find-S
- Candidate-Elimination
3. Sections 2.6.2 & 2.6.3
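To make the specific-to-general search concrete, here is a minimal Python sketch of Find-S in the spirit of Mitchell Ch. 2. The EnjoySport-style attribute tuples and the helper name `find_s` are illustrative choices, not taken from the course materials.

```python
# A minimal sketch of Find-S for conjunctive hypotheses over discrete
# attributes (Mitchell Ch. 2). The tiny dataset below is made up.

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True = positive."""
    n = len(examples[0][0])
    h = [None] * n  # most specific hypothesis: no value accepted yet
    for x, positive in examples:
        if not positive:
            continue  # Find-S ignores negative examples
        for i, value in enumerate(x):
            if h[i] is None:        # first positive example: adopt its value
                h[i] = value
            elif h[i] != value:     # conflicting values: generalize to wildcard
                h[i] = '?'
    return h

examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong'), True),
    (('Sunny', 'Warm', 'High',   'Strong'), True),
    (('Rainy', 'Cold', 'High',   'Strong'), False),
]
print(find_s(examples))  # ['Sunny', 'Warm', '?', 'Strong']
```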

3 Concept Learning (2)
Sample problems: 2.2, 2.4, 2.7

4 Decision Tree
Reference: Ch. 3 in Mitchell's book
1. Concepts & facts:
- Entropy and information gain
- (Approximate) inductive bias of ID3
- Overfitting
- Occam's razor
2. Algorithm: ID3 for decision tree learning (an entropy sketch follows this slide)
Sample problem: 3.4
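Since ID3 chooses splits by information gain, a short sketch of the two underlying computations may help. The two-attribute toy data is made up for illustration.

```python
# Entropy and information gain as used by ID3 (Mitchell Ch. 3).
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Gain(S, A) = Entropy(S) - sum_v (|S_v|/|S|) * Entropy(S_v)."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(row[attr_index] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr_index] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

rows   = [('Sunny',), ('Sunny',), ('Rain',), ('Rain',)]
labels = ['No', 'No', 'Yes', 'Yes']
print(information_gain(rows, labels, 0))  # 1.0: this attribute separates the classes perfectly
```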

5 Artificial Neural Network
Reference: Sec. 4.3-4.6 in Mitchell's book
Focus on the following:
- Perceptron (Fig. 4.2)
- Gradient descent (Sec. 4.4.3, exercise 4.5)
- How the backpropagation algorithm works
- Representational power of the perceptron and, more generally, of feedforward networks
Sample problem: given an error function, derive the gradient descent update rule, such as (4.10); a sketch follows this slide.
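As a worked instance of that sample problem, here is a minimal sketch of the batch gradient-descent rule for a linear unit, in the spirit of Mitchell's derivation around Eq. (4.10): from E(w) = 1/2 * sum_d (t_d - o_d)^2 one gets dE/dw_i = -sum_d (t_d - o_d) x_id, hence the update w_i <- w_i + eta * sum_d (t_d - o_d) x_id. The data and learning rate are made up.

```python
import numpy as np

def gradient_descent_linear_unit(X, t, eta=0.05, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = X @ w                  # linear unit output for every example
        w += eta * X.T @ (t - o)   # batch update: eta * sum_d (t_d - o_d) x_d
    return w

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # first column is a bias input
t = np.array([1.0, 2.0, 3.0])                        # targets generated by t = 1 + x
print(gradient_descent_linear_unit(X, t))            # approximately [1., 1.]
```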

6 Evaluating Hypotheses
Reference: Ch. 5 in Mitchell's book
1. Concepts & theorems:
- Sample error, true error
- Confidence intervals for observed hypothesis error, in particular Sec. 5.2.2 (a sketch follows this slide)
- Estimator, bias, and variance (Sec. 5.3.4)
- Binomial distribution, normal distribution
Sample problems: 5.3 & 5.4 (just computation), 5.6 (concept)
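A quick sketch of the two-sided confidence interval from Mitchell Sec. 5.2.2, error_S(h) ± z_N * sqrt(error_S(h) * (1 - error_S(h)) / n). The counts below (30 errors out of 250 test examples) are made up.

```python
from math import sqrt

Z = {0.90: 1.64, 0.95: 1.96, 0.99: 2.58}  # common two-sided z values

def error_confidence_interval(errors, n, confidence=0.95):
    e = errors / n                                 # observed sample error error_S(h)
    half = Z[confidence] * sqrt(e * (1 - e) / n)   # half-width of the interval
    return e - half, e + half

lo, hi = error_confidence_interval(30, 250)
print(f"95% CI for the true error: [{lo:.3f}, {hi:.3f}]")  # roughly [0.080, 0.160]
```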

7 Bayesian Learning
Reference: Ch. 6 in Mitchell's book
1. Concepts and theorems:
- Bayes theorem
- ML and MAP hypotheses
- Minimum Description Length principle
- m-estimate of probability (Eq. 6.22)
2. Algorithms (a Naive Bayes sketch follows this slide):
- Bayes optimal classifier
- Naive Bayes classifier
Sample problems: 6.1, 6.5, and the PlayTennis example in Sec. 6.9.1
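Here is a compact sketch of a Naive Bayes classifier using the m-estimate of Mitchell Eq. (6.22), P(a|v) = (n_c + m*p) / (n + m), with a uniform prior estimate p. The two-attribute dataset is made up; the PlayTennis data from Sec. 6.9.1 plugs in the same way.

```python
from collections import Counter, defaultdict
from math import log

def train_naive_bayes(rows, labels, m=1.0):
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)   # (attr_index, class) -> Counter of values
    values_per_attr = defaultdict(set)
    for row, label in zip(rows, labels):
        for i, v in enumerate(row):
            value_counts[(i, label)][v] += 1
            values_per_attr[i].add(v)
    return class_counts, value_counts, values_per_attr, m, len(labels)

def classify(model, row):
    class_counts, value_counts, values_per_attr, m, n = model
    best, best_score = None, float('-inf')
    for c, nc in class_counts.items():
        score = log(nc / n)                            # log prior P(c)
        for i, v in enumerate(row):
            p = 1.0 / len(values_per_attr[i])          # uniform prior estimate p
            n_c = value_counts[(i, c)][v]
            score += log((n_c + m * p) / (nc + m))     # m-estimate, Eq. (6.22)
        if score > best_score:
            best, best_score = c, score
    return best

rows   = [('Sunny', 'Hot'), ('Rain', 'Cool'), ('Rain', 'Mild'), ('Sunny', 'Mild')]
labels = ['No', 'Yes', 'Yes', 'No']
model = train_naive_bayes(rows, labels)
print(classify(model, ('Rain', 'Mild')))  # 'Yes'
```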

8 Computational Learning Theory
Reference: Ch. 7 in Mitchell's book
1. Concepts and theorems:
- PAC-learnable, consistent learner, ε-exhausted
- Theorem 7.1 (a sample-complexity sketch follows this slide)
- Theorem 7.2 (it provides an example of a PAC-learnable class)
- VC dimension: concept and computation, such as exercise 7.5
- Mistake bounds, with a focus on estimating the mistake bound of simple algorithms, such as those discussed in Sec. 7.5.1 and 7.5.2
Sample problems: 7.2, 7.5, 7.8
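A one-line sketch of the sample-complexity bound of Mitchell Theorem 7.1: a consistent learner needs m >= (1/ε)(ln|H| + ln(1/δ)) examples for its version space to be ε-exhausted with probability at least 1 - δ. The hypothesis-space size below (conjunctions over 10 boolean attributes, |H| = 3^10) is a standard illustrative choice, not from the course materials.

```python
from math import ceil, log

def pac_sample_bound(h_size, eps, delta):
    """m >= (1/eps) * (ln|H| + ln(1/delta)), rounded up."""
    return ceil((log(h_size) + log(1 / delta)) / eps)

print(pac_sample_bound(3 ** 10, eps=0.1, delta=0.05))  # 140
```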

9 Kernel Methods
Reference: Sec. 3.1, 6.1 & 6.2 in Bishop's book
1. Sec. 6.1:
- Linear regression model, regularized least squares
- Dual representation, Gram matrix, kernel function (a dual-form sketch follows this slide)
2. Sec. 6.2: study everything except the Fisher kernel
Sample problems:
- Exercises 6.10, 6.12
- Verify that some function is indeed a kernel, such as (6.23)
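A small sketch of the dual form of regularized least squares from Bishop Sec. 6.1: the dual coefficients are a = (K + λI)^(-1) t, where K is the Gram matrix, and the prediction is y(x) = sum_n a_n k(x, x_n). The RBF kernel width and the data are made up.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def fit_dual(X, t, lam=0.1, gamma=1.0):
    n = len(X)
    K = np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    return np.linalg.solve(K + lam * np.eye(n), t)   # dual coefficients a

def predict(X_train, a, x, gamma=1.0):
    return sum(a_n * rbf_kernel(x, x_n, gamma) for a_n, x_n in zip(a, X_train))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
t = np.sin(X).ravel()
a = fit_dual(X, t)
print(predict(X, a, np.array([1.5])))  # about 0.96, close to sin(1.5)
```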

10 Maximum Margin Classifier
Reference: Sec. 7.1 of Bishop's book, up to (7.37), plus basic knowledge of the later part of the section
- Have an overall picture of the main steps in deriving the maximum margin classifier, including those for handling overlapping class distributions
- Understand the following concepts: maximum margin, KKT conditions, support vector (a sketch follows this slide)
- Be able to solve problems at the same level of difficulty as 7.3 & 7.4
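A brief sketch of a soft-margin maximum margin classifier, assuming scikit-learn is available; its SVC solves the same dual problem that Bishop derives. With separable toy data and a large C, only the points on the margin end up as support vectors, i.e. those with nonzero Lagrange multipliers under the KKT conditions.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 3.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel='linear', C=1000.0).fit(X, y)  # large C approximates a hard margin
print(clf.support_vectors_)               # the points that achieve the minimum margin
print(clf.dual_coef_)                     # nonzero Lagrange multipliers (times labels)
print(clf.decision_function([[2.0, 1.5]]))  # signed score relative to the separating hyperplane
```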

11 Probabilistic Graphical Models
Reference: Sec. 8.1.1, 8.1.3, and 8.2, plus basic knowledge of Sec. 8.3 & 8.4 of Bishop's book, with focus on Sec. 8.2
- Concepts: conditional independence, d-separation, clique, factor graph, tree, polytree
- Sample problems: the burglary example (a sketch follows this slide), the "out-of-fuel" example, exercise 8.11
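A minimal sketch of exact inference by enumeration in a tiny version of the burglary network, where Burglary and Earthquake are parents of Alarm. The CPT numbers below are illustrative, not taken from the course materials.

```python
from itertools import product

P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {  # P(Alarm=True | Burglary, Earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

def joint(b, e, a):
    p_a = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return P_B[b] * P_E[e] * p_a

# P(Burglary=True | Alarm=True): sum the joint over the hidden variable E
num   = sum(joint(True, e, True) for e in (True, False))
denom = sum(joint(b, e, True) for b, e in product((True, False), repeat=2))
print(num / denom)  # about 0.374 with these numbers
```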

12 Hidden Markov Model
Reference: course slides
Sample problem: the homework assignment for Ch. 11; a forward-algorithm sketch follows this slide.
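Since the course slides are not reproduced here, the sketch below shows one standard HMM computation, the forward algorithm for the observation likelihood P(o_1..o_T); the two-state model and its parameters are made up for illustration.

```python
import numpy as np

pi = np.array([0.6, 0.4])                   # initial state distribution
A  = np.array([[0.7, 0.3], [0.4, 0.6]])     # transition matrix A[i, j] = P(state j | state i)
B  = np.array([[0.9, 0.1], [0.2, 0.8]])     # emission matrix B[i, k] = P(obs k | state i)

def forward(obs):
    alpha = pi * B[:, obs[0]]               # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]       # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij * b_j(o_t)
    return alpha.sum()                      # P(o_1..o_T) = sum_i alpha_T(i)

print(forward([0, 1, 0]))  # likelihood of the observation sequence
```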

