Slide 1: Sigmoid and logistic regression
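A minimal sketch of the sigmoid function and its role as the output of a logistic regression model; the weights `w`, bias `b`, and input `x` below are hypothetical:

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression prediction: P(y=1 | x) = sigmoid(w.x + b).
w = np.array([0.5, -1.2])   # hypothetical weights
b = 0.1                     # hypothetical bias
x = np.array([1.0, 2.0])    # hypothetical input
print(sigmoid(w @ x + b))   # probability of the positive class
```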
Slide 2: One-hot encoding
One-hot: encode n states using n flip-flops.
- Assign a single "1" to each state. Example: 0001, 0010, 0100, 1000.
- Propagate the single "1" from one flip-flop to the next; all other flip-flop outputs are "0".
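The same single-"1" idea is how class labels are encoded as target vectors for a neural network. A minimal NumPy sketch (the labels below are hypothetical):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Map integer labels to one-hot row vectors: a single 1 per row."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

print(one_hot([0, 1, 2, 3], 4))
# [[1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 0. 0. 1.]]
```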
Slide 5: Multilayer Neural Network for Classification
Slide 6: Softmax
Slide 8: One-hot encoding and the softmax function
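Softmax turns a vector of raw network outputs into a probability distribution, which pairs naturally with one-hot targets. A minimal NumPy sketch (the logits below are hypothetical):

```python
import numpy as np

def softmax(z):
    """Softmax: exponentiate and normalize so the outputs sum to 1."""
    z = z - np.max(z)   # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical network outputs
probs = softmax(logits)
print(probs, probs.sum())            # [0.659 0.242 0.099] 1.0
```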
Slide 9: Error representation methods
- Classification error
- Mean squared error (MSE)
- Average cross entropy (ACE) error
Slide 10: Example case
Slide 11: Classification error
Classification error = 1/3
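Classification error counts only whether the argmax decision is right or wrong. A sketch reproducing the 1/3 above under assumed data: the slide's actual three training items are not reproduced here, so the targets and outputs below are hypothetical, chosen so exactly one of three decisions is wrong:

```python
import numpy as np

targets = np.array([0, 1, 2])                # true class indices (hypothetical)
outputs = np.array([[0.6, 0.3, 0.1],         # correct (argmax = 0)
                    [0.2, 0.7, 0.1],         # correct (argmax = 1)
                    [0.5, 0.3, 0.2]])        # wrong   (argmax = 0, not 2)

errors = np.argmax(outputs, axis=1) != targets
print(errors.mean())                         # 1/3 = 0.333...
```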
Slide 12: Mean squared error
$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j}\left(t_{ij} - o_{ij}\right)^{2}$
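A minimal NumPy sketch of this definition; the one-hot targets and softmax outputs below are hypothetical:

```python
import numpy as np

def mse(targets, outputs):
    """MSE: mean over samples of the squared output error."""
    return np.mean(np.sum((targets - outputs) ** 2, axis=1))

targets = np.array([[1.0, 0.0, 0.0],   # one-hot targets (hypothetical)
                    [0.0, 1.0, 0.0]])
outputs = np.array([[0.7, 0.2, 0.1],   # softmax outputs (hypothetical)
                    [0.3, 0.6, 0.1]])
print(mse(targets, outputs))           # (0.14 + 0.26) / 2 = 0.2
```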
Slide 13: Cross entropy
The cross entropy of two distributions p and q over the same discrete probability space is defined as
$H(p,q) = -\sum_{x} p(x)\,\log q(x)$
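A direct transcription of the definition into NumPy; the distributions p and q below are hypothetical:

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum over x of p(x) * log(q(x))."""
    return -np.sum(p * np.log(q))

p = np.array([1.0, 0.0, 0.0])     # true (one-hot) distribution
q = np.array([0.7, 0.2, 0.1])     # predicted distribution
print(cross_entropy(p, q))        # -log(0.7) = 0.357...
```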
Slide 14: Average Cross Entropy (ACE) error
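ACE averages the per-sample cross entropy $H(t_i, o_i)$ over the training set. A minimal sketch, assuming one-hot targets and softmax outputs (both hypothetical below); a small `eps` guards against log(0):

```python
import numpy as np

def ace(targets, outputs, eps=1e-12):
    """Average Cross Entropy: mean of H(t_i, o_i) over all samples."""
    return -np.mean(np.sum(targets * np.log(outputs + eps), axis=1))

targets = np.array([[1.0, 0.0], [0.0, 1.0]])   # hypothetical one-hot labels
outputs = np.array([[0.8, 0.2], [0.4, 0.6]])   # hypothetical softmax outputs
print(ace(targets, outputs))   # (-log 0.8 - log 0.6) / 2 = 0.367...
```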
Slide 15: MSE vs. ACE
ACE tends to train faster; how well the network ultimately learns varies from case to case.
ACE is commonly used for classification, while MSE is commonly used for regression.
Slide 19: Rectified Linear Unit
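A minimal sketch of the activation the title names, applied element-wise:

```python
import numpy as np

def relu(z):
    """Rectified Linear Unit: max(0, z), element-wise."""
    return np.maximum(0.0, z)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [0. 0. 0. 1.5]
```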