
Course: Autonomous Machine Learning


1 Course: Autonomous Machine Learning
Chapter 4: Bayesian Filtering for State Estimation of the Environment
Cognitive Dynamic Systems, S. Haykin
Nguyen Duc Lam
Social and Computer Networks Lab
School of Computer Science and Engineering
Seoul National University

2 Outline
Introduction
Bayesian Filter
Conclusion

3 Outline
Introduction
  What is Bayesian?
  Problem Statement
Bayesian Filter
Conclusion

4 Introduction [1/3]
What is Bayesian theory?
"In probability theory and statistics, Bayes' theorem describes the probability of an event, based on conditions that might be related to the event."
Bayes' theorem gives the conditional probability of an event, P(A | B), when the "reverse" conditional probability P(B | A) is the one that is known:
P(A | B) = P(B | A) P(A) / P(B), where A and B are events and P(B) ≠ 0.
P(A | B), a conditional probability, is the probability of observing event A given that B is true.
P(B | A) is the probability of observing event B given that A is true.
P(A) and P(B) are the probabilities of observing A and B without regard to each other.

5 Introduction [2/3]
What is Bayesian?
Given:
Likelihood: a doctor knows that flu causes a stiff neck 50% of the time.
Prior: the probability of any patient having flu is 1/50000.
Evidence: the probability of any patient having a stiff neck is 1/20.
If a patient has a stiff neck, what is the probability that he/she has flu?
P(M | S) = P(S | M) · P(M) / P(S) = (0.5 × 1/50000) / (1/20) = 0.0002
(posterior = likelihood × prior / evidence, with M = flu and S = stiff neck)
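As a quick check of the arithmetic, a minimal Python sketch of this computation (the variable names are illustrative, not from the slides):

# Bayes' theorem for the flu / stiff-neck example above.
likelihood = 0.5        # P(stiff neck | flu)
prior = 1 / 50000       # P(flu)
evidence = 1 / 20       # P(stiff neck)

posterior = likelihood * prior / evidence   # P(flu | stiff neck)
print(posterior)                            # 0.0002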

6 Introduction [3/3]
Given a state-space model of the environment:
A system equation
A measurement equation
Practical issues:
The state of the environment is hidden from the observer; information about the state is available only indirectly, through the dependence of the observables (measurements) on the state.
The evolution of the state across time and the measurements on the environment are both corrupted by the unavoidable presence of physical uncertainties in the environment.
Solution: the Bayesian framework.
Goal: develop algorithms for solving the state-estimation problem.

7 Outline
Introduction
Bayesian Filter
  State-Space Model
  Sequential State Estimation
  Extended Kalman Filter
Conclusion

8 Bayesian Filter [1/8]
State-Space Model
System equation:
xn+1 = an(xn, ωn)
where n denotes discrete time, the vector xn denotes the current value of the state, xn+1 denotes its immediate future value, the vector ωn denotes system noise, and an(·,·) is a vector function of its two arguments, representing the transition from state xn to state xn+1.
Measurement equation:
yn = bn(xn, vn)
where the vector yn denotes a set of measurements (observables), the vector vn denotes measurement noise, and bn(·,·) denotes another vector function.
The material on Bayesian inference presented previously provides the right background for Bayesian filtering, which is aimed at estimating the state of a dynamic system. The state of a dynamic system is defined as the minimal amount of information about the effects of past inputs applied to the system that is sufficient to completely describe its future behavior. Typically, the state is not measurable directly, as it is hidden from the perceptor. Rather, in an indirect manner, the state makes its effect on the environment (outside world) estimatable through a set of observables. As such, the dynamic system is characterized by a state-space model, which embodies a pair of equations. The system equation, formulated as a first-order Markov chain, describes the evolution of the state as a function of time.
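As an illustration, a minimal Python sketch of such a time-varying, nonlinear state-space model; the specific functions a and b below are made-up examples, not the model from the chapter:

import numpy as np

rng = np.random.default_rng(0)

def a(x, w):
    # Hypothetical nonlinear system equation: x_{n+1} = a(x_n, w_n)
    return np.sin(x) + 0.5 * x + w

def b(x, v):
    # Hypothetical nonlinear measurement equation: y_n = b(x_n, v_n)
    return x ** 2 + v

x = 0.1                          # initial state x_0
states, measurements = [], []
for n in range(50):
    w = rng.normal(0.0, 0.1)     # system noise ω_n
    v = rng.normal(0.0, 0.2)     # measurement noise v_n
    x = a(x, w)                  # state evolution (hidden from the observer)
    y = b(x, v)                  # observable available to the filter
    states.append(x)
    measurements.append(y)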

9 Bayesian Filter [2/8] State-space Model Assumptions:
The initial state x0 is uncorrelated with the system noise ωn for all n.
The two sources of noise, ωn and vn, are statistically independent of each other.
(Figure: generic state-space model of a time-varying, nonlinear dynamic system, where z−1 denotes a block of unit-time delays.)
Although the state is indeed hidden from the perceptor, the environment does provide information about the state through measurements (observables), which prompts the following statement:
Given a record of measurements consisting of y1, y2, …, yn, the requirement is to compute an estimate of the unknown state xk that is optimal in some statistical sense, with the estimation being performed in a sequential manner.
In a way, this statement embodies two systems:
• The unknown dynamic system, whose observable yn is a function of the hidden state.
• The sequential state estimator, or filter, which exploits information about the state that is contained in the observables.
Uncorrelatedness is a sufficient condition for independence when vn and ωn are jointly Gaussian.

10 Bayesian Filter [3/8] Sequential State Estimation problem
The state-estimation problem:
Prediction: k > n
Filtering: k = n
Smoothing: k < n
The state-estimation problem is commonly referred to as prediction if k > n, filtering if k = n, and smoothing if k < n. Typically, a smoother is statistically more accurate than both the predictor and the filter, as it uses more observables, past and present. On the other hand, both prediction and filtering can be performed in real time, whereas smoothing cannot.

11 Bayesian Filters [4/8] Framework Given:
Stream of observations z and action data u: u1, z1, …, ut, zt
Sensor model P(z | x).
Action model P(x | u, x′).
Prior probability of the system state P(x).
Wanted:
Estimate of the state x of the dynamical system.
The posterior of the state is also called the belief:
Bel(xt) = P(xt | u1, z1, …, ut, zt)

12 Bayesian Filters [5/8]
Markov Assumption
Underlying assumptions:
Static world
Independent noise
Perfect model, no approximation errors

13 Bayesian Filters [6/8]
z = observation, u = action, x = state
Bel(xt) = P(xt | u1, z1, …, ut, zt)
 = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)  (Bayes)
 = η P(zt | xt) P(xt | u1, z1, …, ut)  (Markov)
 = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1  (Total prob.)
 = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1  (Markov)
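To make the recursion concrete, here is a minimal histogram (discrete-state) Bayes filter sketch in Python; the two-state world, motion model, and sensor model are toy assumptions used only to illustrate the prediction and correction steps:

import numpy as np

def bayes_filter_step(belief, u, z, motion_model, sensor_model):
    # belief[i]            : Bel(x_{t-1} = i)
    # motion_model[u][j,i] : P(x_t = j | u_t = u, x_{t-1} = i)
    # sensor_model[z][j]   : P(z_t = z | x_t = j)
    predicted = motion_model[u] @ belief          # prediction (total probability)
    posterior = sensor_model[z] * predicted       # correction (Bayes rule)
    return posterior / posterior.sum()            # normalization (the factor η)

# Toy two-state example: states 0 and 1.
motion_model = {"move": np.array([[0.2, 0.8],
                                  [0.8, 0.2]])}
sensor_model = {"bright": np.array([0.9, 0.3])}
belief = np.array([0.5, 0.5])                     # uniform prior Bel(x_0)
belief = bayes_filter_step(belief, "move", "bright", motion_model, sensor_model)
print(belief)                                     # [0.75 0.25]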

14 Bayesian Filter [7/8]
Optimality of the Bayesian Filter
The adoption of a Bayesian filter to solve the state estimation of a dynamic system, be it linear or nonlinear, is motivated by the fact that it provides a general unifying framework for sequential state estimation, at least in a conceptual sense.
The filter operates in a recursive manner by propagating the posterior p(xn | Yn) from one recursion to the next.
Knowledge about the state xn, extracted from the entire observation process Yn by the filter, is completely contained in the posterior p(xn | Yn), which is the "best" that can be achieved, at least in a conceptual sense.
The time update and the measurement update are both carried out at every time step throughout the computation of the Bayesian filter. In effect, they constitute a computational recursion of the filter, as depicted in Figure 4.6; the factor Zn has been left out of the figure for convenience of presentation.

15 Bayesian Filter [8/8] Approximation of the Bayesian Filter
Direct numerical approximation of the posterior:
  Kalman filter theory
Indirect numerical approximation of the posterior:
  Monte Carlo simulation (particle filters)
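For the indirect (Monte Carlo) route, a bare-bones particle-filter sketch in Python; the Gaussian random-walk system and noisy position measurement used here are illustrative assumptions, not the model from the chapter:

import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, y, sys_std=0.1, meas_std=0.2):
    # Time update: propagate each particle through the (assumed) system equation.
    particles = particles + rng.normal(0.0, sys_std, size=particles.size)
    # Measurement update: reweight each particle by the likelihood p(y_n | x_n).
    weights = weights * np.exp(-0.5 * ((y - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample to counter weight degeneracy (sampling importance resampling).
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

N = 1000
particles = rng.normal(0.0, 1.0, size=N)        # samples from the prior p(x_0)
weights = np.full(N, 1.0 / N)
particles, weights = particle_filter_step(particles, weights, y=0.7)
print(np.average(particles, weights=weights))   # posterior-mean estimate of x_n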

16 Outline
Introduction
Bayesian Filter
Conclusion

17 Conclusion [1/1]
Overview of Bayes' theorem
State-space model
Bayesian filter for state estimation

18 THANK YOU. Q&A

19 Appendix
Time Update and Measurement Update
Time update: the first update involves computing the predictive distribution of xn given the observation sequence Yn−1, as shown by
p(xn | Yn−1) = ∫ p(xn | xn−1) p(xn−1 | Yn−1) dxn−1
Measurement update: the second update corrects the prediction with the new measurement yn via Bayes' rule,
p(xn | Yn) = p(yn | xn) p(xn | Yn−1) / Zn,  where the normalizing factor Zn = ∫ p(yn | xn) p(xn | Yn−1) dxn.

