Introduction to Probabilistic Reasoning and Bayesian Networks


1 Introduction to Probabilistic Reasoning and Bayesian Networks
Hongtao Du Group Presentation

2 Outline
Uncertain Reasoning
Probabilistic Reasoning
Bayesian Network (BN)
Dynamic Bayesian Network (DBN)

3 Reasoning
The activity of guessing the state of the domain from prior knowledge and observations. Three forms:
Causal reasoning
Diagnostic reasoning
Combinations of these two

4 Uncertain Reasoning (Guessing)
Some aspects of the domain are often unobservable and must be estimated indirectly through other observations.
The relationships among domain events are often uncertain, particularly those between the observables and the non-observables.

5 The observations themselves may be unreliable.
Even when events are observable, we often lack sufficient resources to observe all relevant ones.
Even when the relations among events are certain, it is often impractical to analyze all of them.

6 Probabilistic Reasoning
A methodology founded on Bayesian probability theory. Events and objects in the real world are represented by random variables. Probabilistic models include:
Bayesian reasoning
Evidence theory
Robust statistics
Recursive operators

7 Graphical Model
A tool that visually illustrates conditional independence among variables in a given problem. It consists of nodes (random variables or states) and edges (connecting two nodes, directed or undirected). The absence of an edge represents conditional independence between variables.

8 Chain, Path, Cycle, Directed Acyclic Graph (DAG), Parents and Children

9 Bayesian Network (BN)
Also called a probabilistic network, belief network, or causal network. A specific type of graphical model that is represented as a Directed Acyclic Graph. (Figure: an example DAG with nodes X, Y, Z, U, V, A, B.)

10 A BN consists of:
A set of variables (nodes) V = {1, 2, …, k}
A set of dependencies (edges) D
A set of probability distribution functions (pdfs) P, one per variable
Assumptions:
P(X) = 1 if and only if X is certain
If X and Y are mutually exclusive, then P(X ∨ Y) = P(X) + P(Y)
Joint probability: P(X, Y) = P(X|Y) P(Y)
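These two rules can be checked numerically. A minimal sketch with made-up probabilities (all numbers here are illustrative, not from the slides):

```python
# Product rule for the joint probability: P(X, Y) = P(X | Y) * P(Y).
p_y = 0.3          # assumed prior P(Y)
p_x_given_y = 0.5  # assumed conditional P(X | Y)
p_xy = p_x_given_y * p_y
print(p_xy)  # joint probability P(X, Y) = 0.15

# Additivity for mutually exclusive events: P(A or B) = P(A) + P(B).
p_a, p_b = 0.25, 0.5
p_a_or_b = p_a + p_b
print(p_a_or_b)  # 0.75
```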

11 X represents the hypothesis, Y represents the evidence.
P(Y|X) is the likelihood; P(X|Y) is the posterior probability, given by Bayes' rule: P(X|Y) = P(Y|X) P(X) / P(Y).
If X and Y are conditionally independent given Z: P(X|Z, Y) = P(X|Z)
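A hedged sketch of computing the posterior from a prior and likelihood, with made-up numbers (a rare hypothesis X and a moderately reliable observation Y):

```python
# Bayes' rule: posterior = likelihood * prior / evidence.
p_x = 0.01             # assumed prior P(X)
p_y_given_x = 0.9      # assumed likelihood P(Y | X)
p_y_given_not_x = 0.1  # assumed P(Y | not X)

# Evidence P(Y) by the law of total probability.
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

posterior = p_y_given_x * p_x / p_y  # P(X | Y)
print(round(posterior, 4))
```

Note that even with a strong likelihood, the low prior keeps the posterior small, which is exactly the kind of trade-off belief updating captures.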

12 Given some evidence, a BN operates by propagating beliefs throughout the network.
For the chain Z → Y → U → V: P(Z, Y, U, V) = P(Z) P(Y|Z) P(U|Y) P(V|U). In general, P(X_1, …, X_n) = ∏_i P(X_i | pa(X_i)), where pa(X_i) denotes the parents of node X_i.
Explaining away: if a node is observed, its parents become dependent; two causes (parents) compete to explain the observed data (child).
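Explaining away can be demonstrated by brute-force enumeration. The sketch below assumes two independent binary causes A and B of a common child C, with an illustrative noisy-OR-style CPT (all numbers made up):

```python
from itertools import product

p_a, p_b = 0.1, 0.1            # assumed priors on the two causes

def p_c1(a, b):                 # assumed CPT: P(C=1 | A=a, B=b)
    return 0.99 if (a or b) else 0.01

# Enumerate the joint P(A=a, B=b, C=1) = P(a) P(b) P(C=1 | a, b).
joint = {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_c1(a, b)
         for a, b in product([0, 1], repeat=2)}

p_c = sum(joint.values())                                  # P(C=1)
p_a_given_c = (joint[(1, 0)] + joint[(1, 1)]) / p_c        # P(A=1 | C=1)
p_a_given_cb = joint[(1, 1)] / (joint[(0, 1)] + joint[(1, 1)])  # P(A=1 | C=1, B=1)

print(round(p_a_given_c, 3))   # observing C=1 raises belief in A
print(round(p_a_given_cb, 3))  # additionally observing B=1 "explains C away"
```

Observing the child makes A quite likely, but once the other cause B is also observed, the belief in A drops back to its prior: the two parents compete to explain the child.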

13 Tasks in a Bayesian Network
Inference
Learning

14 Inference
Inference is the task of computing the probability of each state of a node in a BN when other variables are known.
Method: dividing the set of BN nodes into non-overlapping subsets of conditionally independent nodes.

15 Example
Given: Y is the observed variable.
Goal: find the conditional pdf over the unobserved variables.
Case 1:

16 Case 2:

17 Learning
Goal: completing the missing beliefs in the network by adjusting the parameters of the Bayesian network so that the pdfs defined by the network sufficiently describe the statistical behavior of the observed data.

18 Let M be a BN model, θ the parameters of its probability distributions, and D the observed data.
Goal: estimate θ to maximize the posterior probability P(θ | D, M).

19 Assume the posterior P(θ | D, M) is highly peaked around the maximum-likelihood estimate of θ.
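Under that assumption, learning reduces to maximum-likelihood estimation, which for complete discrete data is just counting. A minimal sketch for a single CPT entry P(X=1 | Y=y), with made-up (y, x) observations:

```python
from collections import Counter

# Assumed complete data: observed (y, x) pairs for a child X with parent Y.
data = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0), (1, 0)]
counts = Counter(data)

def mle_p_x1(y):
    """Maximum-likelihood estimate of P(X=1 | Y=y) by counting."""
    n_y = sum(c for (yy, _), c in counts.items() if yy == y)  # count of Y=y
    return counts[(y, 1)] / n_y                               # fraction with X=1

print(mle_p_x1(0), mle_p_x1(1))
```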

20 Dynamic Bayesian Network (DBN)
A Bayesian network extended with a time series to represent temporal dependencies; it changes or evolves over time.
A directed graphical model of stochastic processes, aimed especially at time-series modeling.
Satisfies the Markovian condition: the state of a system at time t depends only on its immediate past state, at time t-1.

21 Representation
(Figure: time slices t1, t2, …, tk.)
The transition matrix that represents these time dependencies is called the Conditional Probability Table (CPT).
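A minimal sketch of a CPT as a transition matrix, with assumed numbers: a 2-state system whose belief is propagated across three time slices by repeatedly applying the CPT.

```python
# Assumed CPT: row i gives P(next state | current state = i).
T = [[0.7, 0.3],
     [0.4, 0.6]]

belief = [1.0, 0.0]  # initial state distribution: certainly in state 0
for _ in range(3):   # advance three time slices
    belief = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]

print([round(b, 4) for b in belief])  # distribution after 3 steps
```

Each step encodes exactly the Markovian condition above: the new belief depends only on the previous slice's belief and the CPT.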

22 Description
T: the time boundary we are investigating
O_t: observable variables
S_t: hidden-state variables
P(S_t | S_{t-1}): state-transition pdfs, specifying the time dependencies between states
P(O_t | S_t): observation pdfs, specifying the dependencies of the observation nodes on other nodes at time slice t
P(S_1): the initial state distribution

23 Tasks in DBN
Inference
Decoding
Learning
Pruning

24 Inference
Estimating the pdfs of the unknown states from the given observations and initial probability distributions.
Goal: finding P(S | O), where O = {O_1, …, O_T} is a finite set of T consecutive observations and S = {S_1, …, S_T} is the set of corresponding hidden variables.
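This kind of inference can be sketched with the forward algorithm on a simple 2-state, 2-observation model (all parameters assumed for illustration):

```python
# Assumed DBN parameters (an HMM-style special case).
A = [[0.8, 0.2], [0.3, 0.7]]   # state-transition pdfs P(S_t | S_{t-1})
B = [[0.9, 0.1], [0.2, 0.8]]   # observation pdfs P(O_t | S_t)
pi = [0.5, 0.5]                # initial state distribution
obs = [0, 0, 1]                # assumed observed sequence

# Forward pass: alpha[s] = P(O_1..t, S_t = s).
alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
for o in obs[1:]:
    alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][o]
             for j in range(2)]

likelihood = sum(alpha)                      # P(O_1..T)
filtered = [a / likelihood for a in alpha]   # P(S_T | O_1..T)
print(round(likelihood, 5), [round(f, 3) for f in filtered])
```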

25 Decoding
Finding the best-fitting probability values for the hidden states that have generated the known observations.
Goal: determining the sequence of hidden states with the highest probability.
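Decoding is classically done with the Viterbi algorithm; a sketch on the same kind of assumed 2-state model (parameters and observations made up):

```python
# Assumed model parameters.
A = [[0.8, 0.2], [0.3, 0.7]]   # transition pdfs
B = [[0.9, 0.1], [0.2, 0.8]]   # observation pdfs
pi = [0.5, 0.5]                # initial distribution
obs = [0, 1, 1]                # assumed observed sequence

# delta[s]: probability of the best path ending in state s.
delta = [pi[s] * B[s][obs[0]] for s in range(2)]
back = []                      # backpointers, one list per time step
for o in obs[1:]:
    new_delta, ptr = [], []
    for j in range(2):
        best_i = max(range(2), key=lambda i: delta[i] * A[i][j])
        ptr.append(best_i)
        new_delta.append(delta[best_i] * A[best_i][j] * B[j][o])
    delta, back = new_delta, back + [ptr]

# Backtrack the most probable hidden-state sequence.
state = max(range(2), key=lambda s: delta[s])
path = [state]
for ptr in reversed(back):
    state = ptr[state]
    path.insert(0, state)
print(path)  # most probable hidden states for obs [0, 1, 1]
```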

26 Learning
Given a number of observations, estimating the parameters of the DBN that best fit the observed data.
Goal: maximizing the joint probability distribution of the observations under λ, the model parameter vector.

27 Pruning
An important but difficult task in DBN: distinguishing which nodes are important for inference, and removing the unimportant ones. Actions:
Deleting states from a particular node
Removing the connection between nodes
Removing a node from the network

28 Time slice t
Designated world nodes: a subset of the nodes, representing the part we want to inspect.
If the state of a designated node is known, then the nodes outside that subset are no longer relevant to the overall goal of the inference. Thus: (1) delete all such nodes; (2) incorporate the knowledge of the observed state.

29 Future work
Probabilistic reasoning in multi-agent systems.
Different DBNs and their applications.
Discussion of open problems in DBNs.

