CIS 490 / 730: Artificial Intelligence, Lecture 32 of 42
Wednesday, 08 November 2006
Computing & Information Sciences, Kansas State University


Lecture 32 of 42: Advanced Topics in Uncertain Reasoning
Wednesday, 08 November 2006
William H. Hsu, Department of Computing and Information Sciences, KSU
Reading for Next Class: Chapter 15, Russell & Norvig, 2nd edition
Discussion: Relational Models, PS7

Lecture Outline
Wednesday's Reading: Sections 14.6 – 14.8, R&N 2e; Chapter 15
Friday: Sections 18.1 – 18.2, R&N 2e
Today: Inference in Graphical Models
- Bayesian networks and causality
- Inference and learning
- BNJ interface
- Causality

BNJ Visualization: Pseudo-Code Annotation (Code Page)
[Figure: ALARM network]
© 2004 KSU BNJ Development Team

Other Topics in Graphical Models [2]: Learning Structure from Data
General-Case BBN Structure Learning: Use Inference to Compute Scores
Optimal Strategy: Bayesian Model Averaging
- Assumption: models h ∈ H are mutually exclusive and exhaustive
- Combine predictions of models in proportion to marginal likelihood
  - Compute conditional probability of hypothesis h given observed data D
  - i.e., compute expectation over unknown h for unseen cases
  - Let h ≡ structure, parameters Θ ≡ CPTs
- Posterior score: P(h | D) ∝ P(D | h) · P(h), where P(h) is the prior over structures and the marginal likelihood P(D | h) = ∫ P(D | h, Θ) P(Θ | h) dΘ combines the likelihood with the prior over parameters
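
As a concrete illustration of the averaging step, the sketch below (plain Python) combines per-model predictions in proportion to each model's posterior P(h | D) ∝ P(D | h) · P(h). The hypothesis names, prior weights, marginal likelihoods, and predicted probabilities are all invented for the example; they are not values from the lecture.

```python
# Hedged sketch: Bayesian model averaging over a small, hypothetical set of
# candidate structures.  Priors, likelihoods, and predictions are illustrative only.

def model_average(hypotheses, query_event):
    """P(query | D) = sum_h P(query | h) * P(h | D), with P(h | D) proportional to P(D | h) P(h)."""
    # Unnormalized posterior score for each hypothesis h.
    scores = {name: h["prior"] * h["marginal_likelihood"] for name, h in hypotheses.items()}
    z = sum(scores.values())
    posteriors = {name: s / z for name, s in scores.items()}
    # Combine per-model predictions in proportion to posterior mass.
    return sum(posteriors[name] * hypotheses[name]["predict"](query_event)
               for name in hypotheses)

# Illustrative inputs (not from the lecture): two structures with fixed scores.
hypotheses = {
    "h1": {"prior": 0.5, "marginal_likelihood": 2e-4, "predict": lambda e: 0.7},
    "h2": {"prior": 0.5, "marginal_likelihood": 5e-4, "predict": lambda e: 0.3},
}
print(model_average(hypotheses, query_event=None))  # weighted toward h2, which has higher marginal likelihood
```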

Propagation Algorithm in Singly-Connected Bayesian Networks – Pearl (1983)
[Figure: singly connected network over nodes C1 through C6]
- Upward (child-to-parent) λ messages: λ'(Ci') modified during the λ message-passing phase
- Downward π messages: π'(Ci') computed during the π message-passing phase
- Multiply-connected case: exact and approximate inference are #P-complete (the counting problem is #P-complete iff the corresponding decision problem is NP-complete)
Adapted from Neapolitan (1990), Guo (2000)
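
To make the λ/π message idea concrete, here is a minimal sketch for a three-node chain A → B → C with evidence on C. The network and CPT values are invented for illustration, and the message-passing result is checked against brute-force enumeration.

```python
# Hedged sketch of Pearl-style message passing on a chain A -> B -> C (all binary).
# The CPT numbers are invented; the point is the pi (downward) and lambda (upward)
# messages, whose product gives the belief at B given evidence C = 1.

P_A = [0.6, 0.4]                        # P(A)
P_B_given_A = [[0.7, 0.3], [0.2, 0.8]]  # P(B | A): rows indexed by A
P_C_given_B = [[0.9, 0.1], [0.4, 0.6]]  # P(C | B): rows indexed by B

# Downward pi message into B: pi(b) = sum_a P(b | a) P(a)
pi_B = [sum(P_A[a] * P_B_given_A[a][b] for a in range(2)) for b in range(2)]

# Upward lambda message from C (evidence C = 1): lambda(b) = P(C = 1 | b)
lam_B = [P_C_given_B[b][1] for b in range(2)]

# Belief: BEL(b) proportional to pi(b) * lambda(b)
unnorm = [pi_B[b] * lam_B[b] for b in range(2)]
belief_B = [u / sum(unnorm) for u in unnorm]

# Brute-force check: P(B | C = 1) by enumerating the joint.
joint = [[P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][1] for b in range(2)] for a in range(2)]
marg = [sum(joint[a][b] for a in range(2)) for b in range(2)]
check = [m / sum(marg) for m in marg]

print(belief_B, check)   # the two vectors should agree
```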

Inference by Clustering [1]: Graph Operations (Moralization, Triangulation, Maximal Cliques)
[Figure: Bayesian network (acyclic digraph) over nodes A, B, C, D, E, F, G, H; its moralized graph; the triangulated graph with node ordering A1, B2, E3, C4, G5, F6, H7, D8; and the maximal cliques Clq1 = {A, B}, Clq2 = {B, E, C}, Clq3 = {E, C, G}, Clq4 = {E, G, F}, Clq5 = {C, G, H}, Clq6 = {C, D}]
Adapted from Neapolitan (1990), Guo (2000)
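
A minimal sketch (plain Python, no libraries) of the first two graph operations named above: moralization marries co-parents and drops edge directions, and an elimination-ordering triangulation adds fill-in edges. The example DAG's edges are reconstructed from the clique potentials shown two slides below (A→B, B→C, E→C, F→E, F→G, C→H, G→H, C→D), so treat both the edge list and the chosen elimination order as assumptions; different orders give different (still valid) triangulations.

```python
# Hedged sketch: moralization and elimination-ordering triangulation.
# The DAG and elimination order are reconstructed from the example slides.

from itertools import combinations

dag = {"A": [], "B": ["A"], "C": ["B", "E"], "D": ["C"],
       "E": ["F"], "F": [], "G": ["F"], "H": ["C", "G"]}   # node -> parents

def moralize(dag):
    """Marry co-parents, then drop edge directions."""
    und = {v: set() for v in dag}
    for v, parents in dag.items():
        for p in parents:                        # parent-child edges
            und[v].add(p); und[p].add(v)
        for p, q in combinations(parents, 2):    # marry co-parents
            und[p].add(q); und[q].add(p)
    return und

def triangulate(und, order):
    """Add fill-in edges along an elimination ordering (not necessarily minimal)."""
    g = {v: set(nbrs) for v, nbrs in und.items()}
    filled = {v: set(nbrs) for v, nbrs in und.items()}
    for v in order:
        nbrs = list(g[v])
        for a, b in combinations(nbrs, 2):       # connect v's remaining neighbours
            filled[a].add(b); filled[b].add(a)
            g[a].add(b); g[b].add(a)
        for u in nbrs:                           # eliminate v
            g[u].discard(v)
        del g[v]
    return filled

moral = moralize(dag)
# Eliminate in reverse of the slide's numbering (D8, H7, F6, G5, C4, E3, B2, A1).
chordal = triangulate(moral, order=["D", "H", "F", "G", "C", "E", "B", "A"])
added = {tuple(sorted((a, b))) for a in chordal for b in chordal[a] if b not in moral[a]}
print("fill-in edges:", added)   # with this order, the single fill-in edge is E-G
```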

Inference by Clustering [2]: Junction Tree – Lauritzen & Spiegelhalter (1988)
Input: list of cliques of the triangulated, moralized graph Gu
Output: tree of cliques, with separator nodes Si, residual nodes Ri, and potential probability ψ(Clqi) for all cliques
Algorithm:
1. Si = Clqi ∩ (Clq1 ∪ Clq2 ∪ … ∪ Clqi-1)
2. Ri = Clqi − Si
3. If i > 1 then identify a j < i such that Clqj is a parent of Clqi
4. Assign each node v to a unique clique Clqi such that {v} ∪ c(v) ⊆ Clqi
5. Compute ψ(Clqi) = ∏ f(v) over the v assigned to Clqi, where f(v) = P(v | c(v)); set ψ(Clqi) = 1 if no v is assigned to Clqi
6. Store Clqi, Ri, Si, and ψ(Clqi) at each vertex in the tree of cliques
Adapted from Neapolitan (1990), Guo (2000)
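
A direct transcription of steps 1–2 of the construction above, applied to the clique list from the example slides; the parent-identification and potential-assignment steps are left out to keep the sketch short.

```python
# Hedged sketch of junction-tree bookkeeping: separators S_i and residuals R_i
# (steps 1-2 of the Lauritzen-Spiegelhalter construction), on the example cliques.

cliques = [("Clq1", {"A", "B"}),
           ("Clq2", {"B", "E", "C"}),
           ("Clq3", {"E", "C", "G"}),
           ("Clq4", {"E", "G", "F"}),
           ("Clq5", {"C", "G", "H"}),
           ("Clq6", {"C", "D"})]

def separators_and_residuals(cliques):
    seen = set()
    out = []
    for name, clq in cliques:
        S = clq & seen        # step 1: S_i = Clq_i intersect (Clq_1 union ... union Clq_{i-1})
        R = clq - S           # step 2: R_i = Clq_i minus S_i
        out.append((name, sorted(S), sorted(R)))
        seen |= clq
    return out

for name, S, R in separators_and_residuals(cliques):
    print(name, "S =", S, "R =", R)
# Matches the next slide: e.g. Clq3 has S3 = {C, E}, R3 = {G}; Clq6 has S6 = {C}, R6 = {D}.
```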

Inference by Clustering [3]: Clique-Tree Operations
[Figure: clique tree AB - (B) - BEC - (EC) - ECG - (EG) - EGF, with branches ECG - (CG) - CGH - (C) - CD; separators shown in parentheses]
Ri: residual nodes; Si: separator nodes; ψ(Clqi): potential probability of clique i
- Clq1 = {A, B}: R1 = {A, B}, S1 = {}, ψ(Clq1) = P(B|A) P(A)
- Clq2 = {B, E, C}: R2 = {C, E}, S2 = {B}, ψ(Clq2) = P(C|B,E)
- Clq3 = {E, C, G}: R3 = {G}, S3 = {E, C}, ψ(Clq3) = 1
- Clq4 = {E, G, F}: R4 = {F}, S4 = {E, G}, ψ(Clq4) = P(E|F) P(G|F) P(F)
- Clq5 = {C, G, H}: R5 = {H}, S5 = {C, G}, ψ(Clq5) = P(H|C,G)
- Clq6 = {C, D}: R6 = {D}, S6 = {C}, ψ(Clq6) = P(D|C)
Adapted from Neapolitan (1990), Guo (2000)

Inference by Loop Cutset Conditioning
[Figure: network with Age (X1), Gender (X2), Exposure-To-Toxins (X3), Smoking (X4), Cancer (X5), Serum Calcium (X6), and Lung Tumor (X7); the cutset vertex Age is split into its instantiations X1,1: Age = [0, 10), X1,2: Age = [10, 20), …, X1,10: Age = [100, ∞)]
- Split a vertex in an undirected cycle; condition upon each of its state values
- Number of network instantiations: product of the arities of the nodes in the minimal loop cutset
- Posterior: marginal conditioned upon cutset variable values
Deciding Optimal Cutset: NP-hard
Current Open Problems
- Bounded cutset conditioning: ordering heuristics
- Finding randomized algorithms for loop cutset optimization
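
A minimal numerical sketch of the conditioning idea on a small invented loop (A → B, A → C, B → D, C → D): instantiate the cutset variable A, run inference in each (now singly connected) instantiation, and recombine the answers weighted by P(a | e). Here the per-instantiation inference is done by brute-force enumeration purely to keep the example short; a real implementation would use the polytree algorithm for that step.

```python
# Hedged sketch of loop cutset conditioning on an invented 4-node loop:
#   A -> B, A -> C, B -> D, C -> D   (all binary; CPT values are illustrative).
# Conditioning on the cutset {A} breaks the loop; the posterior is a mixture of
# per-instantiation posteriors, weighted by P(a | e).

from itertools import product

P_A = [0.5, 0.5]
P_B = [[0.8, 0.2], [0.3, 0.7]]                               # P(B | A)
P_C = [[0.6, 0.4], [0.1, 0.9]]                               # P(C | A)
P_D = [[[0.9, 0.1], [0.5, 0.5]], [[0.4, 0.6], [0.2, 0.8]]]   # P(D | B, C)

def joint(a, b, c, d):
    return P_A[a] * P_B[a][b] * P_C[a][c] * P_D[b][c][d]

def query_given_a_and_e(a, d_obs):
    """P(B = 1 | A = a, D = d_obs): stands in for singly connected inference."""
    num = sum(joint(a, 1, c, d_obs) for c in range(2))
    den = sum(joint(a, b, c, d_obs) for b, c in product(range(2), repeat=2))
    return num / den

def cutset_posterior(d_obs):
    """P(B = 1 | D = d_obs) = sum_a P(a | D) * P(B = 1 | a, D)."""
    w = [sum(joint(a, b, c, d_obs) for b, c in product(range(2), repeat=2))
         for a in range(2)]                     # unnormalized P(a, D = d_obs)
    z = sum(w)
    return sum((w[a] / z) * query_given_a_and_e(a, d_obs) for a in range(2))

print(cutset_posterior(d_obs=1))
```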

Inference by Variable Elimination [1]: Intuition
Adapted from slides by S. Russell, UC Berkeley

Inference by Variable Elimination [2]: Factoring Operations
Adapted from slides by S. Russell, UC Berkeley
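
Since the two variable-elimination slides are figures, here is a small self-contained sketch of sum-product variable elimination over discrete factors. The three-node chain and its CPT values are invented for illustration.

```python
# Hedged sketch of sum-product variable elimination.  A factor is a pair
# (variables, table) where table maps full assignments (tuples of 0/1) to values.
# The example network A -> B -> C and its numbers are invented.

from itertools import product

def multiply(f, g):
    fv, ft = f
    gv, gt = g
    vs = list(dict.fromkeys(fv + gv))            # union of scopes, order preserved
    table = {}
    for assign in product([0, 1], repeat=len(vs)):
        env = dict(zip(vs, assign))
        table[assign] = (ft[tuple(env[v] for v in fv)] *
                         gt[tuple(env[v] for v in gv)])
    return (vs, table)

def sum_out(f, var):
    fv, ft = f
    keep = [v for v in fv if v != var]
    table = {}
    for assign, val in ft.items():
        key = tuple(a for v, a in zip(fv, assign) if v != var)
        table[key] = table.get(key, 0.0) + val
    return (keep, table)

def eliminate(factors, order):
    for var in order:                            # eliminate hidden variables in turn
        related = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod = related[0]
        for f in related[1:]:
            prod = multiply(prod, f)
        factors = rest + [sum_out(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

# P(A), P(B | A), P(C | B) with invented values; query: P(C) by eliminating A then B.
fA = (["A"], {(0,): 0.6, (1,): 0.4})
fB = (["A", "B"], {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8})
fC = (["B", "C"], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6})
vars_, table = eliminate([fA, fB, fC], order=["A", "B"])
print(vars_, table)    # marginal P(C); the two values should sum to 1
```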

Genetic Algorithms for Parameter Tuning in Bayesian Network Structure Learning
[Figure: genetic wrapper for change of representation and inductive bias control. A genetic algorithm [1] proposes a candidate representation α; a representation evaluator for learning problems [2] uses the training data D, split into Dtrain (inductive learning) and Dval (inference), together with an inference specification, to compute the representation fitness f(α); the loop returns an optimized representation.]

Using Graphical Models: A Graphical View of Simple (Naïve) Bayes
[Figure: class variable y with children x1, x2, x3, …, xn; edges labeled P(x1 | y), P(x2 | y), P(x3 | y), …, P(xn | y)]
- xi ∈ {0, 1} for each i ∈ {1, 2, …, n}; y ∈ {0, 1}
- Given: P(xi | y) for each i ∈ {1, 2, …, n}, and P(y)
- Assume conditional independence for all i ∈ {1, 2, …, n}:
  P(xi | x≠i, y) ≡ P(xi | x1, x2, …, xi-1, xi+1, xi+2, …, xn, y) = P(xi | y)
  NB: this assumption entails the Naïve Bayes assumption. Why?
- Can compute P(y | x) given this information
- Can also compute the joint pdf over all n + 1 variables
Inference Problem for a (Simple) Bayesian Network
- Use the above model to compute the probability of any conditional event
- Exercise: P(x1, x2, y | x3, x4)

In-Class Exercise: Probabilistic Inference
Inference Problem for a (Simple) Bayesian Network
- Model: Naïve Bayes
- Objective: compute the probability of any conditional event
Exercise
- Given: P(xi | y) for each i ∈ {1, 2, 3, 4}, and P(y)
- Want: P(x1, x2, y | x3, x4)
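
One way to carry out the exercise: under the Naïve Bayes factorization, P(x1, x2, y | x3, x4) = P(y) · P(x1 | y) · P(x2 | y) · P(x3 | y) · P(x4 | y) / Σy' P(y') · P(x3 | y') · P(x4 | y'). The sketch below evaluates that expression; the CPT values are invented, since the exercise leaves them to the reader.

```python
# Hedged sketch of the in-class exercise: P(x1, x2, y | x3, x4) under Naive Bayes.
# The CPT values below are invented; only the factorization matters.

P_y = {0: 0.4, 1: 0.6}
P_x_given_y = {                       # P(x_i = 1 | y), for i = 1..4
    1: {0: 0.2, 1: 0.7},
    2: {0: 0.5, 1: 0.9},
    3: {0: 0.3, 1: 0.6},
    4: {0: 0.8, 1: 0.1},
}

def p_xi(i, xi, y):
    p1 = P_x_given_y[i][y]
    return p1 if xi == 1 else 1.0 - p1

def query(x1, x2, y, x3, x4):
    """P(x1, x2, y | x3, x4) = P(y) prod_i P(xi | y) / sum_y' P(y') P(x3 | y') P(x4 | y')."""
    num = P_y[y] * p_xi(1, x1, y) * p_xi(2, x2, y) * p_xi(3, x3, y) * p_xi(4, x4, y)
    den = sum(P_y[yy] * p_xi(3, x3, yy) * p_xi(4, x4, yy) for yy in (0, 1))
    return num / den

print(query(x1=1, x2=0, y=1, x3=1, x4=0))
```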

Unsupervised Learning and Conditional Independence
[Figure: y with children x1, x2, x3, …, xn; edges labeled P(xi | y)]
Given: (n + 1)-Tuples (x1, x2, …, xn, xn+1)
- No notion of instance variable or label
- After seeing some examples, want to know something about the domain: correlations among variables, probability of certain events, other properties
Want to Learn: Most Likely Model that Generates Observed Data
- In general, a very hard problem
- Under certain assumptions, we have shown that we can do it
Assumption: Causal Markovity
- Conditional independence among "effects", given "cause"
- When is the assumption appropriate? Can it be relaxed?
Structure Learning
- Can we learn more general probability distributions?
- Examples: automatic speech recognition (ASR), natural language, etc.

Tree Dependent Distributions
Polytrees
- aka singly-connected Bayesian networks
- Definition: a Bayesian network with no undirected loops
- Idea: restrict distributions (CPTs) to single nodes
- Theorem: inference in a singly-connected BBN requires linear time (linear in network size, including CPT sizes); much better than for unrestricted (multiply-connected) BBNs
Tree Dependent Distributions
- Further restriction of polytrees: every node has at most one parent
- Now only need to keep 1 prior, P(root), and n − 1 CPTs (1 per node)
- All CPTs are 2-dimensional: P(child | parent)
Independence Assumptions
- As for general BBNs: x is independent of its non-descendants given its (single) parent z
- Very strong assumption (applies in some domains but not most)
[Figure: tree with root, interior node z, and child x]

Inference in Trees
Inference in Tree-Structured BBNs ("Trees")
- Generalization of Naïve Bayes to a model of tree dependent distributions
- Given: tree T with all associated probabilities (CPTs)
- Evaluate: probability of a specified event, P(x)
Inference Procedure for Polytrees
- Recursively traverse the tree: breadth-first, source(s) to sink(s); stop when the query value P(x) is known
- Perform inference at each node
- NB: for trees, proceed root to leaves (e.g., breadth-first or depth-first)
- Simple application of Bayes's rule (more efficient algorithms exist)
[Figure: node X with children Y1, Y2]
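
A small sketch of root-to-leaf evaluation in a tree-structured BBN: the joint probability of a full assignment factors as P(root) · ∏ P(child | parent), and prior marginals can be pushed down the tree in topological order. The tree structure and CPT values below are invented for illustration.

```python
# Hedged sketch of inference in a tree-structured BBN (invented 4-node tree
# x1 -> x2, x1 -> x3, x3 -> x4; all binary).

parent = {"x1": None, "x2": "x1", "x3": "x1", "x4": "x3"}
prior = {"x1": [0.3, 0.7]}                              # P(x1)
cpt = {"x2": [[0.9, 0.1], [0.4, 0.6]],                  # P(x2 | x1)
       "x3": [[0.8, 0.2], [0.5, 0.5]],                  # P(x3 | x1)
       "x4": [[0.7, 0.3], [0.2, 0.8]]}                  # P(x4 | x3)

def joint(assignment):
    """P(x1, ..., xn) = P(root) * prod over children of P(child | parent)."""
    p = prior["x1"][assignment["x1"]]
    for v, par in parent.items():
        if par is not None:
            p *= cpt[v][assignment[par]][assignment[v]]
    return p

def prior_marginals():
    """Push marginals root-to-leaves: P(x) = sum_u P(x | u) P(u)."""
    marg = {"x1": prior["x1"]}
    for v in ["x2", "x3", "x4"]:                        # topological order
        pm = marg[parent[v]]
        marg[v] = [sum(pm[u] * cpt[v][u][x] for u in range(2)) for x in range(2)]
    return marg

print(joint({"x1": 1, "x2": 0, "x3": 1, "x4": 1}))
print(prior_marginals()["x4"])
```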

Learning Tree Distributions: Example
Candidate Models: Tree-Structured BBNs
[Figure: T1 with CPTs P(x1), P(x2 | x1), P(x3 | x1), P(x4 | x1); T2 with CPTs P(x1), P(x2 | x1), P(x3 | x1), P(x4 | x3)]
Learning Problem
- Given: sample D ~ distribution D
- Return: most likely tree T that generated D, i.e., search for the MAP hypothesis
MAP Estimation over BBNs
- Assuming uniform priors on trees, hMAP ≡ hML ≡ TML
- Maximization program: TML = argmaxT P(D | T)
- Try this for Naïve Bayes…

In-Class Exercise: Learning Distributions [1]
Tree-Structured BBN Learning
Input: D ≡ {1011, 1001, 0100} ~ D
[Figure: CPTs for P1, P2, P3; tree T with CPTs P(x1), P(x2 | x1), P(x3 | x1), P(x4 | x1)]
Candidate Models (Trees): h1 ≡ P1, h2 ≡ T(P2), h3 ≡ T(P3)

In-Class Exercise: Learning Tree Distributions [2]
Input Data: D ≡ {1011, 1001, 0100}
Candidate Models: CPTs for Table (T1), T2, T3
Results: Likelihood Estimation
- P(D | T1) = P(1011 | T1) · P(1001 | T1) · P(0100 | T1) = 0.0 · 0.0 · 0.1 = 0.0
- P(D | T2) = P(1011 | T2) · P(1001 | T2) · P(0100 | T2)
  P(1011 | T2) = P(x4 = 1) · P(x1 = 1 | x4 = 1) · P(x2 = 0 | x4 = 1) · P(x3 = 1 | x4 = 1) = 1/2 · 1/2 · 1/3 · 5/6 = 5/72
  P(1001 | T2) = 1/2 · 1/2 · 2/3 · 5/6 = 10/72
  P(0100 | T2) = 1/2 · 1/2 · 2/3 · 5/6 = 10/72
  P(D | T2) = (5/72) · (10/72) · (10/72) = 500 / 373248 ≈ 0.00134
- P(D | T3) = 1/27 ≈ 0.037
- Likelihood(T1) < Likelihood(T2) < Likelihood(T3)
Notes
- Conclusion: of the candidate models, T3 is most likely to have produced D
- Looked at 3 fixed distributions (NB and T, a tree with fixed structure)
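
A sketch of the likelihood calculation P(D | T) for a T2-style model (a star rooted at x4, matching the factorization used above). The conditional probabilities for x4 = 1 reuse the values quoted on the slide (1/2 for x1, 1/3 for x2 = 0, 5/6 for x3 = 1); the x4 = 0 columns are not reproduced in this transcript, so the 0.5 values used for them here are placeholders.

```python
# Hedged sketch of P(D | T) for a tree-structured model (star rooted at x4).
# CPT values for the x4 = 0 case are placeholders; the slide's full CPTs come
# from a figure that is not reproduced in this transcript.

from math import prod

p_root = {1: 0.5, 0: 0.5}                                   # P(x4)
p_cond = {                                                  # P(child = 1 | x4)
    "x1": {0: 0.5, 1: 0.5},
    "x2": {0: 0.5, 1: 2 / 3},
    "x3": {0: 0.5, 1: 5 / 6},
}

def p_sample(sample):
    """sample: dict like {'x1': 1, 'x2': 0, 'x3': 1, 'x4': 1}."""
    p = p_root[sample["x4"]]
    for child in ("x1", "x2", "x3"):
        p1 = p_cond[child][sample["x4"]]
        p *= p1 if sample[child] == 1 else 1.0 - p1
    return p

D = [{"x1": 1, "x2": 0, "x3": 1, "x4": 1},   # 1011
     {"x1": 1, "x2": 0, "x3": 0, "x4": 1},   # 1001
     {"x1": 0, "x2": 1, "x3": 0, "x4": 0}]   # 0100

print(prod(p_sample(s) for s in D))          # P(D | T): product over i.i.d. samples
```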

Learning Distributions: Objectives
Learning The Target Distribution
- What is the target distribution?
- Can't use "the" target distribution
  - Case in point: suppose the target distribution was P1 (collected over 20 examples)
  - Using Naïve Bayes would not produce an h close to the MAP/ML estimate
- Relaxing CI assumptions is expensive
  - MLE becomes intractable; the BOC approximation is highly intractable
  - Instead, should make judicious CI assumptions
- As before, the goal is generalization
  - Given D (e.g., {1011, 1001, 0100})
  - Would like to know P(1111) or P(11**) ≡ P(x1 = 1, x2 = 1)
Several Variants
- Known or unknown structure
- Training examples may have missing values
- Known structure and no missing values: as easy as training Naïve Bayes

Learning Bayesian Networks: Partial Observability
[Figure: network over Storm, BusTourGroup, Lightning, Campfire, ForestFire, Thunder]
Suppose Structure Known, Variables Partially Observable
- Example: can observe ForestFire, Storm, BusTourGroup, Thunder; can't observe Lightning, Campfire
- Similar to training an artificial neural net with hidden units
  - Causes: Storm, BusTourGroup
  - Observable effects: ForestFire, Thunder
  - Intermediate variables: Lightning, Campfire
Learning Algorithm
- Can use gradient learning (as for ANNs)
- Converge to a network h that (locally) maximizes P(D | h)
Analogy: Medical Diagnosis
- Causes: diseases or diagnostic findings
- Intermediates: hidden causes or hypothetical inferences (e.g., heart rate)
- Observables: measurements (e.g., from medical instrumentation)

Learning Bayesian Networks: Gradient Ascent
[Figure: network over Storm, BusTourGroup, Lightning, Campfire, ForestFire, Thunder]
Algorithm Train-BN (D)
- Let wijk denote one entry in the CPT for variable Yi in the network: wijk = P(Yi = yij | parents(Yi) = uik), where uik is the kth list of values of Yi's parents
  e.g., if Yi ≡ Campfire, then uik might be an assignment such as <Storm = T, BusTourGroup = F>
- WHILE termination condition not met DO  // perform gradient ascent
  - Update all CPT entries wijk using training data D: wijk ← wijk + η Σd∈D P(Yi = yij, Ui = uik | d) / wijk
  - Renormalize the wijk to assure invariants: Σj wijk = 1 and 0 ≤ wijk ≤ 1
Applying Train-BN
- Learns CPT values
- Useful in case of known structure
- Next: learning structure from data
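
A minimal sketch of the update-and-renormalize loop above, for a single CPT in isolation. The per-example posterior P(yij, uik | d) is stubbed out with a fully-observed indicator, since computing it in general requires an inference call on the current network; the data and learning-rate values are invented, and the output should only be read as "probability mass shifts toward the frequently observed (y, u) configurations".

```python
# Hedged sketch of the Train-BN inner loop for one CPT, P(Y | U), with binary Y
# and a single binary parent U.  The inference step that computes P(Y=y, U=u | d)
# is stubbed out; a full implementation would call BN inference on the current
# parameter values.

def posterior_y_u_given_d(d, y, u):
    """Placeholder for P(Y = y, U = u | d); here d fully observes Y and U."""
    return 1.0 if (d["Y"], d["U"]) == (y, u) else 0.0

def train_cpt(data, eta=0.01, iters=25):
    w = {(y, u): 0.5 for y in (0, 1) for u in (0, 1)}     # w[(y, u)] = P(Y = y | U = u)
    for _ in range(iters):
        # Gradient step: w_yu <- w_yu + eta * sum_d P(y, u | d) / w_yu
        grads = {k: sum(posterior_y_u_given_d(d, *k) for d in data) / w[k] for k in w}
        for k in w:
            w[k] += eta * grads[k]
        # Renormalize so that sum_y w[(y, u)] = 1 for each parent value u.
        for u in (0, 1):
            z = w[(0, u)] + w[(1, u)]
            w[(0, u)] /= z
            w[(1, u)] /= z
    return w

data = [{"Y": 1, "U": 1}] * 8 + [{"Y": 0, "U": 1}] * 2 + \
       [{"Y": 1, "U": 0}] * 3 + [{"Y": 0, "U": 0}] * 7
print(train_cpt(data))   # mass shifts toward the frequently observed (y, u) pairs
```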

Tree Dependent Distributions: Learning The Structure
Problem Definition: Find Most Likely T Given D
Brute Force Algorithm
- FOR each tree T DO: compute the likelihood of T, P(D | T) = ∏x∈D P(x | T)
- RETURN the maximal T
Is This Practical?
- Typically not… (|H| analogous to that of ANN weight space)
- What can we do about it?
Solution Approaches
- Use a criterion (scoring function): Kullback-Leibler (K-L) distance, D(P || P') = Σx P(x) log [P(x) / P'(x)]
- Measures how well a distribution P approximates a distribution P'
- aka K-L divergence, aka cross-entropy, aka relative entropy
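
The scoring criterion named above is easy to compute directly when both distributions are given as tables. A short sketch, with invented example distributions:

```python
# Hedged sketch: Kullback-Leibler distance between two distributions over the
# same finite set of outcomes (each given as a dict: outcome -> probability).

from math import log

def kl_divergence(p, q):
    """D(P || Q) = sum_x P(x) * log(P(x) / Q(x)), skipping outcomes with P(x) = 0."""
    return sum(px * log(px / q[x]) for x, px in p.items() if px > 0)

p = {"00": 0.4, "01": 0.1, "10": 0.1, "11": 0.4}
q = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
print(kl_divergence(p, q))   # how badly the uniform q approximates p (in nats)
```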

Tree Dependent Distributions: Maximum Weight Spanning Tree (MWST)
Input: m measurements (n-tuples), i.i.d. ~ P
Algorithm Learn-Tree-Structure (D)
- FOR each variable X DO: estimate P(x)  // binary variables: n numbers
- FOR each pair (X, Y) DO: estimate P(x, y)  // binary variables: n² numbers
- FOR each pair DO: compute the mutual information I(X; Y) (measuring the information X gives about Y) with respect to this empirical distribution
- Build a complete undirected graph with all the variables as vertices
- Let I(X; Y) be the weight of edge (X, Y)
- Build a Maximum Weight Spanning Tree (MWST)
- Transform the resulting undirected tree into a directed tree (choose a root, and set the direction of all edges away from it)
- Place the corresponding CPTs on the edges (gradient learning)
- RETURN: a tree-structured BBN with CPT values
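
A compact sketch of Learn-Tree-Structure: estimate singleton and pairwise marginals from the data, compute empirical mutual information for every pair, and grow a maximum weight spanning tree greedily (Kruskal-style edge selection on the small complete graph). CPT fitting and the final edge orientation are left out; the sample dataset is the one used on the next slide, and ties in mutual information mean more than one valid tree exists.

```python
# Hedged sketch of Learn-Tree-Structure (Chow-Liu style): empirical mutual
# information as edge weights, then a greedy maximum weight spanning tree.
# Dataset: the three 4-bit samples used in the example slide.

from itertools import combinations
from math import log

D = ["1011", "1001", "0100"]
n = len(D[0])
m = len(D)

def p1(i, v):                       # P(x_i = v)
    return sum(1 for s in D if int(s[i]) == v) / m

def p2(i, j, vi, vj):               # P(x_i = vi, x_j = vj)
    return sum(1 for s in D if int(s[i]) == vi and int(s[j]) == vj) / m

def mutual_info(i, j):
    total = 0.0
    for vi in (0, 1):
        for vj in (0, 1):
            pij = p2(i, j, vi, vj)
            if pij > 0:
                total += pij * log(pij / (p1(i, vi) * p1(j, vj)))
    return total

# Kruskal-style greedy MWST on the complete graph over the n variables.
edges = sorted(combinations(range(n), 2), key=lambda e: -mutual_info(*e))
component = list(range(n))          # tiny union-find by relabeling
tree = []
for i, j in edges:
    if component[i] != component[j]:             # adding (i, j) creates no cycle
        tree.append((i, j))
        old, new = component[j], component[i]
        component = [new if c == old else c for c in component]

print([(f"x{i+1}", f"x{j+1}") for i, j in tree])
# Direct the undirected tree by picking any root and orienting edges away from it.
```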

Tree Dependent Distributions: Example
Input: D ≡ {1011, 1001, 0100}
Estimation of Model Parameters
- Singletons: P(x1 = 1) = 2/3, P(x2 = 1) = 1/3, P(x3 = 1) = 1/3, P(x4 = 1) = 2/3
- Pairwise joints, listed in the order (0,0), (0,1), (1,0), (1,1), with the ratio P(xi, xj) / [P(xi) P(xj)] alongside:
  P(x1, x2) = 0, 1/3, 2/3, 0; ratios = 0, 3, 3/2, 0
  P(x1, x3) = 1/3, 0, 1/3, 1/3; ratios = 3/2, 0, 3/4, 3/2
  P(x1, x4) = 1/3, 0, 0, 2/3; ratios = 3, 0, 0, 3/2
  P(x2, x3) = 1/3, 1/3, 1/3, 0; ratios = 3/4, 3/2, 3/2, 0
  P(x2, x4) = 0, 2/3, 1/3, 0; ratios = 0, 3, 3/2, 0
  P(x3, x4) = 1/3, 1/3, 0, 1/3; ratios = 3/2, 3/4, 0, 3/2
- Use these estimates as input to MWST and gradient learning
MWST Algorithms
- Kruskal's algorithm, Prim's algorithm (quick sketch next time)
- Complexity: Θ(n² lg n)
- See [Cormen, Leiserson, and Rivest, 1990]

Applications of Bayesian Networks
Inference: Decision Support Problems
- Diagnosis
  - Medical [Heckerman, 1991]
  - Equipment failure
- Pattern recognition
  - Image identification: faces, gestures
  - Automatic speech recognition
  - Multimodal: speechreading, emotions
- Prediction: more applications later…
- Simulation-based training [Grois, Hsu, Wilkins, and Voloshin, 1998]
- Control automation
  - Navigation with a mobile robot
  - Battlefield reasoning [Mengshoel, Goldberg, and Wilkins, 1998]
Learning: Acquiring Models for Inferential Applications
[Figure: damage-assessment network with nodes Missile Fore, Missile Mid, Missile Aft, Fore Port, Fore Starboard, Mid Port, Mid Starboard, Aft Port, Aft Starboard, Chill Water Rupture, Electric 1, Generator, Electric 2, AC 1, AC 2, Radar, Overheat, Pump, Chill Water Pressure, Radar Down, Structural Damage]

Related Work in Bayesian Networks
BBN Variants, Issues Not Covered Yet
- Temporal models
  - Markov Decision Processes (MDPs)
  - Partially Observable Markov Decision Processes (POMDPs)
  - Useful in reinforcement learning
- Influence diagrams
  - Decision-theoretic model
  - Augments a BBN with utility values and decision nodes
- Unsupervised learning (EM, AutoClass)
- Feature (subset) selection: finding relevant attributes
Current Research Topics Not Addressed in This Course
- Hidden variables (introduction of new variables not observed in data)
- Incremental BBN learning: modifying network structure online ("on the fly")
- Structure learning for stochastic processes
- Noisy-OR Bayesian networks: another simplifying restriction

Tools for Building Graphical Models
Commercial Tools: Ergo, Netica, TETRAD, Hugin
Bayes Net Toolbox (BNT) – Murphy (1997-present)
- Distribution page
- Development group
Bayesian Network tools in Java (BNJ) – Hsu et al. (1999-present)
- Distribution page
- Development group
- Current (re)implementation projects for KSU KDD Lab
  - Continuous state: Minka (2002) – Hsu, Guo, Li
  - Formats: XML BNIF (MSBN), Netica – Barber, Guo
  - Space-efficient DBN inference – Meyer
  - Bounded cutset conditioning – Chandak

References: Graphical Models and Inference Algorithms
Graphical Models
- Bayesian (Belief) Networks tutorial – Murphy (2001)
- Learning Bayesian Networks – Heckerman (1996, 1999)
Inference Algorithms
- Junction Tree (Join Tree, L-S, Hugin): Lauritzen & Spiegelhalter (1988)
- (Bounded) Loop Cutset Conditioning: Horvitz & Cooper (1989)
- Variable Elimination (Bucket Elimination, ElimBel): Dechter (1996)
- Recommended Books
  - Neapolitan (1990) – out of print; see Pearl (1988), Jensen (2001)
  - Castillo, Gutierrez, Hadi (1997)
  - Cowell, Dawid, Lauritzen, Spiegelhalter (1999)
- Stochastic Approximation

Terminology
Introduction to Reasoning under Uncertainty
- Probability foundations
- Definitions: subjectivist, frequentist, logicist
- (3) Kolmogorov axioms
Bayes's Theorem
- Prior probability of an event
- Joint probability of an event
- Conditional (posterior) probability of an event
Maximum A Posteriori (MAP) and Maximum Likelihood (ML) Hypotheses
- MAP hypothesis: highest conditional probability given observations (data)
- ML hypothesis: highest likelihood of generating the observed data
- ML estimation (MLE): estimating parameters to find the ML hypothesis
Bayesian Inference: Computing Conditional Probabilities (CPs) in a Model
Bayesian Learning: Searching Model (Hypothesis) Space using CPs

Summary Points
Introduction to Probabilistic Reasoning
- Framework: using probabilistic criteria to search H
- Probability foundations
- Definitions: subjectivist, objectivist; Bayesian, frequentist, logicist
- Kolmogorov axioms
Bayes's Theorem
- Definition of conditional (posterior) probability
- Product rule
Maximum A Posteriori (MAP) and Maximum Likelihood (ML) Hypotheses
- Bayes's Rule and MAP
- Uniform priors: allow use of MLE to generate MAP hypotheses
- Relation to version spaces, candidate elimination
Next Week: Chapter 14, Russell and Norvig
- Later: Bayesian learning: MDL, BOC, Gibbs, Simple (Naïve) Bayes
- Categorizing text and documents, other applications