1 Introduction to Bayesian Networks. Instructor: Dan Geiger. Web pages: www.cs.technion.ac.il/~dang/courseBN, http://webcourse.cs.technion.ac.il/236372. Email: dang@cs.technion.ac.il. Phone: 829 4339. Office: Taub 616. What is all the fuss about? A happy marriage between probability theory and graph theory. Its successful children: fault-diagnosis algorithms, error-correcting codes, and models of complex systems. Applications in a wide range of fields. A rather personal view of a topic I have researched for many years.

2 What is it all about? How to use graphs to represent probability distributions over thousands of random variables? How to encode conditional independence in directed and undirected graphs? How to use such representations for efficient computation of the probabilities of events of interest? How to learn such models from data?

3 Course Information
Meetings:
- Lecture: Mondays 8:30-10:30
- Tutorial: Mondays 12:30-13:30
Grade:
- 50%: five question sets. These question sets are obligatory; each contains mostly theoretical problems. Submit in pairs before the due time (three weeks).
- 40%: midterm exam (bochan) on January 7. (Maybe 20%-20% with a lecture.)
- 10%: attending at least 12 lectures and recitation classes, for a passing grade. (In very special circumstances, 2% per missing item.)
Prerequisites:
- Data Structures 1 (cs234218)
- Algorithms 1 (cs234247)
- Probability (any course)
Information and handouts:
- http://webcourse.cs.technion.ac.il/236372
- http://www.cs.technion.ac.il/~dang/courseBN/ (lecture slides only)

4 Relations to Some Other Courses
- Introduction to Artificial Intelligence (cs236501)
- Introduction to Machine Learning (cs236756)
- Introduction to Neural Networks (cs236950)
- Algorithms in Computational Biology (cs236522)
- Error-correcting codes
- Data mining
Tell me who your friends are and I will tell you who you are.

5 [Schedule slide; the table itself did not extract. Legend markers: student lectures (8); TENTATIVE student lectures (7).]

6 Mathematical Foundations (4 weeks, including students' lectures; based on Pearl's Chapter 3 plus papers).
1. Properties of conditional independence (soundness and completeness of marginal independence, graphoid axioms and their interpretation as "irrelevance", incompleteness of conditional independence, no disjunctive axioms possible).
2. Properties of graph separation (Paz and Pearl 85, Theorem 3); soundness and completeness of saturated independence statements. Undirected graphs as I-maps of probability distributions. Markov blankets, pairwise-independence basis. Representation theorems (Pearl and Paz, from each basis to I-maps). Markov networks, HC representation theorem, completeness theorem. Markov chains.
3. Bayesian networks, d-separation, soundness, completeness.
4. Chordal graphs as the intersection of Bayesian networks and Markov networks. Equivalence of their four definitions.
Combinatorial Optimization of Exact Inference in Graphical Models (3 weeks, including students' lectures).
1. HMMs.
2. Exact inference and its combinatorial optimization.
3. Clique tree algorithm. Conditioning.
4. Tree-width. Feedback vertex set.
Learning (5 weeks, including students' lectures).
1. Introduction to Bayesian statistics.
2. Learning Bayesian networks.
3. Chow and Liu's algorithm; the TAN model.
4. Structural EM.
5. Searching for Bayesian networks.
Applications (2 weeks, including student lectures).

7 Homeworks
HMW #1. Read Chapters 3.1 and 3.2.1. Answer Questions 3.1 and 3.2, prove Eq. 3.5b, and fully expand/fix the proof of Theorem 2. Submit in pairs no later than 5/11/12 (two weeks from now).
HMW #2. Read Chapter 3 fully. Answer an additional 5 questions of your choice at the end. Submit in pairs no later than 19/11/12.
HMW #3. Read Chapter 2 in Pearl's book and answer 6 questions of your choice at the end. Submit in pairs no later than 3/12/12.
HMW #4. Submit in pairs by 24/12/12.
HMW #5. Submit in pairs by 14/1/13.
Pearl's book contains all the notation that I happen not to define in these slides; consult it often. It is also a unique and interesting classic textbook.

8 The Traditional View of Probability in Textbooks
Probability theory, as presented in textbooks, gives the impression that we must literally represent a joint distribution P(x1,…,xn) explicitly, over all propositions and their combinations. Such a representation is consistent and exhaustive, but it stands in sharp contrast to human reasoning: it requires exponentially many operations to compute marginals such as P(x1) or conditionals such as P(x1 | x2), whereas humans judge pairwise conditionals swiftly while judging conjunctions hesitantly. Numerical calculations of this kind do not reflect simple reasoning tasks.
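To make the cost concrete, here is a minimal sketch (my illustration, not from the slides) of what the explicit-table view entails: the joint over n binary variables has 2**n entries, and even a single marginal is obtained only by summing over all of them.

```python
import itertools

# Hypothetical illustration: an explicit joint distribution over n binary
# variables, stored as a dictionary with 2**n entries, and a brute-force
# marginal P(X1 = 1) obtained by summing out every other variable.

def marginal_x1(joint):
    """P(X1 = 1), summing the full table; runtime and storage are O(2**n)."""
    return sum(p for assignment, p in joint.items() if assignment[0] == 1)

n = 3  # with n = 30 the table alone would need about 10**9 entries
joint = {a: 0.5 ** n for a in itertools.product([0, 1], repeat=n)}  # 3 fair coins
print(marginal_x1(joint))  # 0.5
```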

9 The Traditional View of Probability in Textbooks
[Diagram not captured: the slide marks which quantities are given, which are estimated, and which are computed, e.g. P(e | h).]
Think of h as a rare viral disease and e as a high fever.
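Presumably the slide contrasts the roles of the terms in Bayes' rule; a minimal rendering of that relation under the disease/fever reading above (my addition, not on the slide):

```latex
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)},
\qquad h = \text{rare viral disease},\quad e = \text{high fever}.
```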

10 The Qualitative Notion of Dependence
Marginal independence is defined numerically as P(x,y) = P(x) P(y). The truth of this equation is hard for humans to judge, while judging whether X and Y are dependent is often easy: "burglary within a day" and "nuclear war within five years" are immediately judged independent. Likewise, people tend to judge... [the rest of this slide, including a diagram of nodes X, Y, Z, did not extract].

11 The notions of relevance and dependence are far more basic than numerical values. In a reasoning system, a dependency should be asserted once and should not be sensitive to numerical changes. Acquiring new facts may destroy existing dependencies as well as create new ones. Learning a child's age Z destroys the dependency between height X and reading ability Y; learning the symptoms Z of a patient creates dependencies between the diseases X and Y that could account for them. Probability theory provides, in principle, such a device via P(X | Y, K) = P(X | K). But can we model the dynamics of dependency changes based on logic alone, without reference to numerical quantities?

12 Definition of Marginal Independence
Definition: I_P(X,Y) iff for all x in D_X and y in D_Y, Pr(X=x, Y=y) = Pr(X=x) Pr(Y=y).
Comments:
- Each variable X has a domain D_X, with value (or state) x in D_X.
- We often abbreviate via P(x, y) = P(x) P(y).
- When Y is the empty set, we get the trivial statement Pr(X=x) = Pr(X=x).
- Alternative notations to I_P(X,Y) include I(X,Y) or X ⊥ Y.
- The next few slides on properties of marginal independence are based on "Axioms and algorithms for inferences involving probabilistic independence."
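A minimal sketch of this definition in code (an illustration I added, not course material): check I_P(X,Y) directly from an explicit joint table by comparing P(x,y) with P(x) P(y) over both domains.

```python
import itertools

def marginally_independent(joint, dom_x, dom_y, tol=1e-12):
    """True iff P(x, y) = P(x) P(y) for all x in dom_x, y in dom_y."""
    px = {x: sum(joint[(x, y)] for y in dom_y) for x in dom_x}
    py = {y: sum(joint[(x, y)] for x in dom_x) for y in dom_y}
    return all(abs(joint[(x, y)] - px[x] * py[y]) <= tol
               for x, y in itertools.product(dom_x, dom_y))

# Two independent fair coins: I_P(X, Y) holds.
fair = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(marginally_independent(fair, (0, 1), (0, 1)))   # True
# A perfectly correlated pair: I_P(X, Y) fails.
copy = {(x, y): (0.5 if x == y else 0.0) for x in (0, 1) for y in (0, 1)}
print(marginally_independent(copy, (0, 1), (0, 1)))   # False
```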

13 Properties of Marginal Independence
Trivial independence: I_p(X, ∅).
Symmetry: I_p(X,Y) ⇒ I_p(Y,X).
Decomposition: I_p(X,YW) ⇒ I_p(X,Y).
Mixing: I_p(X,Y) and I_p(XY,W) ⇒ I_p(X,YW).
Proof (soundness). Trivial independence and symmetry follow directly from the definition. Decomposition: given P(x,y,w) = P(x) P(y,w), simply sum over w on both sides of the equation. Mixing: given P(x,y) = P(x) P(y) and P(x,y,w) = P(x,y) P(w), hence P(x,y,w) = P(x) P(y) P(w) = P(x) P(y,w).
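The Mixing step uses, without saying so, that the assumptions also give P(y,w) = P(y) P(w); written out in full (my rendering of the slide's prose proof):

```latex
\begin{aligned}
P(x,y,w) &= P(x,y)\,P(w) = P(x)\,P(y)\,P(w)
   && \text{(the two given equations)}\\
P(y,w)   &= \textstyle\sum_x P(x,y,w) = P(y)\,P(w)
   && \text{(sum the line above over } x\text{)}\\
P(x,y,w) &= P(x)\,P(y)\,P(w) = P(x)\,P(y,w)
   && \text{i.e.\ } I_p(X,\,YW).
\end{aligned}
```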

14 Properties of Marginal Independence
Are there more such independent properties of independence? No, there are none. Horn axioms are of the form σ1 & … & σn ⇒ σ, where each statement σ stands for an independence statement, and Σ denotes a set of independence statements. Namely: σ is derivable from Σ via these properties if and only if σ is entailed by Σ (i.e., σ holds in all probability distributions that satisfy Σ). Put differently: for every set Σ and a statement σ not derivable from Σ, there exists a probability distribution P_σ that satisfies Σ and not σ.

15 Properties of Marginal Independence
Can we use these properties to infer a new independence statement σ from a set Σ of given independence statements in polynomial time? YES: the "membership algorithm" and its completeness proof are covered in the recitation class (paper P2).
Comment. The question "does Σ entail σ?" could in principle be undecidable; it drops to being decidable via a complete set of axioms, and then drops to polynomial time with this claim.

16 Properties of Marginal Independence
Can we check the consistency of a set Σ+ of independence statements plus a set Σ- of negated independence statements? The membership algorithm on the previous slide applies only when Σ- contains a single negated statement: simply use the algorithm to check that it is not entailed by Σ+. But another property of independence, called an "Armstrong relation", guarantees that consistency can indeed be verified by checking separately (in isolation) that each statement in Σ- is not entailed by Σ+.

17 Definitions of Conditional Independence
I_p(X,Z,Y) if and only if, whenever P(Y=y, Z=z) > 0,
Pr(X=x | Y=y, Z=z) = Pr(X=x | Z=z).     (3.1)
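A minimal sketch of definition (3.1) in code (my illustration, not course material): verify I_P(X,Z,Y) from an explicit joint table by comparing P(x | y, z) with P(x | z) wherever P(y, z) > 0.

```python
import itertools

def conditionally_independent(joint, dom_x, dom_y, dom_z, tol=1e-12):
    """True iff P(x | y, z) = P(x | z) whenever P(y, z) > 0 (definition 3.1)."""
    p_yz = {(y, z): sum(joint[(x, y, z)] for x in dom_x)
            for y in dom_y for z in dom_z}
    p_z = {z: sum(p_yz[(y, z)] for y in dom_y) for z in dom_z}
    p_xz = {(x, z): sum(joint[(x, y, z)] for y in dom_y)
            for x in dom_x for z in dom_z}
    for x, y, z in itertools.product(dom_x, dom_y, dom_z):
        if p_yz[(y, z)] > 0:
            if abs(joint[(x, y, z)] / p_yz[(y, z)] - p_xz[(x, z)] / p_z[z]) > tol:
                return False
    return True

# X and Y are independent given Z: both are noisy copies of a fair coin Z.
joint = {(x, y, z): 0.5 * (0.9 if x == z else 0.1) * (0.9 if y == z else 0.1)
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(conditionally_independent(joint, (0, 1), (0, 1), (0, 1)))  # True
```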

18 Properties of Conditional Independence
The same properties hold for conditional independence:
Symmetry: I(X,Z,Y) ⇒ I(Y,Z,X).
Decomposition: I(X,Z,YW) ⇒ I(X,Z,Y).
Mixing: I(X,Z,Y) and I(XY,Z,W) ⇒ I(X,Z,YW).
BAD NEWS. Are there more properties of independence? Yes, infinitely many independent Horn axioms; there is no answer to the membership problem, nor to the consistency problem.

19 Important Properties of Conditional Independence
[The slide's list of properties did not extract.] Recall that there are some additional notations.
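The properties this slide most likely refers to are the standard graphoid axioms from Pearl's Chapter 3; my restatement below (the slide's exact list did not extract and may differ):

```latex
\begin{aligned}
&\text{Symmetry:}      && I(X,Z,Y) \;\Leftrightarrow\; I(Y,Z,X)\\
&\text{Decomposition:} && I(X,Z,YW) \;\Rightarrow\; I(X,Z,Y)\\
&\text{Weak union:}    && I(X,Z,YW) \;\Rightarrow\; I(X,ZW,Y)\\
&\text{Contraction:}   && I(X,Z,Y) \,\wedge\, I(X,ZY,W) \;\Rightarrow\; I(X,Z,YW)\\
&\text{Intersection:}  && I(X,ZW,Y) \,\wedge\, I(X,ZY,W) \;\Rightarrow\; I(X,Z,YW)
                       \quad\text{(for strictly positive } P\text{)}
\end{aligned}
```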

20 Graphical Interpretation

21 Use Graphs, Not Pure Logic
Variables are represented by nodes and dependencies by edges. This is common in our language: "threads of thought", "lines of reasoning", "connected ideas", "far-fetched arguments". Still, capturing the essence of dependence is not an easy task. When modeling causation, association, and relevance, it is hard to distinguish between direct and indirect neighbors. If we simply connect all "dependent variables", we get cliques.

22 Undirected Graphs Can Represent Independence
Let G = (V,E) be an undirected graph. Define I_G(X,Z,Y), for disjoint sets of nodes X, Y, and Z, if and only if every path between a node in X and a node in Y passes through a node in Z. In the textbook the notation <X | Z | Y>_G is also used.
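A minimal sketch (my illustration, not course code) of deciding I_G(X,Z,Y): delete the nodes of Z and check whether some node of X can still reach some node of Y.

```python
from collections import deque

def separated(graph, X, Z, Y):
    """I_G(X, Z, Y): every X-Y path passes through Z. graph maps node -> set of neighbours."""
    blocked = set(Z)
    seen = set(X) - blocked
    queue = deque(seen)
    while queue:
        node = queue.popleft()
        if node in Y:
            return False            # found an X-Y path avoiding Z
        for nbr in graph[node]:
            if nbr not in blocked and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return True                     # every X-Y path is blocked by Z

# Chain x - z - y: z separates x from y; the empty set does not.
graph = {'x': {'z'}, 'z': {'x', 'y'}, 'y': {'z'}}
print(separated(graph, {'x'}, {'z'}, {'y'}))  # True
print(separated(graph, {'x'}, set(), {'y'}))  # False
```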

23 M = { I_G(M1, {F1,F2}, M2), I_G(F1, {M1,M2}, F2) } + symmetry.
Other semantics: the color of each pixel depends on its neighbors.

24 Dependency Models – an abstraction of probability distributions
A dependency model M over a finite set of elements U is a rule that assigns truth values to the predicate I_M(X,Z,Y), where X, Y, Z are disjoint subsets of U.

25 Definitions:
1. G = (U,E) is an I-map of a model M over U if I_G(X,Z,Y) ⇒ I_M(X,Z,Y) for all disjoint subsets X, Y, Z of U.
2. G is a D-map of M if I_M(X,Z,Y) ⇒ I_G(X,Z,Y) for all disjoint subsets X, Y, Z of U.
3. G is a perfect map of M if I_G(X,Z,Y) ⇔ I_M(X,Z,Y) for all disjoint subsets X, Y, Z of U.
4. M is graph-isomorph if there exists a graph G such that G is a perfect map of M.
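A brute-force sketch of Definition 1 for a small U (my illustration; sep and indep are hypothetical caller-supplied oracles for I_G and I_M): G is an I-map of M iff every separation reported by G is an independence in M. The loop is exponential in |U|, so this only illustrates the direction of the implication.

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets (including the empty set)."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_imap(U, sep, indep):
    """Check I_G(X,Z,Y) => I_M(X,Z,Y) over all disjoint X, Y, Z (X, Y non-empty)."""
    U = frozenset(U)
    for X in subsets(U):
        if not X:
            continue
        for Y in subsets(U - X):
            if not Y:
                continue
            for Z in subsets(U - X - Y):        # Z may be empty
                if sep(X, Z, Y) and not indep(X, Z, Y):
                    return False
    return True
```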

26 [figure-only slide; content not captured]

27 Representation of independencies with undirected graphs cannot always be perfect
Strong union: I_G(X,Z,Y) ⇒ I_G(X,ZW,Y).
If G is an I-map of P, it can represent I_P(X,Z,Y) but cannot represent the negation ¬I_P(X,ZW,Y). If G is a D-map of P, it can represent ¬I_P(X,ZW,Y) but cannot represent I_P(X,Z,Y). The strong-union property holds for graph separation but not for conditional independence in probability.
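A concrete instance of the failure (my example, not from the slide): two independent fair coins X, Y with W = X xor Y satisfy I_P(X, {}, Y) but not I_P(X, {W}, Y); any undirected I-map would have to separate X from Y by the empty set and hence, by strong union, also by {W}, so no undirected graph is a perfect map of this distribution.

```python
import itertools

# Joint over (x, y, w) with X, Y independent fair coins and w = x xor y.
joint = {k: 0.0 for k in itertools.product((0, 1), repeat=3)}
for x, y in itertools.product((0, 1), repeat=2):
    joint[(x, y, x ^ y)] = 0.25

def p(**fixed):
    """Marginal probability of the given partial assignment (keys: x, y, w)."""
    idx = {'x': 0, 'y': 1, 'w': 2}
    return sum(v for k, v in joint.items()
               if all(k[idx[name]] == val for name, val in fixed.items()))

# I_P(X, {}, Y): P(x, y) = P(x) P(y) for all x, y.
print(all(abs(p(x=x, y=y) - p(x=x) * p(y=y)) < 1e-12
          for x in (0, 1) for y in (0, 1)))                      # True
# I_P(X, {W}, Y) fails: given w = 0, P(x=1, y=1 | w=0) != P(x=1 | w=0) P(y=1 | w=0).
lhs = p(x=1, y=1, w=0) / p(w=0)
rhs = (p(x=1, w=0) / p(w=0)) * (p(y=1, w=0) / p(w=0))
print(abs(lhs - rhs) < 1e-12)                                    # False
```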

