From Variable Elimination to Junction Trees


1 From Variable Elimination to Junction Trees
Yaniv Hamo and Mark Silberstein

2 Variable Elimination – what it is and why we need it
The course example is a three-node chain R → S → P:
R (Reference):          P(exists) = 0.1,  P(not exists) = 0.9
S (Submit HW | R):      P(yes | exists) = 0.8,  P(yes | not exists) = 0.4
                        P(no | exists) = 0.2,   P(no | not exists) = 0.6
P (Pass course | S):    P(pass | yes) = 0.9,  P(pass | no) = 0.5
                        P(fail | yes) = 0.1,  P(fail | no) = 0.5
Variable elimination is needed for answering questions such as "so, do I pass this course or not?"

3 So, do I pass this course or not?
We want to compute P(p). By definition: P(p) = Σr Σs P(r, s, p) = Σr Σs P(r) P(s|r) P(p|s). In our case (chain): P(p) = Σr P(r) Σs P(s|r) P(p|s) = 0.1*(0.8*0.9 + 0.2*0.5) + 0.9*(0.4*0.9 + 0.6*0.5) = 0.676. We essentially eliminated nodes R and S.
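To make the arithmetic concrete, here is a minimal Python sketch of the same chain computation (the dictionary names are ours, not from the slides):

# Chain R -> S -> P from the slide: eliminate R, then S, to obtain P(pass).
P_R = {"exists": 0.1, "not exists": 0.9}
P_S_given_R = {("yes", "exists"): 0.8, ("yes", "not exists"): 0.4,
               ("no", "exists"): 0.2, ("no", "not exists"): 0.6}
P_P_given_S = {("pass", "yes"): 0.9, ("pass", "no"): 0.5,
               ("fail", "yes"): 0.1, ("fail", "no"): 0.5}

# Eliminate R: f_R(s) = sum_r P(r) * P(s | r)
f_R = {s: sum(P_R[r] * P_S_given_R[(s, r)] for r in P_R) for s in ("yes", "no")}

# Eliminate S: P(p) = sum_s f_R(s) * P(p | s)
P_pass = sum(f_R[s] * P_P_given_S[("pass", s)] for s in ("yes", "no"))
print(P_pass)  # ~ 0.676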

4 The General Case – Inference
The network describes a unique probability distribution P. We use inference as a name for the process of computing answers to queries about P. There are many types of queries we might ask, and most of them involve evidence. Evidence e is an assignment of values to a set E of variables in the domain; without loss of generality E = { Xk+1, …, Xn }. The simplest query is to compute the probability of the evidence, P(e) = Σx1…xk P(x1, …, xk, e). This is often referred to as computing the likelihood of the evidence.
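As a toy illustration of a likelihood query (our own, reusing the course chain from slide 2): the likelihood of the evidence e = {S = yes, Pass = pass} is obtained by summing out the unobserved R.

# P(e) = sum_r P(r) * P(S = yes | r) * P(pass | S = yes)
P_R = {"exists": 0.1, "not exists": 0.9}
P_S_yes_given_R = {"exists": 0.8, "not exists": 0.4}
P_pass_given_S_yes = 0.9

likelihood = sum(P_R[r] * P_S_yes_given_R[r] for r in P_R) * P_pass_given_S_yes
print(likelihood)  # (0.1*0.8 + 0.9*0.4) * 0.9 ~ 0.396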

5 Another example of Variable Elimination
The “Asia” network has eight nodes: Visit to Asia (V), Smoking (S), Tuberculosis (T), Lung Cancer (L), Bronchitis (B), Abnormality in Chest (A), X-Ray (X) and Dyspnea (D).

6 We are interested in P(d) - Need to eliminate: v,s,x,t,l,a,b
Initial factors (one CPT per node): P(v), P(s), P(t|v), P(l|s), P(b|s), P(a|t,l), P(x|a), P(d|a,b). Brute force: P(d) = Σv Σs Σx Σt Σl Σa Σb P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b), a sum over all joint assignments to v, s, x, t, l, a, b.

7 Eliminate variables in order:
Elimination order: v, s, x, t, l, a, b. Initial factors: P(v), P(s), P(t|v), P(l|s), P(b|s), P(a|t,l), P(x|a), P(d|a,b). Eliminating v gives fv(t) = Σv P(v) P(t|v); the factors become P(s), P(l|s), P(b|s), fv(t), P(a|t,l), P(x|a), P(d|a,b). [Note: fv(t) = P(t). In general, the result of elimination is not necessarily a probability term.]

8 Eliminate variables in order:
Current factors: P(s), P(l|s), P(b|s), fv(t), P(a|t,l), P(x|a), P(d|a,b). Eliminating s gives fs(b,l) = Σs P(s) P(b|s) P(l|s); the factors become fv(t), fs(b,l), P(a|t,l), P(x|a), P(d|a,b). [Note: the result of elimination may be a function of several variables.]

9 Eliminate variables in order:
Current factors: fv(t), fs(b,l), P(a|t,l), P(x|a), P(d|a,b). Eliminating x gives fx(a) = Σx P(x|a); the factors become fv(t), fs(b,l), fx(a), P(a|t,l), P(d|a,b). [Note: fx(a) = 1 for all values of a.]

10 Eliminate variables in order:
Current factors: fv(t), fs(b,l), fx(a), P(a|t,l), P(d|a,b). Eliminating t gives ft(a,l) = Σt fv(t) P(a|t,l); the factors become fs(b,l), fx(a), ft(a,l), P(d|a,b).

11 Eliminate variables in order:
Current factors: fs(b,l), fx(a), ft(a,l), P(d|a,b). Eliminating l gives fl(a,b) = Σl fs(b,l) ft(a,l); the factors become fx(a), fl(a,b), P(d|a,b).

12 Eliminate variables in order:
Current factors: fx(a), fl(a,b), P(d|a,b). Eliminating a gives fa(b,d) = Σa fx(a) fl(a,b) P(d|a,b); the only remaining factor is fa(b,d).

13 Eliminate variables in order:
Current factors: fa(b,d). Eliminating b gives fb(d) = Σb fa(b,d), which is the desired P(d).

14 Intermediate factors
In our previous example (order v, s, x, t, l, a, b) the intermediate factors are fv(t), fs(b,l), fx(a), ft(a,l), fl(a,b), fa(b,d), fb(d); none of them involves more than two variables. With a different ordering, for instance one that eliminates a first, we would create fa(t,l,x,b,d) = Σa P(a|t,l) P(x|a) P(d|a,b), a factor over five variables. Complexity is exponential in the size of these factors!
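The effect of the ordering is easy to see if we track only the scopes of the intermediate factors. A small Python sketch (ours, not from the slides; the second ordering is just one hypothetical bad choice):

# Track the scopes of intermediate factors during elimination (sizes only, no numbers).
FACTORS = [{"v"}, {"s"}, {"t", "v"}, {"l", "s"}, {"b", "s"},
           {"a", "t", "l"}, {"x", "a"}, {"d", "a", "b"}]   # Asia network factors from slide 6

def intermediate_scopes(order, factors):
    factors = [set(f) for f in factors]
    scopes = []
    for var in order:
        touched = [f for f in factors if var in f]          # factors mentioning var
        new_scope = set().union(*touched) - {var}           # scope of the new factor
        scopes.append(new_scope)
        factors = [f for f in factors if var not in f] + [new_scope]
    return scopes

print([len(s) for s in intermediate_scopes("vsxtlab", FACTORS)])  # [1, 2, 1, 2, 2, 2, 1]
print([len(s) for s in intermediate_scopes("abxtlsv", FACTORS)])  # [5, 5, 4, 4, 3, 2, 1]  (hypothetical bad ordering)

The first ordering never creates a factor over more than two variables; the hypothetical bad one immediately creates a five-variable factor, whose table is exponentially larger.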

15 Notes about variable elimination
The actual computation is done in the elimination steps. The computation depends on the order of elimination. For each query we need to compute everything again: many redundant calculations!

16 The idea
Compute the joint over partitions (clusters) Ci of U. Each Ci is a small subset of U (typically a variable and its parents); the clusters are not necessarily disjoint. Calculate P(Ci) for every cluster. To compute P(X) for X ∈ Ci we then need far fewer operations: P(X) = Σ_{Ci \ {X}} P(Ci).
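For example (a toy sketch of ours), if the cluster marginal P(A, B) is already available as a table, a single-variable marginal drops out with one small sum:

# Given P(A, B) for the cluster {A, B}, computing P(A) only needs |B| additions per value of A.
P_AB = {("a0", "b0"): 0.30, ("a0", "b1"): 0.20,
        ("a1", "b0"): 0.15, ("a1", "b1"): 0.35}

P_A = {}
for (a, b), p in P_AB.items():
    P_A[a] = P_A.get(a, 0.0) + p   # P(a) = sum_b P(a, b)
print(P_A)  # {'a0': 0.5, 'a1': 0.5}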

17 Junction Trees
The junction tree algorithms generalize Variable Elimination to the efficient, simultaneous execution of a large class of queries. The theoretical background was shown in the previous lecture.

18 Constructing Junction Trees
1. Moralize the graph (if directed).
2. Choose a node ordering and find the cliques generated by variable elimination; this gives a triangulation of the graph.
3. Build a junction graph from the elimination cliques.
4. Find an appropriate spanning tree.

19 Step 1: Moralization
Given the directed graph G = (V, E), build the moral graph GM:
1. For all w ∈ V: for all u, v ∈ pa(w), add an edge u-v.
2. Undirect all edges.
(The figure shows a directed graph over a, b, c, d, e, f, g, h and its moral graph GM.)
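A minimal Python sketch of the moralization step (our own representation: the DAG as a dict mapping each node to its parents; the example graph is hypothetical):

# Moralization: marry the parents of every node, then drop edge directions.
from itertools import combinations

def moralize(parents):
    """parents: dict node -> list of parents. Returns the undirected edge set of the moral graph."""
    edges = set()
    for w, pa in parents.items():
        for u in pa:                      # keep (now undirected) the original edges u -> w
            edges.add(frozenset((u, w)))
        for u, v in combinations(pa, 2):  # marry the parents of w
            edges.add(frozenset((u, v)))
    return edges

# Hypothetical example: d has parents b and c, so moralization adds the edge b-c.
print(moralize({"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}))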

20 Step 2: Triangulation
Add edges to GM such that there is no cycle of length ≥ 4 that does not contain a chord; the result is the triangulated graph GT. (In the figure, a chordless cycle is marked NO and its chorded version YES.)

21 Step 2: Triangulation (cont.)
Each elimination ordering triangulates the graph, not necessarily in the same way. (The figure shows the same graph over A to H triangulated differently by different elimination orderings.)

22 Step 2: Triangulation (cont.)
Intuitively, triangulations with as few fill-ins as possible are preferred: they leave us with small cliques (small probability tables). A common heuristic, repeated until no nodes remain:
1. Find the node whose elimination would require the least number of fill-ins (may be zero).
2. Eliminate that node, and note the need for a fill-in edge between any two non-adjacent neighbors.
3. Add the fill-in edges to the original graph.
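A compact Python sketch of this min-fill heuristic (helper names are ours; the graph is given as a dict of neighbour sets):

# Min-fill: repeatedly eliminate the node that needs the fewest fill-in edges.
from itertools import combinations

def min_fill_triangulate(adj):
    """adj: dict node -> set of neighbours (undirected). Returns (fill_in_edges, elimination_order)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}       # work on a copy
    fill_ins, order = [], []

    def needed(v):                                        # pairs of v's neighbours not yet adjacent
        return [(u, w) for u, w in combinations(adj[v], 2) if w not in adj[u]]

    while adj:
        v = min(adj, key=lambda n: len(needed(n)))
        for u, w in needed(v):                            # connect non-adjacent neighbours
            adj[u].add(w); adj[w].add(u)
            fill_ins.append((u, w))
        for u in adj[v]:                                  # remove v from the graph
            adj[u].discard(v)
        order.append(v)
        del adj[v]
    return fill_ins, order

On the moral graph of the next slide this should reproduce a trace like the one shown there (order h, g, f, c, b, d, e, a with fill-ins a-e and a-d), up to tie-breaking.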

23 Step 2: Triangulation example (GM to GT)
Eliminate the vertex that requires the least number of edges to be added (the figure steps from GM down to the triangulated graph GT over a-h):
step  vertex removed  induced clique  added edges
1     h               egh             -
2     g               ceg             -
3     f               def             -
4     c               ace             a-e
5     b               abd             a-d
6     d               ade             -
7     e               ae              -
8     a               a               -

24 Step 3: Junction Graph
A junction graph for an undirected graph G is an undirected, labeled graph. The nodes are the cliques in G. If two cliques intersect, they are joined in the junction graph by an edge labeled with their intersection.
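A short Python sketch of this construction (function and variable names are ours):

# Junction graph: one node per clique, an edge for every pair of intersecting cliques,
# labelled with their intersection (the separator).
from itertools import combinations

def junction_graph(cliques):
    """cliques: list of sets. Returns (i, j, separator) triples over clique indices."""
    edges = []
    for (i, ci), (j, cj) in combinations(enumerate(cliques), 2):
        sep = ci & cj
        if sep:
            edges.append((i, j, sep))
    return edges

# The maximal elimination cliques from the triangulation example:
cliques = [set("abd"), set("ade"), set("ace"), set("ceg"), set("egh"), set("def")]
for i, j, sep in junction_graph(cliques):
    print("".join(sorted(cliques[i])), "-", "".join(sorted(cliques[j])), "label", "".join(sorted(sep)))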

25 The figure shows the Bayesian network G = (V, E), the moral graph GM, the triangulated graph GT, and the junction graph GJ (not complete). The cliques are abd, ade, ace, ceg, def, egh; the edge labels are the separators, e.g. ceg ∩ egh = eg.

26 Step 4: Junction Tree
A junction tree is a sub-graph of the junction graph that
1. is a tree,
2. contains all the cliques (a spanning tree),
3. satisfies the running intersection property: for each pair of nodes U, V, all nodes on the path between U and V contain U ∩ V
(as seen in the previous part of the lecture).
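As a small illustration (our own helper, not from the lecture), the running intersection property can be checked directly on a candidate clique tree; the example edges are the ones over the cliques from the triangulation example:

# Check the running intersection property on a clique tree.
def has_running_intersection(cliques, tree_edges):
    """cliques: dict name -> set of variables; tree_edges: pairs of names forming a connected tree."""
    adj = {c: set() for c in cliques}
    for u, v in tree_edges:
        adj[u].add(v); adj[v].add(u)

    def path(u, v, seen=frozenset()):
        if u == v:
            return [u]
        for nbr in adj[u] - seen:
            p = path(nbr, v, seen | {u})
            if p is not None:
                return [u] + p
        return None                       # unreachable if tree_edges really form a connected tree

    names = list(cliques)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if not all(cliques[u] & cliques[v] <= cliques[w] for w in path(u, v)):
                return False
    return True

cliques = {c: set(c) for c in ("abd", "ade", "ace", "ceg", "egh", "def")}
tree = [("abd", "ade"), ("ade", "ace"), ("ace", "ceg"), ("ceg", "egh"), ("ade", "def")]
print(has_running_intersection(cliques, tree))  # True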

27 Step 4: Junction Tree (cont.)
Theorem: an undirected graph is triangulated if and only if its junction graph has a junction tree.
Definition: the weight of a link in a junction graph is the number of variables in its label; the weight of a junction tree is the sum of the weights of its labels.
Theorem: a sub-tree of the junction graph of a triangulated graph is a junction tree if and only if it is a spanning tree of maximal weight.

28 Junction tree GJT
There are several methods to find a maximum-weight spanning tree. Kruskal's algorithm: successively choose a link of maximal weight unless it creates a cycle. (The figure turns the junction graph GJ, not complete, over the cliques abd, ade, ace, ceg, egh, def into the junction tree GJT, keeping the links labelled ad, ae, ce, de, eg.)
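A Python sketch of this step (Kruskal-style greedy selection by separator size; names are ours):

# Maximum-weight spanning tree of the junction graph; weight = size of the separator label.
def max_weight_junction_tree(cliques):
    """cliques: list of frozensets. Returns the chosen (i, j, separator) links."""
    edges = []
    for i in range(len(cliques)):
        for j in range(i + 1, len(cliques)):
            sep = cliques[i] & cliques[j]
            if sep:
                edges.append((len(sep), i, j, sep))
    edges.sort(key=lambda e: -e[0])           # heaviest separators first

    parent = list(range(len(cliques)))        # union-find, to detect cycles
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, i, j, sep in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                          # accept a link only if it does not close a cycle
            parent[ri] = rj
            tree.append((i, j, sep))
    return tree

cliques = [frozenset(c) for c in ("abd", "ade", "ace", "ceg", "egh", "def")]
for i, j, sep in max_weight_junction_tree(cliques):
    print("".join(sorted(cliques[i])), "-", "".join(sorted(cliques[j])), "separator", "".join(sorted(sep)))

On these six cliques the greedy pass keeps exactly the five links with two-variable separators (ad, ae, ce, de, eg), i.e. the junction tree GJT shown on the slide.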

29 Another example
Compute the elimination cliques (the order here is f, d, e, c, b, a). Form the complete junction graph over the maximal elimination cliques and find a maximum-weight spanning tree.

30 Junction Trees and Elimination Order
We can use different orderings in variable elimination; the choice affects efficiency. Each ordering corresponds to a junction tree. Just as some elimination orderings are more efficient than others, some junction trees are better than others. (Recall our mention of heuristics for triangulation.)

31 OK, I have this tree, now what?
The figure shows a junction tree for the Asia network with clusters {T,V}, {A,L,T}, {A,L,B}, {B,L,S}, {X,A}, {A,B,D} and separators {T}, {A,L}, {B,L}, {A,B}, {A}. A separator S divides the remaining variables into two groups; the variables in each group appear on one side of S in the cluster tree. Examples:
{A,B}: {L, S, T, V} & {D, X}
{A,L}: {T, V} & {B, D, S, X}
{B,L}: {S} & {A, D, T, V, X}
{A}: {X} & {B, D, L, S, T, V}
{T}: {V} & {A, B, D, L, S, X}

32 Elimination in Junction Trees
Let X and Y be the partition of the remaining variables induced by a separator S.
Observation: eliminating all variables in X results in a factor fX(S).
Proof: since S is a separator, only variables in S are adjacent to variables in X.
Note: the same factor would result, regardless of the elimination order.
(The figure shows the two sides of the separator producing fX(S) and fY(S).)

33 Recursive Elimination in Junction Trees
How do we compute fX(S)? By recursive decomposition along the cluster tree. Let X1 and X2 be the disjoint partitioning of X \ C implied by the separators S1 and S2 of the cluster C:
1. Eliminate X1 to get fX1(S1).
2. Eliminate X2 to get fX2(S2).
3. Eliminate the variables in C \ S to get fX(S).
(In the figure, cluster C has separators S1 and S2 toward X1 and X2, and separator S toward Y.)
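A recursive Python sketch of this message computation (our own minimal factor representation over binary variables; a real junction tree engine would be more general):

from itertools import product

# A factor is (vars, table): vars is a tuple of names, table maps 0/1 value tuples to floats.

def multiply(f, g):
    fv, ft = f
    gv, gt = g
    vars_ = tuple(dict.fromkeys(fv + gv))                  # ordered union of the two scopes
    table = {}
    for vals in product((0, 1), repeat=len(vars_)):
        assign = dict(zip(vars_, vals))
        table[vals] = ft[tuple(assign[v] for v in fv)] * gt[tuple(assign[v] for v in gv)]
    return vars_, table

def marginalize(f, keep):
    fv, ft = f
    kept = tuple(v for v in fv if v in keep)
    table = {}
    for vals, p in ft.items():
        key = tuple(vals[fv.index(v)] for v in kept)
        table[key] = table.get(key, 0.0) + p
    return kept, table

def message(tree, potentials, src, dst):
    """Eliminate everything on src's side of the src-dst separator, recursively."""
    f = potentials[src]
    for nbr in tree[src]:
        if nbr != dst:
            f = multiply(f, message(tree, potentials, nbr, src))
    separator = set(potentials[src][0]) & set(potentials[dst][0])
    return marginalize(f, separator)

# Two-cluster tree for the course chain of slide 2: cluster RS holds P(R)P(S|R), cluster SP holds P(P|S).
tree = {"RS": ["SP"], "SP": ["RS"]}
potentials = {
    "RS": (("R", "S"), {(1, 1): 0.08, (1, 0): 0.02, (0, 1): 0.36, (0, 0): 0.54}),
    "SP": (("S", "P"), {(1, 1): 0.9, (1, 0): 0.1, (0, 1): 0.5, (0, 0): 0.5}),
}
belief = multiply(potentials["SP"], message(tree, potentials, "RS", "SP"))
print(marginalize(belief, {"P"}))   # P(pass) ~ 0.676, P(fail) ~ 0.324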

