
Shared Features in Log-Linear Models




1 Shared Features in Log-Linear Models
Probabilistic Graphical Models / Representation / Template Models / Shared Features in Log-Linear Models

2 Modeling Repetition

3 [figure: a plate over Students s containing Intelligence and Grade; grounding for students s1, s2 yields I(s1), G(s1), I(s2), G(s2)]

4 Nested Plates
[figure: a plate over Courses c containing Difficulty, with a nested plate over Students s containing Intelligence and Grade; grounding yields D(c1), D(c2) and per-(student, course) variables such as I(s1,c1), G(s1,c1), I(s2,c1), G(s2,c1), I(s1,c2), I(s2,c2)]

5 Overlapping Plates
[figure: overlapping plates over Courses c and Students s, with Difficulty in the course plate, Intelligence in the student plate, and Grade in their intersection; grounding yields D(c1), D(c2), I(s1), I(s2), and grades G(s1,c1), G(s1,c2), G(s2,c1), G(s2,c2)]
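The grounding of the overlapping-plates model can be sketched in code. This is a hypothetical illustration (variable names are mine, not the lecture's): each course contributes a D(c), each student an I(s), and each (student, course) pair a grade G(s,c) whose parents are D(c) and I(s).

```python
from itertools import product

students = ["s1", "s2"]
courses = ["c1", "c2"]

# Ground variables induced by the overlapping plates.
ground_vars = {}
for c in courses:
    ground_vars[f"D({c})"] = {"parents": []}          # one difficulty per course
for s in students:
    ground_vars[f"I({s})"] = {"parents": []}          # one intelligence per student
for s, c in product(students, courses):
    # One grade per (student, course) pair, in the plate intersection.
    ground_vars[f"G({s},{c})"] = {"parents": [f"D({c})", f"I({s})"]}

for name, v in sorted(ground_vars.items()):
    print(name, "<-", v["parents"])
```

With two students and two courses this produces the eight variables drawn on the slide: two difficulties, two intelligences, and four grades.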

6 Explicit Parameter Sharing
D I G D(c1)‏ G(s1,c1)‏ I(s1)‏ D(c2)‏ I(s2)‏ G(s1,c2)‏ G(s2,c1)‏ G(s2,c2)‏

7 Collective Inference
[figure: ground network linking students' intelligence (low / high), course difficulty (easy / hard) for CS101 and Geo101, and their observed grades]
This web of influence has interesting ramifications for the types of reasoning patterns it supports. Consider Forrest Gump. A priori, we believe he is fairly likely to be smart. Evidence about two classes that he took changes our probabilities only slightly. However, we see that most people who took CS101 got A's; in fact, even people who did fairly poorly in other classes got an A in CS101. Therefore, we believe CS101 is probably an easy class. Getting a C in an easy class is unlikely for a smart student, so our probability that Forrest Gump is smart drops substantially.
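The explaining-away pattern described above can be reproduced by brute-force enumeration on a tiny model. The numbers and names below are illustrative assumptions, not from the lecture: one course with a shared grade CPD P(G | D, I), Gump plus three classmates. Observing that the classmates all got A's makes the course look easy, which lowers the posterior that Gump is smart given his C.

```python
from itertools import product

p_easy = 0.4   # prior P(D = easy) -- illustrative
p_high = 0.5   # prior P(I = high) -- illustrative
p_A = {        # shared CPD P(G = A | D, I), reused for every student
    ("easy", "high"): 0.9,
    ("easy", "low"): 0.7,
    ("hard", "high"): 0.6,
    ("hard", "low"): 0.1,
}

students = ["gump", "s1", "s2", "s3"]

def joint(d, intel, grades):
    """Probability of one full assignment to D, all I(s), all G(s)."""
    p = p_easy if d == "easy" else 1 - p_easy
    for s in students:
        p *= p_high if intel[s] == "high" else 1 - p_high
        pa = p_A[(d, intel[s])]
        p *= pa if grades[s] == "A" else 1 - pa
    return p

def posterior_smart(evidence):
    """P(I(gump) = high | grade evidence) by exhaustive enumeration."""
    num = den = 0.0
    for d in ("easy", "hard"):
        for ivals in product(("high", "low"), repeat=len(students)):
            intel = dict(zip(students, ivals))
            for gvals in product(("A", "C"), repeat=len(students)):
                grades = dict(zip(students, gvals))
                if any(grades[s] != g for s, g in evidence.items()):
                    continue
                p = joint(d, intel, grades)
                den += p
                if intel["gump"] == "high":
                    num += p
    return num / den

p_before = posterior_smart({"gump": "C"})
p_after = posterior_smart({"gump": "C", "s1": "A", "s2": "A", "s3": "A"})
print(p_before, p_after)  # belief that Gump is smart drops once CS101 looks easy
```

Note that the classmates' grades are not parents or children of Gump's variables; the influence flows only through the shared course difficulty, which is exactly the collective-inference effect.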

8 Plate Dependency Model
For a template variable A(U1,…,Uk):
Template parents B1(U1),…,Bm(Um), where each Ui is a subset of {U1,…,Uk}
Template CPD P(A | B1,…,Bm)

9 Ground Network
For each assignment of objects u1,…,uk to U1,…,Uk, the template variable A(U1,…,Uk) with parents B1(U1),…,Bm(Um) induces a ground variable A(u1,…,uk) whose parents are the corresponding groundings of B1,…,Bm, all sharing the template CPD.
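The grounding rule on this slide can be sketched generically. This is a hypothetical helper (function and argument names are mine): given a template variable with argument signature (U1,…,Uk) and template parents whose signatures are subsets of it, emit one ground variable per binding of objects to the logical variables, with parents obtained by restricting the binding.

```python
from itertools import product

def ground(template_var, parents, domains):
    """Return (parent, child) edges of the ground network.

    template_var: (name, arg_names) for A(U1,...,Uk)
    parents: list of (name, arg_names), each arg list a subset of A's args
    domains: dict mapping logical variable name -> list of domain objects
    """
    name, args = template_var
    edges = []
    for objs in product(*(domains[a] for a in args)):
        binding = dict(zip(args, objs))  # U_i -> concrete object u_i
        child = f"{name}({','.join(binding[a] for a in args)})"
        for pname, pargs in parents:
            # Restrict the binding to the parent's argument signature.
            parent = f"{pname}({','.join(binding[a] for a in pargs)})"
            edges.append((parent, child))
    return edges

# Grade example: G(s, c) with template parents I(s) and D(c).
edges = ground(("G", ["s", "c"]),
               [("I", ["s"]), ("D", ["c"])],
               {"s": ["s1", "s2"], "c": ["c1", "c2"]})
for e in edges:
    print(e)
```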

10 Plate Dependency Model
For a template variable A(U1,…,Uk):
Template parents B1(U1),…,Bm(Um)

11 Summary
Template for an infinite set of BNs, each induced by a different set of domain objects
Parameters and structure are reused within a BN and across different BNs
Models encode correlations across multiple objects, allowing collective inference
Multiple “languages”, each with different tradeoffs in expressive power




