
1 How should we represent visual scenes? Common-Sense Core, Probabilistic Programs Josh Tenenbaum MIT Brain and Cognitive Sciences CSAIL Joint work with Noah Goodman, Chris Baker, Rebecca Saxe, Tomer Ullman, Peter Battaglia, Jess Hamrick and others.

2 Core of common-sense reasoning
Human thought is structured around a basic understanding of physical objects, intentional agents, and their relations.
– "Core knowledge" (Spelke, Carey, Leslie, Baillargeon, Gergely, …)
– Intuitive theories (Carey, Gopnik, Wellman, Gelman, Gentner, Forbus, McCloskey, …)
– Primitives of lexical semantics (Pinker, Jackendoff, Talmy, Pustejovsky)
– Visual scene understanding (everyone here…)
The key questions: (1) What is the form and content of human common-sense theories of the physical world, intentional agents, and their interaction? (2) How are these theories used to parse visual experience into representations that support reasoning, planning, and communication? From scenes to stories…

3 A developmental perspective. A 3-year-old and her dad: Dad: "What's this a picture of?" Sarah: "A bear hugging a panda bear." … Dad: "What is the second panda bear doing?" Sarah: "It's trying to hug the bear." Dad: "What about the third bear?" Sarah: "It's walking away." But this feels too hard to approach now, so what about looking at younger children (e.g., 12 months or younger)?

4 Common sense in infancy (1980s-'90s): Wynn, Spelke, Baillargeon, …

5 Intuitive physics and psychology: Heider and Simmel (1944); Southgate and Csibra (2009; 13-month-olds)

6 Intuitive physics (Whiting et al.; Gupta, Efros & Hebert)

7 Intuitive psychology

8 Probabilistic generative models (early 1990s to early 2000s)
– Bayesian networks: model the causal processes that give rise to observations; perform reasoning, prediction, and planning via probabilistic inference.
– The problem: not sufficiently flexible or expressive.

9 Scene understanding as an inverse problem. The "inverse Pixar" problem: World state (t) → graphics → Image (t); perception inverts this mapping.

10 Scene understanding as an inverse problem. The "inverse Pixar" problem over time:
… → World state (t-1) → World state (t) → World state (t+1) → …  (physics links world states across time)
World state (t) → Image (t)  (graphics renders each world state to an image)

11 Probabilistic programs
Probabilistic models a la Laplace:
– The world is fundamentally deterministic (described by a program), and perfectly predictable if we could observe all relevant variables.
– Observations are always incomplete or indirect, so we put probability distributions on what we can't observe.
Compare with Bayesian networks:
– Thick nodes. Programs are defined over unbounded sets of objects, their properties, states, and relations, rather than traditional finite-dimensional random variables.
– Thick arrows. Programs capture fine-grained causal processes unfolding over space and time, not simply directed statistical dependencies.
– Recursive. Probabilistic programs can be arbitrarily manipulated inside other programs (e.g., perceptual inferences about entities that themselves make perceptual inferences; entities with goals and plans about other agents' goals and plans).
Compare with grammars or logic programs.
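To make "probabilistic program" concrete, here is a minimal Python sketch (illustrative only, not the authors' code): a deterministic world program with random latent structure. The number of objects, their properties, and the observation noise are all assumptions chosen for the example.

```python
import random

# "Thick nodes": the number of objects is itself random, so the latent
# state is not a fixed-dimensional vector.
def sample_world():
    n_objects = random.randint(1, 5)
    return [{"x": random.uniform(0, 10),
             "v": random.gauss(0.0, 1.0),
             "mass": random.lognormvariate(0.0, 0.5)}   # latent property
            for _ in range(n_objects)]

# "Thick arrows": a fine-grained causal process (here, trivial dynamics)
# rather than a single conditional probability table.
def step(world, dt=0.1):
    return [{**o, "x": o["x"] + o["v"] * dt} for o in world]

def render(world, noise=0.1):
    # Noisy, indirect observation of the deterministic world state.
    return sorted(random.gauss(o["x"], noise) for o in world)

world = sample_world()
print(render(step(world)))
```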

12 Laplace's demon: "We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes." —Pierre Simon Laplace, A Philosophical Essay on Probabilities

13 Probabilistic programs for "inverse Pixar" scene understanding
World state: CAD++
Graphics: approximate rendering
– Simple surface primitives
– Rasterization rather than ray tracing (for each primitive, which pixels does it affect?)
– Image features rather than pixels
Probabilities:
– Image noise, image features
– Unseen objects (e.g., due to occlusion)
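A hedged sketch of what "approximate rendering" with a noise model might look like: rasterize simple primitives onto a coarse feature grid, then score observed features under Gaussian noise. The grid size, circle primitives, and noise model are assumptions for illustration, not the actual graphics engine.

```python
import math

def rasterize(circles, w=8, h=8):
    # For each primitive, which cells does it affect? (rasterization,
    # not ray tracing); circles are (cx, cy, radius) triples.
    img = [[0.0] * w for _ in range(h)]
    for cx, cy, r in circles:
        for y in range(h):
            for x in range(w):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                    img[y][x] = 1.0
    return img

def log_likelihood(observed, circles, sigma=0.2):
    # Score observed features under Gaussian noise around the rendering.
    pred = rasterize(circles)
    return sum(-0.5 * ((o - p) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for orow, prow in zip(observed, pred)
               for o, p in zip(orow, prow))

obs = rasterize([(3, 3, 2)])              # a noiseless "observation"
print(log_likelihood(obs, [(3, 3, 2)]))
```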

14 Probabilistic programs for "inverse Pixar" scene understanding
World state: CAD++
Graphics
Physics: approximate Newton (physical simulation toolkit, e.g., ODE)
– Collision detection: zone of interaction
– Collision response: transient springs
– Dynamics simulation: only for objects in motion
Probabilities:
– Latent properties (e.g., mass, friction)
– Latent forces
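A toy sketch of "probabilistic approximate Newton": probability enters through latent properties (mass, friction) and small latent forces, while the step function itself is deterministic given those draws. The dynamics and the bounce rule below are stand-ins for a real simulation toolkit such as ODE, not its API.

```python
import random

def sample_physics_params():
    # Latent physical properties, drawn once per simulation.
    return {"mass": random.lognormvariate(0.0, 0.3),
            "friction": random.uniform(0.1, 0.9)}

def simulate(x, v, params, steps=100, dt=0.01, g=-9.8):
    for _ in range(steps):
        f_latent = random.gauss(0.0, 0.05)      # small unobserved force
        a = g + f_latent / params["mass"]
        v = (v + a * dt) * (1.0 - params["friction"] * dt)
        x = x + v * dt
        if x <= 0.0:                            # crude ground contact:
            x, v = 0.0, -0.5 * v                # "transient spring" bounce
    return x, v

print(simulate(5.0, 0.0, sample_physics_params()))
```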

15 Modeling stability judgments

16-20 Modeling stability judgments: the same graphical model, progressively made probabilistic.
… → World state (t-1) → World state (t) → World state (t+1) → …, each rendered to Image (t-1), Image (t), Image (t+1).
16: arrows labeled physics (between world states) and graphics (to images).
17-18: graphics replaced by probabilistic approximate rendering.
19: physics replaced by probabilistic approximate Newton.
20: perceptual uncertainty marked over the inferred world state.

21 Modeling stability judgments (Hamrick, Battaglia & Tenenbaum, CogSci 2011)
Perception: approximate the posterior with block positions normally distributed around the ground truth, subject to global stability.
Reasoning: draw multiple samples from perception; simulate each forward with deterministic approximate Newton (ODE).
Decision: expectations of various functions evaluated on the simulation outputs. (A toy sketch of this pipeline follows.)
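A toy version of the pipeline, under strong simplifying assumptions: the one-dimensional "simulator" below (a block falls when its horizontal offset from the block beneath exceeds a half-width) stands in for the deterministic ODE simulation.

```python
import random

def perceive(true_xs, sigma=0.2):
    # Perception: block x-positions normally distributed around ground truth.
    return [x + random.gauss(0.0, sigma) for x in true_xs]

def proportion_fallen(xs, half_width=0.5):
    # Toy physics: a block (and everything above it) falls when its offset
    # from the block below exceeds the half-width.
    for i in range(1, len(xs)):
        if abs(xs[i] - xs[i - 1]) > half_width:
            return (len(xs) - i) / len(xs)
    return 0.0

def stability_judgment(true_xs, n_samples=100):
    # Decision: expected proportion of the tower that will fall,
    # averaged over perceptual samples.
    return sum(proportion_fallen(perceive(true_xs))
               for _ in range(n_samples)) / n_samples

print(stability_judgment([0.0, 0.1, 0.3, 0.2]))   # a 4-block tower
```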

22-27 [image-only slides]

28 Results: [scatter plot] model prediction (expected proportion of tower that will fall) vs. mean human stability judgment.

29 Simpler alternatives?

30 The flexibility of common sense ("infinite use of finite means"; a "visual Turing test"). Each question below is a different query over the same generative model (see the sketch after this list):
– Which way will the blocks fall?
– How far will the blocks fall?
– If this tower falls, will it knock that one over?
– If you bump the table, will more red blocks or yellow blocks fall over?
– If this block had (not) been present, would the tower (still) have fallen over?
– Which of these blocks is heavier or lighter than the others?
– …
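One way to see the flexibility claim: each question is just a different function evaluated on the outputs of the same simulator. The sketch below assumes a hypothetical `simulate` that maps initial block x-positions to final ones (e.g., the toy simulator above).

```python
def direction_of_fall(initial, simulate):
    # Net horizontal displacement across the whole tower.
    final = simulate(initial)
    dx = sum(f - i for i, f in zip(initial, final))
    return "right" if dx > 0 else "left"

def distance_of_fall(initial, simulate):
    # How far did any block travel?
    final = simulate(initial)
    return max(abs(f - i) for i, f in zip(initial, final))

def counterfactual_block(initial, block, simulate, fell):
    # "If this block had not been present, would the tower still have fallen?"
    without = initial[:block] + initial[block + 1:]
    return fell(simulate(without))
```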

31 Direction of fall

32 Direction and distance of fall

33 If you bump the table…

34-40 [image-only slides]

41 If you bump the table… (Battaglia & Tenenbaum, in prep): [scatter plot] model prediction (expected proportion of red vs. yellow blocks that fall) vs. mean human judgment.

42 Experiment 1: Cause/ Prevention Judgments (Gerstenberg, Tenenbaum, Goodman, et al., in prep)

43 Modeling people's cause/prevention judgments with the physics simulation model: score the cause as p(B|A) - p(B|not A), where B is the event that the ball goes in. p(B|A) is 1 if the ball goes in and 0 if it misses; p(B|not A) is estimated by assuming sparse latent Gaussian perturbations on B's velocity.
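A Monte Carlo sketch of the p(B|A) - p(B|not A) score. `simulate_goes_in` is a hypothetical stand-in for the physics simulation, and treating the "sparse" perturbations as a single Gaussian kick to B's velocity (with an arbitrary scale) is a simplification of the model described above.

```python
import random

def p_goal_without_A(b_velocity, simulate_goes_in, n=1000, sigma=0.5):
    # Counterfactual: remove A's effect and apply latent Gaussian
    # perturbations to B's velocity, then count how often B goes in.
    hits = 0
    for _ in range(n):
        vx = b_velocity[0] + random.gauss(0.0, sigma)
        vy = b_velocity[1] + random.gauss(0.0, sigma)
        hits += simulate_goes_in((vx, vy), with_A=False)   # returns 0 or 1
    return hits / n

def cause_score(b_velocity, simulate_goes_in):
    # p(B|A) is deterministic: 1 if the ball goes in, 0 if it misses.
    p_with_A = simulate_goes_in(b_velocity, with_A=True)
    return p_with_A - p_goal_without_A(b_velocity, simulate_goes_in)
```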

44 Simulation Model

45 Intuitive psychology (Heider and Simmel, 1944): Beliefs (B) and Desires (D) → Actions (A).

46 Intuitive psychology (Heider and Simmel, 1944): Beliefs (B) and Desires (D) generate Actions (A) via Pr(A|B,D).

47 Intuitive psychology (Heider and Simmel, 1944): Beliefs (B) and Desires (D) generate Actions (A) via probabilistic approximate planning, itself a probabilistic program.

48 Intuitive psychology: Beliefs (B) and Desires (D) generate Actions (A) via probabilistic approximate planning (a probabilistic program). Inverting this model goes by many names: "inverse economics", "inverse optimal control", "inverse reinforcement learning", "inverse Bayesian decision theory" (Lucas & Griffiths; Jern & Kemp; Tauber & Steyvers; Rafferty & Griffiths; Goodman & Baker; Goodman & Stuhlmuller; Bergen, Evans & Tenenbaum; … Ng & Russell; Todorov; Rao; Ziebart, Dey & Bagnell; …). Rational choice over a utility table with actions i and states j: in state j, choose action i* = argmax_i U(i, j).
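A minimal sketch of the inverse direction: assume a softmax-rational choice rule as the likelihood and invert it by Bayes' rule to recover a posterior over goals. The utility function, rationality parameter beta, and flat prior are illustrative assumptions, not the cited papers' code.

```python
import math

def action_likelihood(action, state, goal, actions, utility, beta=2.0):
    # P(action | state, goal) proportional to exp(beta * U(action; state, goal)).
    scores = {a: math.exp(beta * utility(a, state, goal)) for a in actions}
    return scores[action] / sum(scores.values())

def goal_posterior(trajectory, goals, actions, utility):
    # P(goal | actions) proportional to the product of per-step action
    # likelihoods, under a flat prior over goals.
    post = {g: 1.0 for g in goals}
    for state, action in trajectory:
        for g in goals:
            post[g] *= action_likelihood(action, state, g, actions, utility)
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}
```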

49 Goal inference as inverse probabilistic planning (Baker, Tenenbaum & Saxe, Cognition, 2009). [Diagram: an agent's constraints and goals produce actions via rational planning (MDP); inference runs backward from actions to goals.] Model vs. people scatter, r =

50 Theory of mind: joint inferences about beliefs and preferences (Baker, Saxe & Tenenbaum, CogSci 2011). [Diagram: the environment and agent state drive Beliefs via rational perception; Beliefs and Preferences drive Actions via rational planning; the model jointly infers initial beliefs and preferences.] Food-truck scenarios.
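A companion sketch for the joint case: score each (belief, preference) hypothesis pair against the observed actions with the same softmax-rational likelihood. All names and the utility signature are assumptions, not the paper's code.

```python
import math

def softmax_choice_prob(action, actions, score, beta=2.0):
    # Softmax-rational choice over a per-action score function.
    weights = {a: math.exp(beta * score(a)) for a in actions}
    return weights[action] / sum(weights.values())

def joint_posterior(trajectory, belief_hyps, pref_hyps, actions, utility):
    # P(belief, preference | actions) under a flat prior over hypothesis
    # pairs; utility now depends on the hypothesized belief state too.
    post = {}
    for b in belief_hyps:
        for p in pref_hyps:
            lik = 1.0
            for state, action in trajectory:
                lik *= softmax_choice_prob(
                    action, actions,
                    lambda a, s=state, b=b, p=p: utility(a, s, b, p))
            post[(b, p)] = lik
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}
```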

51 Goal inference with multiple agents (Baker, Goodman & Tenenbaum, CogSci 2008, in prep). Southgate & Csibra stimuli; model vs. people. [Diagram: two agents, each with constraints and goals producing actions via rational planning (MDP).]

52 Inferring social goals (Baker, Goodman & Tenenbaum, CogSci 2008; Ullman, Baker, Evans, Macindoe & Tenenbaum, NIPS 2009). [Diagram: two agents, each with constraints and goals producing actions via rational planning (MDP); one agent's goal may be social, defined over the other agent (e.g., helping or hindering).] Hamlin, Kuhlmeier, Wynn & Bloom stimuli: model predictions vs. subject ratings.

53 Conclusions
From scenes to stories: what story content is routinely accessed through visual scenes, and how can we represent that content for reasoning, communication, prediction, and planning?
– Focus on core knowledge present in preverbal infants: intuitive physics and intuitive psychology.
– Representations using probabilistic programs: thick nodes (e.g., CAD++), thick arrows (physics, graphics, planning), recursion (inference about inference, goals about goals).
– Challenges for future work: (1) integrating physics and psychology; (2) efficient inference; (3) learning.

